CLC number: TP3

On-line Access: 2014-01-29

Received: 2013-06-20

Revision Accepted: 2013-11-09

Crosschecked: 2014-01-15

Journal of Zhejiang University SCIENCE C 2014 Vol.15 No.2 P.107-118


Transfer active learning by querying committee

Author(s):  Hao Shao, Feng Tao, Rui Xu

Affiliation(s):  School of WTO Research & Education, Shanghai University of International Business and Economics, Shanghai 200336, China

Corresponding email(s):   shaohao@suibe.edu.cn, ftao@ecust.edu.cn, rxu@ustc.edu.cn

Key Words:  Active learning, Transfer learning, Classification

Hao Shao, Feng Tao, Rui Xu. Transfer active learning by querying committee[J]. Journal of Zhejiang University Science C, 2014, 15(2): 107-118.

@article{ShaoTaoXu2014,
title="Transfer active learning by querying committee",
author="Hao Shao, Feng Tao, Rui Xu",
journal="Journal of Zhejiang University Science C",
volume="15",
number="2",
pages="107-118",
year="2014",
issn="1869-1951",
publisher="Zhejiang University Press & Springer",
doi="10.1631/jzus.C1300167"
}


In real applications of inductive learning for classification, labeled instances are often scarce, and labeling them by an oracle is expensive and time-consuming. Active learning on a single task aims to select only informative unlabeled instances for querying, improving classification accuracy while decreasing the querying cost. However, an inevitable problem in active learning is that the informative measures for selecting queries are commonly based on initial hypotheses sampled from only a few labeled instances. In this circumstance, the initial hypotheses are unreliable and may deviate from the true distribution underlying the target task; consequently, the informative measures may select irrelevant instances. A promising way to compensate for this problem is to borrow useful knowledge from other sources with abundant labeled information, which is called transfer learning. A significant challenge in transfer learning, however, is how to measure the similarity between the source and the target tasks. One needs to be aware of different distributions or label assignments in unrelated source tasks; otherwise, they will degrade performance during transfer. How to design an effective strategy that avoids querying irrelevant samples also remains an open question. To tackle these issues, we propose a hybrid algorithm for active learning aided by transfer learning, adopting a divergence measure to alleviate the negative transfer caused by distribution differences. To avoid querying irrelevant instances, we also present an adaptive strategy that eliminates unnecessary instances in the input space and unnecessary models in the model space. Extensive experiments on both synthetic and real data sets show that the proposed algorithm queries fewer instances with higher accuracy and converges faster than state-of-the-art methods.
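The two ingredients named in the abstract, committee-based query selection and a divergence measure over distributions, can be sketched roughly as follows. This is a minimal illustration under assumed choices, not the authors' algorithm: the function names are hypothetical, vote entropy is assumed as the committee's disagreement measure, and the Jensen-Shannon divergence is assumed as the distribution-difference measure.

```python
import numpy as np

def js_divergence(p, q):
    # Jensen-Shannon divergence (base 2) between two discrete
    # distributions; symmetric and bounded in [0, 1]. A large value
    # between source and target label distributions signals a risk
    # of negative transfer.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def vote_entropy(votes, n_classes):
    # Committee disagreement on one instance: entropy of the
    # empirical distribution of the members' predicted labels.
    counts = np.bincount(votes, minlength=n_classes).astype(float)
    probs = counts / counts.sum()
    nz = probs[probs > 0]
    return float(-np.sum(nz * np.log2(nz)))

def select_query(vote_matrix, n_classes):
    # vote_matrix[i, j] = label predicted by committee member i for
    # unlabeled instance j; query the oracle on the most-disputed one.
    ents = [vote_entropy(vote_matrix[:, j], n_classes)
            for j in range(vote_matrix.shape[1])]
    return int(np.argmax(ents))
```

In this sketch, a source task whose label distribution diverges strongly from the target's would be down-weighted or discarded, while the committee spends the labeling budget only on instances with the highest vote entropy.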


Innovation highlights: An expert system and a hybrid model are adopted to further improve the transfer learning method. With expert guidance, active learning theory can better identify the most valuable data sets. This study therefore introduces an expert system to assist the design of the transfer algorithm, and uses active learning to drive the manual selection of unlabeled data, compensating for the weak performance of transfer learning algorithms when initial labeled data are scarce.





