On-line Access: 2024-08-27
Received: 2023-10-17
Revision Accepted: 2024-05-08
Crosschecked: 2023-04-20
Shengyuan LIU, Ke CHEN, Tianlei HU, Yunqing MAO. Uncertainty-aware complementary label queries for active learning[J]. Frontiers of Information Technology & Electronic Engineering, 2023, 24(10): 1497-1503.
@article{Liu2023,
title="Uncertainty-aware complementary label queries for active learning",
author="Shengyuan LIU and Ke CHEN and Tianlei HU and Yunqing MAO",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="24",
number="10",
pages="1497-1503",
year="2023",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.2200589"
}
%0 Journal Article
%T Uncertainty-aware complementary label queries for active learning
%A Shengyuan LIU
%A Ke CHEN
%A Tianlei HU
%A Yunqing MAO
%J Frontiers of Information Technology & Electronic Engineering
%V 24
%N 10
%P 1497-1503
%@ 2095-9184
%D 2023
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.2200589
TY - JOUR
T1 - Uncertainty-aware complementary label queries for active learning
A1 - Shengyuan LIU
A1 - Ke CHEN
A1 - Tianlei HU
A1 - Yunqing MAO
JO - Frontiers of Information Technology & Electronic Engineering
VL - 24
IS - 10
SP - 1497
EP - 1503
SN - 2095-9184
Y1 - 2023
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.2200589
ER -
Abstract: Many active learning methods assume that a learner can simply ask annotators for the full annotations of selected training instances, and they reduce annotation costs mainly by minimizing the number of annotation actions. Unfortunately, in many real-world classification tasks, annotating an instance exactly remains expensive. To reduce the cost of a single annotation action, we tackle a novel active learning setting, named active learning with complementary labels (ALCL). An ALCL learner asks only yes/no questions about whether an instance belongs to a particular class. From the annotators' answers, the learner obtains a few fully supervised instances and many more training instances with complementary labels, each of which specifies only one class to which the instance does not belong. ALCL raises two challenging issues: how to sample the instances to be queried, and how to learn from the complementary labels together with the ordinary, accurate labels. For the first issue, we propose an uncertainty-based sampling strategy under this novel setup. For the second issue, we upgrade a previous ALCL method to fit our sampling strategy. Experimental results on various datasets demonstrate the superiority of our approaches.
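The paper's uncertainty-based sampling strategy is not spelled out on this page, so the sketch below only illustrates the general shape of such a query rule: it scores unlabeled instances by predictive entropy and asks a yes/no question about the model's top predicted class, so a "yes" answer yields an ordinary label while a "no" answer yields a complementary label. The entropy criterion and the choice of which class to query are assumptions made for illustration, not the authors' exact method.

import numpy as np

def select_alcl_query(probs: np.ndarray) -> tuple[int, int]:
    """Pick one (instance, class) pair to ask a yes/no question about.

    probs: (n_unlabeled, n_classes) softmax outputs of the current model.
    Returns the index of the most uncertain instance (maximum predictive
    entropy) and its top predicted class; "yes" -> ordinary label,
    "no" -> complementary label for that class.
    (Illustrative criterion only, not necessarily the paper's.)
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    idx = int(np.argmax(entropy))      # most uncertain instance
    cls = int(np.argmax(probs[idx]))   # class named in the yes/no question
    return idx, cls

# Toy usage: three unlabeled instances, four classes.
probs = np.array([[0.97, 0.01, 0.01, 0.01],
                  [0.40, 0.30, 0.20, 0.10],
                  [0.26, 0.25, 0.25, 0.24]])
idx, cls = select_alcl_query(probs)
print(f"Query instance {idx}: does it belong to class {cls}?")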
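For the second issue, learning from a mix of ordinary and complementary labels, the paper upgrades a previous ALCL method; the details are in the full text. As a hedged illustration of what such a mixed objective can look like, the sketch below combines ordinary cross-entropy on the fully labeled instances with a simple "negative learning" term, -log(1 - p_ybar), which pushes down the predicted probability of the one class an instance is known not to belong to. This particular complementary-label loss is one common option from the literature, not necessarily the estimator used in the paper.

import torch
import torch.nn.functional as F

def alcl_loss(logits_full, y_full, logits_comp, y_comp):
    """Mixed objective for ALCL-style training (illustrative only).

    logits_full, y_full: instances whose true class is known.
    logits_comp, y_comp: instances for which y_comp is a class they
    do NOT belong to (a complementary label).
    """
    # Ordinary cross-entropy on the few fully labeled instances.
    loss_full = F.cross_entropy(logits_full, y_full)
    # Negative-learning term: minimize -log(1 - p_{ybar}) so the model
    # assigns low probability to the ruled-out class.
    p = F.softmax(logits_comp, dim=1)
    p_bar = p.gather(1, y_comp.unsqueeze(1)).squeeze(1)
    loss_comp = -torch.log(1.0 - p_bar + 1e-12).mean()
    return loss_full + loss_comp

# Toy usage: 2 fully labeled and 3 complementary-labeled instances, 4 classes.
logits_full = torch.randn(2, 4, requires_grad=True)
logits_comp = torch.randn(3, 4, requires_grad=True)
loss = alcl_loss(logits_full, torch.tensor([0, 2]),
                 logits_comp, torch.tensor([1, 3, 0]))
loss.backward()
print(float(loss))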