Learnable instance-adaptive thresholds for semi-supervised multi-label learning

Shuxian XIONG , Mingkun XIE , Jiahao XIAO , Shengjun HUANG

Front. Comput. Sci., 2027, Vol. 21, Issue 2: 2102328
DOI: 10.1007/s11704-025-51185-3

Artificial Intelligence
RESEARCH ARTICLE

Abstract

Semi-supervised multi-label learning (SSMLL) trains models efficiently by leveraging a small amount of labeled data together with a large set of unlabeled data. In SSMLL, since each instance can be associated with multiple labels, a key problem in pseudo-labeling is how to convert the soft predicted probabilities into hard positive/negative labels for unlabeled data. Recent work addresses this problem with a class-wise thresholding method, but neglects the fact that different instances carry different contextual information, which causes the model to make biased predictions for the same class. This in turn yields biased pseudo-labels that degrade the model’s performance. To solve this problem, we propose an instance-adaptive thresholding method for SSMLL, which aims to avoid introducing contextual bias into pseudo-labeling. The core idea is a learnable thresholding function that adaptively generates an instance-wise threshold to separate the positive and negative labels of each unlabeled instance. The thresholding function can be easily learned with an improved pairwise ranking loss on labeled data. Moreover, the strategy can serve as a plug-in for other SSMLL methods to generate hard pseudo-labels. Experimental results demonstrate that our thresholding strategy consistently improves existing SSMLL methods and achieves state-of-the-art performance when integrated into strong architectures.
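To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: a hypothetical linear-plus-sigmoid function maps an instance's predicted probability vector to a scalar threshold in (0, 1), and is fit with a hinge-style ranking loss on labeled data that pushes each instance's threshold below its positive-label scores and above its negative-label scores. All class and function names here are illustrative assumptions.

```python
import numpy as np

def pseudo_labels(probs, thresholds):
    """Hard pseudo-labels: 1 where a predicted probability exceeds its instance's threshold."""
    return (probs >= thresholds[:, None]).astype(int)

class LinearThresholdFn:
    """Toy instance-adaptive thresholding function: threshold = sigmoid(w . probs + b)."""
    def __init__(self, num_classes, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.01, size=num_classes)
        self.b = 0.0
        self.lr = lr

    def __call__(self, probs):
        z = probs @ self.w + self.b
        return 1.0 / (1.0 + np.exp(-z))  # sigmoid keeps thresholds in (0, 1)

    def fit_step(self, probs, labels, margin=0.05):
        """One gradient step on a hinge-style ranking loss: each instance's threshold
        should sit `margin` below its positive scores and `margin` above its negatives."""
        t = self(probs)                                              # (n,)
        pos_viol = np.maximum(0.0, t[:, None] + margin - probs) * labels
        neg_viol = np.maximum(0.0, probs + margin - t[:, None]) * (1 - labels)
        # dL/dt: +1 per active positive violation, -1 per active negative violation
        dt = (pos_viol > 0).sum(1) - (neg_viol > 0).sum(1)
        ds = dt * t * (1.0 - t)                                      # chain rule through sigmoid
        self.w -= self.lr * (probs.T @ ds) / len(probs)
        self.b -= self.lr * ds.mean()
        return (pos_viol.sum() + neg_viol.sum()) / len(probs)

# Toy labeled data: predicted probabilities and ground-truth labels.
probs_l = np.array([[0.9, 0.8, 0.1],
                    [0.7, 0.2, 0.95]])
labels_l = np.array([[1, 1, 0],
                     [1, 0, 1]])
fn = LinearThresholdFn(num_classes=3)
for _ in range(50):
    fn.fit_step(probs_l, labels_l)

# Apply the learned per-instance thresholds to unlabeled predictions.
probs_u = np.array([[0.85, 0.15, 0.6]])
hard = pseudo_labels(probs_u, fn(probs_u))
```

The key contrast with class-wise thresholding is that `fn` produces a different threshold per instance, conditioned on that instance's own prediction; in the paper the function is trained with an improved pairwise ranking loss on labeled data, for which the hinge loss above is only a stand-in.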


Keywords

semi-supervised learning / multi-label classification / pseudo-labeling / instance-adaptive thresholding

Cite this article

Shuxian XIONG, Mingkun XIE, Jiahao XIAO, Shengjun HUANG. Learnable instance-adaptive thresholds for semi-supervised multi-label learning. Front. Comput. Sci., 2027, 21(2): 2102328. DOI: 10.1007/s11704-025-51185-3



RIGHTS & PERMISSIONS

Higher Education Press
