Bi-directional semi-supervised domain adaptation via gradient and class centroid alignment

Yimin WEN, Jiazhen TANG, Hang YU, Chuanbo QIN, Chuangquan CHEN

Front. Comput. Sci., 2027, 21(1): 2101309. DOI: 10.1007/s11704-025-50919-7
Artificial Intelligence
RESEARCH ARTICLE

Abstract

With the advancement of machine learning, domain adaptation has become increasingly important. Research on domain adaptation has traditionally focused on Unsupervised Domain Adaptation (UDA) and Semi-Supervised Domain Adaptation (SSDA). In many practical applications, however, both domains contain labeled and unlabeled samples, which complicates domain adaptation; the scarcity of solutions for such scenarios underscores the need for new methods that effectively exploit both the labeled and the unlabeled samples. This paper formulates the problem of Bi-directional Semi-Supervised Domain Adaptation (BiSSDA) and proposes a method of Gradient discrepancy minimization and labeled Class Centroid Alignment (GCCA) to address it. In GCCA, labeled and unlabeled samples from both domains are passed through a generator G and two classifiers F1 and F2; the generator G is trained adversarially against F1 and F2, and the two domains are aligned via gradient and class centroid alignment. Extensive experiments on three widely used datasets demonstrate that GCCA significantly outperforms CGDM and several previous SSDA methods in exploiting the labeled and unlabeled samples in both domains, and that cooperation between the two domains significantly reduces the reliance on labeled data in bi-directional domain adaptation. The code of the proposed method is available at gitee.com/ymw12345/gcca.
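The abstract names two alignment ingredients: an adversarial discrepancy between the two classifiers' predictions (in the spirit of MCD/CGDM) and alignment of per-class feature centroids across domains. The sketch below illustrates plausible forms of these two loss terms in plain NumPy; the function names and exact formulations are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def class_centroid_alignment(feat_a, labels_a, feat_b, labels_b, num_classes):
    """Mean squared distance between per-class feature centroids of two domains.
    A hypothetical form of the centroid-alignment term; the paper's exact loss
    may differ (e.g., use pseudo-labels for unlabeled samples or moving averages).
    """
    loss, used = 0.0, 0
    for c in range(num_classes):
        a = feat_a[labels_a == c]
        b = feat_b[labels_b == c]
        if len(a) == 0 or len(b) == 0:
            continue  # skip classes absent from either mini-batch
        loss += np.sum((a.mean(axis=0) - b.mean(axis=0)) ** 2)
        used += 1
    return loss / max(used, 1)

def classifier_discrepancy(p1, p2):
    """Mean absolute difference between the softmax outputs of the two
    classifiers F1 and F2, the quantity maximized by the classifiers and
    minimized by the generator in MCD-style adversarial training."""
    return np.mean(np.abs(p1 - p2))
```

In an MCD-style loop, the classifiers would be updated to maximize `classifier_discrepancy` on unlabeled samples while the generator minimizes it together with the centroid term, pulling the two domains' class-conditional features together.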


Keywords

bi-directional transfer learning / clustering alignment / domain adaptation / semi-supervised learning

Cite this article

Yimin WEN, Jiazhen TANG, Hang YU, Chuanbo QIN, Chuangquan CHEN. Bi-directional semi-supervised domain adaptation via gradient and class centroid alignment. Front. Comput. Sci., 2027, 21(1): 2101309. DOI: 10.1007/s11704-025-50919-7


References

[1] Liu S, Lv J, Kang J, Zhang H, Liang Z, He S. MODfinity: unsupervised domain adaptation with multimodal information flow intertwining. In: Proceedings of 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2025, 5092−5101

[2] Zhu Y, Wang S, Li Y, Yuan Y, Qiang J. Soft prompt-tuning for unsupervised domain adaptation via self-supervision. Neurocomputing, 2025, 617: 129008

[3] Xu C, Song Y, Zheng Q, Wang Q, Heng P A. Unsupervised multi-source domain adaptation via contrastive learning for EEG classification. Expert Systems with Applications, 2025, 216: 125452

[4] Zhang Y, Chen S, Jiang W, Zhang Y, Lu J, Kwok J T. Domain-guided conditional diffusion model for unsupervised domain adaptation. Neural Networks, 2025, 184: 107031

[5] Chen Y, Li J, Yu H, Qi L, Li Y. Source-free unsupervised domain adaptation fundus image segmentation via entropy optimization and anatomical priors. Procedia Computer Science, 2024, 250: 182–187

[6] Singhal P, Walambe R, Ramanna S, Kotecha K. Domain adaptation: challenges, methods, datasets, and applications. IEEE Access, 2023, 11: 6973–7020

[7] Gan K, Ye B, Zhang M L, Wei T. Semi-supervised CLIP adaptation by enforcing semantic and trapezoidal consistency. In: Proceedings of the 13th International Conference on Learning Representations. 2025, 12142−12163

[8] Ma L, Ding Y X, Zhao P, Zhou Z H. Learning objective adaptation by correlation-based model reuse. IEEE Transactions on Neural Networks and Learning Systems, 2025, 36(8): 14440–14451

[9] Pan S J, Yang Q. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345–1359

[10] Zhuang F, Qi Z, Duan K, Xi D, Zhu Y, Zhu H, Xiong H, He Q. A comprehensive survey on transfer learning. Proceedings of the IEEE, 2021, 109(1): 43–76

[11] Goodman S, Greenspan H, Goldberger J. Supervised domain adaptation by transferring both the parameter set and its gradient. Neurocomputing, 2023, 560: 126828

[12] Rawat A, Dua I, Gupta S, Tallamraju R. Semi-supervised domain adaptation by similarity based pseudo-label injection. In: Karlinsky L, Michaeli T, Nishino K, eds. Computer Vision–ECCV 2022 Workshops. Cham: Springer, 2022, 150−166

[13] Shu Y, Cao Z, Long M, Wang J. Transferable curriculum for weakly-supervised domain adaptation. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence. 2019, 4951−4958

[14] Liu X, Yoo C, Xing F, Oh H, El Fakhri G, Kang J W, Woo J. Deep unsupervised domain adaptation: a review of recent advances and perspectives. APSIPA Transactions on Signal and Information Processing, 2022, 11(1): e25

[15] Li S Y, Zhao S J, Cao Z T, Huang S J, Chen S. Robust domain adaptation with noisy and shifted label distribution. Frontiers of Computer Science, 2025, 19(3): 193310

[16] Li J, Li G, Yu Y. Inter-domain mixup for semi-supervised domain adaptation. Pattern Recognition, 2024, 146: 110023

[17] Tu K, Wang Z, Li J, Zhang Y. Semi-supervised domain adaptation via joint contrastive learning with sensitivity. In: Proceedings of the 31st ACM International Conference on Multimedia. 2023, 5645−5654

[18] Saito K, Watanabe K, Ushiku Y, Harada T. Maximum classifier discrepancy for unsupervised domain adaptation. In: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018, 3723−3732

[19] Du Z, Li J, Su H, Zhu L, Lu K. Cross-domain gradient discrepancy minimization for unsupervised domain adaptation. In: Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, 3936−3945

[20] Feng Q, Kang G, Fan H, Yang Y. Attract or distract: exploit the margin of open set. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 7989−7998

[21] Ren C X, Zhai Y, Luo Y W, Yan H. Towards unsupervised domain adaptation via domain-transformer. International Journal of Computer Vision, 2024, 132(12): 6163–6183

[22] Ganin Y, Lempitsky V. Unsupervised domain adaptation by backpropagation. In: Proceedings of the 32nd International Conference on International Conference on Machine Learning. 2015, 1180−1189

[23] Saito K, Kim D, Sclaroff S, Darrell T, Saenko K. Semi-supervised domain adaptation via minimax entropy. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 8049−8057

[24] Jiang P, Wu A, Han Y, Shao Y, Qi M, Li B. Bidirectional adversarial training for semi-supervised domain adaptation. In: Proceedings of the 29th International Joint Conference on Artificial Intelligence. 2020, 934−940

[25] Yu Y C, Lin H T. Semi-supervised domain adaptation with source label adaptation. In: Proceedings of 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023, 24100−24109

[26] Malhotra S, Kumar V, Agarwal A. Bidirectional transfer learning model for sentiment analysis of natural language. Journal of Ambient Intelligence and Humanized Computing, 2021, 12(11): 10267–10287

[27] McKay H, Griffiths N, Taylor P, Damoulas T, Xu Z. Bi-directional online transfer learning: a framework. Annals of Telecommunications, 2020, 75(9): 523–547

[28] Cheng C W, Qiao X, Cheng G. Mutual transfer learning for massive data. In: Proceedings of the 37th International Conference on Machine Learning. 2020, 168

[29] Wulfmeier M, Posner I, Abbeel P. Mutual alignment transfer learning. In: Proceedings of the 1st Annual Conference on Robot Learning. 2017, 281−290

[30] Jia L H, Guo L Z, Zhou Z, Shao J J, Xiang Y K, Li Y F. Bidirectional adaptation for robust semi-supervised learning with inconsistent data distributions. In: Proceedings of the 40th International Conference on Machine Learning. 2023, 607

[31] Saito K, Ushiku Y, Harada T, Saenko K. Adversarial dropout regularization. In: Proceedings of the 6th International Conference on Learning Representations. 2018, 1−15

[32] Lee C Y, Batra T, Baig M H, Ulbricht D. Sliced wasserstein discrepancy for unsupervised domain adaptation. In: Proceedings of 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019, 10277−10287

[33] Peng X, Usman B, Kaushik N, Wang D, Hoffman J, Saenko K. VisDA: a synthetic-to-real benchmark for visual domain adaptation. In: Proceedings of 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). 2018, 2102−21025

[34] Lin T Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick C L. Microsoft COCO: common objects in context. In: Proceedings of the 13th European Conference on Computer Vision. 2014, 740−755

[35] Venkateswara H, Eusebio J, Chakraborty S, Panchanathan S. Deep hashing network for unsupervised domain adaptation. In: Proceedings of 2017 IEEE Conference on Computer Vision and Pattern Recognition. 2017, 5385−5394

[36] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In: Proceedings of the 11th European Conference on Computer Vision. 2010, 213−226

[37] Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L, Desmaison A, Köpf A, Yang E, DeVito Z, Raison M, Tejani A, Chilamkurthy S, Steiner B, Fang L, Bai J, Chintala S. PyTorch: an imperative style, high-performance deep learning library. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. 2019, 721

[38] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016, 770−778

[39] Deng J, Dong W, Socher R, Li L J, Li K, Fei-Fei L. ImageNet: a large-scale hierarchical image database. In: Proceedings of 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009, 248−255

[40] Long M, Cao Z, Wang J, Jordan M I. Conditional adversarial domain adaptation. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018, 1647−1657

[41] Xu R, Li G, Yang J, Lin L. Larger norm more transferable: an adaptive feature norm approach for unsupervised domain adaptation. In: Proceedings of 2019 IEEE/CVF International Conference on Computer Vision. 2019, 1426−1435

[42] Li J, Li G, Shi Y, Yu Y. Cross-domain adaptive clustering for semi-supervised domain adaptation. In: Proceedings of 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021, 2505−2514

[43] van der Maaten L, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research, 2008, 9: 2579–2605

RIGHTS & PERMISSIONS

Higher Education Press
