Self-corrected unsupervised domain adaptation

Yunyun WANG, Chao WANG, Hui XUE, Songcan CHEN

Front. Comput. Sci., 2022, Vol. 16, Issue 5: 165323. DOI: 10.1007/s11704-021-1010-8
Artificial Intelligence
RESEARCH ARTICLE


Abstract

Unsupervised domain adaptation (UDA), which aims to use knowledge from a label-rich source domain to help learn an unlabeled target domain, has recently attracted much attention. Existing UDA methods mainly concentrate on source classification and distribution alignment between domains, expecting that correct target predictions will follow. In this paper, we instead attempt to learn the target predictions directly, end to end, and develop a Self-corrected unsupervised domain adaptation (SCUDA) method with probabilistic label correction. SCUDA adopts a probabilistic label corrector to learn and correct the target labels directly. Specifically, besides the model parameters, the target pseudo-labels are also updated during learning and are corrected by an anchor variable, which preserves the candidate classes for each sample. Experiments on real datasets show the competitiveness of SCUDA.
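The abstract's core idea — treating target pseudo-labels as learnable parameters and filtering them through an anchor variable that preserves each sample's candidate classes — can be sketched as follows. This is a minimal illustrative sketch with toy data, not the authors' implementation; all names (`anchor`, `label_logits`, `correct`) and the top-k candidate rule are assumptions for exposition.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, c, k = 6, 4, 2  # toy sizes: samples, classes, candidates kept

# Noisy initial target predictions, e.g. from a source-trained model.
probs = softmax(rng.normal(size=(n, c)))

# Anchor variable: a {0,1} mask preserving the top-k candidate classes
# per sample, so later corrections cannot drift to implausible classes.
anchor = np.zeros((n, c))
np.put_along_axis(anchor, np.argsort(probs, axis=1)[:, -k:], 1.0, axis=1)

# Pseudo-label logits are free parameters, updated during training
# alongside the network weights (the joint update is omitted here).
label_logits = np.log(probs + 1e-8)

def correct(logits, anchor):
    """Project label distributions onto the anchor's candidate classes."""
    p = softmax(logits) * anchor
    return p / p.sum(axis=1, keepdims=True)

pseudo = correct(label_logits, anchor)
```

In this sketch the corrected `pseudo` rows remain valid probability distributions, with zero mass outside the anchor's candidate set; in SCUDA the label logits and model parameters would be optimized jointly, end to end.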

Keywords

unsupervised domain adaptation / adversarial learning / deep neural network / pseudo-labels / label corrector

Cite this article

Yunyun WANG, Chao WANG, Hui XUE, Songcan CHEN. Self-corrected unsupervised domain adaptation. Front. Comput. Sci., 2022, 16(5): 165323 https://doi.org/10.1007/s11704-021-1010-8


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61876091, 61772284), the China Postdoctoral Science Foundation (2019M651918), and the Open Foundation of MIIT Key Laboratory of Pattern Analysis and Machine Intelligence.

RIGHTS & PERMISSIONS

2022 Higher Education Press