SPA++: generalized graph spectral alignment for versatile domain adaptation

Zhi-Qing XIAO , Hao-Bo WANG , Xu LU , Wen-Tao YE , Gang CHEN , Jun-Bo ZHAO

Front. Comput. Sci. ›› 2027, Vol. 21 ›› Issue (2) : 2102317

DOI: 10.1007/s11704-025-50328-w

Artificial Intelligence
RESEARCH ARTICLE

Abstract

Domain Adaptation (DA) aims to transfer knowledge from a labeled source domain to an unlabeled or sparsely labeled target domain under domain shift. Most prior works focus on capturing inter-domain transferability but largely overlook rich intra-domain structures, which empirically degrades discriminability. To tackle this tradeoff, we propose a generalized graph SPectral Alignment framework, SPA++. Its core is condensed as follows: (1) casting the DA problem into graph primitives, it composes a coarse graph alignment mechanism with a novel spectral regularizer that aligns the domain graphs in eigenspaces; (2) we further develop a fine-grained, neighbor-aware propagation mechanism for enhanced discriminability in the target domain; (3) by incorporating data augmentation and consistency regularization, SPA++ adapts to complex scenarios, including most DA settings and even challenging distribution shifts. We also provide theoretical analysis supporting our method, including a generalization bound for graph-based DA and the roles of spectral alignment and smoothing consistency. Extensive experiments on benchmark datasets demonstrate that SPA++ consistently outperforms existing cutting-edge methods, achieving superior robustness and adaptability across various challenging adaptation scenarios.
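To make the spectral-alignment idea in (1) concrete, the following is a minimal, hedged sketch, not the authors' implementation: each domain's features are turned into a k-NN affinity graph, and the gap between the leading eigenvalues of the two graph Laplacians serves as an alignment penalty. All function names, the cosine-similarity k-NN construction, and the choice of unnormalized Laplacian are illustrative assumptions.

```python
import numpy as np

def knn_graph(features, k=5):
    """Build a symmetric k-NN affinity graph from row-wise feature vectors.
    (Illustrative choice; any reasonable graph construction could stand in.)"""
    n = features.shape[0]
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    sim = (features / norms) @ (features / norms).T  # cosine similarity
    np.fill_diagonal(sim, -np.inf)  # exclude self-loops when picking neighbors
    adj = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sim[i])[-k:]  # k most similar nodes
        adj[i, nbrs] = 1.0
    return np.maximum(adj, adj.T)  # symmetrize

def laplacian_spectrum(adj):
    """Eigenvalues of the unnormalized graph Laplacian L = D - A, ascending."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))

def spectral_alignment_loss(src_feats, tgt_feats, k=5, top=10):
    """Penalize the gap between the leading Laplacian eigenvalues of the
    source-domain and target-domain graphs (a stand-in for aligning the
    domain graphs in eigenspaces)."""
    ev_s = laplacian_spectrum(knn_graph(src_feats, k))[:top]
    ev_t = laplacian_spectrum(knn_graph(tgt_feats, k))[:top]
    return float(np.mean((ev_s - ev_t) ** 2))
```

In training, such a penalty would be added to the task loss so that the two domain graphs develop similar spectral structure; the paper's actual regularizer operates on learned representations end to end rather than on fixed feature matrices as above.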

Keywords

domain adaptation / graph alignment / transfer learning

Cite this article

Zhi-Qing XIAO, Hao-Bo WANG, Xu LU, Wen-Tao YE, Gang CHEN, Jun-Bo ZHAO. SPA++: generalized graph spectral alignment for versatile domain adaptation. Front. Comput. Sci., 2027, 21(2): 2102317. DOI: 10.1007/s11704-025-50328-w



RIGHTS & PERMISSIONS

Higher Education Press
