DC-LoRA: Domain correlation low-rank adaptation for domain incremental learning

Lin Li, Shiye Wang, Changsheng Li, Ye Yuan, Guoren Wang

High-Confidence Computing ›› 2025, Vol. 5 ›› Issue (4): 100270
DOI: 10.1016/j.hcc.2024.100270

Research Article

Abstract

Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks suffer from a phenomenon known as catastrophic forgetting, wherein a network loses knowledge acquired on previous tasks when it is trained on new ones. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling catastrophic forgetting. However, within the realm of domain incremental learning, a typical form of continual learning, there exists an additional, overlooked inductive bias that warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation (DC-LoRA) for domain incremental learning. Our approach puts forward a domain-correlated loss, which encourages the weights of the LoRA modules for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on a publicly available domain incremental learning benchmark dataset. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
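
The abstract does not give the exact form of the domain-correlated loss, but the underlying idea, pulling the LoRA weights of adjacent task domains toward each other, can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch rather than the paper's implementation: the LoRALinear class, the domain_correlation_loss function, and the 0.1 weighting coefficient are all assumptions for illustration, and the classifier consolidation step is not shown.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Frozen base linear layer with a trainable low-rank update B @ A.
        def __init__(self, in_dim, out_dim, rank=4):
            super().__init__()
            self.base = nn.Linear(in_dim, out_dim, bias=False)
            self.base.weight.requires_grad_(False)  # backbone weights stay frozen
            self.lora_A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(out_dim, rank))

        def forward(self, x):
            return self.base(x) + x @ (self.lora_B @ self.lora_A).T

    def domain_correlation_loss(current, previous):
        # Penalize divergence between the low-rank updates of adjacent task domains.
        delta_cur = current.lora_B @ current.lora_A
        delta_prev = (previous.lora_B @ previous.lora_A).detach()  # previous domain, kept fixed
        return torch.norm(delta_cur - delta_prev, p="fro") ** 2

    # Hypothetical usage when adapting to domain t, given the adapter from domain t-1.
    layer_prev = LoRALinear(128, 128)   # adapter trained on domain t-1 (frozen reference)
    layer_cur = LoRALinear(128, 128)    # adapter being trained on domain t
    x, y = torch.randn(8, 128), torch.randn(8, 128)
    task_loss = nn.functional.mse_loss(layer_cur(x), y)
    loss = task_loss + 0.1 * domain_correlation_loss(layer_cur, layer_prev)  # 0.1 is an assumed weight
    loss.backward()

In this sketch, only the current adapter receives gradients; the regularizer simply discourages its low-rank update from drifting far from the one learned on the adjacent domain, which is the correlation the abstract describes.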

Keywords

Continual learning / Domain incremental learning / Parameter-efficient fine-tuning / Domain correlation

Cite this article

Lin Li, Shiye Wang, Changsheng Li, Ye Yuan, Guoren Wang. DC-LoRA: Domain correlation low-rank adaptation for domain incremental learning. High-Confidence Computing, 2025, 5(4): 100270. DOI: 10.1016/j.hcc.2024.100270

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported by the NSFC (62122013, U2001211). This work was also supported by the Innovative Development Joint Fund Key Projects of Shandong NSF (ZR2022LZH007).

