Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation

Dacao ZHANG, Fan YANG, Kun ZHANG, Xin LI, Si WEI, Richang HONG, Meng WANG

Front. Comput. Sci. ›› 2025, Vol. 19 ›› Issue (5): 195337. DOI: 10.1007/s11704-024-40317-w
Artificial Intelligence

Cite this article

Dacao ZHANG, Fan YANG, Kun ZHANG, Xin LI, Si WEI, Richang HONG, Meng WANG. Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation. Front. Comput. Sci., 2025, 19(5): 195337. https://doi.org/10.1007/s11704-024-40317-w
This is a preview of subscription content; contact us for subscription access.
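
As background for the title: LoRA freezes a pretrained weight matrix W and trains only a low-rank update ΔW = BA (rank r much smaller than the weight dimensions), so the adapted layer computes Wx + (α/r)BAx; adaptive rank allocation additionally varies r across weight matrices rather than fixing it globally. The sketch below illustrates only this generic LoRA reparameterization; the class name, initialization scale, and default hyperparameters are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update (generic LoRA sketch)."""
    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Stand-in for the pretrained weight W; frozen during fine-tuning.
        self.weight = nn.Parameter(torch.empty(out_features, in_features),
                                   requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        # Low-rank factors: ΔW = B @ A with rank r. A gets a small random
        # init and B starts at zero, so at step 0 the adapted layer is
        # exactly the frozen layer.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

# Usage: only lora_A and lora_B receive gradients during fine-tuning.
layer = LoRALinear(768, 768, r=4)
out = layer(torch.randn(2, 768))
```

Initializing B to zero keeps the adapted layer identical to the frozen one at the start of fine-tuning, which is the standard LoRA choice; rank-adaptive variants then grow or prune r per layer during training.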

Acknowledgements

This research was partially supported by the National Science and Technology Major Project (Grant No. 2023ZD0121103) and the National Natural Science Foundation of China (Grant Nos. 62376086 and U23B2031).

Competing interests

The authors declare that they have no competing interests or financial conflicts to disclose.

Rights and permissions

© 2025 Higher Education Press

Supplementary files

Highlights (297 KB)

Part of a collection:

Excellent Young Computer Scientists Vision on Foundation Models
