
Optimizing low-rank adaptation with decomposed matrices and adaptive rank allocation
Dacao ZHANG, Fan YANG, Kun ZHANG, Xin LI, Si WEI, Richang HONG, Meng WANG
Front. Comput. Sci., 2025, 19(5): 195337.
Part of a collection: Excellent Young Computer Scientists Vision on Foundation Models