MiMu: mitigating multiple shortcut learning behavior of transformers
Lili ZHAO, Qi LIU, Wei CHEN, Liyi CHEN, Ruijun SUN, Min HOU, Yang WANG, Shijin WANG, Pingping REN, Jiafeng ZHOU
Front. Comput. Sci., 2025, Vol. 19, Issue 12: 1912380
Empirical Risk Minimization (ERM) models often rely on spurious correlations between features and labels during learning, leading to shortcut learning behavior that undermines robustness and generalization performance. Existing research mainly targets identifying or mitigating a single shortcut; in real-world scenarios, however, the cues within data are diverse and unknown. In empirical studies, we reveal that models rely more heavily on strong shortcuts than on weak ones, and that their performance under multiple shortcuts typically falls between the performance observed under each individual shortcut alone. To address these challenges, we propose MiMu, a novel method integrated with Transformer-based ERMs and designed to Mitigate Multiple shortcut learning behaviors, which incorporates a self-calibration strategy and a self-improvement strategy. In the source model, we first apply the self-calibration strategy to prevent the model from relying on shortcuts and making overconfident predictions. We then design the self-improvement strategy in the target model to further reduce reliance on multiple shortcuts. The random mask strategy randomly masks a portion of attention positions to diversify the focus of the target model and avoid fixation on a fixed region. Meanwhile, the adaptive attention alignment module aligns the target model's attention weights with those of the calibrated source model, without requiring post-hoc attention maps or additional supervision. Finally, extensive experiments on Natural Language Processing (NLP) and Computer Vision (CV) tasks demonstrate the effectiveness of MiMu in improving robustness and generalization abilities.
shortcut learning / robustness / generalizability
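The abstract describes the random mask strategy and the attention alignment module only at a high level. As a minimal illustrative sketch, assuming attention tensors of shape [batch, heads, seq, seq], a hypothetical masking ratio mask_ratio, and a KL-divergence alignment loss (the function names and loss choice are illustrative assumptions, not the authors' implementation), the two ideas can be pictured roughly as follows.

# Illustrative sketch only; mask_ratio, the KL alignment loss, and the
# function names are assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def random_attention_mask(scores: torch.Tensor, mask_ratio: float = 0.1) -> torch.Tensor:
    """Randomly mask a fraction of attention positions so the target model
    does not fixate on a fixed region; masked positions receive -inf before
    the softmax."""
    drop = torch.rand_like(scores) < mask_ratio
    return scores.masked_fill(drop, float("-inf"))

def attention_alignment_loss(target_attn: torch.Tensor, source_attn: torch.Tensor) -> torch.Tensor:
    """Pull the target model's attention distribution toward the calibrated
    source model's attention (KL divergence is an assumed distance here)."""
    return F.kl_div(target_attn.clamp_min(1e-12).log(), source_attn, reduction="batchmean")

# Possible use inside a training step (sketch):
#   scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
#   attn = torch.softmax(random_attention_mask(scores), dim=-1)
#   loss = task_loss + alignment_weight * attention_alignment_loss(attn, source_attn)

In this sketch the alignment term is simply added to the task loss with a weighting coefficient; the paper's actual adaptive weighting and calibration details are not specified in the abstract.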