Association rule mining for aircraft assembly process information based on fine-tuned LLM
Jiaji Shen , Weidong Zhao , Xianhui Liu , Ning Jia , Yingyao Zhang
Autonomous Intelligent Systems, 2026, Vol. 6, Issue 1: 2
Aircraft final assembly is a complex system that involves numerous interdependent, multidimensional production factors, and these interdependencies significantly affect the efficiency of the final assembly process. To investigate the interrelationships among production factors, this study introduces a large language model fine-tuned for aircraft final assembly, termed Aircraft Final Assembly ChatGLM (AFA-ChatGLM). The model automatically extracts essential information about key production factors from process documentation. The FP-Growth algorithm is then employed to uncover association rules between these production factors and the stages of final assembly. Experimental results show that the method performs strongly in the aircraft final assembly domain: on the assembly process documents of the C919 large passenger aircraft, the proposed model achieved a precision of 82.7%, a recall of 89.1%, and an F1 score of 85.4%, a substantial improvement over traditional word segmentation methods. Leveraging this extraction performance, association rule mining was used to construct 44,851 high-confidence association rules for the C919 final assembly line, laying a foundation for subsequent optimization of the production line.
Aircraft final assembly / Production factors / Large language model / Association rule mining
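The second stage of the pipeline described in the abstract can be illustrated with a short, hypothetical sketch: once AFA-ChatGLM has extracted the production factors present in each process step, the resulting factor sets can be mined for association rules with FP-Growth. The sketch below uses the open-source mlxtend library; the sample transactions, item names, and thresholds are invented for illustration and are not the paper's data or implementation.

```python
# Hypothetical sketch of FP-Growth association rule mining over extracted
# production factors; sample records and thresholds are illustrative only.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth, association_rules

# Each transaction lists the production factors (tools, materials, ...) that
# co-occur in one process step; the assembly stage is included as an item so
# that rules linking factors to stages can be discovered.
transactions = [
    ["stage:wing-body_join", "tool:torque_wrench", "material:Ti_fastener"],
    ["stage:wing-body_join", "tool:torque_wrench", "material:sealant"],
    ["stage:system_installation", "tool:crimping_tool", "material:wire_harness"],
]

# One-hot encode the transactions for mlxtend's frequent-pattern routines.
encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)

# Mine frequent itemsets with FP-Growth, then keep high-confidence rules.
itemsets = fpgrowth(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

Rules whose consequent is a stage item then indicate which combinations of factors are strongly associated with a given assembly stage, which is the kind of relationship the paper aggregates into its rule base.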