Automatic analysis of alarm embedded with large language model in police robot

Zirui Liu, Haichun Sun, Deyu Yuan

Biomimetic Intelligence and Robotics ›› 2025, Vol. 5 ›› Issue (3): 100220. DOI: 10.1016/j.birob.2025.100220

Research Article

Abstract

Police robots assist police officers in performing tasks in complex environments, improving the efficiency of law enforcement, protecting the safety of officers, and helping to maintain social stability. With the rapid development of science and technology, police robots are widely used in the field of public security, for tasks such as alarm reception, patrol, explosive disposal, and reconnaissance. However, police robots still suffer from analysis deviation when receiving alarms, which lowers the efficiency of police dispatch. This study aims to enhance the automatic alarm-analysis capability of police robots so that they can better assist in dispatching police. We propose FSTC-LLM, a novel sample-augmentation method based on a large language model and noise reduction. Experimental evaluations are carried out on an alarm dataset and the THUCNews dataset. The results show that FSTC-LLM performs well on few-shot text augmentation tasks and can help police robots complete automatic alarm analysis with high quality, which is of great significance for enhancing public security.
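
The abstract outlines a two-stage pipeline: LLM-based sample augmentation followed by noise reduction. As a minimal sketch of that idea (not the paper's actual FSTC-LLM implementation), assuming a generic text-generation client and a reference classifier, the two stages might look like the following; `query_llm`, `classifier_proba`, the prompt wording, and the filtering threshold are all hypothetical placeholders.

```python
# Hypothetical sketch of LLM-based sample augmentation with noise reduction.
# FSTC-LLM's actual prompts, models, and filtering strategy may differ.

def augment_samples(seed_texts, label, query_llm, n_variants=5):
    """Ask an LLM to paraphrase each seed alarm record, keeping its label."""
    augmented = []
    for text in seed_texts:
        prompt = (
            f"Rewrite the following alarm record in {n_variants} different ways, "
            f"one per line, preserving the incident type '{label}':\n{text}"
        )
        # query_llm is a placeholder for any chat/completion client.
        for line in query_llm(prompt).splitlines():
            line = line.strip()
            if line:
                augmented.append((line, label))
    return augmented


def filter_noisy(samples, classifier_proba, threshold=0.5):
    """Keep only generated samples whose label a reference classifier
    supports with sufficient probability (a simple noise-reduction proxy)."""
    return [
        (text, label)
        for text, label in samples
        if classifier_proba(text).get(label, 0.0) >= threshold
    ]
```

In this sketch, augmentation expands a handful of labeled alarm records into paraphrases, and filtering discards any paraphrase whose label a held-out classifier does not support, one common way to suppress label noise in generated training data.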

Keywords

Police robot / Large language models / Few-shot learning

Cite this article

Zirui Liu, Haichun Sun, Deyu Yuan. Automatic analysis of alarm embedded with large language model in police robot. Biomimetic Intelligence and Robotics, 2025, 5(3): 100220. DOI: 10.1016/j.birob.2025.100220

CRediT authorship contribution statement

Zirui Liu: Writing – original draft, Validation, Supervision, Methodology, Investigation, Conceptualization. Haichun Sun: Writing – review & editing, Visualization, Funding acquisition. Deyu Yuan: Formal analysis, Data curation.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was funded by basic research funds of the People’s Public Security University of China (2024JKF02) and by the Key Project of the Ministry of Public Security Technology Research Program (2024JSZ01).
