Can large language models effectively process and execute financial trading instructions?
Yu KANG, Xin YANG, Ge WANG, Yuda WANG, Zhanyu WANG, Mingwen LIU
Front. Inform. Technol. Electron. Eng., 2025, Vol. 26, Issue 10: 1832-1846.
The development of large language models (LLMs) has created transformative opportunities for the financial industry, especially in financial trading. However, integrating LLMs with trading systems remains a challenge. To address this problem, we propose an intelligent trade order recognition pipeline that converts trade orders into a standard format for trade execution. The system improves the ability of human traders to interact with trading platforms while mitigating the acquisition of misinformation during trade execution. In addition, we construct a trade order dataset of 500 samples to simulate real-world trading scenarios. Moreover, we design several metrics to provide a comprehensive assessment of dataset reliability and of the generative capability of large models in finance, evaluating five state-of-the-art LLMs on our dataset. The results show that most models generate syntactically valid JavaScript object notation (JSON) at high rates (about 80%-99%) and initiate clarifying questions in nearly all incomplete cases (about 90%-100%). However, end-to-end accuracy remains low (about 6%-14%), and missing information is substantial (about 12%-66%). Models also tend to over-interrogate: roughly 70%-80% of follow-up questions are unnecessary, raising interaction costs and potential information-exposure risk. The research also demonstrates the feasibility of integrating our pipeline with real-world trading systems, paving the way for practical deployment of LLM-based trade automation solutions.
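The abstract does not specify the exact output schema or metric definitions. The following minimal Python sketch illustrates, under assumed field names, what a standardized JSON trade order might look like and how a syntactic-validity metric of the kind reported (about 80%-99%) could be computed; it is an illustrative assumption, not the paper's implementation.

```python
import json

# Hypothetical schema only: field names are assumptions, not the paper's published format.
example_order = {
    "ticker": "AAPL",       # instrument identifier (assumed field name)
    "side": "buy",          # "buy" or "sell"
    "quantity": 100,        # number of shares
    "order_type": "limit",  # e.g., "market" or "limit"
    "limit_price": 189.50,  # required only for limit orders
}

def is_valid_json(model_output: str) -> bool:
    """Return True if the model's raw output parses as JSON,
    mirroring a syntactic-validity check of the kind the paper reports."""
    try:
        json.loads(model_output)
        return True
    except json.JSONDecodeError:
        return False

# Usage: compute a JSON-validity rate over a batch of model outputs.
outputs = [json.dumps(example_order), "{not valid json"]
validity_rate = sum(is_valid_json(o) for o in outputs) / len(outputs)
print(f"JSON validity rate: {validity_rate:.0%}")  # -> 50%
```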
Large language model / Financial instruction / Evaluation / Dataset construction
Zhejiang University Press