Intelligent Decision-Making Driven by Large AI Models: Progress, Challenges and Prospects

You He, Shulan Ruan, Dong Wang, Huchuan Lu, Zhi Li, Yang Liu, Xu Chen, Shaohui Li, Jie Zhao, Jiaxuan Liang

CAAI Transactions on Intelligence Technology ›› 2025, Vol. 10 ›› Issue (6): 1573-1592. DOI: 10.1049/cit2.70084

REVIEW

Abstract

With the rapid development of large AI models, large decision models have further broken through the limits of human cognition and promoted the innovation of decision-making paradigms in fields as broad as medicine and transportation. In this paper, we systematically survey intelligent decision-making technology driven by large AI models and its prospects. Specifically, we first review the development of large AI models in recent years. Then, from the perspective of methods, we introduce the key theories and technologies of large decision models, such as model architecture and model adaptation. Next, from the perspective of applications, we present cutting-edge applications of large decision models in various fields, such as autonomous driving and knowledge decision-making. Finally, we discuss existing challenges, including security issues, decision bias and hallucination, as well as future prospects, from the viewpoints of both technology development and domain applications. We hope this review can help researchers understand the important progress of intelligent decision-making driven by large AI models.
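
To ground the "model adaptation" methods the survey covers, the Python sketch below illustrates parameter-efficient fine-tuning in the style of low-rank adaptation (LoRA): the pretrained weight is frozen and only a small low-rank update is trained. This is a minimal sketch under illustrative assumptions; the class name, rank r and scaling alpha are not taken from the paper.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen pretrained linear layer plus a trainable low-rank update:
        y = W x + (alpha / r) * B A x  (illustrative LoRA-style adaptation)."""

        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():  # freeze the pretrained weights
                p.requires_grad = False
            # Low-rank factors: A projects down to rank r, B projects back up.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no change at start
            self.scale = alpha / r

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

    # Only the low-rank factors are trainable, so adapting a large model to a
    # new decision-making domain touches a tiny fraction of its parameters.
    layer = LoRALinear(nn.Linear(768, 768), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 2 * 8 * 768 = 12288, versus ~590k frozen parameters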

Keywords

artificial intelligence / intelligent decision-making / large AI model / large decision model

Cite this article

You He, Shulan Ruan, Dong Wang, Huchuan Lu, Zhi Li, Yang Liu, Xu Chen, Shaohui Li, Jie Zhao, Jiaxuan Liang. Intelligent Decision-Making Driven by Large AI Models: Progress, Challenges and Prospects. CAAI Transactions on Intelligence Technology, 2025, 10(6): 1573-1592. DOI: 10.1049/cit2.70084
