A survey on deep learning-based algorithms for the traveling salesman problem

Jingyan SUI, Shizhe DING, Xulin HUANG, Yue YU, Ruizhi LIU, Boyang XIA, Zhenxin DING, Liming XU, Haicang ZHANG, Chungong YU, Dongbo BU

Front. Comput. Sci., 2025, Vol. 19, Issue 6: 196322. DOI: 10.1007/s11704-024-40490-y
Artificial Intelligence
REVIEW ARTICLE



Abstract

This paper presents an overview of deep learning (DL)-based algorithms for solving the traveling salesman problem (TSP), grouping them into four categories: end-to-end construction algorithms, end-to-end improvement algorithms, direct hybrid algorithms, and large language model (LLM)-based hybrid algorithms. We introduce the principles and methodologies of these algorithms and outline their strengths and limitations through experimental comparisons. End-to-end construction algorithms employ neural networks to generate solutions from scratch, offering rapid solving speed but often yielding subpar solutions. In contrast, end-to-end improvement algorithms iteratively refine initial solutions, achieving higher-quality outcomes at the cost of longer computation times. Direct hybrid algorithms integrate deep learning directly with heuristic algorithms, showing robust solving performance and generalization capability. LLM-based hybrid algorithms leverage LLMs to autonomously generate and refine heuristics, showing promising performance despite being at an early stage of development. Looking ahead, tighter integration of deep learning techniques, particularly LLMs, with heuristic algorithms, together with advances in interpretability and generalization, will be pivotal trends in TSP algorithm design, aimed at tackling larger and more complex real-world instances while enhancing algorithm reliability and practicality. This paper offers insights into the evolving landscape of DL-based TSP solving algorithms and provides a perspective on future research directions.
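
To make the distinction between the first two categories concrete, the following Python sketch contrasts a construction-style solver, which builds a tour from scratch, with an improvement-style solver, which iteratively refines an existing tour. It is an illustrative toy only: classical nearest-neighbor construction and 2-opt moves stand in for the learned neural policies surveyed in the paper, and the function names (construct_nearest_neighbor, improve_two_opt) are ours, not taken from any of the reviewed algorithms.

```python
# Illustrative sketch (not from the survey): a construction-style solver builds a
# tour city by city, while an improvement-style solver refines an existing tour.
# Classical heuristics stand in here for the learned policies the survey reviews.
import math
import random

def tour_length(cities, tour):
    """Total length of a closed tour over 2D city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def construct_nearest_neighbor(cities, start=0):
    """Construction: grow a tour one city at a time; a neural construction model
    would replace the 'pick the nearest unvisited city' rule with a learned policy."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def improve_two_opt(cities, tour):
    """Improvement: repeatedly reverse a segment (2-opt move) while it shortens the
    tour; a neural improvement model would instead learn which move to apply."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cities, candidate) < tour_length(cities, tour):
                    tour, improved = candidate, True
    return tour

if __name__ == "__main__":
    random.seed(0)
    cities = [(random.random(), random.random()) for _ in range(30)]
    t0 = construct_nearest_neighbor(cities)   # fast, but typically suboptimal
    t1 = improve_two_opt(cities, t0)          # slower, but higher-quality tour
    print(f"constructed: {tour_length(cities, t0):.3f}  improved: {tour_length(cities, t1):.3f}")
```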

Keywords

traveling salesman problem / algorithm design / deep learning / neural network

Cite this article

Jingyan SUI, Shizhe DING, Xulin HUANG, Yue YU, Ruizhi LIU, Boyang XIA, Zhenxin DING, Liming XU, Haicang ZHANG, Chungong YU, Dongbo BU. A survey on deep learning-based algorithms for the traveling salesman problem. Front. Comput. Sci., 2025, 19(6): 196322 https://doi.org/10.1007/s11704-024-40490-y

Jingyan Sui is a PhD candidate at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. Her primary research interests encompass algorithm design, machine learning, deep learning, and combinatorial optimization

Shizhe Ding is a PhD candidate at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. His primary research interests encompass machine learning, and combinatorial optimization

Xulin Huang is a graduate student at Henan Institute of Advanced Technology, Zhengzhou University, and State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, China. His primary research interests encompass algorithm design, deep learning, combinatorial optimization, and bioinformatics

Yue Yu is a graduate student at Hangzhou Institute for Advanced Study, and State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. Her main research interests encompass algorithm design, deep learning, and bioinformatics

Ruizhi Liu is a PhD candidate at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. His main research interests encompass algorithm design, deep learning, combinatorial optimization, and chip design

Boyang Xia is a graduate student at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. His main research interests encompass machine learning and combinatorial optimization

Zhenxin Ding is a graduate student at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. His main research interests encompass algorithm design, machine learning, and combinatorial optimization

Liming Xu is a graduate student at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. Her main research interests encompass algorithm design, machine learning, and combinatorial optimization

Haicang Zhang holds a PhD and is an associate researcher and Master's supervisor at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. His main research focuses on machine learning, protein design, and protein structure prediction

Chungong Yu holds a Master's degree and is a senior engineer at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, China. His main research interests encompass bioinformatics and protein structure prediction

Dongbo Bu holds a PhD and is a Professor and PhD supervisor at State Key Lab of Processor, Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences, and Central China Institute of Artificial Intelligence, China. His main research interests encompass algorithm design, bioinformatics, protein structure prediction, and deep learning

Acknowledgements

This study was funded by the National Key R&D Program of China (2020YFA0907000), the National Natural Science Foundation of China (Grant Nos. 32270657, 32271297, 82130055, 62072435), and the Youth Innovation Promotion Association, Chinese Academy of Sciences. We thank the ComputeX center, ICT, CAS, for providing computing services.

Competing interests

The authors declare that they have no competing interests or financial conflicts to disclose.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/

RIGHTS & PERMISSIONS

© 2024 The Author(s). This article is published with open access at link.springer.com and journal.hep.com.cn