Natural language interface for urban network analytics

Yuri Bogomolov, Daniel Bretsko, Swam Pyae Paing, Stanislav Sobolevsky

Computational Urban Science ›› 2025, Vol. 5 ›› Issue (1): 71

Original Paper

Abstract

We introduce the first natural language interface for complex urban analytics, leveraging Large Language Models (LLMs) and Spatio-Temporal Transactional Networks (STTNs). By combining intuitive natural language querying with structured data analytics, our framework simplifies complex urban analyses, such as identifying commuter patterns, detecting anomalies, and exploring mobility networks. We also propose a comprehensive evaluation dataset and show that minor architectural improvements can substantially increase analytical accuracy. Our approach bridges the gap between non-expert users and sophisticated urban insights, paving the way for accessible, reliable, and scalable urban data analytics.
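The pipeline the abstract describes — translating a free-text question into a structured operation over a mobility network — can be illustrated with a minimal sketch. All names here (`translate_query`, `run_analytics`, `MOBILITY_EDGES`) are hypothetical and not from the paper; a real system would replace the keyword rules in `translate_query` with an LLM call and operate on a full STTN rather than a toy edge list.

```python
# Illustrative sketch only: dispatching a natural language query to
# analytics over a toy mobility network of (origin, destination, trips).
MOBILITY_EDGES = [
    ("Downtown", "Airport", 120),
    ("Airport", "Downtown", 95),
    ("Suburb", "Downtown", 340),
    ("Downtown", "Suburb", 310),
]

def translate_query(query: str) -> str:
    """Stand-in for the LLM step: map free text to a supported operation."""
    q = query.lower()
    if "busiest" in q or "top" in q:
        return "top_flows"
    if "hub" in q or "central" in q:
        return "in_strength"
    raise ValueError(f"Unsupported query: {query!r}")

def run_analytics(edges, operation: str):
    if operation == "top_flows":
        # Strongest origin-destination pairs first
        return sorted(edges, key=lambda e: -e[2])
    if operation == "in_strength":
        # Total inbound trips per node, a simple centrality proxy
        strength = {}
        for _, dest, trips in edges:
            strength[dest] = strength.get(dest, 0) + trips
        return strength
    raise ValueError(operation)

op = translate_query("What are the busiest commuter flows?")
result = run_analytics(MOBILITY_EDGES, op)
print(result[0])  # ('Suburb', 'Downtown', 340)
```

Separating translation from execution is what makes the interface auditable: the intermediate operation can be validated before any analytics run, which is where the paper's architectural improvements to accuracy would apply.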

Keywords

Urban analytics / Large language models / Prompt engineering / Network analysis / Geospatial data

Cite this article

Yuri Bogomolov, Daniel Bretsko, Swam Pyae Paing, Stanislav Sobolevsky. Natural language interface for urban network analytics. Computational Urban Science, 2025, 5(1): 71. DOI: 10.1007/s43762-025-00230-9



Funding

New York University Abu Dhabi (CG001)

RIGHTS & PERMISSIONS

The Author(s)
