Abstract
Urban AI discourse has been dominated by Large Language Models (LLMs), yet these models misalign with the specific operational needs of cities, which demand low-latency, context-sensitive, and infrastructure-light solutions. This opinion paper addresses that gap by proposing Small Language Models (SLMs) as a viable alternative and introduces "SLM Urbanism," a layered conceptual framework that reimagines urban AI deployment. Drawing on recent literature, the framework comprises five layers (computational, task-specialised, application, governance, and citizen-centric), each aligning technical affordances with urban imperatives. Using a normative, design-oriented method, the study contrasts SLMs' low-cost, edge-native, and interpretable architectures with the compute-heavy, opaque nature of LLMs. The discussion situates SLMs as enablers of locally tuned, explainable, and democratically aligned intelligence that better serve urban equity and efficiency goals. Findings highlight that SLMs often outperform LLMs in resource-constrained settings, enhancing trust, transparency, and civic agency in AI-mediated governance. Importantly, the paper does not reject LLMs entirely but advocates a hybrid future of modular urban intelligence where SLMs lead a shift from centralised automation to distributed, planner-guided agency.
Keywords
Small Language Models (SLMs) / Urban artificial intelligence / Edge AI deployment / SLM urbanism framework
Cite this article
Alok Tiwari.
A proposal to localising urban AI: a conceptual shift from generalist LLMs to task-specific SLMs.
Computational Urban Science, 2026, 6(1): 11 DOI:10.1007/s43762-026-00241-0