A proposal for localising urban AI: a conceptual shift from generalist LLMs to task-specific SLMs
Alok Tiwari
Computational Urban Science, 2026, Vol. 6, Issue 1, Article 11
Urban AI discourse has been dominated by Large Language Models (LLMs), yet these models are misaligned with the specific operational needs of cities, which demand low-latency, context-sensitive, and infrastructure-light solutions. This opinion paper addresses that gap by proposing Small Language Models (SLMs) as a viable alternative and introduces “SLM Urbanism,” a layered conceptual framework that reimagines urban AI deployment. Drawing on recent literature, the framework comprises five layers (computational, task-specialised, application, governance, and citizen-centric), each aligning technical affordances with urban imperatives. Using a normative, design-oriented method, the study contrasts the low-cost, edge-native, and interpretable architectures of SLMs with the compute-heavy, opaque nature of LLMs. The discussion positions SLMs as enablers of locally tuned, explainable, and democratically aligned intelligence that better serves urban equity and efficiency goals. The findings highlight that SLMs often outperform LLMs in resource-constrained settings, enhancing trust, transparency, and civic agency in AI-mediated governance. Importantly, the paper does not reject LLMs outright but advocates a hybrid future of modular urban intelligence in which SLMs lead a shift from centralised automation to distributed, planner-guided agency.
Small Language Models (SLMs) / Urban artificial intelligence / Edge AI deployment / SLM urbanism framework
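To make the advocated hybrid deployment concrete, the following is a minimal Python sketch, not taken from the paper, of an SLM-first routing pattern: a small, locally hosted, task-specialised model answers routine city-service queries at the edge and escalates to a remote generalist LLM only when its own confidence is low. All names (SlmFirstRouter, ModelResponse), the 0.8 threshold, and the confidence-based escalation rule are illustrative assumptions, not the paper's specification.

```python
# Illustrative sketch only: an SLM-first router with LLM fallback,
# mirroring the hybrid SLM/LLM deployment the abstract advocates.
# Model names, the threshold, and the escalation rule are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelResponse:
    text: str
    confidence: float  # assumed to come from the model's own scoring

class SlmFirstRouter:
    """Route urban-service queries to a local edge SLM; escalate to a
    centralised LLM only when the SLM's confidence falls below a threshold."""

    def __init__(self, slm, llm, threshold: float = 0.8):
        self.slm = slm            # small, locally hosted, task-specialised model
        self.llm = llm            # large, remote, generalist model (fallback)
        self.threshold = threshold

    def answer(self, query: str) -> ModelResponse:
        local = self.slm(query)   # low-latency, on-device inference first
        if local.confidence >= self.threshold:
            return local          # the SLM handles the routine case
        return self.llm(query)    # rare escalation to the compute-heavy LLM

# Usage with stub callables standing in for real model deployments:
slm = lambda q: ModelResponse(text=f"[edge SLM] {q}", confidence=0.9)
llm = lambda q: ModelResponse(text=f"[cloud LLM] {q}", confidence=0.99)
router = SlmFirstRouter(slm, llm)
print(router.answer("When is bulky-waste pickup in district 4?").text)
```

The design choice the sketch encodes is the paper's central trade-off: routine, locality-bound tasks stay on cheap, interpretable edge infrastructure, while the generalist model is retained only as an exception path rather than the default.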
© The Author(s) 2026