Computational and urban science amid generative AI: the enduring relevance of McDermott’s critique (opinion paper)
Christophe Claramunt, Lars De Sloover, Nico Van de Weghe
Computational Urban Science, 2026, Vol. 6, Issue 1: 29
This paper revisits McDermott’s seminal 1976 critique, Artificial Intelligence Meets Natural Stupidity, in light of the rapid rise of generative AI. McDermott’s warnings about conceptual imprecision, rhetorical inflation, and anthropomorphic metaphors remain strikingly relevant today, particularly in domains where linguistic fluency is mistaken for genuine understanding. Using computational and urban science as a case study, the paper highlights the limitations of large language models (LLMs) in spatial reasoning, including their hallucination of map features, distortion of topological relationships, and failure to grasp metric consistency. These shortcomings exemplify the epistemological risks McDermott identified nearly five decades ago. The paper argues for a renewed commitment to epistemic discipline and humility in the development and communication of generative AI, especially in high-stakes domains like geospatial applications. It proposes concrete guidelines for integrating spatial theory, documenting failures, and avoiding misleading terminology, advocating for hybrid approaches that combine LLMs with GIS frameworks and human oversight. By operationalizing McDermott’s insights, the paper calls for transparent, grounded, and responsible AI systems that can genuinely support, rather than distort, urban decision-making.
Generative AI / Computational GIS / Epistemic discipline / McDermott’s critique
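The hybrid approach the abstract advocates — pairing LLM output with GIS frameworks and human oversight — can be illustrated with a minimal sketch. The geometries, place names, and claimed relations below are hypothetical stand-ins (not from the paper), using the Shapely library to ground an LLM's asserted topological relation in actual geometry rather than trusting its linguistic fluency:

```python
# Illustrative sketch (assumed example, not the authors' implementation):
# verify an LLM-asserted topological relation against authoritative
# geometry instead of accepting the fluent claim at face value.
from shapely.geometry import Point, Polygon

# Stand-ins for authoritative GIS layers.
district = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
station = Point(2, 2)   # lies inside the district
harbour = Point(6, 1)   # lies outside the district

def verify_claim(claimed_relation: str, a, b) -> bool:
    """Check an LLM-asserted topological relation against geometry."""
    checks = {
        "within": a.within(b),
        "touches": a.touches(b),
        "disjoint": a.disjoint(b),
    }
    return checks.get(claimed_relation, False)

# An LLM might fluently assert either claim; only geometry settles them.
print(verify_claim("within", station, district))   # True
print(verify_claim("within", harbour, district))   # False
```

The point of the sketch is the division of labour: the language model proposes a relation, and a spatial engine with well-defined topological semantics accepts or rejects it, keeping the epistemic check outside the model.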