Health GeoAI beyond algorithms: Embedding equity, accountability, and environmental responsibility

Anquan Xia , Jia-jing Xu , Wangyi Shang , Xiang Wei , Di Zhao , Cheng Ma , Xiaolan Lv , Qining Yang , Yi Xu

Geography and Sustainability ›› 2026, Vol. 7 ›› Issue (2) : 100445


Abstract

Health GeoAI—the integration of artificial intelligence with geographically contextualized health data—offers transformative potential for precision public health. Yet its rapid expansion, often driven by algorithmic performance, risks reinforcing spatial inequities, obscuring decision pathways, and generating environmental externalities. This study introduces a forward-looking framework for Responsible Health GeoAI that embeds geographical equity, accountability, and environmental sustainability as core design imperatives rather than peripheral considerations. Building on advances in foundation models and multimodal learning, the framework establishes two measurable boundaries—an equity floor ensuring subgroup fairness and calibration, and a carbon ceiling constraining computational and energy costs. These operational principles align GeoAI innovation with the broader goals of fairness, transparency, and sustainability. By situating GeoAI as a socio-technical system and integrating spatial validation, participatory governance, and carbon accountability, this study provides a structured pathway for developing GeoAI that is not only intelligent but also equitable, explainable, and environmentally responsible. The framework offers strategic insights for the institutionalization of responsible AI in global health and sustainability policy.
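The two measurable boundaries described above can be pictured as simple pre-deployment checks. The sketch below is illustrative only: the threshold values, function names, and energy parameters are assumptions for demonstration, not quantities specified by this study.

```python
import numpy as np

# Illustrative thresholds (assumed, not from the paper):
EQUITY_FLOOR = 0.05    # max tolerated calibration gap between subgroups
CARBON_CEILING = 50.0  # max tolerated kg CO2e for one training run

def calibration_error(y_true, y_prob, n_bins=10):
    """Expected calibration error: bin-weighted mean |predicted prob - observed rate|."""
    y_true, y_prob = np.asarray(y_true, float), np.asarray(y_prob, float)
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y_prob[mask].mean() - y_true[mask].mean())
    return ece

def passes_equity_floor(y_true, y_prob, groups):
    """True if the worst-vs-best subgroup calibration gap stays within the floor."""
    errs = [calibration_error(y_true[groups == g], y_prob[groups == g])
            for g in np.unique(groups)]
    return (max(errs) - min(errs)) <= EQUITY_FLOOR

def passes_carbon_ceiling(gpu_hours, power_kw=0.3, grid_kgco2_per_kwh=0.4):
    """True if estimated training emissions (hours x power x grid intensity) stay under the ceiling."""
    return gpu_hours * power_kw * grid_kgco2_per_kwh <= CARBON_CEILING
```

In practice, a model release would be gated on both predicates together, e.g. `passes_equity_floor(...) and passes_carbon_ceiling(...)`, making fairness and energy cost binding constraints rather than reported metrics.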

Keywords

Health GeoAI / Responsible AI / Geographical equity / Foundation models / Digital health governance / Sustainability

Cite this article

Anquan Xia, Jia-jing Xu, Wangyi Shang, Xiang Wei, Di Zhao, Cheng Ma, Xiaolan Lv, Qining Yang, Yi Xu. Health GeoAI beyond algorithms: Embedding equity, accountability, and environmental responsibility. Geography and Sustainability, 2026, 7(2): 100445 DOI:10.1016/j.geosus.2026.100445


Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work the authors used ChatGPT in order to improve the readability of the language. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

CRediT authorship contribution statement

Anquan Xia: Writing - review & editing, Conceptualization. Jia-jing Xu: Writing - original draft, Conceptualization. Wangyi Shang: Investigation. Xiang Wei: Project administration, Methodology. Di Zhao: Investigation. Cheng Ma: Investigation. Xiaolan Lv: Validation. Qining Yang: Resources. Yi Xu: Writing - original draft, Conceptualization.

Declaration of competing interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the Central Fiscal Geological Survey Project (Grant No. DD20230203903). We also thank our colleagues and collaborators for their valuable discussions and technical assistance during the course of this research.

