The application of eXplainable artificial intelligence in studying cognition: A scoping review

Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan

Ibrain ›› 2024, Vol. 10 ›› Issue (3): 245-265. DOI: 10.1002/ibra.12174

REVIEW


Abstract

The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and on the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI to the study of cognition. This scoping review aims to identify and analyze the XAI methods used to study the mechanisms and features of cognitive function and dysfunction. The collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Following the Joanna Briggs Institute (JBI) methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the remainder (25%) examined impaired cognition. The predominant XAI methods were intrinsic XAI (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, their limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging current challenges in establishing causality and avoiding oversimplification, particularly emphasizing the need for reproducibility.

Keywords

artificial intelligence / cognition / cognitive neuroscience / eXplainable artificial intelligence / neuroscience / XAI models

Cite this article

Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan. The application of eXplainable artificial intelligence in studying cognition: A scoping review. Ibrain, 2024, 10(3): 245-265. DOI: 10.1002/ibra.12174


RIGHTS & PERMISSIONS

© 2024 The Author(s). Ibrain published by Affiliated Hospital of Zunyi Medical University (AHZMU) and Wiley-VCH GmbH.
