The application of eXplainable artificial intelligence in studying cognition: A scoping review

Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan

Ibrain ›› 2024, Vol. 10 ›› Issue (3): 245-265. DOI: 10.1002/ibra.12174

REVIEW

Abstract

The rapid advancement of artificial intelligence (AI) has sparked renewed discussion of its trustworthiness and of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI to the study of cognition. This scoping review identifies and analyzes the XAI methods used to study the mechanisms and features of cognitive function and dysfunction, and qualitatively assesses the collected evidence to develop an effective framework for approaching XAI in cognitive neuroscience. Following the Joanna Briggs Institute (JBI) and Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) guidelines, we searched for peer-reviewed articles in MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the remainder (25%) examined impaired cognition. The predominant XAI methods were intrinsic (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope, and the findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). While these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, their limitations included oversimplification, confounding factors, and inconsistencies. Overall, the reviewed studies showcase the potential of XAI models while underscoring open challenges around causality, oversimplification, and, above all, reproducibility.
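For readers unfamiliar with the taxonomy above, the sketch below illustrates the most common post hoc category, attribution-based explanation, in one of its simplest forms: occlusion sensitivity. Everything here (the model black_box, the weights W, and the input x) is a hypothetical stand-in rather than code from any reviewed study; the point is only that a local attribution scores each input feature by how much an opaque model's output changes when that feature is removed.

```python
import numpy as np

rng = np.random.default_rng(seed=42)
W = rng.normal(size=16)  # hypothetical "trained" weights of an opaque model

def black_box(x: np.ndarray) -> float:
    """Stand-in for an opaque model's scalar output (e.g., a class score)."""
    return float(np.tanh(x @ W))

def occlusion_attribution(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Local, attribution-based post hoc explanation: occlude each feature
    in turn and record the resulting drop in the model's score."""
    base_score = black_box(x)
    attributions = np.empty_like(x)
    for i in range(x.size):
        x_occluded = x.copy()
        x_occluded[i] = baseline  # replace one feature with a neutral baseline
        attributions[i] = base_score - black_box(x_occluded)
    return attributions

x = rng.normal(size=16)  # one hypothetical input instance (local scope)
attr = occlusion_attribution(x)
print("most influential features:", np.argsort(-np.abs(attr))[:3])
```

By contrast, an intrinsic method builds interpretability into the model itself (for example, a decision tree whose splits can be read directly), and a global explanation aggregates such evidence over many inputs rather than explaining a single prediction as done here.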

Keywords

artificial intelligence / cognition / cognitive neuroscience / eXplainable artificial intelligence / neuroscience / XAI models

Cite this article

Shakran Mahmood, Colin Teo, Jeremy Sim, Wei Zhang, Jiang Muyun, R. Bhuvana, Kejia Teo, Tseng Tsai Yeo, Jia Lu, Balazs Gulyas, Cuntai Guan. The application of eXplainable artificial intelligence in studying cognition: A scoping review. Ibrain. 2024;10(3):245-265. DOI: 10.1002/ibra.12174

RIGHTS & PERMISSIONS

© 2024 The Author(s). Ibrain published by Affiliated Hospital of Zunyi Medical University (AHZMU) and Wiley-VCH GmbH.
