Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation
Affiliation: Univ Arizona, Dept Psychol & Biomed Engn
Keywords: data-driven discoveries of brain circuit theories; neuro-behavioral decision systems
Publisher: FRONTIERS MEDIA SA
Citation: Fellous J-M, Sapiro G, Rossi A, Mayberg H and Ferrante M (2019) Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation. Front. Neurosci. 13:1346. doi: 10.3389/fnins.2019.01346
Journal: FRONTIERS IN NEUROSCIENCE
Rights: Copyright © 2019 Fellous, Sapiro, Rossi, Mayberg and Ferrante. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: The use of Artificial Intelligence (AI) and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets that can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better-informed intervention protocols. Despite AI's ability to produce accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some of these practical approaches. We discuss the potential value of XAI to the field of neurostimulation for both basic scientific inquiry and therapeutic purposes, as well as outstanding questions and obstacles to the success of the XAI approach.
Note: Open access journal
Version: Final published version
Sponsors: United States Department of Health & Human Services; National Institutes of Health (NIH) - USA; NIH National Institute of Mental Health (NIMH); Computational Psychiatry Program at NIMH; Theoretical and Computational Neuroscience Program at NIMH; 'Machine Intelligence in Healthcare: Perspectives on Trustworthiness, Explainability, Usability and Transparency' workshop at NIH/NCATS; SUBNETS program at DARPA; GARD program at DARPA
Related items:
- Mitigating belief projection in explainable artificial intelligence via Bayesian teaching.
- Authors: Yang SC, Vong WK, Sojitra RB, Folke T, Shafto P
- Issue date: 2021 May 10
- Explainable AI: A Review of Machine Learning Interpretability Methods.
- Authors: Linardatos P, Papastefanopoulos V, Kotsiantis S
- Issue date: 2020 Dec 25
- Explainable AI (xAI) for Anatomic Pathology.
- Authors: Tosun AB, Pullara F, Becich MJ, Taylor DL, Fine JL, Chennubhotla SC
- Issue date: 2020 Jul
- Explainable Deep Learning for Personalized Age Prediction With Brain Morphology.
- Authors: Lombardi A, Diacono D, Amoroso N, Monaco A, Tavares JMRS, Bellotti R, Tangaro S
- Issue date: 2021
- Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review.
- Authors: Payrovnaziri SN, Chen Z, Rengifo-Moreno P, Miller T, Bian J, Chen JH, Liu X, He Z
- Issue date: 2020 Jul 1