Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation
Affiliation: Univ Arizona, Dept Psychol & Biomed Engn
Issue Date: 2019-12-13
Keywords: behavioral paradigms; closed-loop neurostimulation; computational psychiatry; data-driven discoveries of brain circuit theories; explainable AI; machine learning; neuro-behavioral decision systems
Publisher: FRONTIERS MEDIA SA
Citation: Fellous J-M, Sapiro G, Rossi A, Mayberg H and Ferrante M (2019) Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation. Front. Neurosci. 13:1346. doi: 10.3389/fnins.2019.01346
Journal: FRONTIERS IN NEUROSCIENCE
Rights: Copyright © 2019 Fellous, Sapiro, Rossi, Mayberg and Ferrante. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract: The use of Artificial Intelligence and machine learning in basic research and clinical neuroscience is increasing. AI methods enable the interpretation of large multimodal datasets that can provide unbiased insights into the fundamental principles of brain function, potentially paving the way for earlier and more accurate detection of brain disorders and better-informed intervention protocols. Despite AI's ability to create accurate predictions and classifications, in most cases it lacks the ability to provide a mechanistic understanding of how inputs and outputs relate to each other. Explainable Artificial Intelligence (XAI) is a new set of techniques that attempts to provide such an understanding; here we report on some of these practical approaches. We discuss the potential value of XAI to the field of neurostimulation for both basic scientific inquiry and therapeutic purposes, as well as outstanding questions and obstacles to the success of the XAI approach.
Note: Open access journal
ISSN: 1662-4548
PubMed ID: 31920509
Version: Final published version
Sponsors: United States Department of Health & Human Services, National Institutes of Health (NIH), USA; NIH National Institute of Mental Health (NIMH); Computational Psychiatry Program at NIMH; Theoretical and Computational Neuroscience Program at NIMH; "Machine Intelligence in Healthcare: Perspectives on Trustworthiness, Explainability, Usability and Transparency" workshop at NIH/NCATS; SUBNETS program at DARPA; GARD program at DARPA
DOI: 10.3389/fnins.2019.01346
Related articles
- Applications of Explainable Artificial Intelligence in Diagnosis and Surgery.
- Authors: Zhang Y, Weng Y, Lund J
- Issue date: 2022 Jan 19
- eXplainable Artificial Intelligence (XAI) in aging clock models.
- Authors: Kalyakulina A, Yusipov I, Moskalev A, Franceschi C, Ivanchenko M
- Issue date: 2024 Jan
- Obtaining genetics insights from deep learning via explainable artificial intelligence.
- Authors: Novakovsky G, Dexter N, Libbrecht MW, Wasserman WW, Mostafavi S
- Issue date: 2023 Feb
- Explainable AI in medical imaging: An overview for clinical practitioners - Saliency-based XAI approaches.
- Authors: Borys K, Schmitt YA, Nauta M, Seifert C, Krämer N, Friedrich CM, Nensa F
- Issue date: 2023 May
- Modern views of machine learning for precision psychiatry.
- Authors: Chen ZS, Kulkarni PP, Galatzer-Levy IR, Bigio B, Nasca C, Zhang Y
- Issue date: 2022 Nov 11