Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information
Affiliation
Univ Arizona, Eller Coll Management
Issue Date
2020-07-01
Keywords
Convolutional neural network
Machine learning interpretability
Class activation mapping
Explainable artificial intelligence
Publisher
ELSEVIER
Citation
Kim, B., Park, J., & Suh, J. (2020). Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information. Decision Support Systems, 134, 113302. doi: 10.1016/j.dss.2020.113302
Journal
DECISION SUPPORT SYSTEMS
Rights
Copyright © 2020 Elsevier B.V. All rights reserved.
Collection Information
This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu.
Abstract
Proliferating applications of deep learning, along with the prevalence of large-scale text datasets, have revolutionized the natural language processing (NLP) field and driven its recent explosive growth. Nevertheless, it is argued that state-of-the-art studies focus excessively on producing quantitative performance superior to existing models by playing "the Kaggle game." Hence, the field requires more effort toward solving new problems and proposing novel approaches and architectures. We claim that one promising and constructive effort is to design transparent and accountable artificial intelligence (AI) systems for text analytics. By doing so, we can enhance the applicability and problem-solving capacity of such systems for real-world decision support. It is widely accepted that deep learning models demonstrate remarkable performance compared to existing algorithms. However, they are often criticized for being less interpretable, i.e., for being "black boxes." In such cases, users tend to hesitate to utilize them for decision-making, especially in crucial tasks. Such complexity obstructs the transparency and accountability of the overall system, potentially debilitating the deployment of decision support systems powered by AI. Furthermore, recent regulations increasingly emphasize fairness and transparency in algorithms, making explanations compulsory rather than voluntary. Thus, to enhance the transparency and accountability of the decision support system while preserving the capacity to model complex text data, we propose the Explaining and Visualizing Convolutional neural networks for Text information (EVCT) framework. By adopting and improving cutting-edge methods in NLP and image processing, the EVCT framework provides a human-interpretable solution to the problem of text classification while minimizing information loss.
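The abstract's keyword "class activation mapping" (CAM) points at the core visualization idea: weighting a CNN's feature maps by the classifier weights of the predicted class to score each input position's relevance. The following NumPy sketch illustrates that idea for a toy 1D convolutional text classifier with global average pooling; it is not the authors' EVCT implementation, and all names, shapes, and the random toy data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumed sizes): 10 tokens, 8-dim embeddings, 4 filters, 2 classes.
T, D, F, C = 10, 8, 4, 2
embeddings = rng.normal(size=(T, D))   # token embeddings for one sentence
conv_w = rng.normal(size=(F, 3, D))    # F filters of width 3 over embeddings
fc_w = rng.normal(size=(C, F))         # classifier weights after average pooling

def conv1d_relu(x, w):
    """Width-3 'same' convolution over the token axis, followed by ReLU."""
    n = x.shape[0]
    pad = np.pad(x, ((1, 1), (0, 0)))
    feats = np.array([[np.sum(w[f] * pad[t:t + 3]) for f in range(w.shape[0])]
                      for t in range(n)])            # shape (T, F)
    return np.maximum(feats, 0.0)

feats = conv1d_relu(embeddings, conv_w)              # (T, F) feature maps
logits = fc_w @ feats.mean(axis=0)                   # global average pooling + linear layer
pred = int(np.argmax(logits))

# CAM: weight each feature map by the predicted class's classifier weight
# and sum over filters -> one relevance score per token position.
cam = feats @ fc_w[pred]                             # shape (T,)
cam = (cam - cam.min()) / (np.ptp(cam) + 1e-8)       # normalize to [0, 1]
print(pred, np.round(cam, 3))
```

High-scoring positions in `cam` mark the tokens that most activated the filters favoring the predicted class, which is the kind of per-token heat map the paper visualizes over input text.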
Experimental results with large-scale, real-world datasets show that EVCT performs comparably to benchmark models, including widely used deep learning models. In addition, we provide instances of human-interpretable and relevant visualized explanations obtained from applying EVCT to the dataset and possible applications for real-world decision support.
Note
24 month embargo; published online: 01 July 2020
ISSN
0167-9236
Version
Final accepted manuscript
DOI
10.1016/j.dss.2020.113302