DARPA's Explainable Artificial Intelligence Program
- 1 June 2019
- research article
- Published by Wiley in AI Magazine
- Vol. 40 (2), 44-58
- https://doi.org/10.1609/aimag.v40i2.2850
Abstract
Dramatic success in machine learning has led to a new wave of AI applications (for example, in transportation, security, medicine, finance, and defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA's explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and by developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first year of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems' explanations improve user understanding, user trust, and user task performance.
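To make the first challenge concrete, here is a minimal sketch (not from the paper, and not a DARPA XAI deliverable) of one widely used post-hoc explanation technique of the kind the program studies: permutation feature importance, which scores each input feature by how much randomly shuffling it degrades a trained model's accuracy. The dataset and model choices below are illustrative assumptions.

```python
# Minimal sketch of a model-agnostic, post-hoc explanation:
# permutation feature importance with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and measure the drop in held-out accuracy.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# The five features whose corruption hurts accuracy most act as a
# simple global "explanation" of what the model relies on.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Such attribution scores address only part of the program's goal; the abstract's other two challenges, explanation interfaces and the psychology of explanation, concern how such outputs are presented to and understood by users.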