Toward Human-Understandable, Explainable AI
- 4 October 2018
- Journal research article
- Published by the Institute of Electrical and Electronics Engineers (IEEE) in Computer, Vol. 51 (9), pp. 28-36
- https://doi.org/10.1109/mc.2018.3620965
Abstract
Recent increases in computing power, coupled with rapid growth in the availability and quantity of data, have rekindled interest in the theory and applications of artificial intelligence (AI). However, for AI to be confidently rolled out by industries and governments, users want greater transparency through explainable AI (XAI) systems. The author introduces XAI concepts and gives an overview of areas in need of further exploration, such as type-2 fuzzy logic systems, to ensure such systems can be fully understood and analyzed by the lay user.
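The type-2 fuzzy logic systems mentioned in the abstract represent membership not as a single number but as a bounded interval, the "footprint of uncertainty", which lets a rule base expressed in plain linguistic terms also capture disagreement about those terms. A minimal sketch of that idea (not taken from the paper; the "warm" label and its parameters are hypothetical):

```python
def triangular(x, a, b, c):
    """Standard triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def interval_type2_membership(x, lower_params, upper_params):
    """Return the membership interval [mu_lower, mu_upper] of x in an interval type-2 set."""
    lo = triangular(x, *lower_params)
    hi = triangular(x, *upper_params)
    return min(lo, hi), max(lo, hi)

# Hypothetical linguistic label "warm" (degrees Celsius): the gap between the
# narrower lower and wider upper membership functions is the footprint of
# uncertainty, modelling how differently people grade the same temperature.
warm_lower = (18.0, 24.0, 28.0)   # conservative bound
warm_upper = (15.0, 24.0, 31.0)   # permissive bound

lo, hi = interval_type2_membership(21.0, warm_lower, warm_upper)
print(f"membership of 21C in 'warm': [{lo:.2f}, {hi:.2f}]")  # [0.50, 0.67]
```

The interval output, rather than a crisp degree, is what the article argues makes such systems well suited to human-understandable explanations under uncertainty.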