Explainable AI for non-experts: Is this a chimera?

Abstract
Artificial intelligence (AI) offers transformative benefits across many areas of economic activity through the innovative use of profuse, multifaceted streams of data. This power comes at a cost: the systems evolve so organically that the rationale behind their decisions can be a mystery even to their developers. An AI-based system thus becomes a “black box,” that is, one that is not transparent, making it difficult to trust the system’s decisions, assess model fairness, detect bias, and meet regulatory demands. These concerns have motivated explainable AI (XAI) as a cognitive bridge between AI systems and their human stakeholders. However, the majority of XAI deployments are designed for machine learning engineers. There is therefore a need for human-understandable AI systems that tailor explanations to the needs, knowledge, and goals of non-experts. This essay surveys literature from the past half century on the explainability needs of different stakeholders, the efforts made to address those needs, and the lessons learned.