Why Explainable Artificial Intelligence?

An Explainable AI (XAI), or Transparent AI, is an artificial intelligence (AI) whose actions can be easily understood by humans. It contrasts with the "black box" concept in machine learning, where the inner workings of a complex algorithm are so opaque that even its designers cannot explain why the AI arrived at a specific decision.

XAI can be used to implement a social right to explanation.

Transparency rarely comes for free; there are often tradeoffs between how "smart" an AI is and how transparent it is, and these tradeoffs are expected to grow larger as AI systems increase in internal complexity.

The technical challenge of explaining AI decisions is sometimes known as the interpretability problem.

AI systems optimize behavior to satisfy a mathematically specified goal system chosen by the system designers, such as the command "maximize the accuracy of assessing how positive film reviews are in the test dataset".

The AI may learn useful general rules from the test set, such as "reviews containing the word 'horrible' are likely to be negative".

However, it may also learn inappropriate rules, such as "reviews containing 'Daniel Day-Lewis' are usually positive"; such rules may be undesirable if they are deemed likely to fail to generalize outside the test set, or if people consider the rule to be "cheating" or "unfair".

A human can audit rules in an XAI to get an idea of how likely the system is to generalize to future real-world data outside the test set.
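
To make the film-review example concrete, the sketch below trains a simple bag-of-words logistic regression sentiment classifier and prints each word with its learned weight, so a human auditor can spot both sensible rules (a negative weight on "horrible") and spurious ones (a positive weight on an actor's name). The tiny review corpus and all identifiers are illustrative assumptions, not taken from any real dataset.

```python
# Minimal sketch: an interpretable bag-of-words sentiment model whose learned
# "rules" (word weights) a human can audit. The toy reviews are made up for
# illustration; a real audit would use a full training corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "a horrible, boring film",
    "horrible acting and a horrible script",
    "a wonderful, moving performance by Daniel Day-Lewis",
    "Daniel Day-Lewis is brilliant in this wonderful film",
]
labels = [0, 0, 1, 1]  # 0 = negative, 1 = positive

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reviews)
model = LogisticRegression().fit(X, labels)

# Inspect the learned weights: large positive weights push a review toward
# "positive", large negative weights toward "negative". A spurious rule, such
# as a high weight on an actor's name, is immediately visible to the auditor.
for word, weight in sorted(
    zip(vectorizer.get_feature_names_out(), model.coef_[0]),
    key=lambda pair: pair[1],
):
    print(f"{word:15s} {weight:+.2f}")
```

Because every "rule" in such a model is just a visible weight, an auditor can judge whether the rule reflects genuine sentiment or an accident of the training data before trusting it on new reviews.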

Goals

AI systems sometimes learn undesirable tricks that do an optimal job of satisfying explicit pre-programmed goals on the training data, but that do not reflect the complicated implicit desires of the human system designers.

For example, a 2017 system tasked with image recognition learned to "cheat" by looking for a copyright tag that happened to be associated with horse pictures, rather than learning how to tell if a horse was actually pictured. 

In another 2017 system, a supervised-learning AI tasked with grasping items in a virtual world learned to cheat by placing its manipulator between the object and the viewer, so that it falsely appeared to be grasping the object.
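
One way a human auditor can catch shortcuts like the copyright-tag trick is occlusion sensitivity: mask different regions of an image and watch how the model's "horse" score changes. If the score collapses only when the corner containing the copyright tag is covered, the model is relying on the tag rather than the animal. The sketch below is a minimal illustration under that assumption; the classifier interface `predict_horse_prob`, patch size, and image handling are all hypothetical.

```python
# Minimal occlusion-sensitivity sketch (assumed classifier interface).
# Slide a grey patch over the image and record how much the "horse"
# probability drops; large drops reveal which regions the model relies on.
import numpy as np

def occlusion_map(image, predict_horse_prob, patch=32, stride=32):
    """image: HxWx3 float array in [0, 1]; predict_horse_prob: callable returning a float."""
    baseline = predict_horse_prob(image)
    h, w, _ = image.shape
    drops = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = 0.5  # cover region with grey
            drops[i, j] = baseline - predict_horse_prob(occluded)
    return drops  # a large drop only at the watermark's location exposes the "cheat"
```

A map whose largest drop sits over the image corner rather than the horse itself would tell the auditor that the explicit goal ("classify horses accurately on this dataset") was satisfied in a way the designers did not intend.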

One transparency project, the DARPA XAI program, aims to produce "glass box" models that are explainable to a "human-in-the-loop", without greatly sacrificing AI performance.

Human users should be able to understand the AI's cognition (both in real time and after the fact), and should be able to determine when to trust the AI and when to distrust it.
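
As a hedged illustration of what a "glass box" model can look like, the sketch below fits a shallow decision tree and prints its complete decision logic as plain-text rules, so a human-in-the-loop can read exactly how each prediction is made and decide whether to trust it. The Iris dataset and the depth limit are arbitrary choices for illustration, not part of the DARPA program.

```python
# Minimal sketch of a "glass box" model: a shallow decision tree whose
# entire decision logic can be printed and audited by a human.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# export_text renders every branch of the model as human-readable rules,
# e.g. "petal width (cm) <= 0.80 -> class: 0".
print(export_text(tree, feature_names=list(data.feature_names)))
```

Keeping the tree shallow trades some accuracy for a rule set short enough to read in full, which is the same transparency-versus-performance tradeoff noted above.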

Sectors

XAI has been researched in many sectors, including neural network tank imaging, antenna design (evolved antenna), algorithmic trading (high-frequency trading), medical diagnoses, and autonomous vehicles.