Explainable Artificial Intelligence

Making Artificial Intelligence (AI) processes more understandable to human users is the main focus of Explainable AI…

The unceasing evolution of Artificial Intelligence (AI), and of Machine Learning (ML) in particular, has produced applications with numerous benefits across several fields.(1) However, many of these systems cannot explain the basis of their decisions or actions to human users.(1) Because they are developed in a black-box manner, no information is available about how they arrive at their results or the rationale behind them.(2)

Explainable Artificial Intelligence (XAI) is a field that aims to make AI systems more understandable to human users.(3) D. Gunning described it in 2017 as “a field that will create a suite of machine learning techniques that enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners”.(4) It is worth clarifying what XAI truly encompasses, why it is such an important field, and how it can be approached in order to develop more responsible AI.

What XAI Entails

XAI proposes models with specific characteristics such as explainability, interpretability, understandability, and transparency. Explainability is an active characteristic: it denotes any action or procedure a model performs to clarify or detail its internal functions. Interpretability, another essential characteristic, is passive: it is the ability to provide an understandable meaning of the machine’s output to a human without detailing the internal functions or actions taken. In addition, when a model allows a human to understand how it functions without explaining its internal structure or processing, it is said to have understandability.

Finally, models are considered transparent when they are understandable on their own, and they are further divided, according to their degree of understandability, into simulatable, decomposable, and algorithmically transparent models.(4)

Reasons Why We Need XAI

The explainability of AI is needed in order to justify, control, improve, and learn from this amazing technology. XAI is crucial for reaching a point where users understand, trust, and successfully manage AI results. Being able to explain the rationale behind an AI system’s result allows us to defend its algorithmic decisions as fair and ethical. Moreover, understanding these systems’ behavior and processing helps identify vulnerabilities and flaws, preventing errors and allowing greater control over them. It not only exposes flaws but also points to what can be improved.(3) All of this allows creators and users to gain more knowledge across the wide variety of fields in which AI is applied.

How Can AI Be Explainable?

XAI can be achieved either by creating inherently explainable models or by using post hoc explainability techniques. Models that are interpretable and understandable by design are said to be transparent. Such models can exhibit simulatability, which entails the capability of a human to mentally simulate the model and therefore depends on its complexity.
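
To make the idea of a transparent, simulatable model concrete, here is a minimal sketch (not taken from the article) using Python and scikit-learn: a shallow decision tree whose handful of learned rules a human can read and mentally step through. The dataset, the depth limit, and the library are illustrative assumptions.

```python
# A minimal sketch of an inherently transparent (simulatable) model:
# a shallow decision tree whose learned rules a human can read and
# mentally "simulate" step by step. scikit-learn and the Iris dataset
# are illustrative choices, not requirements of XAI.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Keeping the tree shallow is what preserves simulatability:
# a much deeper tree uses the same algorithm but is harder to follow.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The full decision logic fits in a few readable if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Printing the rules exposes the entire decision logic in a few branches; the same algorithm grown much deeper would lose simulatability even though its mechanics are unchanged.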

Decomposability refers to the ability to understand, interpret, or explain each of the parts of a model, while algorithmic transparency is rooted in the capacity of a user to understand the route a model follows to produce a result from its input data. Models that are not readily interpretable by design need post hoc explainability techniques such as text explanations, visualizations, local explanations, explanations by example, explanations by simplification, and feature relevance.(4)
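
As a hedged illustration of one post hoc technique named above, feature relevance, the following Python sketch applies permutation importance to a black-box model. The random forest, the dataset, and the settings are assumptions made for the example, not a prescription from the article.

```python
# A sketch of one post hoc explainability technique, feature relevance,
# via permutation importance on a black-box model. The random forest,
# dataset, and settings are assumptions for the example only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Train a model that is not interpretable by design.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# large drops point to the features the black box relies on most.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

Features whose shuffling causes the largest accuracy drop are the ones the black box relies on most, giving a post hoc, model-agnostic view of its behavior without opening up its internal structure.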

Summing It All Up

Among the many reasons XAI came to exist, building users’ trust in the many applications of AI systems has gained particular importance. Since our trust in experts rests on their ability to explain and justify their decisions, AI systems need to be able to do the same. In Medicine especially, decisions cannot be made without verifying the logic behind them, since the ethical responsibility of a practitioner cannot be based on more uncertainty and incompleteness than what the ever-expanding medical knowledge already carries.(5)
