Interpretability: the key success factor when adopting Artificial Intelligence

According to Yannick Martel, Managing Director of Bleckwen, a fintech specialized in Artificial Intelligence for fraud prevention, interpretability is today a major challenge for ensuring the broad adoption of Artificial Intelligence.

 

First of all, could you explain what interpretability is?

Interpretability is the ability to explain the logic and the reasons that lead an algorithm to produce its results. It applies at two levels:

  • Global: to understand the major trends, i.e. the most significant factors
  • Local: to precisely analyze the specific factors that contributed to the machine’s decision for a group of closely-related individuals

 

How does it work?

The interpretation of a model is obtained by applying an algorithm that explains the contribution of each variable to the results.

Imagine that you enjoyed a delicious Black Forest cake bought in a bakery (a black box, i.e. a model whose internal operations are opaque). If you want to make this cake at home, you will gather the ingredients (the data), follow the recipe (the algorithm) and you will get your cake (the model). But why is it not as good as the one from the bakery? Although you have used exactly the same ingredients, you probably lack the chef’s tips explaining why certain ingredients, at certain stages of the recipe, are important and how to combine them!

In this example, interpretability techniques will allow you to discover the chef’s tips.
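To make the idea of “the contribution of each variable” concrete, here is a minimal sketch in Python using a simple linear model, where each variable’s contribution to a prediction can be read off directly. The data and feature values are illustrative assumptions, not Bleckwen’s.

```python
# A minimal sketch of "the contribution of each variable to the result",
# using a linear model where contributions can be read off directly.
# The data below is purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[2.0, 1.0], [1.0, 3.0], [4.0, 2.0], [3.0, 5.0]])
y = np.array([5.0, 8.0, 9.0, 14.0])
model = LinearRegression().fit(X, y)

x = np.array([2.0, 4.0])                 # one new observation
contributions = model.coef_ * x          # per-variable contribution to the prediction
prediction = model.intercept_ + contributions.sum()
print(contributions, prediction)         # the prediction decomposed term by term
```

Interpretability techniques generalize this kind of decomposition to models whose internals are far less transparent than a linear model.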

 

[Figure: What is interpretable AI?]

There are two types of interpretability techniques:

  • model-agnostic techniques: these do not consider the internals of the model to which they are applied and only analyze the input data and the resulting decisions (e.g. LIME: Local Interpretable Model-agnostic Explanations, SHAP: SHapley Additive exPlanations, etc.)
  • model-specific techniques: these rely on analyzing the internal architecture of the model one wants to explain (e.g. DeepLIFT for deep learning, Ando Saabas’s method for random forests, etc.). Both types of techniques can provide local and global model interpretation, as sketched below.
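
As an illustration of a model-agnostic technique, below is a minimal sketch using LIME on a toy scikit-learn classifier. The dataset, model and class names are hypothetical stand-ins for whatever black box one wants to explain.

```python
# A minimal sketch of a model-agnostic explanation with LIME, applied to a
# toy scikit-learn classifier; data, model and class names are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

feature_names = [f"feature_{i}" for i in range(X.shape[1])]
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["legitimate", "fraudulent"],
    mode="classification",
)

# Local explanation: which features pushed the model's decision for one instance?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # [(feature condition, contribution), ...]
```

LIME never looks inside the model: it only perturbs the inputs and observes the outputs, which is what makes it model-agnostic.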

 

Could you give a concrete example of the application of interpretability?

At Bleckwen, we apply techniques of both types interchangeably, depending on what we aim to explain. For example, for a customer’s credit request, we look for the reasons behind its score by combining agnostic and specific techniques at the local level. At another level, global interpretability makes it possible to understand the overall logic of a model and to check which variables are deemed important (for example, to ensure that an explanatory variable does not carry “too much” information, which is usually suspect…).
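
As a sketch of how local and global views can be combined, the example below applies SHAP’s TreeExplainer to a hypothetical credit-scoring model. The features (amount, income, past incidents) and the rule generating the labels are invented for illustration and are not Bleckwen’s actual variables.

```python
# A minimal sketch of local and global interpretation with SHAP on a
# hypothetical credit-scoring model; features, data and labels are invented.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "amount": rng.uniform(1_000, 50_000, 1_000),     # requested credit amount
    "income": rng.uniform(15_000, 120_000, 1_000),   # yearly income
    "past_incidents": rng.integers(0, 5, 1_000),     # prior payment incidents
})
# Invented ground truth: risk grows with the amount/income ratio and incidents.
y = ((X["amount"] / X["income"]) + 0.2 * X["past_incidents"] > 0.8).astype(int)
model = GradientBoostingClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # per-variable contributions (log-odds space)

# Local view: why was this particular application scored as risky?
print(dict(zip(X.columns, shap_values[0])))

# Global view: which variables matter most overall (mean absolute contribution)?
print(dict(zip(X.columns, np.abs(shap_values).mean(axis=0))))
```

The local view answers “why this decision?”, while the global view answers “what drives the model in general?” — the two questions described above.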

 

Why this need for transparency towards machine learning models?

AI is starting to be used to make critical, even vital, decisions: medical diagnosis, the fight against fraud or terrorism, autonomous vehicles, and so on. At Bleckwen, we apply AI to sensitive security topics to help our customers make decisions that are often difficult to take (e.g. granting a 40,000-euro credit or validating a 3,000,000-euro transfer to a risky country). The challenges of avoiding an error, understanding a decision, or helping an expert confirm a decision are all the more important.

Our systems interact with human beings who must ultimately remain in control of their decisions. They need to understand the reasoning our algorithms follow and to know which elements a decision was based on. Interpretability allows them to make an informed decision efficiently and reliably.

 

[Figure: 4 reasons why interpretability in AI is important]

How is interpretability becoming a societal and political issue?

Recent events show a growing concern about the use of personal data. The entry into force of the GDPR in May of this year is an important step for the data protection of European citizens. It also requires companies to be able to justify algorithmic decision-making. Techniques for understanding algorithms have therefore become critical. In the United States as well, many people are questioning the “fair use” of data by algorithms and looking to regulate their uses. This is what Cathy O’Neil, a renowned mathematician and data scientist, suggests on her blog (https://mathbabe.org) and in her book “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy”, inviting us to be careful about the trust we place in Big Data and algorithmic decisions.

The Cambridge Analytica case will certainly help reinforce this trend. Algorithmic decision-making is becoming a major societal, political and ethical subject.

The adoption of AI will not happen without transparency.

At Bleckwen, we have made it a major focus of our offer and our technological developments.

 


 

