Interpretability of machine learning models – Part 2

 

In the previous article (https://www.bleckwen.ai/2017/09/06/interpretable-machine-learning-in-fraud-prevention/), we explained why the interpretability of machine learning models is an important factor in the adoption of AI in industry, and more specifically in fraud detection.

In this article, we’re going to explain how LIME works. It’s an intuitive technique that we have tested at Bleckwen.

Before looking at LIME in detail, it is necessary to situate it among other existing techniques. In general, interpretability techniques are categorized along two axes:

  • Applicability: Model-specific versus Model-agnostic
  • Scope: Global versus Local

Model-specific versus Model-agnostic

There are two types of techniques:

  • Specific techniques: these techniques apply to a single type of model because they rely on the internal structure of the machine learning algorithm. Some examples of specific techniques are: DeepLIFT for deep learning models and TreeInterpreter for tree-based models like Random Forest, XGBoost, etc.
    • One of the biggest advantages of model-specific techniques is that they can generate more precise explanations, because they are directly tied to the model being interpreted.
    • However, the disadvantage is that the explanation process is tied to the algorithm used by the model, so switching to another model can become complicated.

 

  • Agnostic techniques: these techniques don’t take into account the internals of the model to which they apply; they only analyze the inputs it receives and the decisions it outputs. Examples of agnostic techniques are: LIME, SHAP and Influence Functions.
    • The main advantage of agnostic techniques is their flexibility. The data scientist is free to use any type of machine learning model because the explanation process is separate from the algorithm used for the model.
    • The disadvantage is that these techniques often rely on replacement models (surrogate models), which can significantly reduce the quality of the explanations provided.

 

Global versus Local

The underlying logic of a machine learning model can be explained on two levels:

Global explanation: it’s often useful to understand the model as a whole before focusing on a specific case (or group of cases). A global explanation provides an overview of the most influential variables in the model, based on the input data and the predicted variable. The most common method for obtaining a global explanation of a model is the computation of feature importances.
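As a rough illustration, here is a minimal sketch of how such a global explanation might be obtained from a tree-based model in scikit-learn; the data, variable names and model below are purely illustrative and not taken from any real system.

```python
# Minimal sketch: global explanation via feature importances (illustrative data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))                    # four illustrative input variables
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic target

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances are learned during training: a global view of the model,
# not the contribution of each variable to any single prediction.
for name, importance in zip(["var_1", "var_2", "var_3", "var_4"],
                            model.feature_importances_):
    print(f"{name}: {importance:.3f}")
```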

Local explanation: local explanations identify the specific variables that contributed to an individual decision, a requirement that is increasingly critical for applications using machine learning.

The most important variables in the global explanation of a model don’t necessarily correspond to the most important variables for a local prediction.

When trying to understand why a machine learning algorithm reaches a particular decision, especially when this decision has an impact on an individual with a “right to explanation” (as stated in the service provider obligations under the GDPR), local explanations are generally more relevant.

 

Case study for banks

Let’s take an illustrative case study to understand MLI (machine learning interpretability) techniques better:

BankCorp offers its customers a mobile application for obtaining a loan instantly. A loan application consists of four pieces of information: age, income, SPC (socio-professional category) and amount requested. To respond quickly to its customers, BankCorp uses a machine learning model that assigns a risk score (between 0 and 100) to each case in real time. Cases with a score greater than 50 require a manual review by the bank’s risk analysts. The image below illustrates how this model is used:

 

Scoring of credit applications with a black box machine learning model.
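As a rough sketch of the scoring flow described above (the function and field names below are hypothetical, not BankCorp’s actual code):

```python
# Hypothetical sketch of the real-time scoring flow: the black-box model assigns
# a risk score between 0 and 100, and scores above 50 are routed to a risk analyst.
REVIEW_THRESHOLD = 50

def route_application(application: dict, model) -> str:
    """Score a loan application with the black-box model and route it."""
    features = [[application["age"], application["income"],
                 application["spc"], application["amount"]]]
    score = model.predict(features)[0]
    return "manual review" if score > REVIEW_THRESHOLD else "automatic decision"
```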

 

A BankCorp risk analyst believes that the score of Case 3 is strangely high given the characteristics of the application and wants to obtain detailed reasons for the score. BankCorp’s data science team uses a complex black-box model, given the performance constraints of the business, and can’t provide an explanation for each case. However, the model does make it possible to extract a global explanation of its most important variables (figure below):

 

Global interpretation of the BankCorp black box model.

 

The global interpretation of the model provides insight into its logic through the level of importance of each variable. The importance of a variable is assigned by the model during the learning process (training), but it doesn’t indicate the absolute contribution of each factor to the final score. In our example, we can see that the requested amount is, as expected, the most important variable from the model’s point of view for calculating the score. The income and age variables are slightly less important, while the borrower’s SPC doesn’t seem to affect the score much.

Although this level of interpretation offers a first understanding of the model, it’s not sufficient to explain why Case 3 is rated twice as poorly as Case 1, when both request the same amount and have similar incomes and ages. To answer this question, we must use a local and agnostic method (since the model is a black box).

 

Understanding the decisions made by a machine learning model with LIME

LIME (Local Interpretable Model-agnostic Explanations) is an interpretation technique applicable to all types of models (agnostic) that provides an explanation at the individual level (local). It was created in 2016 by three researchers from the University of Washington and remains one of the best-known methods.

The idea of LIME is quite intuitive: instead of explaining the results of a complex model as a whole, LIME creates another model, simple and explainable, applicable only in the vicinity of the case to be explained. By vicinity we mean the cases close to the one we want to explain (in our example, Case 3). The mathematical hypothesis behind LIME is that this new model, also known as the “surrogate model” or replacement model, approximates the complex (black-box) model with good precision in a very limited region.

The only prerequisites for using LIME are to have the input data (cases) and to be able to query the black-box model as many times as necessary to obtain its scores. LIME then carries out a kind of “reverse engineering” to reconstruct the internal logic of the model around the specific case.

To do this, LIME creates new examples that are slightly different from the case you want to explain. This consists of changing the information in the original case, a little at a time, and presenting it to the original (black-box) model. The process is repeated a few thousand times, depending on the number of variables to be modified. It is known as “data perturbation” and the modified cases are called “perturbed data”.

In the end, LIME has built a database of “local” labelled data (i.e., case → score), where it knows what it has changed from one case to another and the decision issued by the black-box model.
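A simplified sketch of this perturbation step is shown below. It adds Gaussian noise around the case to explain, which is cruder than LIME’s actual sampling scheme, and the black_box object is a placeholder for the model being explained.

```python
# Simplified data perturbation: generate slightly modified copies of the case
# to explain, then label them with the black-box model (case -> score).
import numpy as np

def perturb_case(case, n_samples=5000, rng=None):
    rng = rng or np.random.default_rng(0)
    case = np.asarray(case, dtype=float)
    noise_scale = 0.1 * np.abs(case) + 1e-6          # perturb each variable a little
    return case + rng.normal(0.0, noise_scale, size=(n_samples, case.shape[0]))

# perturbed = perturb_case(case_3)                   # case_3: the case to explain
# scores = black_box.predict(perturbed)              # labelled "local" dataset
```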

 

Construction of the training database from the case to be explained by the data perturbation process.

 

From this database of cases similar to the one we want to explain, LIME creates a new machine learning model that is simpler but explainable. It’s this “replacement” model that LIME then uses to extract the explanations.
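A minimal sketch of this step, assuming the perturbed cases and black-box scores built above; the proximity weighting below is a simplified stand-in for the kernel used in LIME’s implementation.

```python
# Fit a simple, explainable surrogate model on the perturbed data, weighting each
# sample by its proximity to the case being explained.
import numpy as np
from sklearn.linear_model import Ridge

def fit_local_surrogate(perturbed, scores, case, kernel_width=1.0):
    distances = np.linalg.norm(perturbed - np.asarray(case, dtype=float), axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)   # closer cases count more
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, scores, sample_weight=weights)
    return surrogate.coef_        # per-variable contributions used as the explanation

# contributions = fit_local_surrogate(perturbed, scores, case_3)
```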

 

Creation of the replacement machine learning model by LIME

 

The figure below shows the explanation of the score provided by LIME for Case 3. The SPC and amount requested variables push the score up (+49 and +29 points respectively). On the other hand, the age and income variables reduce the risk score of the application (-6 and -2 points respectively). This level of interpretation highlights that, for this particular case, the SPC variable is very important, contrary to what one might expect from looking only at the global interpretation of the model.

The risk analyst would now be able to understand the particular reasons that led to this case having a poor score (here, an SPC equal to “craftspeople”). They could then compare this decision with their own experience to judge whether the model responds correctly to the bank’s credit-granting policy or whether it’s biased against a particular population.

 

Explanation of the score for Case 3

 

In its current version, LIME uses a linear regression (ridge regression) to build the replacement model. The explanations are therefore derived from the regression coefficients, which are directly interpretable. It should be noted that some of the concepts explained here differ slightly in LIME’s Python implementation; however, the presentation above captures the intuition of the technique as a whole. This video made by the author of the framework offers a little more detail on how LIME operates.

The official implementation of LIME is available in Python. Other frameworks also offer LIME in Python (eli5 and Skater), and a port to the R language is available as well.
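For reference, using the official Python package on tabular data typically looks like the sketch below; the training data, feature names and black_box.predict function are placeholders for your own data and model.

```python
# Sketch of the official LIME package on tabular data (placeholders for data/model).
from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train,                                    # training data used to sample perturbations
    feature_names=["age", "income", "spc", "amount"],
    mode="regression",                          # the model outputs a risk score
    discretize_continuous=True,                 # discretization choice affects stability
    kernel_width=None,                          # neighborhood size (default heuristic)
)

explanation = explainer.explain_instance(
    case_3,                                     # the individual case to explain
    black_box.predict,                          # the black-box scoring function
    num_features=4,
)
print(explanation.as_list())                    # per-variable contributions to the score
```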

 

Advantages and disadvantages of LIME

At Bleckwen, we have been able to test LIME with real data and in different case studies. Based on our experience, we can share the following advantages and disadvantages:

 

Advantages:

  • The use of LIME means that the data scientist doesn’t need to change the way they work or the models deployed to make them interpretable.
  • The official implementation supports structured (tabular), textual, and image (pixel) data.
  • The method is easy to understand and the implementation is well documented and open source.

 

Disadvantages:

  • Finding the right neighborhood (close cases): LIME has a parameter that controls the neighborhood radius, but its tuning is empirical and requires a trial-and-error approach.
  • Discretization of the variables: continuous variables can be discretized in several ways, and we found that the explanations were highly unstable depending on the setting used.
  • For rare targets, which are common in fraud detection, LIME gives rather unstable results because it’s difficult for the perturbed data to cover enough fraud cases.
  • Computation time: LIME is somewhat slow at computing explanations (a matter of seconds per case), which prevents us from using it in real time.

 

Conclusion

The interpretability of machine learning models is a booming field where much remains to be done. Over the past three years, a growing number of new approaches have appeared, and it’s important to be able to situate them along two main axes: their applicability (agnostic vs. specific methods) and their scope of interpretation (global vs. local).

In this article, we have introduced LIME, a local and agnostic technique created in 2016. LIME works by creating a local model from the inputs and outputs of the black-box model and then deriving the explanations from this replacement model, which is easier to interpret. This model is only applicable in a well-defined region around the case that one wants to explain.

Other techniques like SHAP and Influence Functions are also promising because they are based on strong mathematical theory and will be the subject of a future blog post.
