Fraud and interpretability of Machine Learning models – part 1

Interpretability: the missing link in Machine Learning adoption for fraud detection

Machine learning methods are increasingly used in anti-fraud products (developed by Bleckwen and other vendors) to capture weak signals and spot patterns in data that humans would otherwise miss.

While the relevance of these methods for fraud detection is widely recognized, they are still mistrusted in certain industries such as banking, insurance or healthcare because of their “black box” nature. The decisions made by a predictive model can be difficult for a business analyst to interpret, partly because of the complexity of the underlying calculations and the lack of transparency in the “recipe” used to produce the final output. It is therefore understandable that an analyst who has to make an important decision, for example granting a credit application or refusing the reimbursement of healthcare expenses, is reluctant to apply the model’s output automatically without understanding the underlying reasons.

The predictive power of a machine learning model and its interpretability have long been considered opposites. But that was before! Over the past two or three years, there has been renewed interest from researchers, industry and, more broadly, the data science community in making machine learning more transparent, or even turning it into a “white box”.

Advantages of Machine Learning for fraud detection

Fraud is a complex phenomenon to detect because fraudsters are always a step ahead and constantly adapt their techniques. Rare by definition, fraud comes in many forms (from the simple falsification of an identity card to very sophisticated social engineering techniques) and represents a potentially high financial and reputational risk (money laundering, terrorist financing…). On top of that, fraud is known to be “adversarial”: fraudsters constantly work to subvert the procedures and detection systems in place and exploit the slightest weakness.

Most anti-fraud systems currently in place are based on rules defined by humans, because the resulting decisions are relatively simple to understand and are considered transparent by the industry. At first, these systems are easy to set up and prove effective. However, they become very difficult to maintain as the number of rules grows: fraudsters adapt to the rules in place, the system requires additional or updated rules, and it becomes more and more complicated to maintain, as the sketch below suggests.
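As a purely illustrative sketch (the feature names, thresholds and trusted-country list are invented), here is what such a hand-written rule set tends to look like, with every new fraud pattern adding yet another rule:

```python
# Purely illustrative rule-based fraud check: every new fraud pattern observed in
# production tends to add another hand-tuned rule (all thresholds here are invented).
TRUSTED_COUNTRIES = {"FR", "DE", "ES"}  # illustrative whitelist

def is_suspicious(tx: dict) -> bool:
    if tx["amount"] > 10_000:                             # rule 1: unusually large transfer
        return True
    if tx["country"] not in TRUSTED_COUNTRIES:            # rule 2: unusual destination
        return True
    if tx["n_tx_last_hour"] > 5 and tx["amount"] > 500:   # rule 3: burst of payments
        return True
    # ...dozens more rules accumulate over time, each patching the previous ones,
    # until the whole system becomes hard to reason about and to maintain.
    return False

# Example: a transaction just below every threshold slips through unnoticed.
print(is_suspicious({"amount": 9_999, "country": "FR", "n_tx_last_hour": 5}))  # False
```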

One of the perverse effects is a steady degradation of the anti-fraud defense. The system ends up becoming either too intrusive (with rules capturing the specificities of the data) or, conversely, too broad. In both cases, it has a negative impact on good customers, because fraudsters know how to perfectly mimic the “average customer”. It is a well-known fact for risk managers: “The typical fraudster profile? My best customers!”

Tracking fraudsters is therefore a difficult task and often causes friction in the customer experience, which generates significant direct and indirect costs.

As a result, a detection system that is effective, minimally intrusive and able to detect the latest fraud techniques must address considerable challenges. Machine learning is proving to be an effective way to meet them.

Artificial Intelligence against fraud

Moreover, with the latest interpretability techniques, business analysts can be shown the reasons that led the machine learning algorithm to produce one output or another.

Interpretability: why is it important?

More broadly, machine learning is becoming ubiquitous in our lives, and the need to understand and collaborate with machines is growing. Yet machines rarely explain the results of their predictions, which can lead to a lack of confidence from end users and ultimately hinder the adoption of these methods.

Obviously, certain machine learning applications do not require explanations. In low-risk settings, such as music recommendation engines or the optimization of online advertisements, errors have no significant impact. In contrast, when deciding who will be hired, when a self-driving car brakes, or when deciding whether to release someone on bail, the lack of transparency in the decision raises legitimate concerns from users, regulators and, more broadly, society.

In her 2016 book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Cathy O’Neil, a renowned mathematician and data scientist, calls on society and politicians to be extremely vigilant about what she describes as “the era of blind faith in big data”. Among the flaws she denounces most strongly are the lack of transparency and the discriminatory nature of the algorithms that govern us. Techniques to understand the decisions made by a machine have become a societal issue.

Interpretability means that we are able to understand why an algorithm makes a particular decision. Even if there is no real consensus on its definition, an interpretable model can increase confidence, meet regulatory requirements (e.g. the GDPR and the CNIL, the French data protection authority), explain decisions to humans and help improve existing models.

The need for interpretable models is not shared by all leading researchers in the artificial intelligence field. Critics instead suggest a paradigm shift in how we model and interpret the world around us. For example, few people today worry about the inner workings of a computer processor, yet they trust the results displayed on screen. The topic is a source of debate even at machine learning conferences such as NIPS.

 

Conclusion

Fraud is a complex phenomenon to detect, and machine learning is a strong ally in fighting it effectively. Interpretability fosters its adoption by business analysts. The emergence of a new category of techniques over the last two years has made the interpretability of machine learning more accessible and directly applicable to AI products. With these techniques, we can now build models with very high predictive power without compromising our ability to explain their results to a human. In our next blog post, we will explain how techniques such as LIME, Influence Functions or SHAP are used with machine learning models to make decisions more transparent.
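As a small preview of that next post, here is a minimal sketch of what a per-prediction explanation can look like. It assumes the open-source shap Python package and a tree-based scikit-learn model; the features, data and thresholds are invented for illustration:

```python
# Minimal sketch of per-prediction explanations with SHAP (illustrative only:
# the features, data and model are invented; assumes the open-source "shap" package).
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Toy transaction data with a crude stand-in for a fraud label.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "amount": rng.lognormal(mean=3, sigma=1, size=1000),
    "hour_of_day": rng.integers(0, 24, size=1000),
    "n_tx_last_24h": rng.poisson(lam=3, size=1000),
})
y = (X["amount"] > X["amount"].quantile(0.95)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes the score of a single prediction to each feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

# Positive contributions push the score towards "fraud", negative ones towards "genuine".
for feature, contribution in zip(X.columns, shap_values[0]):
    print(f"{feature}: {contribution:+.3f}")
```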

 

Further reading:

Miller, Tim. 2017. “Explanation in Artificial Intelligence: Insights from the Social Sciences.”

The Business Case for Machine Learning Interpretability http://blog.fastforwardlabs.com/2017/08/02/business-interpretability.html

Is there a ‘right to explanation’ for machine learning in the GDPR? https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/

 
