Interpretability of machine learning models – Part 2

In the previous article (https://www.bleckwen.ai/2017/09/06/interpretable-machine-learning-in-fraud-prevention/), we explained why the interpretability of machine learning models is an important factor in the adoption of AI in industry, and more specifically in fraud detection. In this article, we explain how LIME works: an intuitive technique that we have tested at Bleckwen.
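To give a feel for the idea before diving in, here is a minimal sketch of LIME's core recipe: perturb samples around the instance to explain, weight them by proximity, and fit a weighted linear surrogate to the black-box predictions. The model, feature values, and kernel width below are illustrative assumptions, not from the article (the real `lime` library wraps these steps behind `LimeTabularExplainer`).

```python
import numpy as np

# Hypothetical black-box model: a logistic score over two features
# (this function and its coefficients are illustrative only).
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3.0 * X[:, 0] - 2.0 * X[:, 1])))

rng = np.random.default_rng(0)
x = np.array([0.5, -0.2])  # the instance we want to explain

# 1. Perturb: sample points in the neighbourhood of x.
Z = x + rng.normal(scale=0.5, size=(1000, 2))

# 2. Weight each sample by proximity to x (exponential kernel).
d2 = ((Z - x) ** 2).sum(axis=1)
w = np.exp(-d2 / 0.5)

# 3. Fit a weighted linear surrogate to the black-box outputs.
y = black_box(Z)
A = np.hstack([Z, np.ones((len(Z), 1))])          # add intercept column
W = np.diag(w)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted least squares

# coef[:2] are the local feature weights: they recover the signs of the
# black box's influence (positive for feature 0, negative for feature 1).
print(coef[:2])
```

The surrogate's coefficients are the "explanation": they describe how the black box behaves locally around `x`, which is exactly the trade-off LIME makes between faithfulness and simplicity.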