European AI Night: Why should AI be explainable?

Explainability: a game changer for AI in production

1_ Introduction

 

Interpretable, or explainable, Artificial Intelligence (“AI”) has become an important topic for software vendors and users working in the space. As AI has an increasing impact on day-to-day activities, trust, transparency, liability and auditability have become prerequisites for any project deployed at large scale.

A workshop was organized on this theme at the 2019 European AI Night in Paris. Four noteworthy French AI players were welcomed by France Digitale and Hub France IA to examine why they are now increasingly focused on Explainable AI (“XAI”): Bleckwen, D-Edge, Craft.ai and Thales.

Three AI use cases, already running in production today, were presented, demonstrating how explainable AI can be leveraged to build better, more efficient and more usable tools for corporate projects.

 

2_ Presentation

 

⌈  Interpretability is about communication: it’s mandatory to know the end users’ activities and processes in order to adapt the presentation of the results to their needs ⌋ 

Yannick Martel, Chief Strategist at Bleckwen

Created in 2016, Bleckwen is a French fintech leveraging Behavioural Analytics and Machine Learning to help banks and financial institutions fight fraud. So far, the adoption of Artificial Intelligence in a critical sector such as Financial Services has been limited. Yannick Martel believes that interpretability is a key success factor in AI adoption, as experts, customers and compliance officers alike need a better understanding of the results of algorithmic models to establish a trustful collaboration with technology-based solutions such as Bleckwen’s.

A key challenge is providing clients with the most relevant explanations: choosing, among all the mathematically correct explanations, those that match their reasoning and their workflows. This is a key strength of the Bleckwen platform, as Yannick outlined and illustrated during the discussion. Another challenge, as Yannick explained, is to make the platform’s decision making transparent, clearly highlighting the factors that led to each decision and thereby building understanding. In Bleckwen’s case, Yannick showed how this thinking guided the design process and how it fostered trust in the resulting detection processes.

 

⌈  Explanations are mandatory when AI empowers humans to perform complex tasks ⌋

Antoine Buhl, CTO @D-Edge

D-Edge offers SaaS solutions for hotels and hotel chains. 11,000 hotels in Europe and Asia use the D-Edge solution to optimize their distribution. D-Edge uses Artificial Intelligence alongside statistical models to improve room pricing and to predict reservation cancellations.

Choosing the right price for a room is very complicated and requires combining numerous elements (rooms already sold, competitors’ prices, nearby events, and so on), including external events that cannot be foreseen. Antoine Buhl took the example of the “Gilets Jaunes” crisis in France, which started at the end of 2018 and caused an unusual and significant rise in hotel cancellation rates. What looks like an “AI bug” can be analysed effectively if the AI lets the revenue manager know that it does not recognise those elements. D-Edge also faces another challenge: determining, even after the fact, whether a room price was optimal is nearly impossible in an ever-evolving environment.
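To make that idea concrete, here is a minimal, hypothetical sketch of an AI signalling that it “does not recognise” the current situation: it simply compares today’s context against the per-feature ranges seen during training and warns the revenue manager when something falls far outside them. The feature names and the range-based check are illustrative assumptions, not D-Edge’s actual mechanism.

```python
# Hypothetical sketch: flag inputs that fall far outside the training range,
# so the user knows the model is operating on unfamiliar ground.
import numpy as np

class RangeMonitor:
    """Remembers per-feature training ranges and flags unfamiliar inputs."""
    def fit(self, X):
        self.low = X.min(axis=0)
        self.high = X.max(axis=0)
        return self

    def unfamiliar_features(self, x, margin=0.10):
        span = self.high - self.low
        out_of_range = (x < self.low - margin * span) | (x > self.high + margin * span)
        return np.flatnonzero(out_of_range)

# Illustrative feature names and synthetic "training" data
features = ["rooms_sold", "competitor_price", "days_to_arrival", "cancellation_rate"]
X_train = np.random.default_rng(0).normal(size=(1000, 4))
monitor = RangeMonitor().fit(X_train)

x_today = np.array([0.2, 0.1, 0.3, 8.0])  # cancellation rate far beyond anything seen
for i in monitor.unfamiliar_features(x_today):
    print(f"Warning: '{features[i]}' is outside the range the model was trained on.")
```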

The D-Edge solution presents recommendations, but the final decision belongs to the Revenue Managers. To make the right choices in this complex, evolving environment, Revenue Managers need explanations of the suggestions. Adoption is key in this cooperation between humans and machines. D-Edge measures how Revenue Managers use the recommended prices, continuously quantifying adoption (both the quality of the suggestion and the quality of the explanation). Increasingly, they see Revenue Managers letting the AI adjust the price proposal autonomously, based on the explanations and other parameters.

 

⌈  Without interpretability, predictions have no value ⌋ 

Caroline Chopinaud, CCO @ Craft.ai

Craft.ai offers Explainable AI as a service to empower product and operational teams to develop and run XAI projects. Craft.ai processes data streams to automate business processes, enable predictive maintenance or boost user engagement. Caroline explained how Dalkia uses Craft.ai to improve the efficiency of its energy managers by providing them with detailed analyses and recommendations. Explainability is a prerequisite: without it, human specialists would need to re-investigate to understand the results, cancelling out the efficiency gain. That is only one illustration among others of why explainability is key for AI deployment, and that is why Craft.ai builds its own whitebox Machine Learning models!

 

⌈  When it comes to creating AI for critical systems, trustability and certifiability are mandatory ⌋

David Sadek, VP Research, Innovation & Technology @ Thales

David Sadek presented the difficulties Thales faces in creating AI for complex, critical systems: space, communications, avionics, defence…

A key issue is building trust between machines and the people who collaborate with them. It is critical to consider how explanations are conveyed: for instance, through a conversational interface able to dialogue in natural language and using explanatory variables that matter to the operators. Another important field where explainability is key is autonomous vehicle certification. While current algorithmic models’ computations remain opaque, being able to understand decisions will be critical to certify such systems: why an obstacle was detected, why an identified shape was not considered an obstacle, and so on. To this end, Thales is investigating hybrid solutions combining effective but unexplainable deep learning techniques with symbolic AI reasoning.
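As a rough illustration of that hybrid direction, the sketch below pairs a stand-in for an opaque perception model with explicit symbolic rules that produce both the decision and a human-readable justification. The labels, thresholds and rules are assumptions made for the example, not Thales’s architecture.

```python
# Hypothetical sketch: sub-symbolic detection + symbolic, auditable decision rules.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what a deep-learning perception module reports
    confidence: float   # its confidence score
    distance_m: float   # distance estimated by another sensor

def neural_detector(frame):
    """Stand-in for an unexplainable perception model."""
    return Detection(label="pedestrian", confidence=0.93, distance_m=12.0)

def symbolic_decision(det: Detection):
    """Explicit rules produce the decision and a human-readable reason."""
    if det.confidence < 0.5:
        return "ignore", f"confidence {det.confidence:.2f} below 0.5 threshold"
    if det.label == "pedestrian" and det.distance_m < 20.0:
        return "brake", f"pedestrian detected at {det.distance_m} m (< 20 m rule)"
    return "monitor", f"{det.label} detected but outside intervention rules"

action, reason = symbolic_decision(neural_detector(frame=None))
print(action, "-", reason)
```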

 

Explainable AI workshop at AI Night

3_ Roundtable

 

The workshop concluded with a discussion between the audience and the panellists on the key issues of interpretable AI.

The main issue raised concerned the nature of explanations: their fidelity and the trust people can place in them. The panellists pointed out that these two aspects are closely connected.

Yannick Martel explained that, because fraud is a complex phenomenon, especially in terms of the number of meaningful features to consider, Bleckwen chose a dual methodology: predictions based on non-explainable AI models, combined with local explanations based on surrogate models. This approach helps provide efficient insights to the users. While developing the AI, Bleckwen verified that the predictions did not miss genuine frauds and that the explanations made sense to business experts.
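As an illustration of this kind of dual set-up, the sketch below trains a black-box scorer and then fits a simple local surrogate around a single prediction to read off the locally dominant features. The model choices, synthetic data and neighbourhood sampling are assumptions made for the example, not Bleckwen’s actual implementation.

```python
# Hypothetical sketch: black-box scorer + local surrogate explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

# 1) A non-explainable scorer trained on synthetic "transaction" features.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = GradientBoostingClassifier().fit(X, y)

# 2) A local surrogate: sample points around one transaction, fit a simple
#    linear model to the black-box scores, and read the weights as the
#    locally dominant factors.
def local_explanation(model, x, scale=0.3, n=500, seed=0):
    rng = np.random.default_rng(seed)
    neighbours = x + rng.normal(0.0, scale, size=(n, x.shape[0]))
    scores = model.predict_proba(neighbours)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(neighbours, scores)
    return surrogate.coef_  # per-feature local influence on the fraud score

x0 = X[0]
weights = local_explanation(black_box, x0)
print("fraud score:", black_box.predict_proba([x0])[0, 1])
print("local feature influences:", np.round(weights, 3))
```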

Caroline Chopinaud described an explainable-by-design approach in which the same model is used for prediction and explanation – meaning there is no gap between the prediction and the insights provided to users. To be truly insightful, the algorithms have to work on business-meaningful features and combinations of features – not just any combination that “works” for the data scientists, but those that speak to business experts. This is why Craft.ai invests in natively interpretable machine learning algorithms. Evaluating whether an explanation is useful and understandable requires feedback from users – no quantitative assessment is currently provided.
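For contrast with the surrogate approach above, here is a minimal sketch of an explainable-by-design model: a small decision tree whose decision path over business-meaningful features serves as both the prediction and the explanation. The feature names and data are hypothetical, not Craft.ai’s models.

```python
# Hypothetical sketch: the same decision tree both predicts and explains.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["outside_temp", "occupancy", "hour_of_day", "boiler_load"]
X, y = make_classification(n_samples=500, n_features=4, random_state=1)

tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, y)

# The prediction and its explanation come from the same structure:
print(export_text(tree, feature_names=feature_names))  # human-readable rules
print("prediction for first sample:", tree.predict(X[:1])[0])
```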

Antoine Buhl explained that D-Edge uses a comparable explainable-by-design approach, relying on several AI models. Since validating a recommended price is complex, D-Edge focuses its KPI on the trust Revenue Managers place in the suggestions and the explanations, by tracking how often Revenue Managers approve the suggested prices as-is.
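A KPI of that kind can be as simple as the share of recommendations approved unchanged; the sketch below computes such an acceptance rate over a pricing history. The field names and tolerance parameter are assumptions for illustration, not D-Edge’s actual metric.

```python
# Hypothetical sketch: adoption KPI = share of recommendations kept as-is.
from dataclasses import dataclass

@dataclass
class PricingDecision:
    recommended_price: float
    final_price: float

def acceptance_rate(decisions, tolerance=0.0):
    """Fraction of recommendations the Revenue Manager kept unchanged."""
    accepted = sum(
        abs(d.final_price - d.recommended_price) <= tolerance for d in decisions
    )
    return accepted / len(decisions) if decisions else 0.0

history = [
    PricingDecision(120.0, 120.0),
    PricingDecision(95.0, 99.0),
    PricingDecision(150.0, 150.0),
]
print(f"acceptance rate: {acceptance_rate(history):.0%}")  # -> 67%
```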

David Sadek closed the discussion by raising the question of ethics in AI. For him, AI ought to be evaluated along three dimensions: accuracy, interpretability and morality. For a long time, most AI players have focused on the first. The other two become critical when putting AI in production, especially in complex systems. Explainability is mandatory to control and audit the ethics of an AI model, helping to spot bias for instance, yet it is not enough to guarantee ethical behaviour.

 

4_ Key Takeaways

 

Explainable AI may have arrived in the spotlight only recently, but for certain players in the field it has been key for quite a while. It is no coincidence that these actors are the ones running AI projects in production that affect key parts of their organizations.

Explainability is not just another feature of those AI projects,

it is a critical factor in the decision to go live!

 

> Want to know more about explainability, a key success factor in fighting financial crime?

Contact our experts: contact@bleckwen.ai
