This past November 2021, I was invited by Hub France IA to an event organized by BPI with Martin Ulbricht from DG CONNECT, European Commission.
It was a great opportunity to discuss the proposed harmonized rules on AI with him and with other companies.
In the coming months, this proposal will have a huge impact on the industry. In this post, I’ll share my views on the following questions:
- Who is impacted?
- Do we need to add controls on technology usage?
- Will it apply to technologies created outside of Europe?
- What will be the cost for companies to comply with this new regulation?
The proposal is built on a risk-based approach, which sorts AI systems into risk classes.
The unacceptable-risk class covers “AI systems considered a clear threat to the safety, livelihoods and rights of people”, which “will be banned, from social scoring by governments to toys using voice assistance that encourages dangerous behaviour”.
High-risk systems include AI used in “essential private and public services (e.g., credit scoring denying citizens opportunity to obtain a loan)”.
This classification directly affects the financial industry, including banks, but also software vendors and service providers like Bleckwen.
Since so many actors will be impacted, we all need to understand the consequences of such a regulation.
In the meantime, the EU must discuss and consult with vendors like us in order to design an applicable regulation.
The question here concerns the use of the technology or service provided.
So, who must ensure that the technology complies with the regulation?
In my opinion, that responsibility should fall on the consumer of the technology.
Of course, the technology consumer may require tech providers to deliver features that help monitor the system for compliance.
So, the answer is yes. We need to find a proportionate level of control and equip the end user with the right tooling to demonstrate that they are respecting the regulation.
It’s a no-brainer: yes, it must!
If the technology provider is outside the EU and the technology consumer has deferred responsibility to the provider, how can we ensure that the provider complies with the harmonized rules?
Ultimately, this is another reason why compliance with the AI regulation must remain the responsibility of the technology consumer.
Otherwise, a vendor based in another geography could sell an AI-powered service beyond EU control, while the end user argues that compliance is the provider’s responsibility.
Various organizations, including the European Commission’s Directorate-General for Communications Networks, Content and Technology (DG CONNECT), have produced estimates, and to be honest, it’s hard to say: estimates range from €5k to €250k.
For companies, and especially for young startups that rely heavily on AI in their systems, the cost will be significant, and it may inhibit EU innovation and the EU startup ecosystem.
This regulation will certainly have a significant impact on the financial industry due to the prevalence of AI already in production.
It is important to prepare now for this regulation.
Elements that need to be addressed while deploying your AI project include: data governance, traceability, auditability, documentation, transparency, human control and cybersecurity robustness.
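To make two of these requirements concrete, here is a minimal sketch of what traceability and auditability could look like in code. All names here are hypothetical, not Bleckwen’s actual implementation: each model decision is recorded with its inputs, model version, and timestamp, and each record is hash-chained to the previous one so that tampering with the trail is detectable.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_prediction(model_version, features, score, decision, audit_trail):
    """Append one audit record for a model decision to a hash-chained trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model produced the score
        "features": features,            # inputs used for the decision
        "score": score,                  # raw model output
        "decision": decision,            # final outcome (may reflect human override)
    }
    # Chain each record to the previous one: the hash covers the prior
    # record's hash plus this record's content, so edits are detectable.
    prev_hash = audit_trail[-1]["hash"] if audit_trail else ""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    record["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    audit_trail.append(record)
    return record

# Hypothetical usage with two scored transactions:
trail = []
log_prediction("fraud-model-1.2", {"amount": 420.0, "country": "FR"}, 0.87, "flagged", trail)
log_prediction("fraud-model-1.2", {"amount": 12.0, "country": "DE"}, 0.03, "approved", trail)
```

This is only an illustration of the principle; a production system would also need access control, durable storage, and documentation of the model itself.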
At Bleckwen, we haven’t waited for this EU regulation project to address these topics as they were already important to us.
Do you want to discuss your credit fraud detection project? Book a demo today.