European Union lawmakers have presented their risk-based proposal for regulating high risk applications of artificial intelligence within the bloc’s single market.
The plan includes prohibitions on a small number of use-cases judged to pose an unacceptable risk to people’s safety or to EU citizens’ fundamental rights, such as a China-style social credit scoring system or certain types of AI-enabled mass surveillance.
Most uses of AI won’t face any regulation (let alone a ban) under the proposal but a subset of so-called “high risk” uses will be subject to specific regulatory requirements, both ex ante and ex post.
There are also transparency requirements for certain use-cases — such as chatbots and deepfakes — where EU lawmakers believe that potential risk can be mitigated by informing users that they are interacting with something artificial.
The overarching goal for EU lawmakers is to foster public trust in how AI is implemented to help boost uptake of the technology. Senior Commission officials talk about wanting to develop an excellence ecosystem that’s aligned with European values.
“Today, we aim to make Europe world-class in the development of a secure, trustworthy and human-centered Artificial Intelligence, and the use of it,” said Commission EVP, Margrethe Vestager, announcing adoption of the proposal at a press conference.
“On the one hand, our regulation addresses the human and societal risks associated with specific uses of AI. This is to create trust. On the other hand, our coordinated plan outlines the necessary steps that Member States should take to boost investments and innovation. To guarantee excellence. All this, to ensure that we strengthen the uptake of AI across Europe.”
Under the proposal, mandatory requirements are attached to a “high risk” category of applications of AI — meaning those that present a clear safety risk or threaten to impinge on EU fundamental rights (such as the right to non-discrimination).
Examples of high risk AI use-cases that will be subject to the highest level of regulation on use are set out in Annex III of the regulation — which the Commission said it will have the power to expand via delegated acts, as use-cases of AI continue to develop and risks evolve.
For now, the cited high risk examples fall into the following categories:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private services and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
Military uses of AI are specifically excluded from scope as the regulation is focused on the bloc’s internal market.
The makers of high risk applications will have a set of ex ante obligations to comply with before bringing their product to market, including around the quality of the data-sets used to train their AIs and a level of human oversight over not just design but use of the system — as well as ongoing, ex post requirements, in the form of post-market surveillance.
Other requirements include a need to create records of the AI system to enable compliance checks and also to provide relevant information to users. The robustness, accuracy and security of the AI system will also be subject to regulation.
Commission officials suggested the vast majority of applications of AI will fall outside this highly regulated category. Makers of those ‘low risk’ AI systems will merely be encouraged to adopt (non-legally binding) codes of conduct on use.
Penalties for infringing the rules on banned AI use-cases have been set at up to 6% of global annual turnover or €30 million, whichever is greater, while violations of the rules related to high risk applications can scale up to 4% of global annual turnover (or €20 million).
Enforcement will involve multiple agencies in each EU Member State — with the proposal intending oversight be carried out by existing (relevant) agencies, such as product safety bodies and data protection agencies.
That raises immediate questions over adequate resourcing of national bodies, given the additional work and technical complexity they will face in policing the AI rules; and also how enforcement bottlenecks will be avoided in certain Member States. (Notably, the EU’s General Data Protection Regulation is also overseen at the Member State level and has suffered from lack of uniformly vigorous enforcement.)
There will also be an EU-wide database set up to create a register of high risk systems implemented in the bloc (which will be managed by the Commission).
A new body, called the European Artificial Intelligence Board (EAIB), will also be set up to support consistent application of the regulation — mirroring the European Data Protection Board, which offers guidance on applying the GDPR.
In step with rules on certain uses of AI, the plan includes measures to co-ordinate EU Member State support for AI development — such as by establishing regulatory sandboxes to help startups and SMEs develop and test AI-fuelled innovations — and via the prospect of targeted EU funding to support AI developers.
Internal market commissioner Thierry Breton said investment is a crucial piece of the plan.
“Under our Digital Europe and Horizon Europe programmes we are going to free up a billion euros per year. And on top of that we want to generate private investment and a collective EU-wide investment of €20 billion per year over the coming decade — the ‘digital decade’ as we have called it,” he said. “We also want to have €140 billion which will finance digital investments under Next Generation EU [COVID-19 recovery fund] — and going into AI in part.”
Shaping rules for AI has been a key priority for European Commission president Ursula von der Leyen, who took up her post at the end of 2019. A white paper was published last year, following a 2018 AI for EU strategy — and Vestager said that today’s proposal is the culmination of three years’ work.
Breton added that providing guidance for businesses to apply AI will give them legal certainty and Europe an edge. “Trust… we think is vitally important to allow the development we want of artificial intelligence,” he said. “[Applications of AI] need to be trustworthy, safe, non-discriminatory — that is absolutely crucial — but of course we also need to be able to understand how exactly these applications will work.”
“What we need is to have guidance. Especially in a new technology… We are, we will be, the first continent where we will give guidelines — we’ll say ‘hey, this is green, this is dark green, this is maybe a little bit orange and this is forbidden’. So now if you want to use artificial intelligence applications, go to Europe! You will know what to do, you will know how to do it, you will have partners who understand pretty well and, by the way, you will come also in the continent where you will have the largest amount of industrial data created on the planet for the next ten years.
“So come here — because artificial intelligence is about data — we’ll give you the guidelines. We will also have the tools to do it and the infrastructure.”
In the event, the final proposal does treat remote biometric surveillance as a particularly high risk application of AI, and there is a prohibition in principle on law enforcement use of the technology in public places.
However, use is not completely proscribed: there are a number of exceptions under which law enforcement would still be able to deploy it, subject to a valid legal basis and appropriate oversight.
Today’s proposal kicks off the EU’s co-legislative process, with the European Parliament and the Member States (via the EU Council) set to have their say on the draft — meaning a lot could change before agreement is reached on a final pan-EU regulation.
Commissioners declined to give a timeframe for when the legislation might be adopted, saying only that they hoped the other EU institutions would engage immediately and that the process could conclude as soon as possible. It could, nonetheless, be several years before the AI regulation is adopted and in force.