As governments around the world consider how to regulate AI, the European Union is planning first-of-its-kind legislation that would put strict limits on the technology. On Wednesday, the European Commission, the body’s executive branch, detailed a regulatory approach that calls for a four-tier system that groups AI software into separate risk categories and applies an appropriate level of regulation to each.
At the top would be systems that pose an “unacceptable” risk to people’s rights and safety. The EU would outright ban these types of algorithms under the Commission’s proposed legislation. An example of software that would fall under this category is any AI that would allow governments and companies to implement social scoring systems.
Below that is a category for so-called high-risk AIs. This section is the most expansive, both in the variety of software it covers and in the limits it proposes. The Commission says these systems will be subject to strict regulation touching on everything from the datasets used to train them to what constitutes an appropriate level of human oversight and how they relay information to the end user, among other things. The category includes law enforcement-related AIs and all forms of remote biometric identification. Police would not be allowed to use the latter in public spaces, though the EU would carve out some exceptions for national security concerns and the like.
Then there’s a category for limited-risk AIs like chatbots. The legislation would require these programs to disclose that you’re talking to an AI, so you can make an informed decision about whether to continue using them. Lastly, there’s a section for programs that pose a minimal risk to people. The Commission says the “vast” majority of AI systems will fall under this category, which includes things like spam filters. Here, the body doesn’t plan to impose any regulation.
“AI is a means, not an end,” Internal Market Commissioner Thierry Breton said in a statement. “Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”
The legislation, which the EU will likely take years to debate and implement, could see companies face fines of up to six percent of their global sales for breaking the rules. With GDPR, the EU already has some of the most stringent data privacy policies in the world, and it’s considering similar measures when it comes to content moderation and antitrust law.