On Wednesday, the European Union took a significant step toward creating the world’s first laws governing businesses’ use of artificial intelligence.
With this ambitious effort, Brussels seeks to pave the way for international standards for a technology used in everything from surgery to bank fraud detection to chatbots like OpenAI’s ChatGPT.
Brando Benifei, a member of the European Parliament working on the EU AI Act, told journalists, “We have achieved history today.”
Lawmakers have approved a draft of the Act; it will now go through negotiations with the Council of the European Union and EU member states before becoming law. While Big Tech firms are raising concerns about their own innovations, Benifei continued, “Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.”
Numerous well-known figures, including Microsoft President Brad Smith and OpenAI CEO Sam Altman, have called for more regulation after hundreds of leading AI scientists and academics warned last month that the technology poses an extinction risk to humanity.
At the Yale CEO Summit this week, more than 40% of business executives polled, including Coca-Cola (KO) CEO James Quincey and Walmart CEO Doug McMillon, said AI could destroy humanity within five to ten years.
To address these risks, the EU AI Act aims to “promote the uptake of human-centric and trustworthy artificial intelligence and to ensure a high level of protection of health, safety, fundamental rights, democracy, and rule of law, as well as the environment from harmful effects.”
Here are the key takeaways.
Low-Risk, High-Risk, and Forbidden
Once adopted, the Act will apply to any entity, including those based outside the EU, that develops or deploys AI systems within the EU.
The level of regulation ranges from minimal to “unacceptable,” depending on the risks posed by a given application.
Systems in the latter category are banned outright. These include social scoring systems, such as those used in China, which score people based on their behavior; real-time facial recognition systems in public places; and predictive policing tools.
According to Racheal Muldoon, a barrister (litigator) at the London law firm Maitland Chambers, most AI systems would likely fall into the high-risk or restricted categories, leaving their operators exposed to potentially hefty fines if they violate the rules.
A fine of up to €40 million ($43 million), or up to 7% of a company’s worldwide annual turnover, whichever is higher, could be imposed for prohibited AI practices.
At the same time, sanctions would be “proportionate” and take into account small-scale providers’ market position, indicating there might be some leniency for start-ups.
The Act also mandates that each EU member state create at least one regulatory “sandbox” where AI technologies can be tested prior to deployment.
Dragoș Tudorache, a member of the European Parliament, told reporters, “We wanted to strike a balance with this text.” The Act safeguards individuals while also “promoting innovation, not hindering creativity, and deployment and development of AI in Europe,” he continued.
Microsoft (MSFT), which, along with Google, is at the forefront of AI development globally, welcomed the Act’s progress but said it looked forward to “further refinement.”
“We believe that AI requires legislative guardrails, alignment efforts at the international level, and meaningful voluntary actions by companies that develop and deploy AI,” a Microsoft representative said in a statement.