In a landmark move, the European Parliament overwhelmingly voted on Wednesday to adopt the world’s first comprehensive set of rules aimed at governing the rapidly developing field of artificial intelligence.
The AI Act, as it is known, establishes a risk-based framework to regulate the use of AI systems across the 27-nation bloc of the European Union.
With 523 votes in favor, 46 against, and 49 abstentions, the Parliament gave its final seal of approval to the long-awaited legislation. This follows years of negotiations and a provisional political agreement reached in December between EU co-legislators.
“Europe is NOW a global standard-setter in AI,” proclaimed Thierry Breton, the European Commissioner for the Internal Market.
Parliament has approved the Artificial Intelligence Act that aims to ensure safety and compliance with fundamental rights, while boosting innovation.
— European Parliament (@Europarl_EN) March 13, 2024
At the same time, Dragos Tudorache, a Romanian member of the European Parliament, said: “The AI Act has pushed the future of AI toward a more human-focused path, where people are in control of the technology, and where the technology aids us in making new discoveries, boosting economic growth, advancing society, and unlocking human potential.”
The objective of the AI Act is to establish guardrails and create an ecosystem of trustworthy artificial intelligence that safeguards fundamental rights and facilitates innovation. It takes a risk-based approach, categorizing AI systems from unacceptable risk, which is banned outright, through high-risk applications facing strict requirements, down to low-risk uses with limited oversight.
“The adoption of the AI Act marks the beginning of a new AI era and its importance cannot be overstated,” said Enza Iannopollo, an analyst at Forrester. “The EU AI Act is the world’s first and only set of binding requirements to mitigate AI risks,” she added.
What Will the AI Act Do?
Certain potential uses of AI are deemed so risky and unethical that the AI Act prohibits them outright. These include AI systems used for indiscriminate surveillance, the exploitation of vulnerable groups such as children, and pseudoscientific practices such as phrenology.
Social scoring systems that judge people’s trustworthiness based on their social behavior or personal characteristics also make the prohibited list. The same applies to real-time remote biometric identification systems used in public spaces for law enforcement, though exceptions apply for cases such as preventing terrorist attacks or finding missing children.
At the other end of the spectrum, AI applications considered high-risk must comply with strict requirements before being marketed or put into use. These include automated systems deployed in areas like education, hiring, healthcare, transportation, law enforcement, and migration management, as well as critical infrastructure.
Developers of high-risk AI systems must conduct rigorous risk assessment and mitigation testing, implement robust human oversight measures, ensure their training data meets quality and governance requirements, provide clear usage instructions, and more. Their systems must be designed with security and privacy in mind from the ground up.
Comprehensive Regulation for Generative AI
A key focus of the AI Act is on regulating the booming generative AI industry, which has captured global attention with tools like ChatGPT being able to produce human-like text, images, audio, and video content from simple text prompts.
The law’s co-rapporteur, Dragos Tudorache, said that the provisions governing generative AI models were among the most heavily lobbied portions of the legislation, with companies advocating against transparency requirements around their training data and model construction.
Under the newly adopted rules, providers of general-purpose AI models must disclose the copyrighted material used for training and comply with EU copyright law. Models trained with enormous computational power, above 10^25 floating-point operations, are presumed to pose systemic risk and will face stricter risk assessment and mitigation requirements.
Some provisions also require companies to flag AI-generated content, such as deepfakes of real people and events; this content must be clearly labeled as artificially produced.
“… we promoted the idea of transparency, particularly for copyrighted material, because we thought it is the only way to give effect to the rights of authors out there,” Tudorache added.
When Will the AI Act Be Enforced?
While violations of prohibited AI practices could draw fines up to €35 million or 7% of a company’s global annual revenue (whichever is higher), enforcement will be a crucial next step.
The AI Act has been fully adopted! Now on to the important part – implementation.
— Dragoș Tudorache (@IoanDragosT) March 13, 2024
Each EU member state will establish a national AI watchdog to receive complaints from citizens and penalize companies that breach these rules. The European Commission will also create an AI Office to enforce the rules for general-purpose AI models such as those behind ChatGPT.
The law is expected to officially enter into force 20 days after being published in the EU’s Official Journal sometime in May or June, following procedural steps like translation into all EU languages.
However, its full implementation will not occur immediately as enforcing many of these provisions will require the establishment and funding of new institutions, hiring key personnel, and drafting policies and procedures.
The first subset of provisions, banning unacceptable-risk AI systems, will apply six months after the law takes effect. Rules targeting general-purpose AI models like ChatGPT will kick in 12 months after entry into force, while requirements for high-risk systems follow two years after that point. By mid-2027, the entire AI Act should be fully implemented across the EU.
AI Regulatory Efforts Have Only Just Begun
While a pioneering step, the AI Act is widely seen as only the start of a longer effort to build governance around artificial intelligence as the technology evolves rapidly. Lawmakers themselves acknowledged that the bill is the first leg of a long journey to regulate a groundbreaking technology that will reshape society.
Brando Benifei, another co-rapporteur of the law, emphasized the need for additional legislation governing AI’s deployment, including directives for the workplace. He also stressed the importance of increased EU investment in AI research and computing power to foster innovation and competitiveness.
There are also calls for greater international cooperation and interoperability on AI governance between like-minded democratic nations.
“We still have a duty to try to be as interoperable as possible — to be open to build a governance with as many democracies, with as many like-minded partners out there,” Tudorache urged. “… the technology is one, irrespective of which quarter of the world you might be in. Therefore, we have to invest in joining up this governance in a framework that makes sense.”
It Took Three Years for Legislators to Approve the AI Act Despite Intense Lobbying Efforts
The European Commission first proposed the Artificial Intelligence Act in April 2021 as part of its broader approach to foster the development of safe and trustworthy AI. Nearly three years of intense negotiations and lobbying followed as lawmakers shaped and amended what would become the world’s first comprehensive rulebook for AI systems.
The process revealed tensions between promoting innovation and competitiveness in this fast-moving field and upholding the bloc’s core values around human rights, privacy, transparency, and ethics.
Some European AI startups and Big Tech firms like Microsoft (MSFT) and Alphabet (GOOG) heavily lobbied against stringent requirements that they feared could hamper the development and commercialization of generative AI.
At the same time, digital rights groups and civil society organizations pushed for stronger safeguards around biometric surveillance, social scoring systems, and protecting copyrighted works.
The final compromise aimed to balance these interests with a tiered, risk-based framework that concentrates oversight on the highest-risk AI applications while leaving ample room for innovation in lower-risk areas. Industry lobbying nevertheless left its mark: France pushed at the last minute for a carveout for general-purpose AI models, seeking to shield domestic champion Mistral AI.
The landmark AI Act has been years in the making, tapping input from experts across AI companies, civil society groups, academics, and others. It puts the EU at the forefront of a race that seeks to establish global governance standards for AI as the technology rapidly permeates more spheres of everyday life, from education to transportation to healthcare and beyond.
While challenges remain regarding the implementation and international coordination of the law, the European Parliament’s decisive vote marks a new era in which pioneering rules are in place to ensure artificial intelligence develops in line with democratic principles and core human values. As the technology charges ahead, the AI Act gives the EU an influential voice in shaping its trajectory on a global scale.