STRASBOURG: The European Parliament on Wednesday gave final approval to the world’s most far-reaching rules to govern artificial intelligence (AI), including powerful systems like OpenAI’s ChatGPT.
Known as the AI Act, the legislation focuses primarily on regulating higher-risk uses of AI by both the private and public sectors. It imposes stricter obligations on AI providers, tightens transparency requirements for advanced models like ChatGPT, and bans outright those tools deemed excessively hazardous.
Senior officials within the European Union (EU) assert that these rules, initially proposed in 2021, aim to safeguard citizens from the risks associated with rapidly advancing AI technology, while simultaneously fostering innovation across the continent.
European Commission President Ursula von der Leyen lauded the approval of the legislation, describing it as a “pioneering framework for innovative AI, with clear guardrails.” She said the framework would not only benefit Europe’s wealth of talent but also serve as a blueprint for trustworthy AI worldwide.
The AI Act garnered support from 523 EU lawmakers, with 46 voting against it. The legislation is now expected to be endorsed by the EU’s 27 member states in April before its publication in the bloc’s Official Journal, likely in May or June.
Regulation of Artificial Intelligence Models
The urgency to pass these regulations was heightened following the emergence of OpenAI’s ChatGPT in late 2022, which sparked a global AI race. While AI technologies like ChatGPT demonstrated remarkable capabilities, concerns quickly arose regarding potential threats, including the proliferation of AI-generated deepfakes and disinformation campaigns.
The AI Act adopts a risk-based approach, with tougher requirements imposed on systems deemed higher-risk. Violations of these regulations could result in fines ranging from €7.5 million to €35 million ($8.2 million to $38.2 million), depending on the severity of the infringement and the size of the company.
Notably, the legislation includes strict bans on using AI for predictive policing and prohibits the use of biometric information to infer an individual’s race, religion, or sexual orientation. Real-time facial recognition in public spaces is also banned, except under certain circumstances for law enforcement purposes, subject to approval from a judicial authority.