The EU has reached an agreement on the AI Act, which is likely to come into effect in 2024 or 2025 as scheduled. 

This landmark decision resulted from an intense 37-hour negotiation session involving the European Parliament and EU member states.

Thierry Breton, the European Commissioner primarily responsible for this new suite of laws, described the agreement as “historic.” The talks had been in progress since Tuesday, with negotiators working through the night on some days. 

Carme Artigas, Spain’s Secretary of State for AI, was crucial in steering these negotiations to a successful conclusion.

Artigas noted the significant backing the text received from major European countries, specifically stating, “France and Germany supported the text.” This is notable, as France and Germany, keen to encourage their own growing AI industries, had questioned some of the stricter elements of the law. 

The EU is now set to lead the way in AI regulation. While the specific contents and implications of the new laws are still emerging, they will likely take effect in 2024/2025. 

Key agreements and points of the EU AI Act

The provisional agreement on the AI Act represents a historic step in regulating AI. It follows in the footsteps of other EU technology regulations, such as GDPR, which has subjected tech firms to billions in fines over the years. 

Carme Artigas, Spanish secretary of state for digitalization and artificial intelligence, said of the law, “This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.”

Here are the core points of the agreed legislation:

  • High-impact and high-risk AI systems: The agreement introduces rules on general-purpose AI models that might pose ‘systemic’ risks. Precisely what this means remains ambiguous, but it is broadly designed to cover new generations of models at GPT-4 level and beyond. 
  • Governance and enforcement: A revised governance system was established, including some enforcement powers at the EU level. This ensures a centralized approach to regulating AI across member states.
  • Prohibitions and law enforcement exceptions: The agreement extends the list of prohibited AI practices but allows for the use of remote biometric identification by law enforcement in public spaces under strict conditions. This aims to balance public safety with privacy and civil liberties.
  • Rights protection: A key aspect of the agreement is the obligation for deployers of high-risk AI systems to assess impacts on people’s rights before using an AI system. “Deployers” is the keyword here, as the Act assigns responsibilities throughout the AI value chain.

Other agreed areas include:

  • Definitions and scope: The definition of AI systems has been aligned with the OECD’s definition, and the regulation excludes AI systems used exclusively for military, defense, research, and innovation purposes or by individuals for non-professional reasons.
  • Classification of AI systems: AI systems will be classified based on risk, with high-risk systems subject to stringent requirements and systems with limited risk subject to lighter transparency obligations. This has been the intention all along. (A minimal sketch of this tiered approach appears after this list.)
  • Foundation models: The agreement addresses foundation models, large AI systems capable of performing a variety of tasks, like ChatGPT/Bard/Claude 2. Specific transparency obligations are set for these models, with stricter regulations for high-impact foundation models.
  • High-risk AI systems requirements: High-risk AI systems will be allowed on the EU market but must comply with specific requirements, including data quality and technical documentation, with particular attention to the burden on SMEs.
  • Responsibilities in AI value chains: The agreement clarifies the roles and responsibilities of the various actors in AI system value chains, including providers and users, and how these relate to existing EU laws.
  • Prohibited AI practices: The Act bans certain unacceptable AI practices, such as cognitive behavioral manipulation, untargeted scraping of facial images, and emotion recognition in workplaces and educational institutions.
  • Emergency procedures for law enforcement: An emergency procedure allows law enforcement agencies to deploy high-risk AI tools that have not passed the conformity assessment in urgent situations.
  • Real-time biometric identification: Law enforcement’s use of real-time remote biometric identification systems in public spaces is permitted under strict conditions for specific purposes, such as preventing terrorist attacks or searching for suspects of serious crimes.
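
To make the tiered approach concrete, below is a minimal sketch in Python of how risk-based classification maps to obligations. The tier names follow the Act’s broad categories, but the obligation lists are simplified illustrations for this sketch, not the legal text.

    # Simplified sketch of the AI Act's risk tiers (illustrative, not the legal text).
    RISK_TIERS = {
        "unacceptable": None,  # prohibited practices: banned from the EU market
        "high": ["conformity assessment", "data quality controls",
                 "technical documentation", "fundamental-rights impact assessment"],
        "limited": ["transparency obligations (e.g., disclosing AI interaction)"],
        "minimal": [],
    }

    def obligations_for(tier: str) -> list[str]:
        """Return the (simplified) obligations attached to a risk tier."""
        obligations = RISK_TIERS[tier]
        if obligations is None:
            raise ValueError(f"{tier!r}-risk practices are prohibited outright")
        return obligations

    print(obligations_for("high"))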

Governance, penalties, and enforcement:

  • Governance: An AI Office within the Commission will oversee advanced AI models, supported by a scientific panel of independent experts and an AI Board comprising member states’ representatives.
  • Penalties: The agreement sets fines based on a percentage of the company’s global annual turnover for various violations, with more proportionate caps for SMEs and start-ups.
  • Fines: Penalties for non-compliance are based on the severity of the violation. Fines are calculated either as a percentage of the company’s global annual turnover from the previous financial year or as a fixed amount, whichever is higher. They are structured as follows: €35 million or 7% of turnover for violations involving banned AI applications, €15 million or 3% for breaches of the Act’s obligations, and €7.5 million or 1.5% for supplying incorrect information. (A worked example of this calculation appears after this list.)
  • Support: The agreement includes AI regulatory sandboxes to test innovative AI systems under real-world conditions and provides support to smaller firms.
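
As a worked example of the penalty structure above, the sketch below computes the applicable fine as the higher of the fixed amount and the turnover percentage for each violation category. The category names are shorthand, and the more proportionate caps for SMEs and start-ups are not modelled here.

    # Fine = the higher of a fixed amount and a share of global annual turnover.
    FINE_SCHEDULE = {
        "banned_ai_application": (35_000_000, 0.07),   # EUR 35M or 7% of turnover
        "breach_of_obligations": (15_000_000, 0.03),   # EUR 15M or 3%
        "incorrect_information": (7_500_000, 0.015),   # EUR 7.5M or 1.5%
    }

    def applicable_fine(violation: str, global_turnover_eur: float) -> float:
        """Return the higher of the fixed cap and the turnover-based fine."""
        fixed_amount, turnover_share = FINE_SCHEDULE[violation]
        return max(fixed_amount, turnover_share * global_turnover_eur)

    # A firm with EUR 2bn turnover using a banned application:
    # 7% of 2bn = EUR 140M, which exceeds the EUR 35M floor.
    print(f"EUR {applicable_fine('banned_ai_application', 2e9):,.0f}")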

This tentative deal still requires approval from the European Parliament and the EU’s 27 member states.

How will the laws affect ‘frontier models’?

Under the new regulations, all developers of general-purpose AI systems, particularly those with a wide range of potential applications like ChatGPT and Bard, must maintain up-to-date information on how their models are trained, provide a detailed summary of the data used in training, and have policies that respect copyright law and ensure acceptable use.
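
To illustrate what these documentation duties could translate to in practice, here is a hypothetical record structure a provider might maintain. The field names and example values are assumptions for this sketch, not a format prescribed by the Act.

    from dataclasses import dataclass

    @dataclass
    class GPAIDisclosure:
        """Hypothetical documentation record for a general-purpose AI model."""
        model_name: str
        training_procedure: str        # up-to-date description of how the model is trained
        training_data_summary: str     # detailed summary of the data used in training
        copyright_policy_url: str      # policy respecting copyright law
        acceptable_use_policy_url: str
        last_updated: str              # obligations require keeping this current

    disclosure = GPAIDisclosure(
        model_name="example-gpai-model",
        training_procedure="Pre-trained on web text, then instruction-tuned.",
        training_data_summary="Filtered web crawl plus licensed and public-domain corpora.",
        copyright_policy_url="https://example.com/copyright-policy",
        acceptable_use_policy_url="https://example.com/acceptable-use",
        last_updated="2023-12-11",
    )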

The Act also classifies certain models as posing a “systemic risk.” This assessment is based primarily on the computational power used to train these models. The EU has set the threshold for this category at models trained using more than 10^25 floating-point operations (10 trillion trillion operations), a measure of total training compute rather than processing speed.
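
For a rough sense of scale, the sketch below estimates total training compute with the common “6 × parameters × training tokens” rule of thumb and compares it against the threshold. Both the approximation and the example model sizes are illustrative assumptions, not the EU’s methodology.

    # Back-of-envelope estimate: training FLOPs ≈ 6 * parameters * training tokens.
    SYSTEMIC_RISK_THRESHOLD = 1e25  # the Act's cut-off: 10^25 operations

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Estimate total training compute with the standard 6ND approximation."""
        return 6 * n_params * n_tokens

    for label, params, tokens in [
        ("70B params, 2T tokens", 70e9, 2e12),    # ~8.4e23 FLOPs: below threshold
        ("1T params, 10T tokens", 1e12, 10e12),   # ~6.0e25 FLOPs: above threshold
    ]:
        flops = training_flops(params, tokens)
        print(f"{label}: {flops:.1e} FLOPs, systemic risk: {flops > SYSTEMIC_RISK_THRESHOLD}")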

Currently, OpenAI’s GPT-4 is the only model that automatically meets this threshold. However, the EU could label other models under this definition. 

Models deemed to pose a systemic risk will be subject to additional, more stringent rules. These include:

  • Mandatory reporting of energy consumption.
  • Conducting red-teaming or adversarial tests.
  • Assessing and mitigating potential systemic risks.
  • Ensuring robust cybersecurity controls.
  • Reporting both the information used for fine-tuning the model and details of the system architecture.

How has the agreement been received?

The AI Act has sparked myriad reactions concerning its implications for innovation, regulation, and societal impact. 

Fritz-Ulli Pieper, a specialist in IT law at Taylor Wessing, pointed out that, while the end is in sight, the Act is still liable to change.

He remarked, “Many points still to be further worked on in technical trilogue. No one knows how the final wording will look like and if or how you can really push current agreement in a final law text.” 

Pieper’s insights reveal the complexity and uncertainty surrounding the AI Act, suggesting that much work remains to ensure the final legislation is effective and practical.

A key theme of these meetings has been balancing AI risks and opportunities, particularly as models can be ‘dual-natured,’ meaning they can both provide benefits and inflict harm. Alexandra van Huffelen, Dutch Minister of Digitalisation, noted, “Dealing with AI means fairly distributing the opportunities and the risks.” 

The Act also seemingly failed to protect EU citizens from large-scale surveillance, which caught the attention of advocacy groups like Amnesty International.

Mher Hakobyan, Advocacy Advisor on Artificial Intelligence, said on this point of controversy, “Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space and rule of law that are already under threat throughout the EU.”

Following this provisional agreement, the AI Act is set to become applicable two years after its official enactment, enabling governments and businesses across the EU to prepare for compliance with its provisions. 

In the interim, officials will negotiate the technical details of the regulation. Once the technical refinements are concluded, the compromise text will be submitted to the member states’ representatives for endorsement.

The final step involves a legal-linguistic revision to ensure clarity and legal accuracy, followed by formal adoption. The AI Act is certain to change the industry both in the EU and globally, but the extent of that change is hard to predict.


This article was originally published at dailyai.com