EU AI Act goes into effect, enforcing stringent regulations on AI companies

Published 5 Aug 2024

The European Union’s AI Act took effect on August 1, becoming the first comprehensive law to regulate emerging artificial intelligence (AI) technology, with stringent penalties for companies that breach it.

Proposed by the European Commission in 2020, the AI Act is a piece of European Union (EU) legislation designed to govern artificial intelligence models, particularly those posing “systemic” risks.

The AI Act targets a wide array of applications, focusing on general-purpose systems like OpenAI’s ChatGPT and Google’s Gemini, as well as generative technologies like Midjourney. Its primary goal is to mitigate risks and safeguard user rights while setting clear obligations for companies investing in AI development.

The Law and Its Scope

The law takes a risk-based approach to AI regulation, tailoring obligations to the level of risk each application poses to society.

The European AI Office, a regulatory body established by the European Commission in February 2024, will oversee the monitoring and evaluation of AI models to ensure they comply with the new framework.

Under the AI Act, high-risk AI applications, such as autonomous vehicles, medical devices, loan decision systems, educational scoring, and remote biometric identification systems, will face stringent requirements. These include regular activity tracking, the use of high-quality training datasets to reduce bias, thorough risk assessment and mitigation, and sharing model documentation with authorities for compliance evaluation.

At the top of the scale, “unacceptable” risks are defined as systems that undermine fundamental rights, violate privacy, or produce discriminatory outcomes; such systems are banned outright under the law.

Open-source projects, such as Meta’s LLaMa, Stability AI’s Stable Diffusion, and Mistral’s 7B, are not automatically exempt either. To qualify for an exemption, model parameters must be made fully available to the public, and providers must enable “access, usage, modification, and distribution of the model.”

Impact on the Industry

The law will likely affect big tech companies such as Microsoft, Meta, Google, and Amazon, even those headquartered outside the EU. If these companies choose to ignore it, they risk losing substantial access to the data and infrastructure needed to train their models.

“This will bring much more scrutiny on tech giants when it comes to their operations in the EU market and their use of EU citizen data,” states Charlie Thompson, the Senior Vice President of Europe, Middle East, and Africa (EMEA) and Latin America (LATAM) at enterprise software company Appian.

Companies that breach the EU AI Act could face substantial fines ranging from 1.5% to 7% of their global annual revenues, higher than the penalties possible under the General Data Protection Regulation (GDPR), Europe’s strict digital privacy law. The final penalty will depend on the size of the company being sanctioned as well as the level of infringement.
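As a rough illustration of how those percentage bands translate into monetary amounts, the minimal sketch below computes a fine range from a company’s global annual revenue. The function name and the revenue figure are hypothetical examples, not part of the Act, and the real penalty also depends on company size and the level of infringement, which this sketch does not model.

```python
def fine_range(global_annual_revenue_eur: float,
               min_rate: float = 0.015,
               max_rate: float = 0.07) -> tuple[float, float]:
    """Illustrative only: estimate the low/high fine band using the
    1.5%-7% of global annual revenue range cited for the EU AI Act."""
    return (global_annual_revenue_eur * min_rate,
            global_annual_revenue_eur * max_rate)

# Hypothetical example: a company with EUR 100 billion in annual revenue
low, high = fine_range(100e9)
print(f"Fine band: EUR {low / 1e9:.1f}B to EUR {high / 1e9:.1f}B")
# -> Fine band: EUR 1.5B to EUR 7.0B
```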

Jamil Jiva, global head of asset management at Linedata, said the EU understands that it must impose significant fines on non-compliant companies if it wants the regulations to have an impact.

The Road Ahead

Most of the Act’s provisions will not take effect until 2026, though restrictions on general-purpose AI will begin 12 months after the Act comes into force.

Publicly and commercially available systems such as ChatGPT and Gemini will have a 36-month transition period to bring their operations in line with the new regulations.

This bold move by the EU should set a precedent for other nations to follow. The legislation’s full impact will not be felt for several years, but it is already clear that it will reshape the AI landscape.