The European Union Artificial Intelligence (AI) Act marks a significant milestone in regulating AI technologies, establishing a comprehensive framework for their oversight. As the first of its kind globally, it seeks to ensure that AI is deployed safely, transparently, and ethically, with implications extending beyond the EU. Officially published in the EU's Official Journal on July 12, 2024,1 the Act concludes a legislative process that began in April 2021, when the European Commission first proposed it. Having taken effect on August 1, 2024, the AI Act follows a phased enforcement timeline, making it essential for global businesses, especially those operating in or with the EU, to familiarize themselves with and adhere to its requirements.
Prohibitions Under the Act
Article 5 of the AI Act prohibits harmful AI practices to protect individual rights. It bans AI systems that manipulate behavior through deceptive techniques, exploit vulnerabilities, or assign unfair social scores. It also prohibits profiling-based predictions of criminal behavior and certain facial recognition practices. Strict limits are placed on real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, requiring prior authorization and proportionate use. Member States must establish laws ensuring compliance with these prohibitions, balancing technological advancement with the protection of fundamental rights, particularly in the law enforcement context.2
Classification Rules for High-Risk AI Systems
An AI system is considered high-risk if it is a safety component of a product (or is itself a product) covered by EU harmonisation legislation and that product must undergo third-party conformity assessment. AI systems used in the areas listed in Annex III of the Act are also deemed high-risk, unless they pose no significant risk to health, safety, or fundamental rights, for example because they perform only narrow procedural tasks or merely support human decisions. Providers must document their assessment for systems they consider not high-risk. The Commission will provide guidelines and may amend these conditions based on evidence, though no amendment may reduce the protection of health, safety, or fundamental rights.3 The Act requires that:
- High-risk AI systems must comply with AI-specific requirements and Union harmonization legislation, integrating necessary testing and documentation to ensure consistency and reduce duplication.4
- High-risk AI systems must implement a continuous, documented risk management system, identifying, evaluating, and mitigating risks throughout their lifecycle to ensure safety and compliance, including testing in real-world conditions where appropriate.5
- High-risk AI systems must use high-quality, representative training, validation, and testing data sets, ensuring appropriate data governance, bias detection, and correction, with safeguards when processing sensitive data.6
- High-risk AI systems must have up-to-date technical documentation demonstrating compliance with regulations, which SMEs may simplify using a Commission-provided form.7
- High-risk AI systems must record event logs to ensure traceability, monitor operations, and identify risks or modifications.8
- High-risk AI systems must be transparent, providing clear instructions, performance details, risks, and necessary maintenance for deployers to use them appropriately.9
- High-risk AI systems must enable effective human oversight to prevent risks, ensure understanding, and allow intervention or override when necessary.10
- High-risk AI systems must ensure accuracy, robustness, cybersecurity, and resilience, with safeguards against errors, vulnerabilities, and feedback loops.11
Transparency Obligations for Providers and Deployers of Certain AI Systems
Providers of AI systems intended to interact with people must ensure users are aware they are interacting with AI, subject to exceptions for law enforcement. Synthetic content must be marked as artificially generated, and individuals must be informed when emotion recognition or biometric categorisation systems are used on them. Systems generating deep fakes must disclose the content's artificial nature, with carve-outs for authorized criminal investigations and limited exceptions for artistic works. All disclosures must be clear, provided no later than the first interaction or exposure, and accessible, and must comply with data protection law.12
Classification of General-Purpose AI Models as General-Purpose AI Models with Systemic Risk
General-purpose AI systems are versatile models designed for a wide range of tasks, both intended and unintended by their creators. They require minimal adjustments for use across various fields, gaining commercial value as computational resources and innovative methods advance.13
A general-purpose AI model is classified under the Act as one with systemic risk if it meets certain conditions. First, it must have high-impact capabilities, assessed using appropriate technical tools, indicators, and benchmarks. Alternatively, the European Commission may determine that it has equivalent capabilities, on its own initiative or following a qualified alert from the scientific panel. A model is presumed to have high-impact capabilities if the cumulative amount of computation used for its training exceeds 10^25 floating-point operations. Through delegated acts, the Commission may adjust this threshold and supplement the benchmarks and indicators as technology evolves, so that the criteria keep pace with advances in algorithms and hardware efficiency.14
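The compute-based presumption is a simple numeric test, and can be illustrated with a short sketch. The function name and structure below are illustrative only, not terminology from the Act, and the threshold is the one stated in Article 51(2) as of publication (the Commission may adjust it by delegated act):

```python
# Illustrative sketch of the Article 51 compute presumption: a
# general-purpose AI model is presumed to have high-impact capabilities
# when the cumulative compute used for its training exceeds 10^25
# floating-point operations. Function name is hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51(2) threshold (adjustable)

def presumed_high_impact(training_flops: float) -> bool:
    """True if the model meets the compute-based presumption of
    high-impact capabilities under Article 51(2)."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A model trained with 3 x 10^25 FLOPs meets the presumption;
# one trained with 5 x 10^24 FLOPs does not.
print(presumed_high_impact(3e25))  # True
print(presumed_high_impact(5e24))  # False
```

Note that the presumption is rebuttable in practice only through the Act's designation procedures; exceeding the threshold triggers notification obligations rather than an automatic final classification.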
Obligations for Providers of General-Purpose AI Models
Providers of general-purpose AI models must maintain up-to-date technical documentation covering the model's training, testing, and evaluation, which must be made available on request to the AI Office and national authorities. They must also provide information to AI system providers integrating the model, ensuring those parties understand its capabilities, limitations, and legal obligations. Additionally, providers must comply with EU copyright law and make publicly available a summary of the content used to train the model. Exemptions apply to free and open-source models but do not extend to those with systemic risk. Providers may rely on codes of practice or harmonised standards to demonstrate compliance; alternative means of compliance are assessed by the Commission.15
Authorized Representatives of Providers of General-Purpose AI Models
Providers of general-purpose AI models from third countries must appoint an authorized representative in the EU before placing their model on the Union market. This representative must verify the model’s compliance with technical documentation and legal obligations, retain copies of documentation for 10 years, and cooperate with the AI Office and national authorities. The authorities may contact the representative on compliance matters. If the provider violates its obligations, the representative must terminate the mandate and notify the AI Office. This requirement does not apply to open-source models unless they pose systemic risks.
Obligations for Providers of General-Purpose AI Models with Systemic Risk
Providers of general-purpose AI models with systemic risk must evaluate the model using standardised protocols, including adversarial testing, to identify and mitigate risks. They must assess and address potential systemic risks at the Union level, track and report serious incidents to the AI Office, and ensure adequate cybersecurity protection. Providers may rely on codes of practice or European harmonised standards to demonstrate compliance, but must demonstrate alternative means of compliance if they do not adhere to these. All information obtained, including trade secrets, must be handled confidentially in accordance with applicable confidentiality and data protection rules.16
Data Protection Concerns Related To AI Models Processing Personal Data
On 17 December 2024, the European Data Protection Board (EDPB) issued an opinion addressing data protection concerns related to AI models processing personal data, prompted by a request from the Irish supervisory authority. Key issues clarified include the requirement for AI models to demonstrate proper anonymization, where outputs must not relate to individuals in the training dataset. The EDPB also stressed the need for thorough documentation to demonstrate GDPR compliance and clarified that legitimate interest is a valid legal basis for data processing but must be carefully assessed. AI controllers must ensure models are not developed using unlawfully processed data.17
Enforcement of the Obligations of Providers of General-Purpose AI Models
Member States must establish rules on penalties and other enforcement measures for violations of the Act, ensuring they are effective, proportionate, and dissuasive while taking SMEs' interests into account. Non-compliance with the Article 5 prohibitions can result in fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. Fines vary based on the nature of the infringement, and for SMEs the lower of the applicable caps applies. Member States must report imposed fines to the Commission annually.18 Enforcement of the obligations of providers of general-purpose AI models comes into effect on August 2, 2026.19
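The ceiling for prohibited-practice fines can be expressed as a simple maximum of two figures. The sketch below shows only the Article 99(3) ceiling (the higher of €35 million or 7% of worldwide annual turnover); actual fines depend on the circumstances of the infringement, and the function name is illustrative:

```python
# Illustrative calculation of the maximum fine for violating the
# Article 5 prohibitions under Article 99(3): up to EUR 35 million or
# 7% of total worldwide annual turnover, whichever is higher.
# This shows only the statutory ceiling, not how a fine is set.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Article 99(3) ceiling: the higher of EUR 35m or 7% of turnover."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# EUR 1 billion turnover: 7% = EUR 70m, which exceeds the EUR 35m floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# EUR 100 million turnover: 7% = EUR 7m, so the EUR 35m figure governs.
print(max_fine_eur(100_000_000))    # 35000000.0
```

For SMEs, Article 99 applies the lower rather than the higher of the two caps, so the `max` above would become a `min` in that case.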
Conclusion
As the EU AI Act takes effect, it is anticipated to reshape AI deployment worldwide, establishing new standards for safety, transparency, and data protection. Businesses operating within or engaging with the EU must adapt by ensuring their AI systems comply with the Act’s rigorous requirements. Companies should prepare for heightened regulatory scrutiny, invest in risk management, and adopt transparent practices. Noncompliance could lead to significant penalties, making early preparation crucial for sustainable AI development and market competitiveness.
1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) <https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng> Accessed on January 5, 2025
2 Article 5 of the EU Artificial Intelligence Act
3 Article 6 of the EU Artificial Intelligence Act
4 Article 8 of the EU Artificial Intelligence Act
5 Article 9 of the EU Artificial Intelligence Act
6 Article 10 of the EU Artificial Intelligence Act
7 Article 11 of the EU Artificial Intelligence Act
8 Article 12 of the EU Artificial Intelligence Act
9 Article 13 of the EU Artificial Intelligence Act
10 Article 14 of the EU Artificial Intelligence Act
11 Article 15 of the EU Artificial Intelligence Act
12 Article 50 of the EU Artificial Intelligence Act
13 Future of Life Institute, General Purpose AI and the AI Act <https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf> Accessed on January 4, 2025
14 Article 51 of the EU Artificial Intelligence Act
15 Article 53 of the EU Artificial Intelligence Act
16 Article 55 of the EU Artificial Intelligence Act
17 EDPB, Opinion on AI models: GDPR principles support responsible AI <https://www.edpb.europa.eu/news/news/2024/edpb-opinion-ai-models-gdpr-principles-support-responsible-ai_en> Accessed on January 4, 2025
18 Article 99 of the EU Artificial Intelligence Act
19 Article 88 of the EU Artificial Intelligence Act