Navigating the EU Artificial Intelligence Act


To balance artificial intelligence innovation with safeguards for fundamental rights, the European Union has enacted a comprehensive AI regulatory framework known as the EU Artificial Intelligence Act.

The act follows a risk-based classification system that assigns organizational obligations according to the potential harm posed by an AI application, across four categories:

  • Unacceptable risk: Certain uses, such as social scoring, manipulative behavioral AI, or real-time biometric surveillance in public, are prohibited outright (with narrow exceptions for law enforcement).
  • High risk: Systems in critical infrastructure, education, employment, healthcare, migration, credit scoring, access to essential services, and law enforcement. These face strict compliance requirements, including risk management, data governance, transparency, and oversight.
  • Limited risk: Applications like chatbots and deepfakes are allowed but must disclose AI involvement to users.
  • Minimal/no risk: Tools like spam filters or AI in video games face no specific regulatory obligations.

This tiered approach aims to protect fundamental rights while enabling innovation in lower-risk areas.
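
To make the tiering concrete, here is a minimal sketch of how a compliance team might triage its systems internally. The tier names mirror the act's categories, but the keyword matching is a simplified, hypothetical first pass, not a legal determination:

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # prohibited outright
        HIGH = "high"                   # strict compliance requirements
        LIMITED = "limited"             # transparency/disclosure duties
        MINIMAL = "minimal"             # no specific obligations

    # Hypothetical keyword lists for first-pass triage; real classification
    # requires legal review of the system's intended purpose and context.
    PROHIBITED_USES = {"social scoring", "manipulative behavioral ai"}
    HIGH_RISK_DOMAINS = {"critical infrastructure", "education", "employment",
                         "healthcare", "migration", "credit scoring", "law enforcement"}
    DISCLOSURE_USES = {"chatbot", "deepfake"}

    def triage(use_case: str) -> RiskTier:
        """Rough first-pass tier assignment from a use case description."""
        text = use_case.lower()
        if any(term in text for term in PROHIBITED_USES):
            return RiskTier.UNACCEPTABLE
        if any(term in text for term in HIGH_RISK_DOMAINS):
            return RiskTier.HIGH
        if any(term in text for term in DISCLOSURE_USES):
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    print(triage("Resume-screening chatbot used in employment decisions"))  # RiskTier.HIGH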

Which Companies Are Covered?

The EU AI Act casts a wide net, applying to organizations inside and outside the EU. Obligations fall on different participants in the AI value chain:

  • Providers: Developers of AI systems or general-purpose AI models, regardless of location, if the system or model is placed on the EU market.
  • Importers: EU-based companies introducing non-EU AI systems into the union.
  • Distributors: Entities that make AI systems available in the EU (even if not the original developer).
  • Deployers: Professional users of AI systems within the EU.

Notably, extraterritorial reach means U.S. and other non-EU companies are in scope if their systems are used in the EU. Research, internal testing, and open-source projects largely remain exempt until they are commercialized.

Defining General-Purpose AI

A critical focus of the act is general-purpose AI systems (GPAI), sometimes called foundation models. These are AI systems with “significant generality,” able to perform a wide range of tasks across domains.

For example, these may include the large language models behind tools like ChatGPT, or multimodal models that generate text, images, and speech. Because such systems underpin countless downstream applications, the act requires transparency and safeguards, especially when they are adapted for high-risk use cases such as medical devices or hiring systems.

Regulation of General-Purpose AI

The act creates a two-tiered regulatory structure for GPAI:

  • Transparency obligations: Providers must document training data sources, model limitations, and copyright compliance. They must also share information to help downstream users meet their compliance obligations.
  • Systemic risk models: Large-scale GPAI models with broad reach and societal impact face heightened obligations, including:
    • Risk and safety assessments
    • Robustness and cybersecurity testing
    • Incident tracking and reporting
    • Cooperation with the new EU AI Office (the central supervisory authority)

To guide compliance, the EU is developing Codes of Practice for GPAI, a collaborative framework that providers can use to demonstrate adherence.

High-Risk AI Systems: A Stricter Regime

High-risk AI is treated with the same rigor as product safety regulation. This classification covers safety-critical components (e.g., in cars or medical devices) and applications such as biometrics, credit scoring, and access to essential services.

Organizational obligations include:

  • Risk management across the system lifecycle.
  • Data governance to ensure accuracy, representativeness, and fairness.
  • Technical documentation and ongoing record-keeping.
  • Conformity assessments before market entry (sometimes requiring third-party validation).
  • Human oversight mechanisms allowing intervention and override.
  • Post-market monitoring with incident reporting to authorities.

For providers, importers, and deployers, this means embedding compliance into design and operations. Penalties for violations are significant, echoing the General Data Protection Regulation (GDPR) enforcement framework: fines for the most serious violations can reach €35 million or 7% of global annual turnover, whichever is higher.
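
As a quick arithmetic illustration of how that upper bound works (the turnover figure below is hypothetical):

    def max_fine_eur(global_annual_turnover_eur: float,
                     flat_cap_eur: float = 35_000_000,
                     turnover_pct: float = 0.07) -> float:
        """Upper bound of the fine for the most serious violations:
        the higher of the flat cap and the percentage of global turnover."""
        return max(flat_cap_eur, turnover_pct * global_annual_turnover_eur)

    # Hypothetical company with €2 billion in global annual turnover:
    print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # €140,000,000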

Enforcement Timeline

The act introduces obligations in stages, giving businesses time to adapt. The act entered into force in August 2024; bans on prohibited practices and AI literacy obligations took effect in February 2025, with obligations for general-purpose AI models following in August 2025.

The following deadlines are upcoming:

  • August 2, 2026: Most requirements for high-risk AI become mandatory.
  • August 2, 2027: Full enforceability, including embedded high-risk systems in regulated products.

This phased approach gives compliance teams time to prioritize preparation and allocate resources effectively.

Preparing for Compliance: Practical Steps

The path to compliance depends on your organization’s role, the systems you use, and the level of risk. Compliance executives can take several proactive measures:

  1. AI System Inventory and Risk Classification: Catalog all AI systems, identifying their purpose, deployment, and likely classification under the act. Pay particular attention to GPAI models and high-risk applications (a minimal inventory sketch follows this list).
  2. Governance and Oversight: Develop AI governance frameworks aligned with international standards like ISO/IEC 42001. Assign accountability to designated officers and establish clear policies for monitoring and reporting.
  3. Gap Analysis and Action Plans: Benchmark current practices against EU AI Act obligations. Prioritize areas such as data quality, documentation, and transparency, and build an action plan to address gaps.
  4. Documentation and Transparency: Begin preparing system-level documentation, user guidance, and disclosure mechanisms. Ensure downstream users can understand and comply with requirements.
  5. Employee Training and AI Literacy: Invest in training for staff across functions—legal, IT, product, and compliance—on AI governance, responsible use, and risk management.
  6. Ongoing Monitoring and Adaptation: Establish processes for continuous monitoring of AI performance and compliance. Stay agile by updating policies as EU regulators release guidance and as the Codes of Practice, developed collaboratively with industry and regulators, continue to evolve.
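
As referenced in step 1, here is a minimal sketch of what an inventory entry might look like in practice; all field names, systems, and owners below are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One entry in an AI system inventory (fields are illustrative)."""
        name: str
        purpose: str
        role: str                 # provider / importer / distributor / deployer
        risk_tier: str            # unacceptable / high / limited / minimal
        is_gpai: bool = False
        deployed_in_eu: bool = True
        owner: str = ""           # accountable officer or team
        notes: list[str] = field(default_factory=list)

    inventory = [
        AISystemRecord(name="CV screening model", purpose="rank job applicants",
                       role="deployer", risk_tier="high", owner="HR compliance"),
        AISystemRecord(name="Support chatbot", purpose="answer customer questions",
                       role="deployer", risk_tier="limited", is_gpai=True,
                       owner="Customer operations"),
    ]

    # Surface the systems that need the most attention first.
    for record in sorted(inventory, key=lambda r: r.risk_tier != "high"):
        print(record.name, "->", record.risk_tier)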

For companies that act early, compliance can become a competitive differentiator, signaling trustworthiness to customers, partners, and regulators alike.

By mapping AI systems, aligning governance with best practices, and embedding transparency into operations, organizations can meet the act’s requirements and build resilience for the future of AI regulation.

To learn more about AI governance and risk management, contact us.
