
EU AI Act Explained

Created by SwapED in Policies & Initiatives 23 Dec 2025
Summary of Regulation (EU) 2024/1689 on Artificial Intelligence

The European Union Artificial Intelligence Act is a horizontal regulation establishing harmonised rules for placing artificial intelligence systems on the market, putting them into service, and using them within the European Union. It aims to promote the uptake of human-centric and trustworthy artificial intelligence and to support innovation, while ensuring a high level of protection of health, safety, and fundamental rights, including democracy, the rule of law, and environmental protection.

Scope, actors, and key concepts

The Regulation applies across the artificial intelligence value chain. It covers providers that place artificial intelligence systems on the market or put them into service in the European Union, as well as providers that place general-purpose artificial intelligence models on the market. It also applies to deployers established or located in the European Union and captures certain extraterritorial situations, including where the output of a system is used in the European Union. It assigns roles and responsibilities to other actors, including importers, distributors, and certain manufacturers that place an artificial intelligence system on the market under their name or trademark in relevant circumstances.

The Regulation contains exclusions. It does not apply to systems placed on the market, put into service, or used exclusively for military, defence, or national security purposes, and it does not affect Member State competences regarding national security. It also excludes systems and models developed and used solely for scientific research and development, as well as research, testing, and development activities carried out before a system is placed on the market or put into service. Testing in real-world conditions is treated under a distinct framework rather than being covered by a general research and development exclusion.

The Act is intended to operate alongside existing European Union law, including personal data protection and privacy rules, which continue to apply when personal data are processed.

A cross-cutting element is artificial intelligence literacy. Providers and deployers must take measures, to their best extent, to ensure a sufficient level of artificial intelligence literacy among their staff and other persons dealing with operation and use on their behalf, taking account of their technical knowledge, experience, education and training, and the context in which systems are used.

Risk-based regulatory structure

The Regulation follows a risk-based approach, distinguishing between prohibited practices, high-risk systems subject to extensive lifecycle requirements and operator duties, transparency obligations for certain systems irrespective of high-risk classification, and lower-risk uses where voluntary measures such as codes of conduct are encouraged.

Prohibited practices

The Act prohibits specified artificial intelligence practices considered incompatible with Union values and fundamental rights safeguards. These include, among others:

  • manipulative or deceptive techniques that materially distort behaviour by impairing informed decision-making, where this causes or is likely to cause significant harm;
  • exploitation of vulnerabilities of individuals or specific groups due to age, disability, or a specific social or economic situation, where this is likely to cause significant harm;
  • defined forms of social scoring that lead to unjustified or disproportionate detrimental or unfavourable treatment; and
  • certain risk assessments of natural persons to assess or predict the risk of committing a criminal offence when based solely on profiling or personality traits, with a narrowly framed allowance for systems supporting a human assessment that is already based on objective and verifiable facts directly linked to criminal activity.

The Act also prohibits the creation or expansion of facial recognition databases through untargeted scraping of facial images from the internet or closed-circuit television footage, and prohibits emotion recognition in workplaces and educational institutions except where intended for medical or safety reasons.

It also prohibits biometric categorisation systems that infer sensitive attributes such as political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, subject to the Regulation’s defined conditions and exceptions.

Real-time remote biometric identification in publicly accessible spaces for law enforcement is addressed through a general prohibition with narrowly framed exceptions, subject to strict necessity, defined objectives, and procedural safeguards, including authorisation and proportionality-related conditions.

High-risk artificial intelligence systems

High-risk classification operates through two principal routes.

  • First, an artificial intelligence system is high risk when it is intended to be used as a safety component of a product, or is itself a product, covered by specified Union harmonisation legislation, and the relevant product is required to undergo third-party conformity assessment before being placed on the market or put into service.
  • Second, certain stand-alone systems listed in Annex III are classified as high risk across areas including biometrics, critical infrastructure, education and vocational training, employment and workers management, access to essential private services and essential public services, law enforcement, migration and border management, and the administration of justice and democratic processes.

The Regulation also provides a derogation mechanism for some Annex III systems: a listed system is not considered high risk where it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing decision-making, and where it falls within specified lower-risk conditions. However, an Annex III system is always considered high risk where it performs profiling of natural persons, and providers claiming a derogation must document their assessment and comply with related duties.

For high-risk systems, the Act sets out core lifecycle requirements. These include:

  • an ongoing risk management system covering intended use and reasonably foreseeable misuse;
  • data governance requirements for training, validation, and testing datasets, including relevance, representativeness, and bias-related measures;
  • technical documentation enabling assessment of compliance;
  • record-keeping and logging capability;
  • transparency and instructions for use enabling correct deployment and oversight;
  • effective human oversight measures aligned to the risk and context; and
  • performance expectations regarding accuracy, robustness, and cybersecurity.

Obligations across the value chain

The Regulation assigns duties to multiple actors, including providers, authorised representatives, importers, distributors, and deployers.

Providers of high-risk systems have extensive obligations, including establishing quality management systems, ensuring compliance with legal requirements, performing conformity assessments, preparing declarations of conformity, affixing the CE marking, registering systems where required, maintaining technical documentation, enabling logging, undertaking corrective actions, and cooperating with competent authorities.

Deployers of high-risk systems must use systems in line with instructions, ensure effective human oversight by competent and trained persons with appropriate authority and support, and monitor operation. Where deployers control input data, they must take steps to ensure input data are relevant and sufficiently representative for the intended purpose.

The Act also introduces a fundamental rights impact assessment requirement for deployers in specified circumstances before deploying certain high-risk systems, linked to defined content expectations and procedural interaction with market surveillance authorities.

Transparency obligations for certain systems

Separate from high-risk classification, the Act imposes transparency duties for certain systems. These include informing people when they interact with an artificial intelligence system where it is not obvious in context, and obligations related to synthetic or manipulated content, including marking in a machine-readable format as far as technically feasible, and disclosure duties for certain deepfake uses. The Act also sets disclosure-related duties in specified public-interest communication contexts, subject to defined exceptions and safeguards.

General-purpose artificial intelligence models and systemic risk

The Act creates a dedicated regime for general-purpose artificial intelligence models. Providers must prepare technical documentation and provide information to downstream providers integrating the model so that downstream operators can understand capabilities and limitations and meet relevant obligations. Providers must also implement a policy to comply with Union copyright and related rights law, including respecting reservations of rights, and must publish a sufficiently detailed summary of the content used for training in accordance with a Union-level template.

For general-purpose models released under a free and open-source licence, the Act provides a targeted exemption from some obligations (notably certain technical documentation and downstream information obligations), subject to defined conditions. This exemption does not remove the copyright policy duty or the training content summary duty, and it does not apply to general-purpose models with systemic risk.

The Act also defines a category of general-purpose artificial intelligence models with systemic risk. There is a presumption of high-impact capabilities where the cumulative computation used for training exceeds 10^25 floating point operations, without prejudice to other assessment routes. Providers of such models have additional obligations, including model evaluation, adversarial testing, systemic risk management and mitigation, serious incident reporting, and cybersecurity for the model and supporting infrastructure.
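
By way of illustration only: the Regulation fixes the presumption threshold at 10^25 floating point operations of cumulative training compute, but it does not prescribe how that figure is to be estimated. The short sketch below applies the widely used 6 × parameters × tokens approximation for dense transformer training (a community heuristic, not part of the Act) to a hypothetical training run; the model size and token count are invented for the example.

```python
# Back-of-the-envelope check of a training run against the Act's presumption
# threshold for general-purpose AI models with systemic risk.
# Assumption: compute is estimated with the common ~6 FLOPs per parameter per
# training token heuristic for dense transformers; the Regulation itself does
# not prescribe an estimation method.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Estimate cumulative training compute as roughly 6 * N * D FLOPs."""
    return 6.0 * parameters * training_tokens


# Hypothetical run: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"Exceeds the presumption threshold: {flops > SYSTEMIC_RISK_THRESHOLD_FLOPS}")
```

On this estimate, the example run comes in below the threshold (about 6.3 × 10^24 FLOPs), though a model can still be treated as presenting systemic risk through the Act's other assessment routes.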

Governance, rights, enforcement, and phased application

The governance framework includes an Artificial Intelligence Office within the European Commission and a European Artificial Intelligence Board, alongside supporting structures such as an advisory forum and a scientific panel.

The Regulation provides routes for complaints to market surveillance authorities and includes explanation-related provisions for affected persons in defined decision-making contexts, particularly where decisions based on outputs from certain high-risk systems have legal or similarly significant effects and are alleged to have adversely affected an individual.

Penalties must be effective, proportionate, and dissuasive. Member States must provide for administrative fines with maximum levels specified by infringement category, with tailored treatment for small and medium-sized enterprises. The Act also contains a separate penalty provision for European Union institutions, bodies, offices, and agencies with different maximum amounts. The Commission has direct supervisory and fining powers in the general-purpose artificial intelligence model regime in defined circumstances.

Finally, the Act applies on a phased timetable. It enters into force on the twentieth day following publication in the Official Journal of the European Union, and its provisions apply in stages:

  • 2 February 2025: Chapters I and II, covering general provisions and prohibited practices;
  • 2 August 2025: further parts, including the governance framework, penalties, and the general-purpose model regime (with a stated exception);
  • 2 August 2026: the Regulation in general; and
  • 2 August 2027: Article 6(1), the product-related classification route, and the corresponding obligations.

Read the Act here: AI Act | Shaping Europe’s digital future
