AI Act: Obligations for GPAI model providers and penalties

19 August 2025

AI Act update

From 2 August 2026, the EU AI Act will be fully applicable. The first parts of the AI Act became applicable on 2 February 2025, and as of 2 August 2025, various further provisions apply, including rules for general-purpose AI models (GPAI models) and penalties for violations of the AI Act. The other provisions that became applicable relate to notified bodies (Chapter III, Section 4), governance (Chapter VII), and confidentiality (Article 78). In this News Update, we look in more detail at the obligations for providers of GPAI models and the penalties for violations.


New obligations for GPAI model providers

Chapter V of the AI Act, which sets out various obligations for providers placing GPAI models on the market, entered into force on 2 August 2025. For GPAI models that were already on the market before 2 August 2025, providers are granted a transitional period and must achieve compliance by 2 August 2027.

Definition of GPAI Models and Providers

The AI Act defines a GPAI model as “an AI model, including where such a model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”. Examples include the models underlying ChatGPT and DALL-E.

A ‘provider’ of a GPAI model is a natural or legal person, public authority, agency or other body that develops a GPAI model or that has a GPAI model developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.

Obligations for Providers of GPAI Models

From 2 August 2025, providers of GPAI models must comply with the following obligations:

  1. Technical Documentation: Up-to-date technical documentation for the GPAI model must be put into place. This documentation must contain, at a minimum, the information set out in Annex XI of the AI Act, such as the design specifications of the model and training process. This information must be made available to the AI Office and national competent authorities upon request.
  2. Information for Downstream Providers: Up-to-date information and documentation for providers of AI systems who intend to integrate the GPAI model into their own systems must be put into place and maintained. This information must enable downstream providers to understand the capabilities and limitations of the model and comply with their own obligations under the AI Act. The minimum content that must be included is specified in Annex XII of the AI Act.
  3. Copyright Compliance Policy: Providers must implement a policy to comply with EU copyright law, particularly to identify and respect any reservation of rights expressed by rightsholders.
  4. Summary of Training Content: A sufficiently detailed summary of the content used for training the GPAI model must be drawn up and made publicly available. The EU AI Office published a template for this summary on 24 July 2025.
  5. Cooperation with Authorities: Providers must cooperate with the European Commission and national competent authorities.
  6. Authorized Representative: Providers established outside the EU must appoint an authorized representative in the EU before placing a GPAI model on the EU market. The representative is responsible for ensuring compliance with the AI Act and acts as the contact for authorities.

Providers of GPAI models released under a free and open-source license, and whose parameters, architecture, and usage information are made publicly available, are exempt from some documentation and information obligations, unless the model presents systemic risks. However, they must still provide a summary of training content and comply with copyright policies.

Systemic Risk: additional obligations

If a GPAI model presents a ‘systemic risk’, additional obligations apply. Systemic risk refers to the potential for large-scale harm from the most advanced models, such as those that could lower barriers to the development of chemical or biological weapons, or that present challenges in maintaining human control over autonomous systems. There is a presumption of systemic risk for any GPAI model where the cumulative computation used for training exceeds 10^25 floating point operations.
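The training-compute presumption can be expressed as a simple comparison. The sketch below is purely illustrative (the function name and inputs are ours, not from the AI Act); it only encodes the threshold of 10^25 floating point operations mentioned above, not the broader systemic-risk designation criteria.

```python
# Illustrative check of the AI Act's systemic-risk presumption:
# a GPAI model is presumed to present systemic risk when the cumulative
# compute used for its training exceeds 10^25 floating point operations.

SYSTEMIC_RISK_FLOP_THRESHOLD = 10**25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the training-compute presumption is triggered."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e24))  # below the threshold -> False
print(presumed_systemic_risk(2e25))  # above the threshold -> True
```

Note that the presumption is rebuttable and that the Commission may also designate a model as presenting systemic risk on other grounds, so this comparison is only one input into the assessment.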

Providers of such models face additional obligations, including:

  1. Performing model evaluation in accordance with standardized protocols and tools reflecting the state of the art.
  2. Assessing and mitigating possible systemic risks at EU level.
  3. Tracking, documenting and reporting serious incidents and corrective measures to the AI Office and national authorities.
  4. Ensuring an adequate level of cybersecurity protection for the GPAI model with systemic risk and its infrastructure.

Codes of Practice

On 10 July 2025, the AI Office published the General-Purpose AI Code of Practice, a voluntary tool that GPAI model providers can sign to demonstrate their compliance with the AI Act. The code consists of three chapters. The transparency and copyright chapters address compliance with the obligations under Article 53 of the AI Act; the chapter on safety and security applies only to GPAI models with systemic risk.

Penalties for violations of the AI Act

Articles 99 and 100 of the AI Act set out the penalty framework. Fines are imposed by the competent national court or regulator, as determined by local implementing legislation. For SMEs, the lower of the two amounts in each tier applies. These fines include:

  • Fines up to EUR 35 million or 7% of the global turnover, whichever is higher, for non-compliance with the prohibited AI practices set out in Article 5.
  • Fines up to EUR 15 million or 3% of the global turnover, whichever is higher, for non-compliance with the articles listed in Article 99(4) of the AI Act.
  • Fines up to EUR 7.5 million or 1% of the global turnover, whichever is higher, for the supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities.

Penalties for providers of GPAI models will apply from 2 August 2026. Only the European Commission (not national authorities) may impose fines on providers of GPAI models, of up to EUR 15 million or 3% of the global turnover, for certain acts of non-compliance.
