GPAI Obligations Under Articles 53–55: EU AI Act Foundation Model Compliance
GPAI obligations have been in force since August 2, 2025. This guide covers every requirement under Articles 53–55 — baseline obligations for all GPAI providers, systemic risk obligations, the open-source exemption, the Code of Practice, and what changes when AI Office enforcement powers activate in August 2026.
Summary
- Who: Any organisation that places a GPAI model on the EU market, regardless of establishment location.
- What: Four Article 53(1) baseline obligations for all providers, plus Article 55 systemic risk obligations if the 10²⁵ FLOPs training compute threshold is met.
- Deadline: August 2, 2026 — AI Office enforcement powers and fines activate. Obligations under Articles 53–55 have applied since August 2, 2025.
Under the EU AI Act, all GPAI model providers must meet four core obligations under Article 53: (1) draw up and maintain technical documentation per Annex XI; (2) provide information and documentation to downstream AI system providers per Annex XII; (3) implement a copyright compliance policy respecting text-and-data-mining opt-outs; and (4) publish a training data summary using the AI Office's mandatory template.
What is a GPAI model under the EU AI Act?
Article 3(63) of Regulation 2024/1689 defines a general-purpose AI model as an AI model trained on large amounts of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of how it is placed on the market, and that can be integrated into a variety of downstream systems or applications. The emphasis on "significant generality" and "wide range of distinct tasks" is the key distinguishing criterion — it excludes narrowly trained single-purpose models that can only perform one task type, regardless of how large they are.
Article 51(1) sets an indicative criterion: a model trained with cumulative compute of at or above 10²³ FLOPs is considered likely to meet the GPAI definition. This is an indicative — not a determinative — threshold. A model trained below 10²³ FLOPs can still be a GPAI model if it displays significant generality; conversely, a large model trained on a narrow task may not qualify. The key question remains functional: does the model perform a wide range of distinct tasks competently?
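To see how the indicative threshold plays out in practice, the sketch below estimates cumulative training compute using the common ~6 × parameters × tokens heuristic. The 6ND rule is a community approximation, not part of the Act, and the 7B-parameter / 2T-token model is purely hypothetical.

```python
def training_flops_estimate(params: float, tokens: float) -> float:
    """Rough training-compute estimate using the widely used ~6 * N * D
    heuristic (about 6 FLOPs per parameter per training token). This is
    a community rule of thumb, not a method specified by the AI Act."""
    return 6.0 * params * tokens

GPAI_INDICATIVE_THRESHOLD = 1e23  # Article 51(1) indicative criterion

# Hypothetical example: a 7B-parameter model trained on 2T tokens
flops = training_flops_estimate(7e9, 2e12)
print(f"{flops:.2e} FLOPs")                # 8.40e+22
print(flops >= GPAI_INDICATIVE_THRESHOLD)  # False: below the indicative line
```

A model landing just below the line, as here, is not automatically out of scope — the functional generality test still controls.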
The regulation also distinguishes between a GPAI model and a GPAI system. A GPAI model is the underlying trained artefact — the weights, architecture, and associated technical specifications. A GPAI system is a GPAI model that has been integrated into an AI system — one that has been given the ability to interact with users, execute actions, or process inputs in a specific deployment context. This distinction matters because some Article 53 obligations (particularly Annex XII downstream documentation) are calibrated to the model layer, while the GPAI system layer may also trigger additional high-risk AI obligations under Annex III depending on deployment context.
The EU AI Act has explicit extraterritorial reach. Under Article 2(1), the regulation applies to any provider placing a GPAI model on the EU market regardless of where that provider is established. A US-based or Asian GPAI provider making their model available to EU users — whether via direct API access, open-source release downloaded in the EU, or commercial licensing — is within scope. For non-EU providers, this triggers additional obligations under Article 54, discussed below.
Key compliance deadlines at a glance
| Date | Milestone |
|---|---|
| 1 August 2024 | AI Act enters into force |
| 2 February 2025 | Prohibited AI practices (Article 5) become enforceable |
| 2 August 2025 | GPAI obligations (Articles 53–55) apply to newly placed models |
| 2 August 2026 | AI Office enforcement powers activate; fines become applicable |
| 2 August 2027 | Compliance deadline for GPAI models placed on market before August 2025 |
The Digital Omnibus package (COM/2025/836) proposes further deadline adjustments for some Annex III high-risk obligations; GPAI deadlines are not affected by current proposals.
Article 53 — Obligations for all GPAI providers
Article 53 is the baseline compliance tier. It applies to every provider of a GPAI model, regardless of company size, compute used in training, or whether the model has been designated as posing systemic risk. There are no thresholds or exemptions at the Article 53 level (other than the open-source carve-out under Article 53(2), addressed later). If you place a GPAI model on the EU market, all four obligations apply from day one of that placement.
Technical documentation (Annex XI)
Article 53(1)(a) requires providers to draw up and maintain technical documentation in accordance with Annex XI of the regulation. Annex XI has a two-section structure. Section 1 applies to all GPAI providers. Section 2 applies only to systemic risk models and adds evaluation results and adversarial testing details.
Annex XI Section 1 requirements cover:
- a general description of the model, including the tasks it can perform, the types of architecture used, whether it is multimodal, the licence under which it is placed on the market, and a parameter count where available;
- a description of the development process, including training methodology, optimisation objectives and techniques, and the results of any internal evaluation;
- a description of training data, covering the type of data used, provenance, curation and filtering methods, the approximate number of data points in each modality, feedback from human reviewers, and any bias detection activities conducted;
- an estimate of compute resources consumed, expressed in FLOPs and training time; and
- an estimate of the total energy consumed during training.
Annex XI Section 2, applicable only to systemic risk models, requires in addition: the results of all evaluations performed before market placement including evaluation methodology, benchmarks used, and results; detailed information on any adversarial testing performed before placement; and system architecture diagrams sufficient to understand the model's technical structure.
Under Article 53(1)(a), technical documentation must be retained for 10 years after the model is placed on the market or put into service. The Commission holds a delegated-act power under Article 53(5) to update Annex XI as the technology evolves, which means the documentation requirements can be amended without a full legislative process.
Downstream provider information (Annex XII)
Article 53(1)(b) requires GPAI providers to make available to downstream AI system providers — those who build products and services on top of the GPAI model — the technical information and documentation specified in Annex XII. The rationale is practical: downstream providers cannot meet their own AI Act obligations (including Annex IV technical file requirements for high-risk systems) without understanding the capabilities, limitations, and risk profile of the model underlying their system.
Annex XII specifies the minimum fields that must be provided. These include:
- a description of the model's capabilities, including performance benchmarks;
- a description of known limitations and circumstances where performance may degrade;
- the intended uses and uses the provider explicitly excludes;
- information on safety performance, including known hazards and failure modes; and
- a description of the interaction modalities supported (e.g. text-in/text-out, multimodal, tool use).

Downstream providers may require more detailed information than the Annex XII minimum to satisfy their own obligations — this creates a market dynamic where GPAI providers offering more comprehensive documentation gain a compliance advantage with enterprise customers. Assess your downstream information obligations using AI Act Gap's free tool.
Training data summary — the mandatory AI Office template
Article 53(1)(d) requires providers to make publicly available a sufficiently detailed summary of the content used to train the GPAI model. On 24 July 2025, the AI Office published a mandatory template that all providers must use — eliminating the previous ambiguity about what "sufficiently detailed" meant in practice.
The mandatory template has three sections. Section 1 — General information requires: a description of data modalities covered (text, image, audio, code, etc.); the approximate size of the dataset expressed in wide categorical ranges (e.g. "less than 1 billion tokens," "between 1T and 10T tokens," "more than 10T tokens"); and language coverage. The wide categorical ranges are a deliberate trade secret protection — providers are not required to disclose exact dataset sizes.
Section 2 — Data source list requires identifying large datasets individually by name, and for web-scraped content, listing the top 10% of domain names by volume (reduced to the top 5% or top 1,000 domains, whichever is lower, for SMEs and startups). Providers must also disclose the crawler or scraping tools used. This section is specifically designed to enable rightsholders to determine whether their content was included in training data.
Section 3 — Data processing requires describing the processing methods applied to training data including filtering, deduplication, and quality thresholds; an explanation of how copyright opt-outs were identified and respected; and a description of any content moderation measures applied to training data to prevent inclusion of illegal content.
Where trade secret protections apply, providers may redact specific details from the publicly available version, but must maintain a non-redacted version and provide it to authorities on reasoned request. The training data summary must be updated at least every six months or when there are material changes to the training data or methodology. Critically, this obligation applies to open-source providers as well — the Article 53(2) open-source exemption does not extend to the training data summary requirement.
Copyright policy — respecting opt-outs under Directive 2019/790
Article 53(1)(c) requires GPAI providers to put in place and implement a policy to comply with Union copyright law, in particular to identify and respect the reservations of rights under Article 4(3) of Directive 2019/790 (the Digital Single Market Directive). This is the text-and-data-mining (TDM) framework.
Two TDM exceptions exist under the DSM Directive. Article 3 creates a mandatory exception for research organisations — this exception cannot be contracted out of or overridden by rights holder action. Article 4 creates a broader exception permitting commercial TDM, but with a critical carve-out: rights holders may reserve their rights against Article 4 TDM by expressing that reservation in a machine-readable form. A compliant GPAI copyright policy must respect these Article 4(3) reservations.
In practice, a compliant policy requires: respecting robots.txt instructions (including crawl-delay and Disallow directives per IETF RFC 9309); respecting ai.txt files and TDM Reservation Protocol markers where present; respecting opt-out signals communicated via HTTP headers; not circumventing technological protection measures or paywalls; excluding known piracy sites from training sources; implementing safeguards against infringing outputs; and designating a rightsholder contact point for objections or licensing discussions.
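As a concrete illustration of the robots.txt element, the sketch below uses Python's standard-library `urllib.robotparser` to check whether a given URL may be fetched. The robots.txt content and the "ExampleAIBot" user agent are hypothetical; a production TDM pipeline would also need to handle ai.txt, TDM Reservation Protocol markers, and HTTP-header signals, which this module does not cover.

```python
from urllib import robotparser

# Hypothetical robots.txt content a crawler might retrieve.
# "ExampleAIBot" is an illustrative agent name, not a real crawler.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /articles/

User-agent: *
Disallow:
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A compliant crawler consults this before fetching each URL.
print(rp.can_fetch("ExampleAIBot", "https://example.com/articles/essay.html"))  # False
print(rp.can_fetch("ExampleAIBot", "https://example.com/about.html"))           # True
```

Note that `urllib.robotparser` implements only part of RFC 9309; providers relying on it should verify its matching behaviour against the directives they commit to honour.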
⚠️ Evolving area: Whether robots.txt alone constitutes a legally valid machine-readable opt-out under Article 4(3) DSM is actively contested — the legislative text predates the current generation of LLM training pipelines, and national courts have not yet resolved the question. The status of terms-of-use restrictions as "machine-readable" reservations is similarly unsettled at both EU and national level.
One important clarification: Code of Practice adherence does not constitute copyright compliance. Copyright obligations are governed by the DSM Directive and are adjudicated by national courts under Member State transpositions — not by the AI Office. Code of Practice compliance and copyright compliance are parallel, separate matters.
The GPAI Code of Practice — what it is and why it matters
The GPAI Code of Practice was published on July 10, 2025 and formally endorsed by the Commission and AI Board on August 1, 2025, immediately before GPAI obligations became applicable. It is a voluntary compliance instrument developed through a multi-stakeholder process convened by the AI Office, and it covers three substantive chapters.
Chapter I — Transparency operationalises the Annex XI and Annex XII documentation requirements, providing concrete guidance on what a compliant technical documentation package and training data summary should contain and how they should be structured. Chapter II — Copyright contains five copyright measures that a signatory commits to implement, covering the identification and respect of machine-readable opt-outs, provenance tracking, and rightsholder communication channels. Chapter III — Safety and Security applies primarily to systemic risk models and sets out specific evaluation protocols, adversarial testing methodologies, and cybersecurity safeguards — but non-systemic providers are encouraged to adopt basic safety measures from this chapter as well.
While the Code is voluntary, signing it creates three material compliance advantages. First, AI Office supervision will be less intensive for signatories, particularly during the first year of enforcement. Second, the Commission has explicitly committed to a good-faith grace period for Code signatories running through August 2026, during which compliance gaps identified through AI Office review will trigger corrective action requests rather than immediate fine proceedings. Third, Code adherence functions as a mitigating factor in the AI Office's fine calculations under Article 101 — it is unlikely to reduce a fine to zero for a serious breach, but it materially reduces maximum exposure.
Approximately 26 organisations signed the Code at launch, including Amazon, Anthropic, Google, IBM, Microsoft, OpenAI, Mistral AI, Aleph Alpha, and xAI (Safety and Security chapter only). Meta declined to sign, citing concerns that the Code would constrain innovation. Chinese AI providers including Alibaba, Baidu, and DeepSeek have not signed.
Non-signatories face a more demanding compliance path. They must demonstrate compliance with Articles 53–55 through alternative adequate means and will receive increased information requests from the AI Office under Article 91 supervisory powers. The Commission has stated it will approach non-signatories' compliance posture with less understanding than signatories during the initial enforcement period. There is no formal grace period protection for non-signatories.
⚠️ Evolving area: No harmonised European standards specific to GPAI models have been requested or published as of March 2026. The Code of Practice is the primary compliance pathway and is expected to remain so until at least 2027, when the first GPAI-relevant standards may complete development.
Check your Code of Practice gap against AI Act Gap's free assessment tool.
Article 54 — Authorised representatives for non-EU providers
Article 54(1) requires that any GPAI provider not established in the EU must, before placing a model on the EU market, appoint an authorised representative (AR) in the EU by written mandate. The mandate must be in place before any EU market placement occurs — it is not a retroactive obligation. The AR acts as the provider's legal point of contact for the AI Office and national authorities, and its appointment does not transfer the substantive compliance obligations from the provider to the representative.
Under Article 54(3), the authorised representative's duties are specific and enforceable. The AR must verify that the technical documentation required under Annex XI exists and that the provider has fulfilled its obligations under Articles 53 and 55; retain copies of technical documentation and the mandate for 10 years after the model is placed on the market; provide documentation and information to the AI Office on reasoned request; and cooperate with national competent authorities and the AI Office in any action taken with respect to the model.
Article 54(5) creates a mandatory exit clause for authorised representatives. If the AR has reasonable grounds to consider that a provider is acting contrary to its obligations under the AI Act, the AR must immediately notify the AI Office and, where the provider does not remedy the breach, terminate the mandate. This provision creates a meaningful conflict-of-interest dynamic: an AR cannot simply be a passive mailbox entity — they must actively monitor compliance and bear legal exposure for failure to do so.
Article 54(6) provides a targeted open-source exemption: non-EU providers of open-source GPAI models are exempt from the authorised representative requirement, unless their model presents systemic risk under Article 51. The systemic risk exception overrides the open-source exemption in all cases — a non-EU provider of an open-source systemic risk model must still appoint an AR.
The EU SEND (Serious Incident and Notification) platform, operational since August 2025, is the primary channel for non-EU providers and their representatives to submit systemic risk notifications under Article 52, report serious incidents under Article 73, submit Safety and Security Framework reports under the Code of Practice, and file compliance documentation if not covered by the Code.
Article 55 — Systemic risk obligations
Article 55 imposes a second tier of obligations on providers of GPAI models classified as posing systemic risk. These obligations are cumulative — they add to Article 53 obligations rather than replacing them. Article 55 providers must comply with everything in Article 53, plus four additional requirements: model evaluation including adversarial testing, assessment and mitigation of systemic risks, serious incident reporting, and cybersecurity protection.
The 10²⁵ FLOPs threshold — a rebuttable presumption, not a hard ceiling
Article 51(2) establishes that a GPAI model trained with cumulative training compute at or above 10²⁵ floating point operations (FLOPs) is presumed to present systemic risk. This is explicitly a rebuttable presumption, not a binary classification rule. A provider whose model exceeds the threshold may contest the designation by demonstrating to the AI Office's satisfaction that the model lacks the capabilities that make it a systemic risk — but the threshold sets the burden: the provider must show an absence of systemic risk, not merely that they have mitigated it. Mitigation measures do not rebut the presumption.
Conversely, models below 10²⁵ FLOPs are not automatically exempt. Article 51(2) also empowers the Commission to designate individual models as posing systemic risk based on the criteria in Annex XIII, which include: the capabilities evaluated against state-of-the-art benchmarks, the number of users in the EU, the degree of autonomy the model can exercise, and its scalability to high-impact deployments. This means a very capable model trained at 10²⁴ FLOPs could still receive systemic risk designation if the Commission determines the Annex XIII criteria are met.
Providers approaching the 10²⁵ FLOPs threshold are subject to a notification obligation under Article 52(1): within two weeks of meeting or reasonably foreseeing that training compute will reach or exceed the threshold, the provider must notify the AI Office. The Commission holds a delegated-act power to update the threshold as compute efficiency and model capabilities evolve. Approximately 11–25 models globally are estimated to currently exceed the 10²⁵ FLOPs threshold.
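The interaction between the presumption and the Article 52(1) notification duty can be sketched as decision logic. The thresholds below are taken from the Act; the `TrainingRun` structure and field names are illustrative assumptions, not anything the regulation prescribes.

```python
from dataclasses import dataclass

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51(2) rebuttable presumption

@dataclass
class TrainingRun:
    cumulative_flops: float        # compute actually consumed so far
    projected_total_flops: float   # provider's forecast at completion

def systemic_risk_presumed(run: TrainingRun) -> bool:
    """The presumption attaches at or above 10^25 FLOPs cumulative compute."""
    return run.cumulative_flops >= SYSTEMIC_RISK_THRESHOLD

def notification_due(run: TrainingRun) -> bool:
    """Article 52(1): notify the AI Office within two weeks of meeting,
    or reasonably foreseeing, the threshold — so the forecast counts too."""
    return (run.cumulative_flops >= SYSTEMIC_RISK_THRESHOLD
            or run.projected_total_flops >= SYSTEMIC_RISK_THRESHOLD)

run = TrainingRun(cumulative_flops=6e24, projected_total_flops=1.4e25)
print(systemic_risk_presumed(run))  # False: presumption not yet triggered
print(notification_due(run))        # True: the forecast crosses 10^25
```

The asymmetry is deliberate: the notification clock can start well before the presumption itself attaches.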
Adversarial testing and red teaming
Article 55(1)(a) requires systemic risk model providers to conduct adversarial testing in accordance with state-of-the-art methodologies before model placement and at regular intervals after placement. Adversarial testing encompasses red teaming (both automated and human-in-the-loop), fine-tuning resistance testing to probe whether safety measures survive common adaptation scenarios, alignment testing against articulated safety objectives, and capability evaluations targeting the areas that constitute systemic risk — catastrophic harm capability, large-scale fraud enablement, critical infrastructure attacks, and chemical or biological weapon assistance.
The Code of Practice Safety and Security chapter operationalises this requirement with specific evaluation protocols, including a minimum set of capability domains that must be assessed and minimum test budgets. Results of adversarial testing must be documented and included in the Annex XI Section 2 documentation. Where adversarial testing identifies material risks, providers must implement mitigation measures and document the effectiveness of those measures before market placement.
Serious incident reporting
Article 55(1)(c) requires providers of systemic risk models to report serious incidents to the AI Office without undue delay, using the EU SEND platform. Incidents must be reported immediately upon discovery — there is no defined grace period between discovering an incident and filing the report. This creates an operational requirement for automated anomaly detection pipelines that can surface potential serious incidents in near real-time and feed into a compliance alerting workflow.
⚠️ Evolving area: The specific criteria for what constitutes a "serious incident" for GPAI systemic risk models are not yet fully defined in the Regulation text or in the current Code of Practice. The AI Office published draft guidance on September 26, 2025, but final criteria are subject to ongoing development. Providers should apply a conservative interpretation pending further clarification.
Cybersecurity protection
Article 55(1)(d) requires providers of systemic risk models to implement adequate cybersecurity protections at model and infrastructure level, taking into account the systemic nature of the risks posed. The regulatory concern is broad: preventing exfiltration of model weights, preventing unauthorised access to training infrastructure and data pipelines, preventing attacks on model serving infrastructure that could cause large-scale harm, and preventing adversarial exploitation of API access at scale.
Article 42(2) provides that a cybersecurity certification under the EU Cybersecurity Act (Regulation 2019/881) creates a presumption of compliance with the AI Act's cybersecurity requirements for the covered components. This provides a defined path for providers who already invest in CSA certification. However, the relevant CSA schemes for AI infrastructure are still in development, limiting the practical availability of this presumption pathway. Additionally, the Cyber Resilience Act (EU) 2024/2847 applies to products with digital elements and its full enforcement begins December 11, 2027. The Digital Omnibus package (COM/2025/836) currently under legislative discussion aims to harmonise the overlap between the AI Act cybersecurity obligations, CRA product requirements, GDPR, and NIS2 — but the Omnibus has not yet been adopted.
Who is actually in scope? Edge cases explained
The GPAI provisions reach further than the model providers who built and trained a foundation model from scratch. Four categories of organisations frequently underestimate their exposure:
Fine-tuners. Per the July 2025 AI Office GPAI Guidelines, fine-tuning a GPAI model triggers independent GPAI provider obligations only if the compute used in the fine-tuning modification exceeds one-third of the compute used to train the original base model. Where the original training compute is unknown, the default threshold is approximately 3.33×10²² FLOPs. Parameter-efficient fine-tuning methods — LoRA, adapters, prefix tuning — almost never approach this threshold. RAG pipelines, prompt engineering, and quantisation without retraining do not constitute modifications at all for this purpose. Where fine-tuning does trigger provider status, the obligations apply only to the modification, not to the full base model.
⚠️ The one-third compute threshold is from non-binding AI Office Guidelines, not from the Regulation itself. It has no formal legal authority but represents the AI Office's current interpretive position and will be the practical standard in any supervisory engagement.
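The one-third test described above can be expressed as a simple check. The function and its fallback constant track the Guidelines as summarised here; the example compute figures (a LoRA-style fine-tune, a large continued-pretraining run) are hypothetical.

```python
from typing import Optional

DEFAULT_BASE_FLOPS = 1e23  # Article 51(1) indicative GPAI criterion

def fine_tuning_triggers_provider_status(
    modification_flops: float,
    base_training_flops: Optional[float] = None,
) -> bool:
    """Sketch of the one-third compute test from the July 2025 AI Office
    Guidelines (non-binding). Where the base model's training compute is
    unknown, the Guidelines' fallback is one-third of 10^23 FLOPs (~3.33e22)."""
    base = DEFAULT_BASE_FLOPS if base_training_flops is None else base_training_flops
    return modification_flops > base / 3

# A LoRA-style fine-tune at ~1e19 FLOPs sits orders of magnitude below the line
print(fine_tuning_triggers_provider_status(1e19))          # False
# A large continued-pretraining run against a known 3e24-FLOP base model
print(fine_tuning_triggers_provider_status(1.2e24, 3e24))  # True
```

Even where the check returns `True`, recall that the resulting obligations attach only to the modification, not to the full base model.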
API resellers. An organisation that takes a third-party GPAI model and offers access to it via its own API to EU users is placing that model on the EU market and thereby triggering GPAI provider obligations. The key question is whether the original provider explicitly excluded EU distribution in their licensing terms: if so, and the reseller knowingly disregards that exclusion, the reseller becomes the primary provider. If EU distribution is permitted under the original licence, both the original provider and the reseller may hold concurrent obligations — though in practice the AI Office will engage primarily with the party making the market placement.
Internal-to-external models. A GPAI model built for internal purposes — research, internal tooling, back-office automation — does not trigger Article 53 obligations while it remains genuinely internal. The moment that model is made available externally — licensed to a client, released as a product, made available via API to third parties, or published as open-source — it is placed on the market and obligations attach from that point forward. The transition from internal to external use is the triggering event, not the original development.
Non-EU providers. Any provider offering a GPAI model to EU users is in scope under Article 2(1), regardless of their establishment location. For these providers, Article 54's authorised representative requirement applies before market placement, as described above. The fact that a provider's servers are outside the EU, or that their corporate entity is not incorporated in an EU Member State, does not limit applicability. If EU users can access the model — whether via paid API, free tier, or open-source download — the provider is placing the model on the EU market.
Open-source exemption — does your model qualify?
Article 53(2) provides a significant but narrowly constructed exemption from the technical documentation (Annex XI) obligation and the downstream provider information (Annex XII) obligation for providers of GPAI models released under free and open-source licences. The copyright policy obligation under Article 53(1)(c) and the training data summary obligation both survive the exemption — open-source providers must comply with these two regardless.
Per the AI Office GPAI Guidelines published July 2025, three cumulative conditions must be met for the exemption to apply. First, the licence must permit access, use, modification, and distribution of the model by third parties — a licence that restricts any of these four activities disqualifies the model. Second, the model weights, architecture, and relevant usage information must all be publicly available without access controls or approval gates. Third, the provider must not monetise the model — directly or indirectly.
The regulation and guidelines permit certain restrictions without disqualifying the exemption: attribution requirements, share-alike or copyleft provisions requiring modifications to be released under the same licence, and proportionate safety-oriented usage restrictions that are objective and applied on non-discriminatory criteria. These permitted restrictions reflect the regulatory intent to accommodate genuine open-source community norms.
Several categories of restriction are disqualifying. Non-commercial or research-only limitations — even if labelled as "open source" — prevent the exemption because they restrict use. Scale-based licensing triggers, such as provisions that activate additional commercial licensing requirements once a usage threshold is crossed, are disqualifying because they introduce monetisation conditionality. Indirect monetisation also disqualifies: a provider that does not charge for model access but commercially processes user data derived from model usage, or upsells paid support services to model users, may be considered to be monetising the model.
Two widely discussed cases illustrate the boundary. The Llama family of models does not qualify for the Article 53(2) exemption. The Llama community licence contains commercial use restrictions and a scale threshold that triggers additional licensing for deployments with more than 700 million monthly active users. The Llama 4 licence also explicitly excludes EU entities in some provisions. Each of these features is independently disqualifying. Mistral AI presents a more nuanced picture: Mistral models released under Apache 2.0 (such as Mistral 7B Instruct) may qualify for the exemption if the weights are publicly available and the provider does not monetise that specific model. However, Mistral models distributed under the Mistral Non-Production Licence do not qualify, as the non-production restriction is disqualifying.
One critical exception applies regardless of licensing: open-source models with systemic risk under Article 51 are not exempt from Article 55 obligations or from the Article 54 authorised representative requirement. Systemic risk overrides the open-source exemption in all cases.
A final structural note: the AI Act's definition of "free and open-source" is not coextensive with the OSI Open Source Definition or the FOSS community's conventional standards. The cumulative monetisation condition in particular is novel and not present in standard open-source licensing frameworks. Very few models currently meet all three conditions simultaneously, which means the exemption is narrower in practice than it may appear.
Enforcement and fines — what happens after August 2026
Article 101 governs the AI Office's fining powers for GPAI providers. For breaches of Article 53 or Article 55 obligations, the AI Office may impose fines of up to €15 million or 3% of total worldwide annual turnover for the preceding financial year, whichever is higher. For providers who supply incorrect, incomplete, or misleading information in response to AI Office requests or during a compliance review, the maximum fine is €7.5 million or 1% of worldwide annual turnover. For SMEs and startups, the applicable maximum is the lower of the fixed and turnover-based amounts. The Court of Justice of the EU has unlimited jurisdiction to review and modify Commission fining decisions under Article 101(5).
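The "whichever is higher" mechanics can be made concrete with a short calculation. The ceilings below are those stated in Article 101; the turnover figures are hypothetical, and the SME "lower of" variant is omitted for brevity.

```python
def max_gpai_fine(worldwide_turnover_eur: float, breach: str = "substantive") -> float:
    """Article 101 fine ceilings: the higher of a fixed amount or a
    turnover share. 'substantive' = Article 53/55 breach (EUR 15m or 3%);
    'misleading_info' = incorrect/incomplete information (EUR 7.5m or 1%).
    (For SMEs the applicable maximum is the LOWER amount — not modelled here.)"""
    fixed, pct = {
        "substantive": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }[breach]
    return max(fixed, pct * worldwide_turnover_eur)

# EUR 2bn turnover: 3% (EUR 60m) exceeds the EUR 15m fixed cap
print(max_gpai_fine(2_000_000_000))
# EUR 100m turnover: the EUR 15m fixed cap governs
print(max_gpai_fine(100_000_000))
# Misleading information, EUR 2bn turnover: 1% (EUR 20m) governs
print(max_gpai_fine(2_000_000_000, "misleading_info"))
```

The switch-over point for a substantive breach sits at EUR 500m turnover, above which the 3% turnover-based ceiling controls.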
The timing structure is important. Article 53/55 obligations became applicable on August 2, 2025 — providers have been legally required to comply since that date. However, the AI Office's formal enforcement powers, including the power to impose fines, do not activate until August 2, 2026. This creates a 12-month window in which the AI Office can supervise, issue information requests, and require corrective action, but cannot yet fine. From August 2, 2026, the full enforcement apparatus — including financial penalties — is operational.
Code of Practice adherence functions as a mitigating factor in fine calculations. Signatories who demonstrate good-faith compliance efforts, even where specific gaps exist, will receive more favourable treatment under the AI Office's enforcement discretion. The grace period protection for Code signatories runs through August 2026, meaning that during the enforcement-free period, Code signatories identified with compliance gaps should receive corrective action requests, not pre-fine notifications. No equivalent protection exists for non-signatories.
The AI Office holds a graduated set of supervisory powers beyond fines. Article 91 authorises information requests requiring providers to supply documentation, explanations, or access to systems. Article 92 authorises technical evaluations of models, including via API access, which can be performed independently by the AI Office or through designated experts. Article 93 authorises corrective measures including requirements to modify or restrict model deployment, and in the most serious cases, market withdrawal. Article 90 establishes a Scientific Panel that can issue qualified alerts about systemic risks not yet captured by existing designations, potentially triggering supervisory action before a formal designation is made.
The August 2026 enforcement deadline is approaching. Build your compliance position before fine exposure is live.
How GPAI obligations interact with GDPR and the Cyber Resilience Act
GDPR. When GPAI training data includes personal data, both the AI Act and the GDPR apply concurrently — there is no hierarchy or exclusion between the two regimes. GPAI providers must establish a valid GDPR legal basis for any personal data processing involved in training, including web-scraped content that contains personal data. The European Data Protection Board (EDPB) has scrutinised legitimate interest claims for web-scraped personal data training and has indicated that such claims must be robustly assessed against the three-part test, with particular attention to whether data subjects have a reasonable expectation that their data would be used for LLM training.

Data subject rights apply to training data where technically feasible: erasure requests create an obligation to implement technical safeguards, but machine unlearning for large language models remains technically immature — output filtering has been accepted as an interim compliance measure by some supervisory authorities, but no definitive CJEU ruling on its sufficiency exists.

The AI Act's training data summary requirement is complementary to the GDPR's transparency obligations under Articles 13 and 14, not a substitute — a provider who publishes a training data summary must still satisfy the GDPR's individual transparency requirements where personal data is involved. The EDPB has formally confirmed that the AI Act and Union data protection legislation are "complementary and mutually reinforcing."
Cyber Resilience Act. The cybersecurity obligation under Article 55(1)(d) overlaps with obligations under Regulation (EU) 2024/2847 (the Cyber Resilience Act) for GPAI models that meet the CRA's definition of "products with digital elements." CRA full enforcement begins December 11, 2027 — later than the AI Act's August 2026 enforcement date. For providers managing both compliance timelines, the AI Act's Article 55(1)(d) obligation is the earlier-activating requirement and should drive the initial cybersecurity programme design. The Digital Omnibus package (COM/2025/836) is currently in the early legislative process and explicitly aims to reduce overlap and inconsistency between the AI Act, CRA, GDPR, and NIS2 for providers subject to multiple frameworks — but until the Omnibus is adopted, each regulation applies independently on its own terms.
For answers to common EU AI Act questions, see the FAQ page. For broader compliance guides, including the Annex IV technical file guide and provider vs deployer analysis, see the full guides index. To understand how the methodology works behind AI Act Gap's assessment tool, see the methodology page — or run a free GPAI gap analysis now.
Check your GPAI compliance gap in under 5 minutes
The free AI Act Gap tool maps your GPAI model against all Articles 53–55 obligations and produces a structured gap report with prioritised recommended actions.