EU AI Act: Provider vs Deployer classification and the Article 25 reclassification trap
Under the EU AI Act, your role — Provider or Deployer — determines your entire obligation set. Most organisations assume their role is obvious. It often isn't.
Why your role matters
The EU AI Act does not apply uniformly to everyone who works with AI. It creates two primary roles — Provider and Deployer — with fundamentally different obligation sets. Providers of high-risk systems must build technical documentation (Art. 11), implement a risk management system (Art. 9), register in the EU database (Art. 49), conduct conformity assessments (Art. 43), and apply CE marking (Art. 48), all under the umbrella obligations of Articles 16–17. Deployers have a smaller but still significant set of obligations: human oversight, log retention, staff AI literacy, a fundamental rights impact assessment (FRIA) for certain public-sector and regulated private uses, and downstream transparency (Article 26).
Getting your role wrong has real consequences. If you think you're a Deployer when you're legally a Provider, you may be missing conformity assessment, technical documentation, and registration obligations that carry significant penalties. If you over-classify yourself as a Provider when you're a Deployer, you'll waste engineering time on documentation requirements that aren't yours to fulfil.
What makes you a Provider
Under Article 3(3), a Provider is any natural or legal person, public authority, agency, or other body that develops an AI system or has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark — whether for payment or free of charge.
In practice, you are a Provider if any of the following apply:
- You built the AI system yourself, or contracted its development to a third party.
- You place an AI system on the EU market under your own name or trademark.
- You supply an AI system to others (customers, partners, or end users) for their use.
- You integrated a third-party AI component into a larger system you sell or deploy, and that integration constitutes substantial modification (more on this below).
What makes you a Deployer
Under Article 3(4), a Deployer is any natural or legal person, public authority, agency, or other body that uses an AI system under its authority — except where that use is in the course of a personal, non-professional activity.
In practice, you are a Deployer if all of the following apply:
- You are using an AI system developed and placed on the market by a third party.
- You are using it under the provider's brand, not your own name or trademark.
- You have not substantially modified its design, training, or intended purpose.
- You are using it in a professional context (not personal, non-professional use).
Most enterprises using third-party AI tools — an off-the-shelf recruitment screening tool, a document summarisation API, a creditworthiness scoring SaaS — are Deployers. But this status is not permanent and can change based on what you do with the system.
Can one organisation be both Provider and Deployer?
Yes — and this is more common than many compliance teams realise. The Provider and Deployer roles attach to specific AI systems, not to the organisation as a whole.
A company can be a Provider of the AI system it builds and deploys under its own name, while simultaneously being a Deployer of a third-party AI tool it uses for internal operations. These are independent classifications, each triggering its own obligation set.
Example: A fintech that develops a proprietary credit-scoring model is a Provider under Article 3(3) for that system. If the same fintech also uses a vendor's AI-powered document processing tool to handle client onboarding, it is a Deployer under Article 3(4) for that separate tool. Both roles coexist simultaneously.
The practical implication: compliance programmes must map obligations system-by-system, not entity-by-entity. A single organisation may need to maintain both Annex IV technical documentation (as Provider) and Article 26 deployer obligations (as Deployer) for different systems at the same time.
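A minimal sketch of that system-by-system mapping, using the fintech example above. The data structure and the obligation labels are illustrative simplifications, not an exhaustive rendering of Articles 16 and 26:

```python
# Illustrative per-system role register: roles attach to AI systems,
# not to the organisation. Obligation labels are simplified summaries.

OBLIGATIONS = {
    "provider": [
        "risk management system (Art. 9)",
        "technical documentation (Art. 11, Annex IV)",
        "conformity assessment (Art. 43)",
        "CE marking + EU declaration of conformity (Arts. 47-48)",
        "EU database registration (Art. 49)",
    ],
    "deployer": [
        "human oversight (Art. 26(1)-(2))",
        "log retention >= 6 months (Art. 26(6))",
        "staff AI literacy (Art. 4)",
        "serious incident reporting (Art. 26(5), Art. 73)",
    ],
}

# One organisation, two systems, two different roles.
fintech_systems = {
    "proprietary-credit-scoring-model": "provider",  # built in-house, own brand
    "vendor-document-processing-tool": "deployer",   # third-party tool, vendor brand
}

for system, role in fintech_systems.items():
    print(f"{system} -> {role}")
    for obligation in OBLIGATIONS[role]:
        print(f"  - {obligation}")
```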
Key obligations: Provider vs Deployer
The following table summarises the main obligation differences for high-risk AI systems. Systems that are not high-risk carry a reduced obligation set.
| Obligation | Provider (Art. 16) | Deployer (Art. 26) |
|---|---|---|
| Risk management system | Required (Art. 9) | Not required |
| Technical documentation | Required (Art. 11) | Not required |
| Conformity assessment | Required (Art. 43) | Not required |
| CE marking + EU declaration of conformity | Required (Arts. 47–48) | Not required |
| EU database registration | Required (Art. 49(1)) | Required for public-body deployers of certain Annex III systems (Art. 49(3)) |
| Instructions for use | Must provide | Must follow |
| Post-market monitoring | Required (Art. 72) | Must report serious incidents (Art. 73) |
| Log retention | Must enable auto-logging (Art. 12) | Must retain logs ≥6 months (Art. 26(6)) |
| Human oversight measures | Must build in (Art. 14) | Must implement (Art. 26(2)) |
| Fundamental rights impact assessment | Not required | Required for certain public/private deployers (Art. 27) |
| Worker/affected person notification | Not required | Required (Art. 26(7), (11)) |
The Article 25 trap
Article 25 is where many organisations get caught. It provides that a Deployer becomes a Provider — and assumes all Provider obligations — in any of the following situations:
- You place an AI system on the market or into service under your own name or trademark, regardless of the original provider's involvement.
- You make a substantial modification to a high-risk AI system already placed on the market or put into service.
- You change the intended purpose of an AI system in a way that makes it high-risk, even if it wasn't originally classified as high-risk.
This reclassification is automatic — there is no registration process or notification. If the conditions are met, you are a Provider under the law, with all associated obligations, regardless of what the original contract with your vendor says.
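The trigger logic is disjunctive: any single condition is enough. A minimal sketch of that structure (the flag names are ours, and each flag compresses a legal test that needs case-by-case analysis, especially "substantial modification"):

```python
# Article 25(1) reclassification check: any one trigger suffices.
# Flag names are illustrative; the legal tests behind each flag
# require case-by-case analysis.

def reclassified_as_provider(
    rebranded: bool,                        # Art. 25(1)(a): own name or trademark
    substantially_modified: bool,           # Art. 25(1)(b): Art. 3(23) test, high-risk system
    purpose_change_makes_high_risk: bool,   # Art. 25(1)(c): changed intended purpose
) -> bool:
    return rebranded or substantially_modified or purpose_change_makes_high_risk

# Example: rebranding alone is enough, whatever the vendor contract says.
assert reclassified_as_provider(True, False, False)
```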
Most free compliance tools do not check for this. They classify your role based on a single question — “did you build it?” — without testing whether your modifications or branding have reclassified you.
What about distributors and importers?
Article 25(1) applies beyond Deployers — it covers distributors, importers, and any other third party in the value chain. An importer who places their own brand on a high-risk AI system, or a distributor who modifies the intended purpose, becomes the Provider and inherits the full Article 16 obligation set.
Article 25(3) addresses product manufacturers specifically: if a high-risk AI system is a safety component of a product covered by Annex I harmonisation legislation (e.g. medical devices, machinery, vehicles), the product manufacturer becomes the Provider of the AI system when placing the integrated product on the market under their name.
What “substantial modification” means — fine-tuning, RAG, and the one-third threshold
Article 3(23) defines substantial modification as a change to an AI system after its placing on the market or putting into service that was not foreseen or planned in the initial conformity assessment, and that either:
- affects the system's compliance with the requirements in Chapter III Section 2 (Articles 8–15: risk management, data governance, technical documentation, transparency, human oversight, accuracy and robustness), or
- modifies the intended purpose for which the system was assessed.
The "not foreseen or planned" condition applies on top of both prongs: a pre-planned, documented change assessed at the initial conformity assessment is not a substantial modification even if it is technically significant.
Recital 128 examples
Recital 128 gives changes to the operating system or software architecture as examples of changes that can affect compliance and therefore call for a new conformity assessment. By contrast, automated adaptive learning that was pre-determined by the provider and assessed at the initial conformity assessment does not trigger reassessment.
The GPAI Guidelines one-third compute threshold (July 2025)
The European Commission's July 2025 GPAI model obligations guidelines introduce an indicative threshold for GPAI model modifications: if fine-tuning uses ≥ one-third of the original model's training compute, it is likely a substantial modification. Most commercial fine-tuning falls well below this threshold. However, even lighter modifications that significantly change the model's capabilities or risk profile could trigger Provider status. Prompt engineering and few-shot learning are generally not considered substantial modifications.
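As a back-of-envelope check against that indicative threshold (the FLOP figures below are made-up placeholders, not real model numbers):

```python
# Indicative one-third compute test from the July 2025 GPAI guidelines.
# FLOP figures are hypothetical placeholders for illustration.

ORIGINAL_TRAINING_COMPUTE = 2.0e24   # hypothetical original model, FLOPs
FINE_TUNE_COMPUTE = 5.0e21           # hypothetical fine-tuning run, FLOPs

ratio = FINE_TUNE_COMPUTE / ORIGINAL_TRAINING_COMPUTE
print(f"fine-tune / original = {ratio:.4%}")   # 0.2500% here

if ratio >= 1 / 3:
    print("Likely substantial modification: the modifier likely "
          "becomes provider of the modified model")
else:
    print("Below the indicative threshold, but still assess capability "
          "and risk-profile changes")
```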
Practical modification guide
| Modification type | Likely classification | Key consideration |
|---|---|---|
| Heavy fine-tuning (≥1/3 original training compute) | Likely substantial modification | New conformity assessment likely required |
| Light fine-tuning / RLHF | Likely not substantial | Document scope; assess capability change |
| RAG pipeline | Likely not substantial | Unless it changes the system's risk profile or intended purpose |
| System prompt / prompt engineering | Not substantial | No model change |
| Rebranding / white-labelling | Not a modification — triggers Art. 25(1)(a) separately | Reclassification as Provider |
| UI customisation | Not substantial | No change to the AI system itself |
| Change of deployment domain (e.g. HR → credit scoring) | May trigger Art. 25(1)(c) | Change of intended purpose, not modification |
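The table can be read as a first-pass triage step. A hedged sketch of that reading; the labels mirror the table, and "likely" classifications are a starting point, never a conclusion, because the full Article 3(23) test still has to be run:

```python
# First-pass triage mirroring the table above. The Art. 3(23) test
# (foreseen at conformity assessment? affects Chapter III Section 2
# compliance? changes intended purpose?) must still be run in full.

MODIFICATION_TRIAGE = {
    "heavy_fine_tuning":     ("likely substantial",     "new conformity assessment"),
    "light_fine_tuning":     ("likely not substantial", "document scope, assess capability change"),
    "rag_pipeline":          ("likely not substantial", "unless risk profile or purpose changes"),
    "prompt_engineering":    ("not substantial",        "no model change"),
    "rebranding":            ("not a modification",     "Art. 25(1)(a) reclassification instead"),
    "ui_customisation":      ("not substantial",        "no change to the AI system itself"),
    "new_deployment_domain": ("not a modification",     "may trigger Art. 25(1)(c) purpose change"),
}

classification, note = MODIFICATION_TRIAGE["rag_pipeline"]
print(f"RAG pipeline: {classification} ({note})")
```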
Contractual allocation of obligations under Article 25(4)
Article 25(1)(a) contains a unique carve-out: the rebranding trigger applies “without prejudice to contractual arrangements stipulating that the obligations are otherwise allocated.” This means parties can contractually reassign provider obligations in a rebranding scenario. This carve-out does not apply to triggers (b) or (c) — contractual reallocation is only available for the rebranding trigger.
Article 25(4) requires a written agreement between the high-risk AI system provider and any third-party supplier (tool provider, component vendor) specifying:
- what information the supplier will provide to support compliance;
- what technical access they will grant; and
- what assistance they will give to the downstream provider's compliance activities.
The AI Office may publish voluntary model contractual terms for this purpose. Watch for updates from the AI Office.
Important limitation: contractual arrangements can clarify the commercial allocation of compliance activities, but cannot override statutory obligations. If reclassified as a Provider, the entity cannot contractually opt out of CE marking, conformity assessment, or registration obligations. A contract that says “the vendor remains the Provider” is ineffective if the legal conditions for reclassification have been met.
Open-source AI and the Article 2(12) exemption
Article 2(12) exempts open-source AI systems from most of the Regulation — but this is a narrow exemption. It does not apply to systems that are high-risk (Art. 6), use prohibited AI practices (Art. 5), or are subject to transparency obligations (Art. 50). Most commercially significant open-source deployments will fall into one of these categories.
For GPAI models specifically, the open-source exemption reduces but does not eliminate obligations. Article 53(2) exempts providers of genuinely open-source GPAI models (weights, architecture, and usage information publicly available; a licence permitting access, modification, and distribution; no monetisation) from the documentation obligations in Article 53(1)(a)–(b); the copyright policy and training-data summary obligations in Article 53(1)(c)–(d) still apply. And if the model presents systemic risk (the ≥10²⁵ FLOPs training compute presumption in Article 51(2)), the full Chapter V obligations apply regardless.
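A sketch of that decision logic. The boolean inputs compress multi-factor legal tests into flags for illustration only:

```python
# Open-source GPAI documentation exemption per Art. 53(2), gated by the
# Art. 51(2) systemic-risk presumption. Inputs are simplified flags.

SYSTEMIC_RISK_FLOPS = 1e25  # Art. 51(2) presumption threshold

def open_source_gpai_docs_exempt(
    weights_public: bool,
    architecture_and_usage_info_public: bool,
    licence_allows_access_modification_distribution: bool,
    monetised: bool,
    training_compute_flops: float,
) -> bool:
    if training_compute_flops >= SYSTEMIC_RISK_FLOPS:
        return False  # systemic risk: full Chapter V obligations apply
    return (weights_public
            and architecture_and_usage_info_public
            and licence_allows_access_modification_distribution
            and not monetised)

# Even a True result only lifts Art. 53(1)(a)-(b); the copyright policy
# and training-data summary duties in Art. 53(1)(c)-(d) remain.
print(open_source_gpai_docs_exempt(True, True, True, False, 3e23))  # True
```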
Article 25(4) additionally exempts open-source third-party component suppliers (except GPAI model providers) from the written agreement requirement with high-risk AI system providers.
Practical takeaway: “open-source” is not a compliance exemption for high-risk use cases. If you deploy an open-source model in an Annex III context, you are a Provider with full obligations.
Practical examples
Scenario A — Pure Deployer
A logistics company calls the OpenAI API to summarise shipping documents, with the summaries displayed to internal staff. The API is used as documented, under OpenAI's brand and accessed via API key, with no retraining or modification. The company has no customer-facing AI product. This company is a Deployer. Its obligations are those of Article 26 (oversight, log retention, staff AI literacy) if the system is high-risk; it is not a Provider.
Scenario B — Fine-tuned model, sold under own brand → Provider
An HR tech company fine-tunes an open-source language model on proprietary performance data to screen job applicants. They sell this as “HireAI” under their own brand to corporate customers. The fine-tuning changes model behaviour for a high-risk use case (employment screening, Annex III point 4), and the product is sold under their name. This company is a Provider, regardless of the underlying model's original license. Full Annex IV documentation, conformity assessment, and EU database registration apply.
Scenario C — Rebranded chatbot with custom UI → likely Provider
A SaaS company builds a customer-facing chatbot for a regulated financial services use case (credit eligibility guidance). They use Claude via an API, write extensive system prompts that shape financial advice, and sell “FinBot” to banks under their own brand with a custom UI. The system is placed on the market under their name. Under Article 25, this company is very likely a Provider. The combination of rebranding, specific financial use case, and market placement tips the analysis toward reclassification.
Scenario D — SaaS company wrapping a foundation model API
A B2B SaaS company builds a recruitment screening tool by calling the OpenAI API. The product is marketed under the SaaS company's own brand name, with no mention of the underlying model. The SaaS company is placing a system on the market under its own name (Art. 3(3) trigger). Because recruitment screening falls under Annex III point 4 (AI systems used in employment, worker management, and access to self-employment), the SaaS company is the Provider of a high-risk AI system. OpenAI's role is as a third-party component supplier; the SaaS company cannot rely on OpenAI's compliance documentation to satisfy its own Provider obligations. It must conduct its own conformity assessment, prepare its own technical documentation, register in the EU database, and affix CE marking.
Reclassification risk: The SaaS company was always the Provider — it never held Deployer status. But many SaaS founders incorrectly assume the foundation model vendor bears compliance responsibility. This is the most common misclassification seen in practice.
Enforcement timeline and penalties
Phased enforcement schedule
- 2 February 2025 — Prohibited AI practices (Art. 5) and AI literacy (Art. 4) obligations became applicable.
- 2 August 2025 — GPAI model provider obligations (Chapter V) became applicable.
- 2 August 2026 — Main high-risk AI system obligations apply: Annex III high-risk systems, Provider and Deployer obligations (Arts. 16, 26), transparency (Art. 50), conformity assessments, EU database registration.
- 2 August 2027 — High-risk AI in regulated products (Annex I: medical devices, machinery, vehicles etc.).
Digital Omnibus caveat: The Digital Omnibus proposal (COM/2025/836) proposes linking the Annex III deadline to availability of harmonised standards, with backstop dates of 2 December 2027 (Annex III) and 2 August 2028 (Annex I). Trilogue negotiations were ongoing as of early 2026 — check the AI Office website for updates.
Penalty structure
- Up to €35M or 7% of global annual turnover — prohibited AI practices (Art. 99(3))
- Up to €15M or 3% of global annual turnover — high-risk AI system non-compliance, including breach of the Provider and Deployer obligations in Articles 16 and 26 (Art. 99(4))
- Up to €7.5M or 1% of global annual turnover — providing false information to authorities (Art. 99(5))

For undertakings, each cap is whichever of the fixed amount and the turnover percentage is higher; for SMEs and start-ups, whichever is lower (Art. 99(6)).
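A quick illustration of how the "whichever is higher" rule scales the exposure with company size (the turnover figure is hypothetical):

```python
# Art. 99 penalty caps for undertakings: the higher of the fixed amount
# and the turnover percentage. Turnover figure is hypothetical.

def penalty_cap(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    return max(fixed_eur, pct * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2bn global annual turnover

print(penalty_cap(35e6, 0.07, turnover))   # prohibited practices: EUR 140m
print(penalty_cap(15e6, 0.03, turnover))   # high-risk non-compliance: EUR 60m
print(penalty_cap(7.5e6, 0.01, turnover))  # false information: EUR 20m
```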
What to do
Before mapping your obligations, confirm your role. The Provider/Deployer question should be the first thing you resolve — it determines which articles, documentation requirements, and conformity pathways apply to you.
7-question self-assessment checklist
Answer yes/no to each question; a decision sketch follows the list. A “yes” to questions 1–4 or 7 points toward Provider status (questions 3, 4, and 7 map to the Article 25 triggers); question 5 tests whether you are in scope as a Deployer at all; question 6 determines the risk tier, not the role.
1. Did your organisation develop or commission the AI system?
2. Does it go to market under your name or trademark?
3. Have you modified a third-party AI system beyond its original design?
4. Have you changed the intended purpose of a system to a high-risk use case?
5. Are you using the system in a professional context under your own authority?
6. Does the system fall under any Annex III category?
7. Did you rebrand a vendor's system without a contractual reallocation of obligations?
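The checklist rendered as a rough screening aid. The question keys are ours, and this is a heuristic first pass, not legal analysis:

```python
# The seven checklist questions as a screening heuristic. Question keys
# are illustrative names, not terms from the Regulation.

def screen_role(answers: dict) -> str:
    provider_signals = [
        answers["developed_or_commissioned"],        # Q1 (Art. 3(3))
        answers["own_name_or_trademark"],            # Q2 (Art. 3(3))
        answers["modified_beyond_original_design"],  # Q3 (Art. 25(1)(b) risk)
        answers["purpose_changed_to_high_risk"],     # Q4 (Art. 25(1)(c) risk)
        answers["rebranded_without_reallocation"],   # Q7 (Art. 25(1)(a) risk)
    ]
    if any(provider_signals):
        return "likely Provider: check the Art. 25 triggers in detail"
    if answers["professional_use_own_authority"]:    # Q5 (Art. 3(4))
        return "likely Deployer"
    return "possibly out of scope (personal, non-professional use)"

# Q6 (Annex III) sets the risk tier, not the role, so it is applied
# after the role screen to decide which obligation set attaches.
print(screen_role({
    "developed_or_commissioned": False,
    "own_name_or_trademark": False,
    "modified_beyond_original_design": False,
    "purpose_changed_to_high_risk": False,
    "rebranded_without_reallocation": False,
    "professional_use_own_authority": True,
}))  # likely Deployer
```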
If you are reclassified as a Provider under Article 25, note that Article 25(2) requires the original provider to cooperate and provide technical access to support your compliance. The same provision preserves an opt-out: the cooperation obligation does not apply where the initial provider has clearly specified that its AI system is not to be changed into a high-risk AI system. Check your vendor agreement and the system's documentation for any such restriction before relying on this cooperation pathway.
Not sure which role applies to your situation? AI Act Gap checks this in step one of the assessment and branches the obligation mapping accordingly — including an explicit Article 25 reclassification check.