Frequently Asked Questions
Common questions about EU AI Act compliance and the AI Act Gap tool.
What is the difference between a Provider and a Deployer under the EU AI Act?
A Provider is the entity that develops an AI system and places it on the market under their own name — they bear the primary technical and documentation obligations (Articles 9–17, 43, 72). A Deployer is the entity that uses a third-party AI system in a professional context — they have distinct obligations around human oversight, log retention, staff literacy, and fundamental rights impact assessment (Articles 4, 26, 27). Many organisations are both: if you fine-tune and deploy a model, you may have Provider obligations for the model and Deployer obligations for the application. Important: under Article 25, a deployer becomes a Provider — and assumes all provider obligations — if they put their own name or trademark on the AI system, change its intended purpose, or make a substantial modification. This means some organisations that consider themselves deployers are legally providers. AI Act Gap checks for this at the start of the assessment.
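To make the Article 25 trigger concrete, here is a minimal Python sketch of how a checker might encode the role-switch test. The function name and flags are illustrative assumptions, not AI Act Gap's actual implementation.

```python
def becomes_provider_under_art_25(
    rebrands_system: bool,           # puts own name or trademark on the system
    changes_intended_purpose: bool,  # repurposes it beyond the original intended use
    substantial_modification: bool,  # e.g. modifications that alter the risk profile
) -> bool:
    """Hedged sketch: a deployer assumes Provider obligations under
    Article 25 if ANY of these conditions holds. Illustrative only;
    confirm borderline cases with legal counsel."""
    return rebrands_system or changes_intended_purpose or substantial_modification


# Example: a deployer that substantially modifies a vendor model and
# ships it under its own brand flips to Provider status.
assert becomes_provider_under_art_25(True, False, True)
```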
What is the Article 6(3) carve-out?
Even if your AI system falls under an Annex III domain (such as employment, credit scoring, or biometrics), it may not be classified as high-risk if it meets any of the following conditions under Article 6(3): it performs only a narrow procedural task; it improves the result of a previously completed human activity; it detects decision-making patterns without replacing or influencing a previously completed human assessment; or it performs a preparatory task to an assessment. However, if your system performs profiling of natural persons, it is always considered high-risk regardless of these conditions. AI Act Gap flags this carve-out during the assessment as an informational note — we recommend confirming your classification with legal counsel before concluding your system is not high-risk.
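The carve-out reduces to simple boolean logic, sketched below in Python. The parameter names are our illustration, not statutory terms; the profiling override comes directly from Article 6(3).

```python
def is_high_risk_annex_iii(
    in_annex_iii_domain: bool,
    performs_profiling: bool,
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_influencing: bool,
    preparatory_task_only: bool,
) -> bool:
    """Sketch of the Article 6(3) logic: an Annex III system escapes
    high-risk classification if ANY carve-out condition applies, unless
    it performs profiling of natural persons, which is always high-risk."""
    if not in_annex_iii_domain:
        return False  # Annex III classification does not apply at all
    if performs_profiling:
        return True   # profiling override: the carve-outs cannot apply
    carve_out = (
        narrow_procedural_task
        or improves_completed_human_activity
        or detects_patterns_without_influencing
        or preparatory_task_only
    )
    return not carve_out


# Example: a CV-parsing tool that only performs a preparatory task may
# escape high-risk status; the same tool doing profiling does not.
assert is_high_risk_annex_iii(True, False, False, False, False, True) is False
assert is_high_risk_annex_iii(True, True, False, False, False, True) is True
```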
Who needs to conduct a Fundamental Rights Impact Assessment (FRIA)?
The FRIA under Article 27 is not required for all deployers of high-risk AI systems. It applies specifically to: bodies governed by public law (government agencies, municipalities, public authorities); private entities providing public services (such as private healthcare providers under public contract); deployers using AI to evaluate creditworthiness or establish credit scores (Annex III point 5b, excluding fraud detection); and deployers using AI for risk assessment or pricing in life and health insurance (Annex III point 5c). Ordinary private companies deploying high-risk AI for other purposes — such as hiring tools or access to services — are subject to Article 26 deployer obligations but are not required to conduct an FRIA. Note: the AI Office FRIA template has not yet been published as of March 2026. Deployers may adapt a GDPR Article 35 Data Protection Impact Assessment format in the interim.
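For deployers checking which category they fall into, here is a hedged sketch of the Article 27 scoping rule; the parameter names are illustrative shorthand for the four categories above.

```python
def fria_required(
    public_law_body: bool,                # government agencies, municipalities
    private_public_service_provider: bool,
    credit_scoring_deployer: bool,        # Annex III 5(b), excluding fraud detection
    life_health_insurance_pricing: bool,  # Annex III 5(c)
) -> bool:
    """Sketch of Article 27 scope: the FRIA obligation applies only to
    these four deployer categories, not to all high-risk deployers."""
    return (
        public_law_body
        or private_public_service_provider
        or credit_scoring_deployer
        or life_health_insurance_pricing
    )


# Example: an ordinary private company deploying a high-risk hiring tool
# has Article 26 duties but no Article 27 FRIA obligation.
assert fria_required(False, False, False, False) is False
```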
Is the 10²⁵ FLOPs threshold for GPAI systemic risk an absolute rule?
No. Article 51(2) establishes a rebuttable presumption: if your model was trained using more than 10²⁵ floating point operations (FLOPs), it is presumed to have systemic risk — but this can be contested. Providers may present arguments to the European AI Office demonstrating that their model does not present systemic risks despite exceeding the threshold. Conversely, the Commission may designate models below the threshold as systemic risk under Article 51(1)(b) based on capability or impact criteria in Annex XIII. No changes to the threshold have been adopted as of March 2026. For downstream providers who substantially modify an existing GPAI model, the relevant compute threshold is 3×10²⁴ FLOPs (one-third of the standard threshold), per the July 2025 AI Office GPAI Guidelines.
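For intuition about where models sit relative to these thresholds, the sketch below uses the common industry heuristic that training compute is roughly 6 × parameters × training tokens. Both the heuristic and the example model are illustrative assumptions: the Regulation specifies only the cumulative-compute thresholds, not an estimation method.

```python
# Rough training-compute estimate using the widely cited heuristic
# FLOPs ~ 6 * N_parameters * N_training_tokens. This heuristic is an
# assumption for illustration, not part of the Regulation.

SYSTEMIC_RISK_THRESHOLD = 1e25        # Article 51(2) presumption
DOWNSTREAM_MODIFIER_THRESHOLD = 3e24  # per the July 2025 GPAI Guidelines


def estimated_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens


# Hypothetical example: a 100B-parameter model trained on 15T tokens.
flops = estimated_training_flops(100e9, 15e12)  # 9.0e24 FLOPs
print(f"estimated compute: {flops:.1e} FLOPs")
print("presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD)            # False
print("downstream modifier threshold:", flops > DOWNSTREAM_MODIFIER_THRESHOLD)  # True
```

Note how a model can sit below the systemic-risk presumption while a downstream modification of that scale would still cross the one-third threshold.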
Is my AI system high-risk under the EU AI Act?
Your system is likely high-risk if it falls under one of the Annex III domains, such as biometric identification, employment, credit scoring, education, or law enforcement, unless an Article 6(3) carve-out applies (see above). Use our free checker to assess your specific case.
What does Article 9 require technically?
Article 9 requires a documented, iterative risk management system covering identification, estimation, evaluation, and mitigation of risks throughout the AI system lifecycle — not just at deployment.
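One hypothetical way to picture what a documented, iterative system records is a risk-register entry that is revisited at each lifecycle stage. The schema below is our illustration, not a format mandated by Article 9.

```python
from dataclasses import dataclass, field


@dataclass
class RiskEntry:
    """Hypothetical risk-register record mirroring the Article 9 steps;
    the field names are illustrative, not prescribed by the Regulation."""
    description: str   # identification
    likelihood: str    # estimation, e.g. "low" / "medium" / "high"
    severity: str      # estimation
    evaluation: str    # is the residual risk acceptable?
    mitigations: list[str] = field(default_factory=list)
    lifecycle_stage: str = "design"  # updated at each stage, not just deployment


risk = RiskEntry(
    description="Model under-performs on under-represented demographic groups",
    likelihood="medium",
    severity="high",
    evaluation="not acceptable without mitigation",
    mitigations=["targeted data collection", "post-deployment bias monitoring"],
)
```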
What is a GPAI model under the EU AI Act?
A General Purpose AI (GPAI) model is an AI model trained on a large amount of data at scale that displays significant generality and can competently perform a wide range of distinct tasks. Examples include large language models and multimodal foundation models. GPAI obligations under Articles 53–55 apply from August 2, 2025.
What is an EU AI Act gap report?
A gap report identifies which technical requirements of the EU AI Act your AI system does not yet meet, mapped to specific articles, with recommended remediation actions and priority flags.
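As a hypothetical illustration of what a single finding in such a report might look like (the actual AI Act Gap output format may differ):

```python
import json

# Illustrative shape of one gap-report finding: a requirement mapped to
# its article, with a status, priority flag, and remediation action.
finding = {
    "article": "Article 12",
    "requirement": "Automatic logging of events over the system lifetime",
    "status": "missing",        # "met" / "partial" / "missing"
    "priority": "high",         # flagged as enforcement-critical
    "remediation": "Implement structured event logging with a retention policy",
}
print(json.dumps(finding, indent=2))
```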
When do EU AI Act high-risk rules apply?
High-risk AI system requirements under Annex III take effect on August 2, 2026 under the original AI Act (Article 113). The EU Digital Omnibus proposal (COM/2025/836) would extend this to December 2, 2027 for Annex III systems. Both the European Parliament and Council have aligned on this date, but the Omnibus has not yet been formally adopted as of March 2026; trilogue is expected to begin April 2026. The pragmatic approach: treat August 2026 as your planning deadline, and monitor the Omnibus trilogue for confirmation of any extension. GPAI obligations remain in force from August 2, 2025 regardless.
Is this tool free?
Yes. AI Act Gap is completely free, requires no login, and stores no personal data in assessment results. Email collection is optional and requires explicit GDPR-compliant consent. Article references are verified against the final published text of Regulation (EU) 2024/1689 (Official Journal of the EU, 12 July 2024).
Have a question not covered here? Contact us.