Frequently Asked Questions
Common questions about EU AI Act compliance and the AI Act Gap tool.
A Provider is the entity that develops an AI system and places it on the market under their own name — they bear the primary technical and documentation obligations (Articles 9–17, 43, 72). A Deployer is the entity that uses a third-party AI system in a professional context — they have distinct obligations around human oversight, log retention, staff literacy, and fundamental rights impact assessment (Articles 4, 26, 27). Many organisations are both: if you fine-tune and deploy a model, you may have Provider obligations for the model and Deployer obligations for the application. Important: under Article 25, a deployer becomes a Provider — and assumes all provider obligations — if they put their own name or trademark on the AI system, change its intended purpose, or make a substantial modification. This means some organisations that consider themselves deployers are legally providers. AI Act Gap checks for this at the start of the assessment.
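The role logic above can be sketched as a small decision function. This is an illustrative sketch only, not legal advice; the class name, field names, and boolean triggers are assumptions chosen to mirror the Article 25 conditions described above.

```python
from dataclasses import dataclass

@dataclass
class Organisation:
    develops_system: bool              # developed the AI system itself
    uses_third_party_system: bool      # deploys a third-party system professionally
    rebrands_system: bool              # puts its own name or trademark on the system
    changes_intended_purpose: bool
    makes_substantial_modification: bool

def roles(org: Organisation) -> set:
    """Return the set of AI Act roles an organisation may hold (illustrative only)."""
    result = set()
    if org.develops_system:
        result.add("provider")
    if org.uses_third_party_system:
        result.add("deployer")
        # Article 25: any of these triggers reclassifies the deployer as a provider
        if (org.rebrands_system
                or org.changes_intended_purpose
                or org.makes_substantial_modification):
            result.add("provider")
    return result
```

For example, an organisation that only deploys a third-party system but white-labels it under its own trademark would come out holding both roles.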
Even if your AI system falls under an Annex III domain (such as employment, credit scoring, or biometrics), it may not be classified as high-risk if it meets any of the following conditions under Article 6(3): it performs only a narrow procedural task; it improves the result of a previously completed human activity; it detects decision-making patterns without replacing or influencing a previously completed human assessment; or it performs a preparatory task to an assessment. However, if your system performs profiling of natural persons, it is always considered high-risk regardless of these conditions. AI Act Gap flags this carve-out during the assessment as an informational note — we recommend confirming your classification with legal counsel before concluding your system is not high-risk.
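The Article 6(3) carve-out described above is essentially a short decision procedure, which can be sketched as follows. This is a simplified illustration under the stated conditions, not legal advice; the function and parameter names are assumptions, not terms from the Act.

```python
def annex_iii_high_risk(
    in_annex_iii_domain: bool,
    performs_profiling: bool,
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_assessment: bool,
    preparatory_task_only: bool,
) -> bool:
    """Sketch of the Article 6(3) carve-out logic (illustrative only)."""
    if not in_annex_iii_domain:
        return False
    if performs_profiling:
        # Profiling of natural persons is always high-risk,
        # regardless of the carve-out conditions below.
        return True
    carve_out_applies = (
        narrow_procedural_task
        or improves_completed_human_activity
        or detects_patterns_without_replacing_assessment
        or preparatory_task_only
    )
    return not carve_out_applies
```

Note how profiling short-circuits the check before any carve-out condition is considered, matching the ordering described above.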
The FRIA under Article 27 is not required for all deployers of high-risk AI systems. It applies specifically to: bodies governed by public law (government agencies, municipalities, public authorities); private entities providing public services (such as private healthcare providers under public contract); deployers using AI to evaluate creditworthiness or establish credit scores (Annex III point 5b, excluding fraud detection); and deployers using AI for risk assessment or pricing in life and health insurance (Annex III point 5c). Ordinary private companies deploying high-risk AI for other purposes — such as hiring tools or access to services — are subject to Article 26 deployer obligations but are not required to conduct an FRIA. Note: the AI Office FRIA template has not yet been published as of March 2026. Deployers may adapt a GDPR Article 35 Data Protection Impact Assessment format in the interim.
No. Article 51(2) establishes a rebuttable presumption: if your model was trained using more than 10²⁵ floating point operations (FLOPs), it is presumed to have systemic risk — but this can be contested. Providers may present arguments to the European AI Office demonstrating that their model does not present systemic risks despite exceeding the threshold. Conversely, the Commission may designate models below the threshold as systemic risk under Article 51(1)(b) based on capability or impact criteria in Annex XIII. No changes to the threshold have been adopted as of March 2026. For downstream providers who substantially modify an existing GPAI model, the relevant compute threshold is 3×10²⁴ FLOPs (one-third of the standard threshold), per the July 2025 AI Office GPAI Guidelines.
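The threshold arithmetic above can be illustrated with a minimal sketch. This encodes only the rebuttable presumption, not the AI Office's discretion to designate or de-designate models; the function name and parameters are assumptions for illustration.

```python
STANDARD_THRESHOLD = 1e25   # Article 51(2) presumption threshold, in FLOPs
MODIFIER_THRESHOLD = 3e24   # downstream-modification threshold per the
                            # July 2025 AI Office GPAI Guidelines

def systemic_risk_presumed(training_flops: float,
                           is_downstream_modification: bool = False) -> bool:
    """Rebuttable presumption only: providers can contest it, and the
    Commission can designate models below the threshold (Art. 51(1)(b))."""
    threshold = (MODIFIER_THRESHOLD if is_downstream_modification
                 else STANDARD_THRESHOLD)
    return training_flops > threshold
```

So a model trained with 2×10²⁵ FLOPs triggers the presumption, while a downstream modification using 4×10²⁴ FLOPs exceeds the lower one-third threshold.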
Your system is likely high-risk if it falls under Annex III domains such as biometric identification, employment, credit scoring, education, or law enforcement. Use our free checker to assess your specific case.
Article 9 requires a documented, iterative risk management system covering identification, estimation, evaluation, and mitigation of risks throughout the AI system lifecycle — not just at deployment.
A General Purpose AI (GPAI) model is an AI model trained on large amounts of data at scale that displays significant generality and can competently perform a wide range of distinct tasks. Examples include large language models and multimodal foundation models. GPAI obligations under Articles 53–55 apply from August 2, 2025.
A gap report identifies which technical requirements of the EU AI Act your AI system does not yet meet, mapped to specific articles, with recommended remediation actions and priority flags. For a full explanation of what gap analysis covers and how the report is structured, see our how it works page.
High-risk AI system requirements under Annex III are currently enforceable from August 2, 2026 under the original AI Act (Article 113). The EU Digital Omnibus proposal (COM/2025/836) proposes extending this to December 2, 2027 for Annex III systems. Both the European Parliament and Council have aligned on this date, but the Omnibus has not yet been formally adopted as of March 2026 — trilogue is expected to begin April 2026. The pragmatic approach: treat August 2026 as your planning deadline, and monitor the Omnibus trilogue for confirmation of any extension. GPAI obligations remain in force from August 2, 2025 regardless.
Yes. AI Act Gap is completely free, requires no login, and stores no personal data in assessment results. Email collection is optional and requires explicit GDPR-compliant consent. If you opt in, we collect your email, company name, professional role, industry sector, and assessment context. We may use aggregated, anonymised data to publish EU AI Act readiness trend reports — no individual data is ever shared. Article references are verified against the final published text of Regulation (EU) 2024/1689 (Official Journal of the EU, 12 July 2024).
If your system is not high-risk (Annex III) and not a GPAI model, you may still have obligations under Article 50 of the EU AI Act (limited risk / transparency obligations). These apply if your system interacts directly with people, generates synthetic content, uses emotion recognition, or publishes AI-generated text on matters of public interest. Minimal risk systems — such as spam filters, recommendation engines, and logistics tools — are largely exempt from binding obligations. AI Act Gap currently covers high-risk and GPAI obligations in depth. Article 50 transparency gap analysis is planned for v2. Use the ‘I’m not sure’ option in Step 1 to identify which track applies to your system.
Reach us at contact@aiactgap.com — we read everything and respond to feedback, questions about the tool, and partnership enquiries.
Have a question not covered here? Contact us.