EU AI Act compliance

EU AI Act compliance made simple

The EU AI Act is live. We turn legal text into a clear plan with owners, deadlines and proof. Start with a fast gap assessment, then track everything in one place.

Risk tiers explained

The EU AI Act uses a risk-based approach with four main categories

Unacceptable risk

Banned outright

Social scoring, emotion recognition at work/school, biometric categorization based on sensitive attributes, real-time remote biometric identification in public spaces, subliminal or manipulative techniques

High risk

Strict controls required

CV screening, credit scoring, law enforcement, medical devices, critical infrastructure, education assessment, recruitment tools

Limited risk

Transparency required

Chatbots, deepfakes, emotion recognition systems, biometric categorization (non-prohibited), AI-generated content

Minimal risk

Little to no obligations

Spam filters, AI-enabled video games, inventory management, basic recommendation systems

How VerifyWise supports EU AI Act compliance

Concrete capabilities that address specific regulatory requirements

AI system inventory with risk classification

Register every AI system in your organization with structured metadata. Each entry captures purpose, data sources, deployment context and stakeholders. The platform applies Annex III criteria to determine whether systems qualify as high-risk, limited-risk or minimal-risk, generating classification rationale you can reference during audits.

Addresses: Article 6 classification, Article 9 risk management, Article 49 registration
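
To make the inventory idea concrete, here is a minimal sketch of what a structured entry and a first-pass Annex III check could look like. The field names, the `ANNEX_III_AREAS` set and the `classify` helper are illustrative assumptions, not VerifyWise's actual schema or the Act's full legal test.

```python
from dataclasses import dataclass, field

# Illustrative subset of Annex III high-risk areas (not the full legal text).
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_democracy",
}

@dataclass
class AISystemEntry:
    """One row in the AI system inventory (hypothetical schema)."""
    name: str
    purpose: str
    data_sources: list[str]
    deployment_context: str            # e.g. "internal HR screening"
    stakeholders: list[str]
    annex_iii_area: str | None = None  # set when the use case maps to an Annex III area
    classification: str = field(default="unclassified", init=False)
    rationale: str = field(default="", init=False)

def classify(entry: AISystemEntry) -> AISystemEntry:
    """Very rough first-pass classification; legal review still confirms the result."""
    if entry.annex_iii_area in ANNEX_III_AREAS:
        entry.classification = "high-risk"
        entry.rationale = f"Use case falls under Annex III area '{entry.annex_iii_area}'."
    else:
        entry.classification = "minimal-risk or limited-risk"
        entry.rationale = "No Annex III area matched; check transparency duties for limited-risk systems."
    return entry

cv_screener = classify(AISystemEntry(
    name="CV screening model",
    purpose="Rank inbound job applications",
    data_sources=["ATS exports", "public profiles"],
    deployment_context="recruitment",
    stakeholders=["HR", "Legal"],
    annex_iii_area="employment",
))
print(cv_screener.classification, "-", cv_screener.rationale)
```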

Technical documentation generation

Build the documentation package required under Article 11. The platform structures information about system architecture, training data provenance, performance metrics and known limitations into formatted documents that match regulatory expectations. Templates cover both provider and deployer perspectives.

Addresses: Article 11 technical documentation, Annex IV requirements
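
As a rough illustration of how such a package can be structured, the sketch below lays out Annex IV-style sections as a checklist and reports which ones a draft still lacks. The section names paraphrase the Annex IV headings, and the `TECH_DOC_TEMPLATE` layout is our assumption, not a regulatory template.

```python
# Hypothetical skeleton for assembling an Article 11 / Annex IV documentation package.
# Section names paraphrase Annex IV headings; consult the legal text for exact scope.
TECH_DOC_TEMPLATE = {
    "general_description": ["intended purpose", "provider details", "versions", "hardware/software context"],
    "development_process": ["architecture", "training data provenance", "design choices", "validation procedures"],
    "monitoring_and_control": ["human oversight measures", "expected lifetime", "maintenance"],
    "performance": ["metrics", "accuracy by subgroup", "known limitations"],
    "risk_management": ["identified risks", "mitigations", "residual risk"],
    "lifecycle_changes": ["change log", "substantial modifications"],
    "standards_and_conformity": ["harmonised standards applied", "EU declaration of conformity"],
    "post_market_monitoring": ["monitoring plan", "incident handling"],
}

def missing_sections(doc: dict) -> list[str]:
    """Return template sections the draft documentation has not yet filled in."""
    return [section for section in TECH_DOC_TEMPLATE if not doc.get(section)]

draft = {"general_description": {"intended purpose": "CV screening"}}
print(missing_sections(draft))  # everything except general_description
```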

Human oversight workflow configuration

Define who reviews AI outputs, under what conditions and with what authority to override. The platform lets you configure oversight triggers, assign reviewers by role or expertise and capture review decisions with timestamps. Oversight patterns become auditable records demonstrating Article 14 compliance.

Addresses: Article 14 human oversight, Article 26 deployer obligations
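
A minimal sketch of what configurable oversight triggers might look like follows; the thresholds, reviewer roles and record fields are illustrative assumptions rather than VerifyWise's actual configuration model.

```python
from datetime import datetime, timezone

# Hypothetical oversight rules: route low-confidence or adverse outputs to a human reviewer.
OVERSIGHT_RULES = [
    {"trigger": lambda output: output["confidence"] < 0.70, "reviewer_role": "domain_expert"},
    {"trigger": lambda output: output["decision"] == "reject", "reviewer_role": "hr_manager"},
]

def route_for_review(output: dict) -> dict | None:
    """Return an auditable review task if any oversight trigger fires, else None."""
    for rule in OVERSIGHT_RULES:
        if rule["trigger"](output):
            return {
                "output_id": output["id"],
                "assigned_role": rule["reviewer_role"],
                "created_at": datetime.now(timezone.utc).isoformat(),
                "status": "pending_review",  # reviewer later records approve/override plus reason
            }
    return None

task = route_for_review({"id": "out-123", "decision": "reject", "confidence": 0.91})
print(task)  # routed to hr_manager because the decision is adverse
```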

Operational logging and retention

Capture system events, user interactions and decision outputs with automatic timestamping. Logs are retained according to configurable policies that default to the six-month minimum deployers must maintain. Search and export functions support incident investigation and regulatory requests.

Addresses: Article 12 record-keeping, Article 26(5) log retention
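
The sketch below shows the general shape of append-only event logging with a retention floor, assuming a simple JSON-lines store; the `RETENTION` value mirrors the six-month deployer minimum mentioned above and should follow your own policy.

```python
import json
from datetime import datetime, timedelta, timezone

# Illustrative append-only event log with a configurable retention floor (~6 months).
RETENTION = timedelta(days=183)

def log_event(path: str, event: dict) -> None:
    """Append one timestamped event as a JSON line."""
    event = {**event, "logged_at": datetime.now(timezone.utc).isoformat()}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def expired(logged_at: str) -> bool:
    """True only once an event is older than the retention floor (never purge earlier)."""
    return datetime.now(timezone.utc) - datetime.fromisoformat(logged_at) > RETENTION

log_event("ai_events.jsonl", {"system": "cv-screener", "type": "decision", "output": "shortlisted"})
```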

Incident tracking and reporting

Log AI-related incidents with severity classification and assign investigation owners. The platform tracks remediation progress and generates incident reports suitable for regulatory notification. Serious incidents can be escalated to authorities within the required timeframe, with supporting documentation attached.

Addresses: Article 73 serious incident reporting, Article 99 penalties
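
A minimal sketch of such an incident record, with a reporting deadline derived from the date of awareness, might look like the following; the `REPORTING_WINDOW` value is a placeholder, so confirm the exact Article 73 timeframe that applies to your incident type.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Placeholder reporting window; verify the Article 73 deadline for your incident type.
REPORTING_WINDOW = timedelta(days=15)

@dataclass
class Incident:
    system: str
    severity: str            # e.g. "serious" triggers authority notification
    description: str
    became_aware: datetime
    owner: str

    @property
    def report_by(self) -> datetime:
        return self.became_aware + REPORTING_WINDOW

incident = Incident(
    system="credit-scoring-model",
    severity="serious",
    description="Systematic denial pattern affecting a protected group",
    became_aware=datetime.now(timezone.utc),
    owner="compliance-lead",
)
print("Notify authority no later than:", incident.report_by.date())
```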

Fundamental rights impact assessment

Deployers of high-risk AI in certain sectors must assess impacts on fundamental rights before deployment. The platform provides structured assessment templates covering discrimination risk, privacy implications, access to services and due process considerations. Completed assessments generate dated records for compliance evidence.

Addresses: Article 27 fundamental rights impact assessment

All compliance activities are tracked with timestamps, assigned owners and approval workflows. This audit trail demonstrates systematic governance rather than ad-hoc documentation created after the fact.

Complete EU AI Act requirements coverage

VerifyWise provides dedicated tooling for every regulatory requirement across 15 compliance categories

64 EU AI Act requirements
64 requirements with dedicated tooling
100% coverage across all categories

Article 9: Risk Management & Assessment (6/6)
Article 10: Data Governance (4/4)
Article 11, Annex IV: Technical Documentation (5/5)
Article 12: Record-Keeping & Logging (4/4)
Article 13: Transparency & User Information (4/4)
Article 14: Human Oversight (5/5)
Article 15: Accuracy, Robustness & Cybersecurity (4/4)
Article 17: Quality Management System (6/6)
Article 43, Annex VI: Conformity Assessment (4/4)
Articles 47-49: Registration & CE Marking (3/3)
Article 72: Post-Market Monitoring (4/4)
Article 73: Incident Reporting (3/3)
Article 26: Deployer Obligations (5/5)
Article 27: Fundamental Rights Impact Assessment (4/4)
Article 4: AI Literacy & Training (3/3)

Capabilities that set VerifyWise apart

CE Marking workflow

Guided 7-step conformity assessment process with document generation

LLM Gateway

Real-time monitoring and policy enforcement for GPAI model usage

Training Registrar

Track AI literacy requirements and staff competency records

Incident Management

Structured workflows for serious incident tracking and authority notification

Not sure if you're in scope?

Take our free EU AI Act readiness assessment to determine your risk classification and compliance obligations in minutes.

5 minutes: quick assessment
Instant: get results immediately

Take free assessment

Know your role & obligations

Different actors in the AI value chain have different responsibilities under the EU AI Act

Provider

Organizations developing or substantially modifying AI systems

  • Implement risk management system throughout lifecycle
  • Ensure training data quality and governance
  • Create and maintain technical documentation
  • Design appropriate logging capabilities
  • Ensure transparency and provide information to deployers
  • Implement human oversight measures
  • Ensure accuracy, robustness, and cybersecurity
  • Establish quality management system
  • Conduct conformity assessment and affix CE marking
  • Register system in EU database
  • Report serious incidents to authorities

Deployer

Organizations using AI systems under their authority

  • Assign human oversight personnel
  • Maintain logs for at least 6 months
  • Conduct fundamental rights impact assessment
  • Monitor system operation and performance
  • Report serious incidents to provider and authorities
  • Use AI system according to instructions
  • Ensure input data is relevant for intended purpose
  • Inform provider of any risks identified
  • Suspend use if system presents risk
  • Cooperate with authorities during investigations

Distributor/Importer

Organizations making AI systems available in EU market

  • Verify provider has conducted conformity assessment
  • Ensure CE marking and documentation are present
  • Verify registration in EU database
  • Store and maintain required documentation
  • Ensure storage and transport conditions maintain compliance
  • Provide authorities with necessary information
  • Cease distribution if AI system non-compliant
  • Inform provider and authorities of non-compliance
  • Cooperate with authorities for corrective actions

Obligations comparison table

Quick reference guide showing which obligations apply to each role

Obligation | Provider | Deployer | Distributor
Risk management system (lifecycle risk assessment and mitigation) | Establish | - | -
Technical documentation (system specs, training data, performance metrics) | Create | - | Store
Human oversight (prevent or minimize risks) | Design | Implement | -
Logging & records (minimum 6 months retention for deployers) | Enable | Maintain | -
Conformity assessment (self-assessment or notified body) | Conduct | - | Verify
CE marking (required before market placement) | Affix | - | Verify
EU database registration (high-risk AI systems) | Register | - | Verify
Fundamental rights impact assessment (required for deployers in specific sectors) | - | Conduct | -
Incident reporting (serious incidents to authorities) | Report | Report | If aware
Post-market monitoring (continuous surveillance of system performance) | Monitor | Monitor use | -

Note: Many organizations may have multiple roles. For example, if you both develop and deploy an AI system, you must comply with both Provider and Deployer obligations.

6 steps to compliance by August 2026

A practical roadmap to achieve EU AI Act compliance

Step 1 (1-2 months)

AI system inventory

Catalog all AI systems in your organization

  • Identify all AI systems and tools in use
  • Document AI vendors and third-party services
  • Map AI systems to business processes
  • Identify AI system owners and stakeholders
  • Create central AI registry

Step 2 (2-3 months)

Risk classification

Assign risk tiers to each AI system

  • Assess each system against Annex III categories
  • Determine if system falls under prohibited use cases
  • Classify as high-risk, limited-risk, or minimal-risk
  • Document classification rationale
  • Identify your role (provider, deployer, distributor)

Step 3 (3-4 months)

Gap assessment

Identify compliance gaps and requirements

  • Map current state against EU AI Act requirements
  • Identify missing documentation and processes
  • Assess technical compliance gaps
  • Evaluate governance and oversight mechanisms
  • Prioritize remediation activities

Step 4 (4-8 months)

Documentation & governance

Build required documentation and controls

  • Create technical documentation for high-risk systems
  • Implement risk management systems
  • Establish data governance procedures
  • Document human oversight mechanisms
  • Create quality management system
  • Prepare fundamental rights impact assessments

Step 5 (8-10 months)

Testing & validation

Conduct conformity assessments

  • Perform internal testing and validation
  • Conduct bias and fairness assessments
  • Test accuracy, robustness, and cybersecurity
  • Engage notified body if required
  • Obtain CE marking for applicable systems
  • Register high-risk systems in EU database

Step 6 (Ongoing)

Monitoring & reporting

Maintain compliance and monitor systems

  • Implement continuous monitoring systems
  • Maintain logs and audit trails
  • Monitor for performance drift and incidents
  • Report serious incidents within required timeframes
  • Conduct periodic reviews and updates
  • Stay updated on regulatory guidance

Note: Notified bodies are already booking into Q2 2026. Start your compliance journey now to meet the August 2026 deadline.

Start your compliance journey

Key dates you should know

Critical compliance deadlines approaching

Active: February 2, 2025

Prohibited practices

Banned AI practices become illegal

  • Social scoring
  • Biometric categorization
  • Emotion inference at work/school

Active: August 2, 2025

GPAI transparency

General-purpose AI transparency rules

  • Codes of practice
  • Model documentation
  • Systemic risk assessments

Upcoming: August 2, 2026

High-risk phase 1

Classification rules begin

  • Risk management systems
  • Data governance
  • Technical documentation

Upcoming: August 2, 2027

Full compliance

All high-risk requirements active

  • Complete oversight
  • Post-market monitoring
  • Conformity assessments

High-risk AI systems (Annex III)

Eight categories of AI systems classified as high-risk under the EU AI Act

Biometric identification

Examples

Facial recognition, fingerprint systems, iris scanning

Key requirement

Particularly stringent for law enforcement use

Critical infrastructure

Examples

Traffic management, water/gas/electricity supply management

Key requirement

Must demonstrate resilience and fail-safe mechanisms

Education & vocational training

Examples

Student assessment, exam scoring, admission decisions

Key requirement

Requires bias testing and transparency to students

Employment & HR

Examples

CV screening, interview tools, promotion decisions, monitoring

Key requirement

Must protect worker rights and provide explanations

Essential services

Examples

Credit scoring, insurance risk assessment, benefit eligibility

Key requirement

Requires human review for adverse decisions

Law enforcement

Examples

Risk assessment, polygraph analysis, crime prediction

Key requirement

Additional safeguards for fundamental rights

Migration & border control

Examples

Visa applications, asylum decisions, deportation risk assessment

Key requirement

Strong human oversight and appeal mechanisms

Justice & democracy

Examples

Court case research, judicial decision support

Key requirement

Must maintain judicial independence

Penalties & enforcement

The EU AI Act has a three-tier penalty structure with significant fines

Tier 1 - Prohibited AI

€35M or 7% of global revenue

(whichever is higher)

Violations include:

  • Social scoring systems
  • Manipulative AI
  • Real-time biometric ID in public spaces
  • Untargeted facial scraping

Tier 2 - High-risk violations

€15M or 3% of global revenue

(whichever is higher)

Violations include:

  • Non-compliant high-risk AI systems
  • Breaches of provider, deployer, importer or distributor obligations
  • Failing to conduct required impact assessments

Tier 3 - Information violations

€7.5M or 1.5% of global revenue

(whichever is higher)

Violations include:

  • Providing incorrect information
  • Failing to provide information to authorities
  • Incomplete documentation
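
To make the "whichever is higher" rule concrete, here is a quick calculation for a hypothetical company with €1 billion in global annual revenue, using the tier figures above.

```python
# "Whichever is higher" illustrated for a hypothetical €1B-revenue company.
def max_fine(fixed_eur: float, revenue_pct: float, global_revenue_eur: float) -> float:
    return max(fixed_eur, revenue_pct * global_revenue_eur)

revenue = 1_000_000_000  # €1B global annual revenue
print(max_fine(35_000_000, 0.07, revenue))   # Tier 1: €70M, since 7% exceeds the €35M floor
print(max_fine(15_000_000, 0.03, revenue))   # Tier 2: €30M
print(max_fine(7_500_000, 0.015, revenue))   # Tier 3: €15M
```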

General-purpose AI (GPAI) requirements

Obligations for GPAI providers came into effect on August 2, 2025

What qualifies as general-purpose AI?

General-purpose AI refers to models trained on broad data that can perform a wide range of tasks without being designed for one specific purpose. These foundation models power many downstream applications, from chatbots to code assistants to image generators. The EU AI Act creates specific obligations for organizations that develop these models and those that build applications using them.

Large Language Models

GPT-4, Claude, Gemini, Llama, Mistral

Image Generation

Midjourney, DALL-E, Stable Diffusion

Multimodal Models

GPT-4o, Gemini Pro Vision, Claude 3.5

Code Generation

GitHub Copilot, Amazon CodeWhisperer

Are you a GPAI provider or downstream integrator?

GPAI Provider

You developed or trained the foundation model itself

  • Full GPAI transparency obligations apply
  • Must provide documentation to downstream users
  • Responsible for copyright compliance in training
  • Systemic risk requirements if threshold exceeded

Examples: OpenAI, Anthropic, Google DeepMind, Meta AI

Downstream Provider

You build applications using GPAI models via API or integration

  • Must obtain documentation from GPAI provider
  • Responsible for your specific application's compliance
  • High-risk use cases trigger high-risk obligations
  • Cannot transfer responsibility to foundation model provider

Examples: Companies using GPT-4 API, Claude API, or fine-tuned models

GPAI obligation tiers

Standard

General GPAI models

All general-purpose AI models

  • Provide technical documentation
  • Provide information and documentation to downstream providers
  • Implement copyright policy and publish training data summary
  • Document known or estimated energy consumption
Systemic risk

Systemic risk GPAI models

>10²⁵ FLOPs or designated by Commission

  • Conduct model evaluation and systemic risk assessment
  • Perform adversarial testing
  • Track, document and report serious incidents
  • Ensure adequate cybersecurity protections
  • Implement risk mitigation measures
  • Report to AI Office annually

Understanding the systemic risk threshold

Models trained with more than 10²⁵ floating point operations (FLOPs) are automatically classified as posing systemic risk. The European Commission can also designate models based on their capabilities, reach or potential for serious harm regardless of training compute. Current models likely meeting this threshold include GPT-4 and successors, Claude 3 Opus and later versions, Gemini Ultra and Meta's largest Llama variants.

Systemic risk classification triggers additional obligations: comprehensive model evaluations, adversarial red-teaming, incident tracking and reporting, enhanced cybersecurity and annual reporting to the EU AI Office.
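
One rough way to sanity-check a model against the 10²⁵ FLOP line is the widely used 6 × parameters × training tokens estimate of dense-transformer training compute. That heuristic is our assumption, not part of the Act, and the Commission can designate models regardless of compute.

```python
# Rough training-compute estimate using the common 6 * N * D heuristic for dense transformers.
# This approximation is our assumption, not part of the EU AI Act.
THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

for name, params, tokens in [
    ("70B model, 15T tokens", 70e9, 15e12),    # ~6.3e24 FLOPs
    ("400B model, 15T tokens", 400e9, 15e12),  # ~3.6e25 FLOPs
]:
    flops = training_flops(params, tokens)
    print(f"{name}: ~{flops:.1e} FLOPs -> {'above' if flops > THRESHOLD_FLOPS else 'below'} threshold")
```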

Open-source GPAI provisions

Exemption

Reduced obligations apply when

  • Model weights are publicly available
  • Training methodology is documented openly
  • Released under a qualifying open-source license
  • Parameters and architecture are published
No exemption

Full obligations still apply if

  • Model poses systemic risk (>10²⁵ FLOPs)
  • You modify and deploy commercially
  • Used in high-risk applications under Annex III
  • Model is integrated into regulated products

If you build on GPAI models

Most organizations using AI are downstream integrators rather than foundation model providers. If you access GPT-4, Claude or similar models through APIs to build your own applications, these obligations apply to you.

1. Obtain and review technical documentation from your GPAI provider
2. Assess whether your specific use case qualifies as high-risk
3. Implement appropriate human oversight for your application
4. Document how you've integrated the GPAI model
5. Establish logging and monitoring for your deployment
6. Create transparency disclosures for end users
7. Define incident response procedures for AI-related issues
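
As a sketch of steps 3 through 6 for an API-based integration, the wrapper below logs every call, attaches an AI disclosure to outputs and flags low-confidence answers for human review. The `client.generate` interface and field names are placeholders, not any particular vendor's SDK.

```python
import json
from datetime import datetime, timezone

DISCLOSURE = "This response was generated with the assistance of an AI system."

def call_gpai(client, prompt: str, log_path: str = "gpai_calls.jsonl") -> dict:
    """Wrap a GPAI API call with logging, disclosure and a human-review flag.

    `client` is a placeholder for whatever SDK you use; here it only needs a
    `generate(prompt) -> {"text": str, "confidence": float}` method.
    """
    response = client.generate(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "output": response["text"],
        "needs_human_review": response.get("confidence", 1.0) < 0.7,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return {"disclosure": DISCLOSURE, **record}
```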

The EU AI Office

The EU AI Office within the European Commission provides centralized oversight for GPAI models. It issues guidance, develops codes of practice, evaluates systemic risk models and coordinates with national authorities. GPAI providers with systemic risk models must report directly to the AI Office. The Office also serves as a resource for downstream integrators seeking clarity on their obligations.

Policy templates

Complete AI governance policy repository

Access 37 ready-to-use AI governance policy templates aligned with EU AI Act, ISO 42001, and NIST AI RMF requirements

Core governance

  • AI Governance Policy
  • AI Risk Management Policy
  • Responsible AI Principles
  • AI Ethical Use Charter
  • Model Approval & Release
  • AI Quality Assurance
  • + 6 more policies

Data & security

  • AI Data Use Policy
  • Data Minimization for AI
  • Training Data Sourcing
  • Sensitive Data Handling
  • Prompt Security & Hardening
  • Incident Response for AI
  • + 2 more policies

Legal & compliance

  • AI Vendor Risk Policy
  • Regulatory Compliance
  • CE Marking Readiness
  • High-Risk System Registration
  • Documentation & Traceability
  • AI Accountability & Roles
  • + 7 more policies

Frequently asked questions

Common questions about EU AI Act compliance

Does the EU AI Act apply to organizations outside the EU?
Yes. The regulation has extraterritorial reach, meaning any organization whose AI systems or outputs are used in the EU market falls within scope. This includes US-based SaaS companies, consulting firms and any business whose AI affects EU citizens, regardless of where headquarters are located. See Article 2 for the full scope definition.

What if we only use third-party AI systems rather than building our own?
You still carry obligations as a deployer under Article 26. These include transparency requirements, human oversight provisions and proper logging. The provider handles certain upstream duties, but you remain responsible for how the system operates within your organization.

What is the difference between prohibited and high-risk AI?
Prohibited practices defined in Article 5 are banned entirely and carry the highest penalties. High-risk systems listed in Annex III can operate provided you meet strict requirements around documentation, risk management, human oversight and conformity assessment. The distinction determines whether you can use the system at all versus how much governance it requires.

What are the penalties for non-compliance?
Prohibited practices carry fines up to €35 million or 7% of global annual revenue, whichever is higher. High-risk violations reach €15 million or 3% of revenue. Information violations (incomplete documentation, failure to cooperate with authorities) can trigger €7.5 million or 1.5% of revenue. See Article 99 for the full penalty structure.

When do the different requirements take effect?
The rollout is phased per Article 113. Prohibited practices became illegal in February 2025. GPAI transparency rules apply from August 2025. High-risk system requirements begin August 2026, with full compliance required by August 2027. Notified bodies are already booking assessments into Q2 2026.

How does the Act treat general-purpose AI models?
GPAI providers face specific obligations under Article 53 around documentation, copyright compliance and downstream transparency. Models exceeding 10^25 FLOPs (or designated by the Commission) carry additional systemic risk requirements per Article 55 including adversarial testing and incident reporting. If you build applications using these models, you remain responsible for your specific use case's compliance.

What documentation do high-risk systems require?
High-risk systems require technical documentation per Article 11 and Annex IV covering system architecture and capabilities, training data governance records, performance and accuracy metrics, risk management procedures, human oversight mechanisms and operational logs retained for at least six months.

Do we need a notified body, or can we self-assess?
Most high-risk AI systems can use internal self-assessment following Annex VI procedures. Biometrics for law enforcement, certain critical infrastructure safety systems and specific medical devices require third-party evaluation by accredited notified bodies. Check your Annex III category to determine which path applies.

How can smaller organizations manage compliance?
The regulation includes proportionality provisions for smaller organizations. You can use simplified documentation approaches, participate in regulatory sandboxes for testing and prioritize your highest-risk systems first. Core obligations still apply, but implementation can scale to your resources. Many SMEs start with an AI inventory and risk classification before building out full governance.

How does the AI Act interact with GDPR?
They operate as complementary frameworks. GDPR governs personal data processing while the AI Act addresses AI system risks regardless of whether personal data is involved. High-risk AI that processes personal data triggers requirements under both: you need GDPR data protection impact assessments alongside AI Act conformity assessments and technical documentation.

What about AI systems we already have in production?
Existing systems must achieve compliance by the relevant deadline (August 2026 for most high-risk categories, August 2027 for full requirements). Start by inventorying and classifying your current AI portfolio now. Systems that undergo substantial modification after August 2025 are treated as new systems with immediate obligations per Article 111.

How do we find shadow AI in our organization?
Shadow AI refers to AI tools used without governance oversight. Discovery approaches include monitoring network traffic for calls to AI APIs (OpenAI, Anthropic, Google), auditing expense reports for AI subscriptions, surveying employees about their tool usage and checking browser extensions. Most organizations discover more AI usage than expected during this exercise.

What is the difference between self-assessment and notified body assessment?
Self-assessment means your internal team conducts conformity evaluation following Annex VI procedures. Notified body assessment involves an accredited third party evaluating your system per Article 43. Most high-risk AI qualifies for self-assessment. Categories requiring external review include remote biometric identification for law enforcement, AI as safety components in regulated products and certain critical infrastructure applications.

How are open-source GPAI models treated?
Open-source GPAI models released under qualifying licenses with publicly available parameters face reduced transparency obligations per Article 53(2). The exemption disappears if the model poses systemic risk or if you modify and deploy it commercially (which may make you a provider with full obligations). High-risk applications built on open-source models still require complete compliance regardless of the underlying model's license.

How often do we need to review compliance?
High-risk systems require continuous monitoring as part of post-market surveillance under Article 72. Formal compliance reviews should occur at least annually, plus whenever you make significant system changes, experience incidents or receive new regulatory guidance. Compliance is ongoing rather than a certification you achieve once.

What are AI regulatory sandboxes?
EU Member States must establish AI regulatory sandboxes by August 2026 per Article 57. These controlled environments allow testing of innovative AI under regulatory supervision, often with liability protections and expedited feedback. Contact your national AI competent authority to learn about sandbox availability and application procedures in your jurisdiction.

What are the AI literacy requirements?
Article 4 mandates that organizations ensure staff have sufficient AI literacy appropriate to their roles. Personnel operating AI systems need to understand capabilities and limitations. Those overseeing AI require deeper knowledge of risk factors and escalation procedures. Training requirements scale with the risk level of systems being handled.

Who enforces the EU AI Act?
Each Member State designates national competent authorities and market surveillance bodies per Article 70. The EU AI Office (within the European Commission) oversees GPAI models specifically. Enforcement mechanisms include inspections, corrective measures and administrative fines. Whistleblower protections exist for individuals reporting violations. Non-EU companies placing AI on the EU market must appoint an authorized representative within the Union.

Ready to get compliant?

Start your EU AI Act compliance journey today with our comprehensive assessment and tracking tools.
