The EU AI Act is live. We turn legal text into a clear plan with owners, deadlines and proof. Start with a fast gap assessment, then track everything in one place.
The EU AI Act uses a risk-based approach with four main categories
Unacceptable risk: banned outright
Examples: Social scoring, emotion recognition at work/school, biometric categorization based on sensitive characteristics, real-time biometric ID in public spaces, subliminal techniques
High risk: strict controls required
Examples: CV screening, credit scoring, law enforcement, medical devices, critical infrastructure, education assessment, recruitment tools
Limited risk: transparency required
Examples: Chatbots, deepfakes, emotion recognition systems, biometric categorization (non-prohibited uses), AI-generated content
Minimal risk: little to no obligations
Examples: Spam filters, AI-enabled video games, inventory management, basic recommendation systems
Concrete capabilities that address specific regulatory requirements
Register every AI system in your organization with structured metadata. Each entry captures purpose, data sources, deployment context and stakeholders. The platform applies Annex III criteria to determine whether systems qualify as high-risk, limited-risk or minimal-risk, generating classification rationale you can reference during audits.
Addresses: Article 6 classification, Article 9 risk management, Article 49 registration
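For illustration, a registry entry might be modeled roughly as follows. The field names, `RiskTier` values and example data are hypothetical, not VerifyWise's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemEntry:
    name: str
    purpose: str                        # intended purpose drives Article 6 classification
    data_sources: list[str]
    deployment_context: str
    stakeholders: list[str]
    annex_iii_categories: list[str] = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MINIMAL
    classification_rationale: str = ""  # retained as audit evidence

# Example entry for a recruitment screening tool (Annex III, point 4)
entry = AISystemEntry(
    name="cv-screener",
    purpose="Rank incoming job applications for recruiter review",
    data_sources=["applicant CVs", "historical hiring outcomes"],
    deployment_context="recruitment",
    stakeholders=["HR", "legal", "engineering"],
    annex_iii_categories=["employment and worker management"],
    risk_tier=RiskTier.HIGH,
    classification_rationale="Annex III(4): AI used in recruitment",
)
```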
Build the documentation package required under Article 11. The platform structures information about system architecture, training data provenance, performance metrics and known limitations into formatted documents that match regulatory expectations. Templates cover both provider and deployer perspectives.
Addresses: Article 11 technical documentation, Annex IV requirements
Define who reviews AI outputs, under what conditions and with what authority to override. The platform lets you configure oversight triggers, assign reviewers by role or expertise and capture review decisions with timestamps. Oversight patterns become auditable records demonstrating Article 14 compliance.
Addresses: Article 14 human oversight, Article 26 deployer obligations
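As a sketch, an oversight policy could be expressed as configuration like the following. The structure and keys are illustrative assumptions, not the platform's actual format:

```python
# Illustrative human-oversight policy: when a review is triggered,
# who reviews, and what gets recorded. All keys are hypothetical.
oversight_policy = {
    "system": "cv-screener",
    "triggers": [
        {"condition": "model_confidence < 0.70", "action": "require_review"},
        {"condition": "outcome == 'reject'", "action": "require_review"},
    ],
    "reviewers": {"assigned_by": "role", "role": "hr-specialist", "min_reviewers": 1},
    "override_allowed": True,  # reviewers can overturn the AI output (Article 14)
    "audit_record": ["reviewer_id", "decision", "justification", "timestamp"],
}

for trigger in oversight_policy["triggers"]:
    print(trigger["condition"], "->", trigger["action"])
```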
Capture system events, user interactions and decision outputs with automatic timestamping. Logs are retained according to configurable policies that default to the six-month minimum deployers must maintain. Search and export functions support incident investigation and regulatory requests.
Addresses: Article 12 record-keeping, Article 26(5) log retention
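To make the retention floor concrete, here is a minimal sketch of how a configurable policy might enforce the six-month deployer minimum. The function and policy shape are assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

# Deployers of high-risk systems must keep logs for at least six months;
# 183 days is used here as a conservative approximation of that floor.
MIN_RETENTION = timedelta(days=183)

def purge_eligible_after(logged_at: datetime, policy: timedelta) -> datetime:
    """Earliest time a log record may be purged under a configurable
    policy, never dropping below the regulatory minimum."""
    return logged_at + max(policy, MIN_RETENTION)

event_time = datetime(2026, 8, 2, 12, 0, tzinfo=timezone.utc)
# A 90-day policy is silently raised to the six-month floor:
print(purge_eligible_after(event_time, timedelta(days=90)))  # 2027-02-01 12:00 UTC
```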
Log AI-related incidents with severity classification and assign investigation owners. The platform tracks remediation progress and generates incident reports suitable for regulatory notification. Serious incidents can be escalated to authorities within the required timeframe, with supporting documentation attached.
Addresses: Article 73 serious incident reporting, Article 99 penalties
Deployers of high-risk AI in certain sectors must assess impacts on fundamental rights before deployment. The platform provides structured assessment templates covering discrimination risk, privacy implications, access to services and due process considerations. Completed assessments generate dated records for compliance evidence.
Addresses: Article 27 fundamental rights impact assessment
All compliance activities are tracked with timestamps, assigned owners and approval workflows. This audit trail demonstrates systematic governance rather than ad-hoc documentation created after the fact.
VerifyWise provides dedicated tooling for every regulatory requirement across 15 compliance categories
Guided 7-step conformity assessment process with document generation
Real-time monitoring and policy enforcement for GPAI model usage
Track AI literacy requirements and staff competency records
Structured workflows for serious incident tracking and authority notification
Take our free EU AI Act readiness assessment to determine your risk classification and compliance obligations in minutes.
Quick assessment
Get results immediately
Different actors in the AI value chain have different responsibilities under the EU AI Act
Organizations developing or substantially modifying AI systems
Organizations using AI systems under their authority
Organizations making AI systems available on the EU market
Quick reference guide showing which obligations apply to each role
| Obligation | Provider | Deployer | Distributor |
|---|---|---|---|
| Risk management system (lifecycle risk assessment and mitigation) | ✓ | | |
| Technical documentation (system specs, training data, performance metrics) | ✓ | | Store |
| Human oversight (prevent or minimize risks) | Design | Implement | |
| Logging & records (minimum six-month retention for deployers) | Enable | Maintain | |
| Conformity assessment (self-assessment or notified body) | ✓ | | Verify |
| CE marking (required before market placement) | Affix | | Verify |
| EU database registration (high-risk AI systems) | Register | | Verify |
| Fundamental rights impact assessment (required for deployers in specific sectors) | | ✓ | |
| Incident reporting (serious incidents to authorities) | ✓ | ✓ | If aware |
| Post-market monitoring (continuous surveillance of system performance) | ✓ | Monitor use | |
Note: Many organizations may have multiple roles. For example, if you both develop and deploy an AI system, you must comply with both Provider and Deployer obligations.
A practical roadmap to achieve EU AI Act compliance
1. Catalog all AI systems in your organization
2. Assign risk tiers to each AI system
3. Identify compliance gaps and requirements
4. Build required documentation and controls
5. Conduct conformity assessments
6. Maintain compliance and monitor systems
Note: Notified bodies are already booking into Q2 2026. Start your compliance journey now to meet the August 2026 deadline.
Start your compliance journey
Critical compliance deadlines approaching
February 2, 2025: Banned AI practices become illegal
August 2, 2025: General-purpose AI transparency rules take effect
August 2, 2026: High-risk classification rules and most remaining obligations begin
August 2, 2027: All high-risk requirements active, including AI embedded in regulated products
Eight categories of AI systems classified as high-risk under the EU AI Act
Biometric identification
Examples: Facial recognition, fingerprint systems, iris scanning
Key requirement: Particularly stringent for law enforcement use
Critical infrastructure
Examples: Traffic management, water/gas/electricity supply management
Key requirement: Must demonstrate resilience and fail-safe mechanisms
Education and vocational training
Examples: Student assessment, exam scoring, admission decisions
Key requirement: Requires bias testing and transparency to students
Employment and worker management
Examples: CV screening, interview tools, promotion decisions, monitoring
Key requirement: Must protect worker rights and provide explanations
Access to essential services
Examples: Credit scoring, insurance risk assessment, benefit eligibility
Key requirement: Requires human review for adverse decisions
Law enforcement
Examples: Risk assessment, polygraph analysis, crime prediction
Key requirement: Additional safeguards for fundamental rights
Migration, asylum and border control
Examples: Visa applications, asylum decisions, deportation risk assessment
Key requirement: Strong human oversight and appeal mechanisms
Administration of justice and democratic processes
Examples: Court case research, judicial decision support
Key requirement: Must maintain judicial independence
The EU AI Act has a three-tier penalty structure with significant fines
€35M or 7% of global revenue (whichever is higher)
Violations include: prohibited AI practices under Article 5, such as social scoring or banned biometric uses
€15M or 3% of global revenue (whichever is higher)
Violations include: non-compliance with high-risk system obligations and most other requirements of the Act
€7.5M or 1.5% of global revenue (whichever is higher)
Violations include: supplying incorrect, incomplete or misleading information to notified bodies or authorities
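A quick worked example of the "whichever is higher" rule, using a hypothetical company with €2B in global annual turnover:

```python
def applicable_maximum(fixed_cap_eur: float, revenue_pct: float,
                       global_revenue_eur: float) -> float:
    """Maximum fine: the fixed cap or the revenue percentage, whichever is higher."""
    return max(fixed_cap_eur, revenue_pct * global_revenue_eur)

revenue = 2_000_000_000  # hypothetical €2B global annual turnover

print(applicable_maximum(35_000_000, 0.07, revenue))   # 140,000,000.0 -> 7% outweighs €35M
print(applicable_maximum(15_000_000, 0.03, revenue))   # 60,000,000.0
print(applicable_maximum(7_500_000, 0.015, revenue))   # 30,000,000.0
```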
Obligations for GPAI providers came into effect on August 2, 2025
General-purpose AI refers to models trained on broad data that can perform a wide range of tasks without being designed for one specific purpose. These foundation models power many downstream applications, from chatbots to code assistants to image generators. The EU AI Act creates specific obligations for organizations that develop these models and those that build applications using them.
Large Language Models
GPT-4, Claude, Gemini, Llama, Mistral
Image Generation
Midjourney, DALL-E, Stable Diffusion
Multimodal Models
GPT-4o, Gemini Pro Vision, Claude 3.5
Code Generation
GitHub Copilot, Amazon CodeWhisperer
You developed or trained the foundation model itself
Examples: OpenAI, Anthropic, Google DeepMind, Meta AI
You build applications using GPAI models via API or integration
Examples: Companies using GPT-4 API, Claude API, or fine-tuned models
Baseline tier: all general-purpose AI models
Systemic-risk tier: trained with >10²⁵ FLOPs or designated by the Commission
Models trained with more than 10²⁵ floating point operations (FLOPs) are automatically classified as posing systemic risk. The European Commission can also designate models based on their capabilities, reach or potential for serious harm regardless of training compute. Current models likely meeting this threshold include GPT-4 and successors, Claude 3 Opus and later versions, Gemini Ultra and Meta's largest Llama variants.
Systemic risk classification triggers additional obligations: comprehensive model evaluations, adversarial red-teaming, incident tracking and reporting, enhanced cybersecurity and annual reporting to the EU AI Office.
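For a rough sense of scale, a widely used approximation of dense-transformer training compute is about 6 × parameters × training tokens. The sketch below applies that heuristic to check a hypothetical model against the threshold; it is an estimate, not the Act's official measurement methodology:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common heuristic for dense-transformer training compute: ~6 * N * D."""
    return 6 * n_params * n_tokens

# Hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs")                    # 6.30e+24 FLOPs
print(flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)   # False: below the 10^25 line
```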
Most organizations using AI are downstream integrators rather than foundation model providers. If you access GPT-4, Claude or similar models through APIs to build your own applications, these obligations apply to you.
The EU AI Office within the European Commission provides centralized oversight for GPAI models. It issues guidance, develops codes of practice, evaluates systemic risk models and coordinates with national authorities. GPAI providers with systemic risk models must report directly to the AI Office. The Office also serves as a resource for downstream integrators seeking clarity on their obligations.
Access 37 ready-to-use AI governance policy templates aligned with EU AI Act, ISO 42001, and NIST AI RMF requirements
Common questions about EU AI Act compliance
Start your EU AI Act compliance journey today with our comprehensive assessment and tracking tools.