EU AI Act Compliance: What High-Risk AI System Providers Need to Know in 2026
The EU AI Act is the world's first comprehensive AI regulation, and it's already in force. If you're developing or deploying AI systems for the EU market, you need to understand your compliance obligations now.
After building AI safety operational workflows at Google and working with AI systems across platforms, here's what AI Act compliance actually looks like in practice.
What Is the EU AI Act?
The AI Act (Regulation (EU) 2024/1689) establishes a risk-based framework for AI systems:
- Unacceptable risk: Prohibited AI systems (e.g., social scoring, real-time remote biometric identification in publicly accessible spaces, subject to narrow law-enforcement exceptions)
- High-risk: AI systems requiring conformity assessment before deployment
- Limited risk: Transparency obligations (e.g., chatbots must disclose they're AI)
- Minimal risk: No specific obligations beyond existing law
Key dates:
- February 2, 2025: Prohibitions on unacceptable-risk AI entered into force
- August 2, 2026: Obligations for high-risk AI systems begin
- August 2, 2027: Extended deadline for high-risk AI systems embedded in products covered by EU harmonisation legislation (Annex I)
Are You a High-Risk AI System Provider?
High-risk AI systems are listed in Annex III of the AI Act. Common categories include:
1. Biometric Identification and Categorization
- Remote biometric identification (facial recognition)
- Biometric categorization systems (emotion recognition, age estimation)
2. Critical Infrastructure
- AI managing traffic, water, gas, electricity, heating systems
3. Education and Vocational Training
- AI determining access to educational institutions
- AI evaluating learning outcomes or assessing students
4. Employment, Workers' Management, and Access to Self-Employment
- AI for recruitment (CV screening, interview analysis)
- AI making promotion/termination decisions
- AI monitoring worker performance
- AI allocating tasks to platform workers (gig economy)
5. Access to Essential Services
- AI evaluating creditworthiness
- AI assessing eligibility for benefits (healthcare, social services)
- AI dispatching emergency services
6. Law Enforcement
- AI assessing risk of criminal offense
- AI for polygraph/lie detection
- AI evaluating evidence reliability
- AI predicting criminal behavior (predictive policing)
7. Migration, Asylum, Border Control
- AI examining visa/asylum applications
- AI detecting document forgery
- AI assessing security risks posed by individuals
8. Justice and Democratic Processes
- AI assisting judicial authorities in researching and interpreting facts and law
- AI intended to influence the outcome of elections or referendums, or voting behaviour
If your AI system falls into these categories, you have high-risk AI obligations.
Core Requirements for High-Risk AI Systems
1. Risk Management System (Article 9)
You must establish and maintain a risk management system throughout the AI system lifecycle.
Requirements:
- Identify and analyze known and foreseeable risks
- Estimate and evaluate risks in intended use and reasonably foreseeable misuse
- Evaluate risks based on data gathered from post-market monitoring
- Adopt risk mitigation measures
- Test high-risk AI systems and review risk management system
- Document all measures taken
Iterative process: Risk management must continue after deployment based on real-world performance data.
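In practice, this is easier to evidence when the risk register is structured data rather than prose. A minimal sketch in Python, with an illustrative schema (the field names are assumptions, not terms mandated by the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One entry in an Article 9 risk register (illustrative schema)."""
    risk_id: str
    description: str            # specific harm, not "AI might be biased"
    affected_rights: list[str]  # e.g. ["non-discrimination"]
    likelihood: str             # e.g. "low" / "medium" / "high", from test data
    severity: str
    mitigations: list[str]
    status: str = "open"        # open / mitigated / accepted
    last_reviewed: date = field(default_factory=date.today)

register = [
    RiskEntry(
        risk_id="R-001",
        description="CV screener systematically ranks qualified candidates "
                    "from protected groups lower",
        affected_rights=["non-discrimination"],
        likelihood="medium",
        severity="high",
        mitigations=["pre-release bias testing", "human review of rejections"],
    )
]
```

A register like this can be diffed, reviewed, and exported into the Annex IV technical documentation.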
2. Data Governance (Article 10)
Training, validation, and testing datasets must be subject to data governance practices.
Requirements:
- Relevant, sufficiently representative and, to the best extent possible, free of errors and complete
- Appropriate statistical properties (distribution, sample size)
- Consideration of bias that may affect health, safety, fundamental rights
- Examination of gaps, shortcomings, how they can be addressed
- Data provenance documentation
For training data:
- Must be relevant for the intended purpose
- Must have appropriate statistical properties
- Biometric data subject to special protections
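Representativeness claims are more defensible when they are computed rather than asserted. A minimal sketch of a subgroup distribution check, assuming a tabular dataset and a hypothetical group column and reference shares:

```python
import pandas as pd

def subgroup_report(df: pd.DataFrame, group_col: str,
                    reference: dict[str, float]) -> pd.DataFrame:
    """Compare subgroup shares in the dataset against a reference population.

    `reference` maps group label -> expected share (e.g. census figures);
    the column name and reference values are assumptions to adapt.
    """
    observed = df[group_col].value_counts(normalize=True)
    report = pd.DataFrame({"observed": observed, "expected": pd.Series(reference)})
    report["gap"] = report["observed"] - report["expected"]
    return report.sort_values("gap")

# Example: flag any subgroup under-represented by more than 5 percentage points
# report = subgroup_report(train_df, "age_band", {"18-34": 0.35, "35-54": 0.40, "55+": 0.25})
# flagged = report[report["gap"] < -0.05]
```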
3. Technical Documentation (Article 11, Annex IV)
Comprehensive technical documentation must be drawn up before the system is placed on the market or put into service.
Must include:
- General description of AI system (intended purpose, developer info)
- Detailed description of system elements and development process
- Information on monitoring, functioning, control mechanisms
- Description of risk management system
- Changes made through the system lifecycle
- Validation and testing procedures and results
- Cybersecurity measures
This documentation must be kept up-to-date.
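One way to keep Annex IV documentation up-to-date is to version it next to the code and check it in CI. A hypothetical docs-as-code manifest (the paths and section split are illustrative):

```python
from pathlib import Path

# Each Annex IV topic maps to a file kept under version control and
# checked for existence (and, ideally, staleness) in CI.
ANNEX_IV_DOCS = {
    "general_description": "docs/annex_iv/01_general_description.md",
    "development_process": "docs/annex_iv/02_development_process.md",
    "monitoring_and_control": "docs/annex_iv/03_monitoring_control.md",
    "risk_management": "docs/annex_iv/04_risk_management.md",
    "lifecycle_changes": "docs/annex_iv/05_changes.md",
    "validation_and_testing": "docs/annex_iv/06_validation_testing.md",
    "cybersecurity": "docs/annex_iv/07_cybersecurity.md",
}

missing = [p for p in ANNEX_IV_DOCS.values() if not Path(p).exists()]
if missing:
    raise SystemExit(f"Technical documentation incomplete: {missing}")
```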
4. Record-Keeping / Logging (Article 12)
High-risk AI systems must automatically log events during operation.
Requirements:
- Logs kept for a period appropriate to the intended purpose, and at least six months (Article 19), unless other EU or national law requires longer
- Enable post-market monitoring and investigation of incidents
- Ensure traceability of AI system functioning
- Protected against tampering
Typical logs:
- Inputs to the AI system
- Outputs/decisions made
- Timestamp of each event
- Identification of users interacting with system
- Reference to database(s) against which data was checked
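"Protected against tampering" in practice means append-only storage with integrity guarantees. A simplified hash-chained log sketch; a production system would add secure storage, access controls, and retention management:

```python
import hashlib
import json
from datetime import datetime, timezone

class ChainedLog:
    """Append-only event log where each record hashes the previous one,
    so any retroactive edit breaks the chain."""

    def __init__(self) -> None:
        self.records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,              # e.g. inputs, output, acting user
            "prev_hash": self._last_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record

log = ChainedLog()
log.append({"user": "reviewer-17", "input_ref": "app-4812", "decision": "escalate"})
```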
5. Transparency and User Information (Article 13)
High-risk AI systems must be sufficiently transparent for deployers and users to interpret and use their output appropriately.
Requirements:
- Instructions for use in appropriate digital or non-digital format
- Written in clear, understandable language
- Information on identity and contact details of provider
- Characteristics, capabilities, limitations of system
- Performance metrics (accuracy, robustness, cybersecurity)
- Purpose and conditions of use
- Human oversight measures
- Expected lifetime and maintenance needs
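Some teams maintain the instructions for use as a machine-readable skeleton so that required fields cannot be silently omitted. An illustrative template (field names are assumptions, not terms defined by the Act):

```python
# Illustrative skeleton for Article 13 instructions for use;
# populate every field before release and fail CI on blanks.
INSTRUCTIONS_FOR_USE = {
    "provider": {"name": "", "contact": ""},
    "intended_purpose": "",
    "capabilities_and_limitations": "",
    "performance": {"accuracy": None, "robustness": None, "cybersecurity": None},
    "human_oversight_measures": [],
    "expected_lifetime": "",
    "maintenance": "",
}
```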
6. Human Oversight (Article 14)
High-risk AI systems must be designed to enable effective human oversight.
Requirements (oversight measures must enable the assigned individuals to):
- Fully understand capacities and limitations of the AI system
- Remain aware of automation bias tendency
- Interpret AI system output correctly
- Decide not to use the AI system in a particular situation
- Override or reverse AI system output
- Intervene or interrupt system operation
Implementation varies:
- Recruitment AI: Human reviewer makes final hiring decision
- Credit scoring: Human can override automated decline
- Law enforcement: Human officer decides whether to act on AI prediction
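One measurable aspect of oversight is how often humans actually disagree with the system; a rate near zero can indicate rubber-stamping. A minimal sketch, assuming each decision record carries the AI output and the final human decision:

```python
def override_rate(decisions: list[dict]) -> float:
    """Share of AI recommendations changed by the human reviewer.

    Each decision dict is assumed to carry 'ai_output' and 'final_decision';
    a rate near zero can signal rubber-stamping rather than real oversight.
    """
    if not decisions:
        return 0.0
    overridden = sum(d["ai_output"] != d["final_decision"] for d in decisions)
    return overridden / len(decisions)

# Example: track the rate over time and alert when it drops toward zero
rate = override_rate([
    {"ai_output": "reject", "final_decision": "reject"},
    {"ai_output": "reject", "final_decision": "accept"},
])
assert 0.0 <= rate <= 1.0
```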
7. Accuracy, Robustness, Cybersecurity (Article 15)
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity.
Requirements:
- Declared performance metrics must be achieved and maintained
- System resilient to errors, faults, inconsistencies
- Protection against unauthorized access or manipulation by third parties
- Resilient to attempts to alter use or performance
Testing required for:
- Normal operating conditions
- Reasonably foreseeable conditions of misuse
- Potential attacks by third parties
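Part of this testing can be automated with perturbation tests that check output stability under small input changes. A minimal sketch, assuming a hypothetical model.predict(record) interface and numeric input fields:

```python
import random

def perturbation_test(model, records: list[dict], field: str,
                      trials: int = 100) -> float:
    """Fraction of predictions that stay stable when one numeric input
    field is noised. `model` exposes a hypothetical
    predict(record) -> label interface."""
    stable = 0
    for _ in range(trials):
        rec = dict(random.choice(records))       # copy, don't mutate source
        baseline = model.predict(rec)
        rec[field] = rec[field] * (1 + random.uniform(-0.05, 0.05))  # 5% noise
        stable += model.predict(rec) == baseline
    return stable / trials

# A declared robustness level then becomes a testable threshold, e.g.:
# assert perturbation_test(model, validation_records, "income") >= 0.95
```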
8. Conformity Assessment (Article 43)
Before placing a high-risk AI system on the market, you must complete a conformity assessment.
Two procedures:
A) Internal Control (Annex VI) - Most common
- Provider conducts own conformity assessment
- Prepares technical documentation
- Establishes quality management system
- Draws up EU declaration of conformity
- Affixes CE marking
B) Third-Party Assessment (Annex VII) - Required for:
- Biometric systems under Annex III point 1, where the provider has not applied harmonised standards (or common specifications) in full
Note: high-risk AI systems that are products, or safety components of products, covered by Annex I harmonisation legislation follow the conformity assessment procedures of that sectoral law, which typically involve a notified body.
Notified bodies conduct third-party assessments and issue conformity certificates.
9. Registration in EU Database (Article 49)
Before placing a high-risk AI system on the market, you must register it in the EU database.
Information required:
- Name, address, contact details of provider
- Trade name, address, contact details of authorized representative (if applicable)
- AI system trade name and additional unambiguous reference
- Intended purpose
- Status (on the market, in service, withdrawn, recalled)
- High-risk classification according to Annex III
- For third-party assessed systems: name and identification number of notified body
Public access: Much of this information is publicly available in the database.
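Collecting the Article 49 fields into one structured record before registration makes the submission (and later status updates) less error-prone. A hypothetical example with placeholder values:

```python
# Hypothetical pre-registration record; names and values are illustrative.
REGISTRATION_RECORD = {
    "provider": {"name": "Example AI GmbH", "address": "", "contact": ""},
    "authorized_representative": None,  # only if the provider is outside the EU
    "system": {
        "trade_name": "HireScreen",     # hypothetical product name
        "reference": "hirescreen-v2",   # unambiguous version reference
        "intended_purpose": "Ranking job applications for human review",
        "annex_iii_category": "4(a) employment - recruitment",
        "status": "on the market",      # or: in service / withdrawn / recalled
        "notified_body": None,          # only for third-party assessed systems
    },
}
```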
10. Post-Market Monitoring (Article 72)
Providers must establish and document a post-market monitoring system.
Requirements:
- Actively and systematically collect, document, analyze data about performance
- Identify need for immediate corrective action
- Collect relevant data provided by deployers or gathered from other sources on real-world performance
- Report serious incidents to market surveillance authorities
Monitoring must cover:
- How the AI system is actually being used
- Whether it's performing as intended
- Emerging risks not identified during conformity assessment
- User feedback and complaints
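Concretely, this means turning the declared performance metrics into monitored time series with alert thresholds. A minimal drift-check sketch; the tolerance value is an illustrative choice, not a figure from the Act:

```python
def check_drift(window_metric: float, declared: float,
                tolerance: float = 0.03) -> bool:
    """True if a recent performance window has fallen below the declared
    level by more than `tolerance` (threshold is an illustrative choice)."""
    return window_metric < declared - tolerance

# Example: declared accuracy 0.91; last 30 days measured at 0.85
if check_drift(window_metric=0.85, declared=0.91):
    print("Trigger corrective-action review and assess incident reporting duties")
```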
AI Act Compliance Roadmap for High-Risk Systems
Phase 1: Classification and Gap Assessment (Weeks 1-4)
Week 1-2: Determine if your AI system is high-risk
Use the classification logic of Article 6:
- Is the AI system used in one of the areas, and for one of the purposes, listed in Annex III? If yes, it is presumed high-risk.
- The narrow Article 6(3) derogation applies only if the system does not pose a significant risk to health, safety, or fundamental rights (e.g., it performs a purely preparatory or narrow procedural task) - and never if it profiles natural persons. If you rely on the derogation, document the assessment.
A triage sketch follows.
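A lightweight way to force every system through the same classification gate is a triage function; edge cases (Annex I products, profiling) still need legal review. A minimal sketch:

```python
def triage_classification(in_annex_iii_area: bool,
                          annex_iii_purpose: bool,
                          significant_risk: bool) -> str:
    """Rough triage only; Annex I product rules and Article 6(3)
    derogations need case-by-case legal assessment."""
    if in_annex_iii_area and annex_iii_purpose:
        if significant_risk:
            return "high-risk"
        return "document Article 6(3) derogation assessment"
    return "not high-risk under Annex III (check other obligations)"

print(triage_classification(True, True, True))  # -> "high-risk"
```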
Week 3-4: Conduct gap assessment
For each requirement (risk management, data governance, documentation, etc.), assess:
- What do we already have?
- What's missing?
- What needs to be updated?
Create a compliance roadmap with timelines and ownership.
Phase 2: Documentation and Technical Implementation (Weeks 5-20)
Weeks 5-8: Risk Management System
- Establish risk identification process
- Define risk evaluation criteria
- Create risk mitigation procedures
- Set up testing protocols
- Document everything
Weeks 9-12: Data Governance
- Audit training/validation/testing datasets
- Document data provenance
- Identify and address biases
- Establish data quality monitoring
- Create data governance policies
Weeks 13-16: Technical Documentation
- Compile system description
- Document development process
- Describe monitoring and control mechanisms
- Detail validation and testing results
- Create cybersecurity documentation
Weeks 17-20: Logging and Transparency
- Implement automated logging
- Create instructions for use
- Draft transparency documentation
- Design human oversight mechanisms
- Test logging under various scenarios
Phase 3: Conformity Assessment and Registration (Weeks 21-26)
Week 21-24: Conformity Assessment
If internal control (most cases):
- Complete self-assessment against requirements
- Compile technical documentation
- Establish quality management system
- Draft EU declaration of conformity
If third-party required:
- Identify appropriate notified body
- Submit application and documentation
- Support assessment process
- Receive conformity certificate
Week 25-26: EU Database Registration
- Gather required information
- Register in EU database before market placement
- Verify public information accuracy
Phase 4: Deployment and Ongoing Compliance (Continuous)
Post-deployment:
- Activate post-market monitoring system
- Collect and analyze performance data
- Report serious incidents within required timeframes
- Update documentation based on learnings
- Conduct periodic risk reassessments
- Maintain quality management system
Annual review:
- Review conformity with AI Act requirements
- Update technical documentation
- Assess need for re-certification
- Evaluate changes in risk profile
Common AI Act Compliance Mistakes
1. Assuming You're Not High-Risk
Many providers underestimate whether their AI system is high-risk.
Examples that ARE high-risk:
- "AI assistant for recruiters" → recruitment AI (high-risk)
- "Fraud detection for lenders" → creditworthiness assessment (high-risk)
- "Student performance tracking" → educational assessment (high-risk)
When in doubt, conduct formal classification assessment.
2. Inadequate Data Governance Documentation
"We used publicly available data" is not sufficient.
You need to document:
- Specific data sources and provenance
- How representativeness was evaluated
- Bias testing conducted and results
- Mitigation strategies for identified biases
- How data quality is maintained
3. Generic Risk Assessments
"AI might make biased decisions" is too vague.
Effective risk assessments specify:
- Specific harms (e.g., "AI might systematically reject qualified candidates from protected groups")
- Likelihood estimates (based on testing data)
- Severity assessments (impact on fundamental rights)
- Concrete mitigation measures (bias testing, human review, regular audits)
4. Insufficient Human Oversight
"Humans can override the AI" is often not enough.
Effective human oversight requires:
- Training on AI system capabilities and limitations
- Clear protocols for when to override
- System design that makes override easy and obvious
- Monitoring of override rates (too low may indicate rubber-stamping)
5. No Post-Market Monitoring Plan
You can't just deploy and forget.
Post-market monitoring requires:
- Defined performance metrics
- Automated data collection
- Regular analysis and reporting
- Corrective action triggers
- Incident reporting procedures
Need Help with AI Act Compliance?
Echelon Advisory provides comprehensive AI Act compliance services including risk classification, gap assessments, documentation support, and conformity assessment preparation.
Key Takeaways
- AI Act uses risk-based approach - high-risk systems have strict requirements
- High-risk classification depends on use case, not just technology
- Core requirements: risk management, data governance, documentation, logging, transparency, human oversight
- Conformity assessment required before market placement (internal or third-party)
- Registration in EU database mandatory for high-risk systems
- Post-market monitoring is ongoing obligation, not one-time activity
AI Act compliance deadlines are approaching (August 2, 2026 for most high-risk systems). Providers that build robust governance infrastructure now will avoid enforcement actions, market access restrictions, and financial penalties.
About the Author
Maneesha Pandey is the founder of Echelon Advisory Services, specializing in Trust & Safety, AI Governance, and EU regulatory compliance. She spent 14+ years building AI safety operational workflows at Google and regulatory compliance infrastructure at Amazon and TikTok.