5 Steps to Prepare for EU AI Act High-Risk Compliance (August 2026)
August 2, 2026. That's when the EU AI Act's requirements for high-risk AI systems take full effect. If your company develops or deploys AI systems classified as high-risk — in healthcare, finance, employment, education, law enforcement, or critical infrastructure — you have roughly six months to achieve compliance.
Here are the five concrete steps you should be taking right now.
Step 1: Complete Your AI Inventory and Classification
You can't comply with regulations you don't understand, and you can't understand your obligations without knowing which of your AI systems are affected.
What to do:
- Catalog every AI system your organization develops, deploys, or uses — including third-party AI tools and APIs
- Classify each system against the AI Act's risk categories (see our classification guide)
- Identify your role for each system: are you the provider (developer), deployer (user), or both?
- Document everything: system descriptions, intended purposes, classification rationale
Common pitfalls:
- Forgetting about AI embedded in third-party tools (your CRM's lead scoring, your ATS's resume screening)
- Ignoring internal-use AI systems (they're not exempt)
- Not considering downstream uses of systems you provide to others
Timeline: This should be completed by now. If it isn't, start immediately — everything else depends on knowing which systems need compliance.
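For teams that track the inventory in code rather than spreadsheets, the entries above might look something like this minimal sketch. The field names, enum values, and example record are illustrative — the Act does not prescribe a schema:

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

class Role(Enum):
    PROVIDER = "provider"   # developer placing the system on the market
    DEPLOYER = "deployer"   # organization using the system
    BOTH = "both"

@dataclass
class AISystemRecord:
    name: str
    description: str
    intended_purpose: str
    risk_category: RiskCategory
    role: Role
    classification_rationale: str
    third_party: bool = False  # AI embedded in vendor tools counts too

# Example entry: a third-party resume screener (employment use => high-risk)
record = AISystemRecord(
    name="ATS resume screener",
    description="Vendor ML model ranking job applicants",
    intended_purpose="Shortlisting candidates for interviews",
    risk_category=RiskCategory.HIGH_RISK,
    role=Role.DEPLOYER,
    classification_rationale="Annex III: employment and workers management",
    third_party=True,
)
```

A structured record like this makes it easy to filter for high-risk systems and to spot the third-party tools that inventories often miss.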
Step 2: Implement a Risk Management System
Article 9 of the AI Act requires a continuous, iterative risk management system for each high-risk AI system. This isn't a one-time assessment — it's an ongoing process.
What to do:
- Identify risks: What could go wrong? Consider risks to health, safety, and fundamental rights — not just technical risks
- Assess risks: How likely is each risk? How severe would the impact be?
- Mitigate risks: What measures reduce each risk to an acceptable level?
- Monitor continuously: Establish ongoing monitoring to detect new risks and verify that mitigations remain effective
- Test rigorously: Define testing procedures that verify risk mitigations work as intended
Practical tips:
- Build on existing risk management frameworks (ISO 31000, ISO 23894 for AI) rather than starting from scratch
- Include diverse perspectives — technical, legal, ethical, and domain expert — in risk identification
- Document your risk tolerance decisions and the reasoning behind them
Timeline: Start now if you haven't already. A robust risk management system takes 2-4 months to establish properly.
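One way to make the "assess" step concrete is a simple likelihood-times-severity score against a documented acceptance threshold. This is a sketch, not a methodology the Act prescribes — the 1-5 scales, the threshold, and the example risks are all placeholders you would replace with your own risk-tolerance decisions:

```python
# Illustrative risk register: score = likelihood x severity,
# with a documented acceptance threshold.
RISK_THRESHOLD = 6  # scores above this require mitigation (placeholder value)

def risk_score(likelihood: int, severity: int) -> int:
    """Both inputs on a 1-5 scale; higher means worse."""
    return likelihood * severity

risks = [
    {"risk": "biased credit decisions", "likelihood": 3, "severity": 5},
    {"risk": "model drift in production", "likelihood": 4, "severity": 3},
    {"risk": "logging outage", "likelihood": 2, "severity": 2},
]

for r in risks:
    r["score"] = risk_score(r["likelihood"], r["severity"])
    r["needs_mitigation"] = r["score"] > RISK_THRESHOLD
```

Whatever scoring scheme you adopt, record the threshold and the reasoning behind it — that documentation is what an auditor will ask for.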
Step 3: Establish Data Governance
Article 10 sets out detailed requirements for training, validation, and testing data. This is often the most challenging compliance area because it requires retroactive documentation of decisions made during development.
What to do:
- Document data sources: Where does your training data come from? What are the collection methods?
- Assess data quality: Is your data accurate, complete, and representative of the deployment context?
- Examine bias: Have you tested for and addressed biases in your training data? Can you demonstrate this?
- Address data gaps: If your training data isn't representative of all deployment contexts, document the gaps and their potential impact
- Ensure legal basis: Confirm you have a legal basis under GDPR for processing all training data
Special consideration — bias detection:
The AI Act allows processing special category data (racial or ethnic origin, health data, etc.) specifically for bias detection and correction (Article 10(5)). This creates a legal basis that GDPR alone doesn't provide, but it comes with strict safeguards:
- Data must be pseudonymized
- Access must be strictly limited
- Processing must occur in a controlled environment
- Data must be deleted after bias assessment is complete
Timeline: 2-3 months. This is often the most time-consuming step, especially for systems with complex data pipelines.
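To illustrate the first and last safeguards above, here is a minimal sketch of keyed-hash pseudonymization before a bias assessment and deletion of the special-category field afterwards. The key, field names, and record layout are all hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder; keep out of source control

def pseudonymize(identifier: str) -> str:
    # Keyed hash: identities can't be recovered without the key,
    # which is the kind of pseudonymization Article 10(5) expects.
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Replace direct identifiers before the bias assessment...
row = {"applicant_id": "A-1042", "ethnicity": "group_b", "outcome": "rejected"}
row["applicant_id"] = pseudonymize(row["applicant_id"])

# ...and delete the special-category data once the assessment is complete.
def finish_assessment(rows):
    for r in rows:
        r.pop("ethnicity", None)
    return rows
```

In practice the pseudonymization key belongs in a secrets manager, and access to the controlled environment should be logged — the code only shows the data-handling shape.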
Step 4: Create Technical Documentation
Article 11 requires comprehensive technical documentation that demonstrates compliance before the system is placed on the market. The documentation must be kept up to date throughout the system's lifecycle.
What to document:
- System description: General description, intended purpose, versions
- Design specifications: Architecture, algorithms, data processing logic
- Development process: Design choices, training methodologies, validation procedures
- Risk management: Identified risks, mitigation measures, residual risks
- Data governance: Training data documentation, bias assessments
- Performance metrics: Accuracy, robustness, cybersecurity measures with test results
- Human oversight: How human oversight is implemented, what actions humans can take
- Monitoring plan: Post-market monitoring procedures
Practical tips:
- Use the harmonized standards (when available) as templates for your documentation
- Integrate documentation into your development process — don't try to create it after the fact
- Make documentation a living document, updated with each significant system change
Timeline: 2-3 months. Start in parallel with Steps 2 and 3.
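Because the documentation must stay current, some teams treat the technical file as structured data and check it automatically in CI. A minimal sketch — the section names paraphrase the list above; the Act itself specifies the required content in Annex IV:

```python
# Checklist-style completeness check for a technical documentation file.
# Section names are illustrative shorthand, not the Act's official headings.
REQUIRED_SECTIONS = {
    "system_description", "design_specifications", "development_process",
    "risk_management", "data_governance", "performance_metrics",
    "human_oversight", "monitoring_plan",
}

def missing_sections(tech_file: dict) -> set:
    """Return required sections that are absent or empty."""
    return REQUIRED_SECTIONS - {k for k, v in tech_file.items() if v}

draft = {
    "system_description": "Credit scoring model v2.1 ...",
    "risk_management": "See risk register RR-2026-03",
    "data_governance": "",  # empty sections are flagged too
}
# missing_sections(draft) reports everything still to be written
```

Running a check like this on every release is one way to keep the documentation a living document rather than a one-off deliverable.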
Step 5: Implement Human Oversight and Monitoring
Article 14 requires high-risk AI systems to be designed and developed so they can be effectively overseen by natural persons. This isn't just a design requirement — it's an operational one.
What to do:
- Design for oversight: Build interfaces that allow humans to understand AI outputs, intervene when necessary, and override decisions
- Train operators: Ensure people overseeing AI systems understand how they work, what they can and can't do, and when to intervene
- Establish procedures: Create clear procedures for human review, escalation, and override
- Implement logging: Automatic logging of AI system operations for traceability (Article 12)
- Set up post-market monitoring: Continuous monitoring of AI system performance in production, with defined thresholds for intervention
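The logging bullet can be made concrete with structured, append-only event records. This is an illustrative sketch — the event fields are assumptions, and a real deployment would add tamper-evident storage and retention controls:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_decision(system_id, input_ref, output, operator, overridden=False):
    # One structured record per automated decision, for traceability (Article 12).
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,   # a reference to the inputs, not raw personal data
        "output": output,
        "operator": operator,     # who was overseeing at the time
        "overridden": overridden, # True when a human overrode the AI output
    }
    audit_log.info(json.dumps(event))
    return event

event = log_decision("credit-model-v2", "case-8841", "refer_to_human", "analyst_17")
```

Logging references rather than raw inputs keeps the audit trail useful without turning it into a second store of personal data.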
AI literacy requirement:
Article 4 requires organizations to ensure staff involved with AI systems have sufficient AI literacy. This applies to both technical operators and business decision-makers. Plan training programs now.
Timeline: 1-2 months for design changes, ongoing for training and monitoring.
The Bottom Line
Six months is less time than you think. These five steps can't all happen sequentially — you'll need to work on Steps 2, 3, and 4 in parallel, building on the foundation of Step 1.
The companies that started preparing in 2025 will be ready. Those starting now will need to move quickly but can still make it. Those who wait until summer 2026 will almost certainly face compliance gaps.
Need help getting ready? Contact us for an AI Act readiness assessment. We'll identify your gaps and create a realistic compliance timeline.
Dr. Dario Sitnik
CEO and AI scientist at Sitnik AI. PhD in AI with expertise in machine learning, NLP, and intelligent automation.