EU AI Act 2026: What Non-EU Companies Need to Know
On August 1, 2024, the European Union's Artificial Intelligence Act — the world's first comprehensive AI regulation — entered into force. While many companies outside Europe assume this is a "European problem," the reality is quite different: if your AI system is placed on the EU market, used by people in the EU, or produces outputs that are used in the EU, the AI Act applies to you — regardless of where your company is headquartered.
This article breaks down what the EU AI Act means for non-EU companies, the key compliance timelines you need to know, how the risk classification system works, and practical steps you should take now to prepare.
What Is the EU AI Act?
The EU AI Act is a regulatory framework that establishes rules for the development, deployment, and use of artificial intelligence systems within the European Union. Think of it as the AI equivalent of GDPR: a comprehensive regulation with extraterritorial reach that is likely to shape global standards, much as GDPR did for data protection.
The Act takes a risk-based approach: the higher the risk an AI system poses to health, safety, or fundamental rights, the stricter the requirements. This means not all AI systems face the same level of regulation — a spam filter faces far less scrutiny than an AI system making hiring decisions or medical diagnoses.
Key Objectives
- Protect fundamental rights: Ensure AI systems don't discriminate, manipulate, or undermine human dignity.
- Promote trustworthy AI: Establish transparency, accountability, and human oversight requirements.
- Create legal certainty: Provide clear rules for businesses developing and deploying AI, reducing regulatory fragmentation across EU member states.
- Foster innovation: Regulatory sandboxes and proportionate requirements for low-risk systems aim to keep Europe competitive in AI development.
Why Non-EU Companies Should Care
The EU AI Act has extraterritorial scope, meaning it applies to:
- Providers (developers) of AI systems that are placed on the market or put into service in the EU — regardless of where the provider is established.
- Deployers (users) of AI systems who are located within the EU.
- Providers and deployers located outside the EU, where the output produced by the AI system is used in the EU.
In practice, this means:
- A US-based SaaS company whose AI features are used by European customers must comply.
- A Japanese manufacturer using AI quality control on products sold in Europe is affected.
- An Indian IT services company deploying AI systems for EU-based clients needs to meet the requirements.
- Any company using AI to process data about people in the EU, or to make decisions affecting them, may fall under the Act's scope.
The GDPR precedent is instructive: when it took effect in 2018, many non-EU companies initially dismissed it. Those that didn't prepare faced significant fines and operational disruptions. The AI Act follows the same pattern — early preparation is far cheaper than reactive compliance.
Key Timelines: What Happens When
The AI Act doesn't take full effect overnight. It follows a phased implementation schedule, giving companies time to prepare:
February 2, 2025 — Prohibited AI Practices
The first provisions to take effect ban AI practices considered an unacceptable risk:
- Social scoring — by public or private actors — that leads to detrimental or disproportionate treatment of people.
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions).
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons).
- Manipulative AI systems that exploit vulnerabilities (age, disability, social situation) to distort behavior in harmful ways.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
- Biometric categorization systems that infer sensitive attributes (race, political opinions, sexual orientation).
Action required: Audit your AI systems now. If any fall into these categories, they must be discontinued before this deadline.
August 2, 2025 — General-Purpose AI (GPAI) Rules
Rules for general-purpose AI models (like large language models) take effect:
- Transparency obligations: GPAI providers must maintain technical documentation, provide information to downstream providers, and comply with EU copyright law.
- Systemic risk provisions: GPAI models with "systemic risk" (trained with >10²⁵ FLOPs) face additional obligations including model evaluations, adversarial testing, incident reporting, and cybersecurity measures.
Who's affected: Companies building or fine-tuning foundation models, large language models, or multimodal AI systems deployed in the EU.
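The 10²⁵ FLOPs threshold can be sanity-checked with a back-of-envelope estimate. A minimal sketch, assuming the widely used ~6 × parameters × tokens heuristic for dense transformer training compute (this heuristic and the example model sizes are illustrative, not part of the Act):

```python
# The 1e25 FLOPs figure is the systemic-risk threshold named in the Act;
# the 6*N*D estimate below is a common approximation, not a legal test.
SYSTEMIC_RISK_THRESHOLD = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Back-of-envelope training compute for a dense transformer."""
    return 6 * params * tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                   # 6.30e+24
print(flops > SYSTEMIC_RISK_THRESHOLD)  # False: below the threshold
```

Under this rough estimate, a 70B-parameter model trained on 15T tokens lands just below the threshold, which is why the systemic-risk tier currently captures only the largest frontier models.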
August 2, 2026 — High-Risk AI Systems
The most significant deadline. Full requirements for high-risk AI systems come into force:
- Risk management systems: Continuous identification, analysis, and mitigation of risks.
- Data governance: Training data must be relevant, representative, and free from errors. Bias examination and mitigation required.
- Technical documentation: Detailed records of system design, development, and capabilities.
- Record-keeping: Automatic logging of system operations for traceability.
- Transparency: Clear information to deployers about system capabilities and limitations.
- Human oversight: Systems must be designed to allow effective human oversight.
- Accuracy, robustness, cybersecurity: Systems must meet appropriate levels of accuracy and be resilient to errors and attacks.
- Conformity assessment: High-risk systems must pass a conformity assessment before market placement; for some categories (notably remote biometric identification), this assessment must be performed by a third-party notified body, while most others allow internal assessment.
August 2, 2027 — Full Enforcement
All remaining provisions take effect, including obligations for AI systems that are components of products covered by existing EU product safety legislation (medical devices, machinery, toys, aviation, automotive, etc.).
The Risk Classification System
The AI Act classifies AI systems into four risk categories. Understanding where your systems fall is the first step toward compliance.
Unacceptable Risk (Banned)
AI practices that pose a clear threat to safety, livelihoods, or rights. These are prohibited entirely (see February 2025 timeline above).
High Risk
AI systems that significantly impact health, safety, or fundamental rights. These face the strictest requirements. Examples include:
- Biometric identification (remote biometric ID systems).
- Critical infrastructure: AI in energy, water, gas, heating, and digital infrastructure management.
- Education: AI systems determining access to education or evaluating students.
- Employment: AI for recruitment, screening, hiring decisions, promotion, or termination.
- Essential services: AI evaluating creditworthiness, insurance pricing, or eligibility for public benefits.
- Law enforcement: AI for risk assessment, polygraph alternatives, evidence evaluation.
- Migration and border control: AI for visa processing, asylum applications, border surveillance.
- Justice and democracy: AI assisting judicial decisions or influencing election outcomes.
Limited Risk
AI systems with transparency obligations. Users must be informed they're interacting with AI. This includes:
- Chatbots: Must disclose they are AI-powered.
- Deepfakes: AI-generated or manipulated content must be labeled.
- Emotion recognition: Systems must inform users when emotion recognition is being used (where not banned).
Minimal Risk
The majority of AI systems — spam filters, AI-enabled video games, inventory management systems — fall here. No specific obligations beyond existing laws, though voluntary codes of conduct are encouraged.
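The four-tier scheme above can be summarized in a small lookup. A minimal sketch — the example systems and one-line descriptions are abbreviations for illustration, not the Act's full annexes:

```python
from enum import Enum

# Illustrative only: real classification requires checking the Act's
# annexes and use-case definitions, not a keyword lookup.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements + conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations beyond existing law"

EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

print(EXAMPLE_TIERS["recruitment screening"].name)  # HIGH
```

The asymmetry is deliberate: the bulk of compliance effort concentrates on the high-risk tier, while minimal-risk systems carry no new obligations.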
Penalties for Non-Compliance
The AI Act includes significant penalties, following the GDPR model:
- Prohibited AI practices: Up to €35 million or 7% of global annual turnover (whichever is higher).
- High-risk AI obligations: Up to €15 million or 3% of global annual turnover.
- Providing incorrect information to authorities: Up to €7.5 million or 1.5% of global annual turnover.
For context, 7% of global turnover for a company generating $1 billion in revenue would be a $70 million fine. These are not symbolic penalties.
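The "whichever is higher" rule means the maximum exposure is the greater of the fixed amount and the turnover percentage. A quick sketch of how the ceilings scale:

```python
def max_fine(turnover_eur: float, pct: float, floor_eur: float) -> float:
    """Maximum fine under the 'whichever is higher' rule: the greater of
    a percentage of global annual turnover and a fixed amount."""
    return max(turnover_eur * pct, floor_eur)

# Prohibited-practice tier: up to 7% of turnover or EUR 35M.
print(max_fine(1_000_000_000, 0.07, 35_000_000))  # 70000000.0 -> EUR 70M
# A company with EUR 100M turnover: the EUR 35M fixed ceiling dominates.
print(max_fine(100_000_000, 0.07, 35_000_000))    # 35000000.0
```

Note the effect on smaller companies: below EUR 500M turnover, the fixed EUR 35M ceiling exceeds the 7% figure, so the potential fine does not shrink proportionally with revenue.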
What Non-EU Companies Should Do Now
Compliance doesn't happen overnight. Here's a practical roadmap:
1. Conduct an AI Inventory (Now)
Catalog all AI systems your organization develops, deploys, or uses. For each system, document:
- What it does and how it works.
- What data it uses and where that data comes from.
- Who it affects (does it impact EU citizens or residents?).
- What decisions it makes or influences.
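The fields above map naturally onto a simple inventory record. A minimal sketch — the field names and example system are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str                 # what it does and how it works
    data_sources: list[str]      # what data it uses and where it comes from
    affects_eu_persons: bool     # does it impact people in the EU?
    decisions_influenced: str    # what decisions it makes or influences
    risk_category: str = "unclassified"  # mapped to the Act's tiers later

inventory = [
    AISystemRecord(
        name="resume-screener",
        purpose="Ranks job applicants for recruiters",
        data_sources=["applicant CVs", "historical hiring data"],
        affects_eu_persons=True,
        decisions_influenced="shortlisting for interviews",
    ),
]
print(inventory[0].affects_eu_persons)  # True
```

Recruitment screening appears on the Act's high-risk list, so a record like this would be an early candidate in the classification step that follows.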
2. Classify Your Risk Level (Q1 2026)
Map each AI system to the Act's risk categories. Focus first on systems that might be high-risk — these have the most demanding requirements and the August 2026 deadline.
3. Appoint an EU Authorized Representative
Non-EU providers of high-risk AI systems must appoint an authorized representative established in the EU. This representative acts as the compliance point of contact for EU authorities.
4. Implement Compliance Frameworks (2026)
For high-risk systems, start building the required compliance infrastructure:
- Risk management processes.
- Data governance and quality procedures.
- Technical documentation templates.
- Monitoring and logging systems.
- Human oversight mechanisms.
5. Train Your Teams (Ongoing)
The AI Act explicitly requires AI literacy. Organizations must ensure their staff have sufficient understanding of AI systems to deploy and oversee them responsibly. This applies to both technical and non-technical staff involved in AI-related decisions.
6. Engage Legal and Compliance Expertise
The AI Act intersects with GDPR, product safety directives, and sector-specific regulations. Navigating this landscape requires specialized expertise — either in-house or through qualified advisors.
How Sitnik AI Can Help
At Sitnik AI, we build AI systems with compliance in mind from day one. Based in Munich, Germany, we understand the EU regulatory landscape firsthand. Our approach includes:
- AI Act readiness assessments: We help you classify your AI systems, identify compliance gaps, and create a practical roadmap.
- Compliance-by-design development: New AI systems built with documentation, transparency, and oversight requirements baked in — not bolted on.
- Risk management frameworks: Continuous risk identification and mitigation processes aligned with the Act's requirements.
- Technical documentation: Comprehensive documentation packages that meet regulatory requirements while remaining useful for engineering teams.
The EU AI Act represents a significant shift in how AI is regulated globally. Companies that prepare now will not only avoid penalties but will build more trustworthy, robust AI systems that inspire confidence in customers, partners, and regulators.
Don't wait for August 2026. The companies that start preparing today will have a competitive advantage over those that scramble to comply at the last minute. Get in touch to discuss your AI Act compliance strategy.
Dr. Dario Sitnik
CEO and AI Scientist at Sitnik AI. PhD in AI with expertise in machine learning, NLP, and intelligent automation.