How to Classify Your AI System Under the EU AI Act Risk Framework
The EU AI Act's risk-based approach means different AI systems face different requirements. A spam filter and a hiring algorithm are treated very differently — and for good reason. But correctly classifying your AI system is harder than it sounds.
Get it wrong in one direction, and you're spending resources on compliance that isn't required. Get it wrong in the other direction, and you're facing fines of up to €15 million or 3% of global annual turnover for breaching high-risk obligations, and up to €35 million or 7% for deploying a prohibited practice.
This guide walks you through the classification process step by step.
The Four Risk Categories
1. Unacceptable Risk — Banned
Some AI practices are considered an unacceptable risk to fundamental rights and are prohibited entirely. These bans have been in effect since February 2, 2025.
Prohibited practices include:
- Social scoring systems by public authorities
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- Emotion recognition in workplaces and educational institutions
- AI systems that manipulate behavior through subliminal techniques
- Exploitation of vulnerabilities due to age, disability, or a specific social or economic situation
- Untargeted facial image scraping from the internet or CCTV
- Biometric categorization inferring sensitive attributes (race, political opinions, sexual orientation)
Action: If any of your AI systems fall into these categories, they must be discontinued immediately. There are no transition periods or exemptions.
2. High Risk — Strict Requirements
High-risk AI systems are subject to comprehensive requirements including risk management, data governance, technical documentation, transparency, human oversight, and accuracy standards. Full compliance is required by August 2, 2026 for Annex III systems; high-risk AI embedded in regulated products (Category A below) has until August 2, 2027.
An AI system is classified as high-risk if it falls into one of two categories:
Category A: AI as a safety component or product itself
AI systems that are safety components of products covered by the EU harmonization legislation listed in Annex I, or that are themselves such products, where the product must undergo a third-party conformity assessment. This includes AI in:
- Medical devices (MDR/IVDR)
- Machinery (Machinery Regulation)
- Toys (Toy Safety Directive)
- Lifts, pressure equipment
- Radio equipment
- Civil aviation
- Motor vehicles
- Marine equipment
Category B: AI in Annex III high-risk areas
AI systems used in specific domains listed in Annex III of the Act:
- Biometric identification: Remote biometric identification systems (non-real-time)
- Critical infrastructure: AI for managing electricity, gas, water, heating, and digital infrastructure
- Education: AI determining access to education, evaluating learning outcomes, assessing appropriate education levels, monitoring prohibited behavior during exams
- Employment: AI for recruitment, screening candidates, evaluating applications, making promotion/termination decisions, allocating tasks, monitoring performance
- Essential services: AI evaluating creditworthiness, assessing insurance risk and pricing, evaluating eligibility for public assistance and benefits, dispatching emergency services
- Law enforcement: AI for individual risk assessments, polygraph alternatives, evaluating evidence reliability, profiling in criminal investigations
- Migration and border control: AI for assessing migration risks, examining visa and asylum applications, detecting forged travel documents
- Justice and democracy: AI assisting judicial authorities in researching and interpreting facts and law, and AI intended to influence the outcome of elections or referenda or the voting behavior of individuals
3. Limited Risk — Transparency Obligations
AI systems with limited risk face transparency obligations only. Users must be informed they are interacting with AI. This applies to:
- Chatbots: Must clearly disclose they are AI-powered
- Deepfakes: AI-generated or manipulated image/video/audio content must be labeled as artificially generated
- Emotion recognition: Where not banned, systems must inform users that emotion recognition is being used
- Biometric categorization: Where not banned, systems must inform users
4. Minimal Risk — No Specific Obligations
The majority of AI systems fall here. Examples include spam filters, AI in video games, inventory management systems, and recommendation algorithms for non-essential services. No specific regulatory obligations apply, though voluntary codes of conduct are encouraged.
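To keep the tiers straight in an internal AI inventory, a minimal Python sketch like the following can help. The tier names and obligation summaries are our own shorthand, not terms defined by the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (Article 5)
    HIGH = "high"                  # strict requirements (Chapter III)
    LIMITED = "limited"            # transparency obligations (Article 50)
    MINIMAL = "minimal"            # no specific obligations

# Shorthand summary of the headline obligations per tier (our wording).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["discontinue immediately"],
    RiskTier.HIGH: [
        "risk management system",
        "data governance",
        "technical documentation",
        "transparency and human oversight",
        "accuracy, robustness, cybersecurity",
    ],
    RiskTier.LIMITED: ["disclose AI interaction", "label AI-generated content"],
    RiskTier.MINIMAL: ["voluntary codes of conduct (optional)"],
}
```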
Step-by-Step Classification Process
Step 1: Is It an AI System?
The AI Act defines an AI system in Article 3(1) as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."
If your system doesn't meet this definition — for example, simple rule-based automation without any inference capability — the AI Act doesn't apply.
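As a rough first screen, the definition's cumulative elements can be expressed as a checklist. The sketch below is illustrative, not a legal test, and the field names are our own:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative intake record for the Step 1 definition check."""
    machine_based: bool           # runs as software/hardware, not a manual process
    operates_autonomously: bool   # some degree of autonomy in operation
    infers_outputs: bool          # derives outputs from inputs, not fixed rules only
    influences_environment: bool  # outputs affect physical or virtual environments

def is_ai_system(p: SystemProfile) -> bool:
    """Rough screen against the Article 3(1) definition.

    Adaptiveness after deployment is optional in the definition
    ("may exhibit"), so it is deliberately not a gating criterion.
    """
    return (p.machine_based and p.operates_autonomously
            and p.infers_outputs and p.influences_environment)

# A rule-based automation with no inference capability fails the screen:
print(is_ai_system(SystemProfile(True, True, False, True)))  # False
```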
Step 2: Check for Prohibited Practices
Review the list of unacceptable-risk practices. Be thorough — some prohibitions are broader than they initially appear. For example, "emotion recognition in workplaces" covers not just dedicated emotion AI but any system that incidentally infers emotional states from workplace monitoring data.
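For a first-pass screen, the prohibited practices can be kept as a tag set that each intended use case is checked against. The tags below are our illustrative shorthand for the Article 5 items; a match should trigger legal review, not replace it:

```python
# Illustrative, non-exhaustive tags for the Article 5 prohibitions.
PROHIBITED_PRACTICES = {
    "social_scoring_by_public_authority",
    "realtime_remote_biometric_id_in_public",
    "emotion_recognition_workplace_or_education",
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
    "untargeted_facial_image_scraping",
    "biometric_categorisation_of_sensitive_attributes",
}

def flag_prohibited(use_case_tags: set[str]) -> set[str]:
    """Return the subset of tagged uses that match a prohibited practice."""
    return use_case_tags & PROHIBITED_PRACTICES

# Example: workplace monitoring that incidentally infers mood gets flagged.
print(flag_prohibited({"emotion_recognition_workplace_or_education", "spam_filtering"}))
```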
Step 3: Check Annex III High-Risk Categories
Map your AI system's use case against the Annex III categories. Pay attention to the specific wording — "AI systems intended to be used for" means the intended purpose matters, not just theoretical capability.
Important nuance: An AI system is NOT automatically high-risk just because it's listed in Annex III. The Act includes an exception: an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. This applies when the AI system:
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns without replacing human assessment
- Performs a preparatory task to an assessment relevant to the use case
However, you must document your assessment if you rely on this exception. Note also that an Annex III system that performs profiling of natural persons is always considered high-risk, regardless of the conditions above.
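This exception lends itself to an explicit decision function. The following sketch encodes the four derogation conditions and the profiling carve-out; the parameter names are our own, and the answers feeding it still require human legal judgment:

```python
def annex_iii_high_risk(
    in_annex_iii_area: bool,
    performs_profiling: bool,
    narrow_procedural_task: bool,
    improves_completed_human_activity: bool,
    detects_patterns_without_replacing_human: bool,
    preparatory_task_only: bool,
) -> bool:
    """Sketch of the Article 6(3) exception logic.

    An Annex III system escapes the high-risk label only if at least one
    derogation condition applies and it does not profile natural persons.
    """
    if not in_annex_iii_area:
        return False
    if performs_profiling:
        return True  # profiling of natural persons is always high-risk
    derogation_applies = (
        narrow_procedural_task
        or improves_completed_human_activity
        or detects_patterns_without_replacing_human
        or preparatory_task_only
    )
    return not derogation_applies
```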
Step 4: Check Product Safety Legislation
If your AI system is a safety component of a product covered by EU harmonization legislation, or is itself such a product, it's high-risk under Category A. Check whether your product falls under any of the listed EU directives and regulations.
Step 5: Determine GPAI Classification
If your system is or uses a general-purpose AI model (like an LLM), additional rules apply. GPAI providers must comply with transparency and documentation obligations from August 2, 2025. GPAI models presumed to pose systemic risk (cumulative training compute above 10²⁵ FLOPs) face additional requirements such as model evaluations, adversarial testing, and serious-incident reporting.
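The compute-based presumption is simple enough to express directly. A minimal sketch, assuming you can estimate your model's cumulative training compute:

```python
# Presumption of systemic risk: cumulative training compute above 10**25
# floating-point operations. The Commission may adjust this threshold.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Check the compute-based presumption for GPAI systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True: presumed systemic risk
print(presumed_systemic_risk(1e24))  # False: below the threshold
```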
Step 6: Document Your Classification
Whatever your classification, document the analysis. Include:
- System description and intended purpose
- Classification rationale with reference to specific Act provisions
- If relying on the Annex III exception, detailed justification
- Assessment date and responsible person
- Plan for reassessment if the system's use changes
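A lightweight record type can enforce that every classification captures these fields. This is a hypothetical structure mirroring the checklist above, not a format prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClassificationRecord:
    """Hypothetical record mirroring the documentation checklist above."""
    system_name: str
    intended_purpose: str
    risk_tier: str                                 # e.g. "high", "limited", "minimal"
    rationale: str                                 # reference to specific Act provisions
    annex_iii_exception_justification: str | None  # required if the exception is used
    assessed_on: date
    responsible_person: str
    reassessment_trigger: str                      # e.g. "on any change of intended use"

record = ClassificationRecord(
    system_name="CV screening assistant",
    intended_purpose="Rank incoming job applications for recruiters",
    risk_tier="high",
    rationale="Annex III, point 4: recruitment and candidate screening",
    annex_iii_exception_justification=None,
    assessed_on=date(2025, 9, 1),
    responsible_person="compliance@example.com",
    reassessment_trigger="on any change of intended use",
)
```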
Common Classification Mistakes
Mistake 1: Classifying Based on Technology, Not Use
The AI Act classifies based on intended use, not technology. A neural network used for spam filtering is minimal risk. The same architecture used for recruitment screening is high-risk. Don't assume your technology type determines your classification.
Mistake 2: Ignoring Downstream Uses
If you provide an AI system that a customer uses for a high-risk purpose, the high-risk classification may apply to you as the provider. Consider all reasonably foreseeable uses of your system.
Mistake 3: Assuming "Internal Only" Means Exempt
Using an AI system only internally doesn't exempt you from the Act. An internal hiring algorithm is still high-risk. An internal credit scoring model for employee loans is still high-risk.
Mistake 4: Over-Classifying to Be Safe
Treating everything as high-risk "just to be safe" wastes resources. The conformity assessment, documentation, and ongoing monitoring requirements for high-risk systems are substantial. Accurate classification saves significant time and money.
What's Next After Classification?
Once you've classified your AI system:
- Minimal risk: No specific obligations, but consider voluntary codes of conduct
- Limited risk: Implement transparency measures (disclosure labels, user notifications)
- High risk: Begin the conformity assessment process — risk management system, data governance, technical documentation, human oversight, accuracy testing, and cybersecurity measures
- Unacceptable risk: Discontinue the system immediately
Unsure about your classification? Contact us for a professional AI Act risk assessment. We'll classify your systems and provide a clear compliance roadmap.
Dr. Dario Sitnik
CEO & AI Scientist at Sitnik AI. PhD in AI with expertise in machine learning, NLP, and intelligent automation.