AI Strategy

GDPR vs. EU AI Act: How They Overlap and Where They Conflict

Dr. Dario Sitnik
10 min read

The EU AI Act and GDPR are the two pillars of Europe's approach to regulating AI systems. While they share common goals — protecting fundamental rights and ensuring transparency — they were designed at different times, for different purposes, and sometimes their requirements pull in opposite directions.

For companies building or deploying AI systems in the EU, understanding where these two regulations overlap and where they conflict isn't optional — it's a compliance necessity. This article maps the intersections and tensions between GDPR and the EU AI Act, with practical guidance for navigating both.

Shared Foundations

Both regulations are built on similar principles:

  • Transparency: Both require informing individuals about how their data is processed and how AI systems make decisions.
  • Human oversight: GDPR's Article 22 right to human intervention in automated decisions mirrors the AI Act's human oversight requirements for high-risk systems.
  • Risk-based approach: While GDPR applies broadly, its enforcement priorities are risk-based. The AI Act formalizes this with explicit risk categories.
  • Data quality: Both emphasize accurate, relevant data — GDPR through data accuracy principles, the AI Act through training data governance requirements.
  • Documentation: Both require detailed records of processing activities and system design decisions.

Where They Overlap

Data Protection Impact Assessments (DPIAs) and AI Act Risk Assessment

Under GDPR Article 35, a DPIA is required when processing is likely to result in high risks to individuals' rights. Under the AI Act, high-risk AI systems require a conformity assessment and ongoing risk management.

Practical overlap: If your AI system processes personal data and is classified as high-risk under the AI Act, you'll need both a DPIA and an AI Act conformity assessment. The good news: much of the analysis overlaps. A well-structured DPIA can feed into your AI Act risk assessment, and vice versa.
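One way to exploit this overlap is to keep a single risk register that both assessments draw on. The sketch below is illustrative only — the field names and risk scales are assumptions, not a regulatory template:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One identified risk, reusable by both the DPIA and the AI Act risk assessment."""
    description: str              # e.g. "biased credit-scoring outcomes"
    affected_rights: list         # fundamental rights at stake
    likelihood: str               # "low" / "medium" / "high"
    severity: str                 # "low" / "medium" / "high"
    mitigations: list = field(default_factory=list)

@dataclass
class SharedRiskRegister:
    system_name: str
    entries: list = field(default_factory=list)

    def high_risks(self) -> list:
        # Both GDPR Art. 35 and the AI Act concentrate on risks that are
        # likely or severe; filter those out for priority treatment
        return [e for e in self.entries
                if e.likelihood == "high" or e.severity == "high"]
```

A DPIA report and an AI Act technical file can then each render their own view of the same entries, so a mitigation recorded once is reflected in both.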

Transparency Requirements

GDPR Articles 13-14 require informing data subjects about automated decision-making, including the logic involved. The AI Act requires transparency for all AI systems, with specific obligations for high-risk systems including clear information about capabilities and limitations.

Practical overlap: Your AI system's transparency documentation can serve both purposes. Create a single transparency framework that satisfies GDPR's information requirements and the AI Act's transparency obligations simultaneously.
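A combined transparency record might look like the following sketch — every field name here is an assumption chosen for illustration, not language from either regulation:

```python
# Hypothetical combined transparency record: one structure covering both
# GDPR Articles 13-14 disclosures and AI Act transparency information.
transparency_record = {
    # GDPR Articles 13-14: information owed to data subjects
    "controller": "Example Ltd.",
    "purposes": ["credit scoring"],
    "legal_basis": "Article 6(1)(b) - contract",
    "automated_decision_logic": "gradient-boosted model over financial features",
    # AI Act: capabilities and limitations of the system
    "intended_purpose": "estimate default probability for consumer loans",
    "known_limitations": ["lower accuracy for applicants with thin credit files"],
    "human_oversight": "a loan officer reviews all automated declines",
}
```

Maintaining one record like this avoids the drift that occurs when the GDPR privacy notice and the AI Act documentation are updated separately.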

Right to Explanation

GDPR gives individuals the right to meaningful information about the logic of automated decisions. The AI Act requires high-risk systems to be designed for human interpretability.

Practical overlap: Build explainability into your AI systems from the start. This satisfies both GDPR's right to explanation and the AI Act's interpretability requirements.
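For simple model classes, a per-decision explanation can be as direct as ranking feature contributions. The sketch below assumes a linear scoring model with made-up feature names and weights:

```python
# Minimal sketch: per-decision explanation for a hypothetical linear
# scoring model. Feature names and weights are illustrative only.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}

def explain(applicant: dict) -> list:
    # Contribution of each feature = weight * value, sorted by absolute
    # impact so the dominant factors in the decision come first
    contributions = [(name, w * applicant[name]) for name, w in weights.items()]
    return sorted(contributions, key=lambda fc: abs(fc[1]), reverse=True)

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
print(explain(applicant))  # debt_ratio dominates this decision
```

For complex models you would substitute a dedicated explainability technique, but the output shape — a ranked list of human-readable factors — is what serves both regulations.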

Where They Conflict

Data Minimization vs. Training Data Requirements

This is the most significant tension. GDPR's data minimization principle (Article 5(1)(c)) requires processing only data that is adequate, relevant, and limited to what is necessary. The AI Act, however, requires training datasets to be sufficiently representative to avoid bias — which often means collecting more data, not less.

The conflict: To build a fair, unbiased AI system under the AI Act, you may need to process sensitive categories of data (race, gender, age) to detect and mitigate bias. But GDPR Article 9 restricts processing of special category data.

Resolution: The AI Act includes a specific provision (Article 10(5)) that allows processing of special category data for bias detection and correction, subject to safeguards. This creates a legal basis that GDPR alone doesn't provide — but you must implement strict technical safeguards including pseudonymization and access controls.
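One such safeguard is pseudonymizing direct identifiers before the special category data ever reaches the bias analysis. This is a minimal sketch, assuming a keyed hash is an acceptable pseudonymization technique for your context; key management details are omitted:

```python
import hashlib
import hmac
import os

# In practice the key lives in a key-management system, not process memory
SECRET_KEY = os.urandom(32)

def pseudonymize(identifier: str) -> str:
    # Keyed hash: stable within one analysis run, but not reversible
    # or linkable without access to the key
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

records = [
    {"id": "user-123", "group": "A", "approved": True},
    {"id": "user-456", "group": "B", "approved": False},
]
# Strip direct identifiers; keep only what the bias check needs
pseudonymized = [
    {"pid": pseudonymize(r["id"]), "group": r["group"], "approved": r["approved"]}
    for r in records
]
```

The bias-detection pipeline then operates on `pseudonymized` records under restricted access, and the key can be destroyed once the analysis run is complete.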

Purpose Limitation vs. Model Retraining

GDPR's purpose limitation principle requires data to be collected for specified, explicit purposes and not further processed in a manner incompatible with those purposes. AI systems, however, often benefit from continuous learning and model retraining with new data.

The conflict: Using data collected for one purpose to retrain an AI model for improved performance could violate GDPR's purpose limitation if the retraining purpose wasn't specified at collection.

Resolution: Design your data collection consent to include model improvement as a specified purpose. For existing data, conduct a compatibility assessment under GDPR Article 6(4) to determine if retraining is compatible with the original purpose.
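In code, this resolution amounts to gating the retraining pipeline on recorded purposes. The sketch below assumes a hypothetical consent store keyed by user, with purpose labels of your own choosing:

```python
# Hypothetical consent store: which purposes each user agreed to at collection
consent_records = {
    "user-123": {"purposes": {"service_delivery", "model_improvement"}},
    "user-456": {"purposes": {"service_delivery"}},
}

def eligible_for_retraining(user_id: str) -> bool:
    # Only include a user's data in retraining if "model_improvement"
    # was a specified purpose at collection time
    record = consent_records.get(user_id)
    return record is not None and "model_improvement" in record["purposes"]
```

Data from users who never agreed to model improvement is simply excluded from the retraining set, or routed through a documented Article 6(4) compatibility assessment instead.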

Right to Erasure vs. Model Integrity

GDPR's right to erasure (Article 17) allows individuals to request deletion of their personal data. But what happens when that data has been used to train an AI model?

The conflict: Simply deleting training data doesn't remove its influence from a trained model. True "machine unlearning" is technically challenging and can degrade model performance — potentially affecting the AI Act's accuracy requirements.

Resolution: Document your approach to erasure requests in the context of AI training. Options include retraining models without the deleted data (expensive but thorough), using differential privacy techniques during training, or maintaining clear records that erasure was processed even if model retraining isn't immediately feasible.
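The record-keeping part of that resolution can be sketched as follows — an illustrative structure, not a prescribed one, that deletes raw data immediately while deferring retraining to the next scheduled cycle:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ErasureLog:
    """Illustrative handling of Article 17 requests that touch training data."""
    deleted_ids: set = field(default_factory=set)
    pending_retrain: bool = False
    history: list = field(default_factory=list)

    def process_request(self, user_id: str) -> None:
        # 1. Delete the raw training data immediately
        self.deleted_ids.add(user_id)
        # 2. Flag the model for retraining without the deleted data
        #    at the next scheduled cycle
        self.pending_retrain = True
        # 3. Keep an auditable trace that the request was honoured
        self.history.append({
            "user": user_id,
            "at": datetime.now(timezone.utc).isoformat(),
        })
```

The `history` list gives you the documented trail regulators expect, even in the window before the retrained model ships.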

Practical Compliance Strategy

1. Unified Compliance Framework

Don't treat GDPR and the AI Act as separate compliance exercises. Build a unified framework that addresses both simultaneously. Your data processing records, risk assessments, and transparency documentation should serve both regulations.

2. Privacy-by-Design Meets Compliance-by-Design

Integrate both GDPR privacy-by-design and AI Act compliance-by-design into your development process. This means considering data minimization, bias mitigation, transparency, and human oversight from the earliest design stages.

3. Document Tensions Explicitly

Where GDPR and the AI Act create tensions (e.g., data minimization vs. bias detection), document your reasoning for the approach you've chosen. Regulators will want to see that you've considered both requirements and made informed, justified decisions.

4. Engage Both Legal and Technical Teams

GDPR compliance has traditionally been a legal function. AI Act compliance requires deep technical understanding. Ensure your compliance team includes both legal expertise and technical knowledge of how AI systems actually work.

Looking Ahead

The EU Commission is expected to publish guidelines on the interaction between GDPR and the AI Act. Until then, companies must navigate the intersection themselves. The safest approach: build systems that satisfy the stricter requirement in each area, document your reasoning, and stay adaptable as guidance evolves.

Need help navigating the GDPR-AI Act intersection? Get in touch for a compliance assessment that addresses both regulations.


Dr. Dario Sitnik

CEO and AI scientist at Sitnik AI. PhD in AI with expertise in machine learning, NLP, and intelligent automation.

Ready to get started?

Book a free consultation to discuss your AI project.