There is a new academic paper making the rounds in privacy law circles. Published in February 2026, it maps the overlaps between the EU AI Act and the GDPR provision by provision — and the finding that matters most is not the overlap itself but the contradictions. The AI Act's data governance requirements in Article 10 demand training datasets that are representative and, to the best extent possible, complete and free of errors. The GDPR's data minimisation principle says you should collect as little personal data as possible. Both are legally binding. Neither yields to the other.
If your AI governance programme and your data protection programme are running on separate tracks — separate teams, separate assessments, separate legal opinions — you are building two compliance systems that will contradict each other at the worst possible moment: when an auditor or a data protection authority asks how you satisfy both simultaneously.
This edition maps the five specific points where the AI Act and GDPR collide, identifies the joint EDPB-Commission guidelines that are coming (but haven't arrived yet), and — in the Deep Dive — walks through the cross-reference table that survives an EDPB review. This is the edition we teased in the Build Lab bundle two weeks ago. If you built the knowledge base from that Friday drop, the mappings in today's Deep Dive slot directly into your wiki/mappings/ directory.
TL;DR
Article 10 of the AI Act requires representative, complete training data. GDPR Article 5(1)(c) requires data minimisation. These two obligations do not resolve on their own — your compliance team needs an explicit documented position on how they coexist for each high-risk AI system.
The EDPB and European Commission are drafting joint guidelines on the AI Act–GDPR interplay, expected later in 2026. Until those land, organisations are flying without a regulatory map. The five collision points in this edition are the ones to plan around now.
Article 10(5) creates a narrow AI Act-specific allowance, subject to strict safeguards, for exceptional processing of special-category data (race, health, biometrics) to detect and correct bias in high-risk systems. Most DPOs have not seen it yet — and blanket prohibitions on Article 9 data in AI systems need updating.
Many high-risk AI deployments that process personal data will require a DPIA under GDPR Article 35, and certain deployers will also need a FRIA under AI Act Article 27. Where both apply, running them separately doubles the work and halves the insight.
The Brief
Collision 1: Data Minimisation vs. Representative Datasets
GDPR Article 5(1)(c) requires that personal data be "adequate, relevant and limited to what is necessary." AI Act Article 10(3) requires that training, validation, and testing datasets be "relevant, sufficiently representative, and to the best extent possible, free of errors and complete." These are not the same instruction. "Limited to what is necessary" pulls toward smaller datasets. "Sufficiently representative and complete" pulls toward larger ones.
The resolution is not to pick one. It is to document, per system, why the dataset size is both necessary for representativeness (AI Act) and limited to what is required for that representativeness (GDPR). That documentation is the artefact that survives review. Without it, either regulator can argue you failed their standard.
Do now: For each high-risk AI system in your inventory, ask your DPO and your AI governance lead to independently write one paragraph justifying the training dataset scope. Compare the two paragraphs. If they contradict, that is the gap your auditor will find first.
Sources: Article 10 — EU AI Act · AI data governance — overlaps between the AI Act and the GDPR (Feb 2026)
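To make the documented position concrete, here is a minimal sketch of a per-system record for the dual justification, written in Python for illustration. The structure and field names are our assumptions, not drawn from either regulation; the legal substance lives in the two free-text rationales.

```python
from dataclasses import dataclass

@dataclass
class DatasetScopePosition:
    """Per-system documented position on training dataset scope.

    Illustrative schema only (not from the AI Act or GDPR). The two
    rationales should be written independently by the DPO and the AI
    governance lead, then reconciled.
    """
    system_id: str
    dataset_description: str
    representativeness_rationale: str  # AI Act Art 10(3) angle
    minimisation_rationale: str        # GDPR Art 5(1)(c) angle
    reconciled: bool                   # True once the two rationales no longer contradict

position = DatasetScopePosition(
    system_id="credit-scoring-v2",  # hypothetical system
    dataset_description="3-year loan application history, EU applicants",
    representativeness_rationale="Covers every demographic segment the system serves...",
    minimisation_rationale="Excludes every field not consumed by a model feature...",
    reconciled=False,
)
```

The `reconciled` flag is the point of the Do-now above: the position is complete only when the two independently written paragraphs stop contradicting each other.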
Collision 2: Storage Limitation vs. Documentation Retention
GDPR Article 5(1)(e) — the storage limitation principle — says personal data must be kept "no longer than is necessary for the purposes for which the personal data are processed." AI Act Article 12 requires high-risk AI systems to maintain automatic event logs. Article 18 requires providers to keep technical documentation available for ten years after the high-risk AI system has been placed on the market or put into service.
The tension is real, but it resolves cleanly once you separate two categories. Raw personal data and training data remain constrained by GDPR storage limitation — there is no AI Act override for keeping personal data longer than necessary. AI Act compliance artefacts — technical documentation, declarations of conformity, quality-management records, and automatic event logs — have their own retention logic under the AI Act. Article 12 requires systems to technically enable automatic logging over the lifetime of the system, while Article 26(6) requires deployers to keep logs under their control for at least six months, unless other EU or national law provides otherwise.
Most organisations have not drawn this boundary. The compliance architecture needs a clear separation between personal data (GDPR ceiling applies) and system-level documentation (AI Act floor applies) — and a documented position on artefacts that straddle both.
Do now: Map your AI system documentation to identify which artefacts contain or reference personal data. For each, confirm you have a retention schedule that satisfies both GDPR storage limitation and AI Act Article 12/18 documentation requirements. If you do not have one, your DPO needs to know this week.
Sources: AI Data Retention Strategy for GDPR & EU AI Act Compliance — TechGDPR · Key Issue 6: Interplay with GDPR — EU AI Act
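One way to operationalise that boundary is to treat retention as a pair of constraints per artefact: an AI Act floor and a GDPR ceiling. A minimal sketch in Python, assuming a hypothetical inventory schema (field names and the example periods are ours, not the regulations'):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RetentionRule:
    """One artefact in the AI documentation inventory (illustrative schema)."""
    artefact: str
    contains_personal_data: bool
    ai_act_floor_months: Optional[int]  # minimum retention the AI Act imposes; None = no floor
    gdpr_ceiling_months: Optional[int]  # maximum retention GDPR necessity allows; None = n/a

def straddling_artefacts(rules: list[RetentionRule]) -> list[RetentionRule]:
    """Artefacts where an AI Act floor and a GDPR ceiling both apply.

    These are the items that need an explicit documented position, e.g.
    pseudonymising personal data inside a log so the system-level record
    can be kept while the personal-data ceiling is respected.
    """
    return [
        r for r in rules
        if r.contains_personal_data
        and r.ai_act_floor_months is not None
        and r.gdpr_ceiling_months is not None
    ]

inventory = [
    RetentionRule("technical documentation (Art 18)", False, 120, None),
    RetentionRule("automatic event logs (Art 12 / Art 26(6))", True, 6, 12),
    RetentionRule("raw training data", True, None, 24),
]

for rule in straddling_artefacts(inventory):
    print(f"Documented position needed: {rule.artefact}")
```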
Collision 3: The Bias Detection Exception Most DPOs Haven't Seen
Article 10(5) of the AI Act creates a narrow but powerful exception: providers of high-risk AI systems may exceptionally process special categories of personal data — the categories GDPR Article 9 normally prohibits, including racial or ethnic origin, health data, and biometric data — where "strictly necessary for the purpose of ensuring bias detection and correction." This is not a general permission. It applies only to high-risk systems, only for bias purposes, and only with strict safeguards including pseudonymisation and access controls.
This is the AI Act's most significant intersection with GDPR Article 9 — a narrow, safeguarded allowance that operates alongside EU data protection law, not a blanket override. It matters operationally because many organisations have blanket policies prohibiting the processing of Article 9 data for AI training — policies that were correct under GDPR alone but are now incomplete under the combined framework.
Do now: Check whether your organisation's AI data policy explicitly addresses Article 10(5). If it contains a blanket prohibition on special-category data in AI systems, flag it to your DPO with a reference to the AI Act exception. The policy needs updating before any bias audit touches production systems.
Sources: Article 10 — EU AI Act · Algorithmic discrimination under the AI Act and the GDPR — European Parliament Research Service
Collision 4: Dual Assessments — DPIA Meets FRIA
Many high-risk AI deployments that process personal data will require a DPIA under GDPR Article 35 — triggered when processing is likely to result in a high risk to the rights and freedoms of individuals. Separately, AI Act Article 27 requires certain deployers (bodies governed by public law, private entities providing public services, and deployers of the credit-scoring and life/health-insurance-pricing systems in Annex III points 5(b) and (c), with a carve-out for the critical-infrastructure systems in Annex III point 2) to conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into use. Where both apply, the assessments share significant overlap — both examine risks to individuals, both require documented mitigations, both involve stakeholder consultation.
Running them separately is the default in most organisations because the DPIA sits with the DPO's team and the FRIA will sit with the AI governance or legal function. Where both are triggered, the result is duplicated effort, inconsistent risk language, and two documents that an auditor will read side-by-side looking for contradictions.
Do now: Draft a combined DPIA-FRIA template that maps shared fields (risk identification, mitigation measures, affected populations, review cycles) and adds AI Act–specific fields (risk classification rationale, conformity assessment status, human oversight model). Run the combined template against your highest-risk AI system as a pilot before August.
Sources: GDPR Article 35 — Data Protection Impact Assessment · EU AI Act Article 27 — Fundamental Rights Impact Assessment
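Here is a minimal sketch of the combined template as structured data, to show the shared/specific split. The field names are illustrative assumptions; your own template should track the actual content requirements of GDPR Article 35 and AI Act Article 27.

```python
# Illustrative field inventory for a combined DPIA-FRIA template.
# Names are assumptions, not taken from either regulation's text.
COMBINED_TEMPLATE = {
    "shared": [
        "system_description",
        "affected_populations",
        "risk_identification",
        "mitigation_measures",
        "stakeholder_consultation",
        "review_cycle",
    ],
    "gdpr_dpia_specific": [  # GDPR Art 35 angle
        "legal_basis",
        "necessity_and_proportionality",
        "data_subject_rights_measures",
    ],
    "ai_act_fria_specific": [  # AI Act Art 27 angle
        "risk_classification_rationale",
        "conformity_assessment_status",
        "human_oversight_model",
        "deployment_context_and_frequency",
    ],
}

def missing_fields(assessment: dict) -> list[str]:
    """Return every template field the draft assessment has not answered yet."""
    required = (COMBINED_TEMPLATE["shared"]
                + COMBINED_TEMPLATE["gdpr_dpia_specific"]
                + COMBINED_TEMPLATE["ai_act_fria_specific"])
    return [field for field in required if not assessment.get(field)]
```

Running `missing_fields` against the pilot from the Do-now above gives you the gap report: an empty FRIA section surfaces immediately.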
Collision 5: The Joint Guidelines That Haven't Landed Yet
The EDPB and the European Commission are drafting joint guidelines on the interplay between the GDPR and the AI Act. These guidelines are expected later in 2026 but have no confirmed publication date. The EDPB-EDPS Joint Opinion 1/2026 on the Digital Omnibus (published January 20, 2026) flagged several unresolved questions — including the scope of legitimate interest as a legal basis for AI-related processing, the role of DPAs in EU-level AI regulatory sandboxes, and the interplay between GDPR cooperation mechanisms and AI Act market surveillance.
Until these guidelines land, organisations are operating in a regulatory gap. The five collision points above are not hypothetical — they are the specific areas where the two frameworks create contradictory or ambiguous obligations. Waiting for the guidelines is not a plan. Building a documented position on each collision point is.
Do now: Create a standing agenda item for your next DPO–AI governance sync: "AI Act–GDPR interplay — open positions." Use the five collision points from this edition as the initial list. When the EDPB-Commission guidelines publish, you will have a baseline to compare against rather than a blank sheet.
Deep Dive
The Cross-Reference Table Your Auditor Will Ask For
This Deep Dive delivers the mapping promised in the April 17 Build Lab bundle. If you built the EU AI Act knowledge base from that edition, these cross-references slot into wiki/mappings/ai-act-to-gdpr.md.
The fundamental problem with AI Act–GDPR compliance is not that the two frameworks overlap. Overlapping regulation is normal in European law — DORA and NIS2 overlap on cybersecurity, the Medical Devices Regulation and GDPR overlap on health data. The problem is that the AI Act was proposed by DG CNECT (digital policy) while the GDPR came out of DG JUST (justice policy). The two texts use different terminology for similar concepts, assign oversight to different authorities, and create assessment obligations that share purpose but not process.
This means your compliance team cannot treat "AI Act compliance" and "GDPR compliance" as two workstreams that happen to share some data. They are a single obligation expressed in two languages. The cross-reference table below is the Rosetta Stone.
The Mapping
| AI Act Provision | GDPR Provision | Tension | Resolution Approach |
|---|---|---|---|
| Art 10(3) — representative, complete datasets | Art 5(1)(c) — data minimisation | Size of training datasets | Document per-system justification for dataset scope under both frameworks |
| Art 10(5) — special-category data for bias detection | Art 9(1) — prohibition on processing sensitive data | Bias testing requires data GDPR normally prohibits | Use Art 10(5) exception with strict safeguards: pseudonymisation, access control, purpose limitation |
| Art 12 — automatic event logging | Art 5(1)(e) — storage limitation | Retention period for logs containing personal data | Separate system logs (AI Act retention) from personal data within logs (GDPR retention ceiling) |
| Art 13 — transparency to deployers | Arts 13-14 GDPR — right to information | Overlapping but different transparency obligations | Single transparency document addressing both, noting different audiences (deployers vs. data subjects) |
| Art 14 — human oversight requirements | Art 22 GDPR — automated decision-making | Human-in-the-loop requirements from two directions | Map human oversight model to satisfy both Art 14 (system design) and Art 22 (individual rights) |
| Art 17 — quality management system | Arts 24-25 GDPR — controller obligations, data protection by design | Parallel quality/governance frameworks | Integrate AI quality management into the existing GDPR accountability framework rather than building in parallel |
| Art 27 — FRIA (deployers) | Art 35 — DPIA | Dual impact assessment burden | Combined DPIA-FRIA template with shared risk fields and framework-specific additions |
| Art 18 — 10-year documentation retention (from placing on market or putting into service) | Art 5(1)(e) — storage limitation | Retention floor (AI Act) vs. ceiling (GDPR) | Separate documentation layers: system-level artefacts (10-year floor) vs. personal data within artefacts (GDPR-governed); Art 26(6) deployer logs kept ≥ 6 months |
How to Read This Table
Each row is a decision your compliance team needs to make explicitly and document. The "Resolution Approach" column is not legal advice — it is the operational pattern that the February 2026 academic analysis and the EDPB's own signalling suggest will survive scrutiny. But until the joint EDPB-Commission guidelines land, every resolution is a documented position, not a certified answer.
So what? Print this table. Put it in front of your DPO and your AI governance lead at the same meeting. For each row, ask: "Do we have a documented position on this?" The rows where the answer is "no" are your priority list for the next 90 days.
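If the joint risk register is machine-readable, that 90-day priority list falls out of a filter. A hypothetical sketch (rows abbreviated, flags invented for illustration):

```python
# Each row of the cross-reference table becomes a record in the joint
# risk register, flagged for whether a documented position exists.
rows = [
    {"ai_act": "Art 10(3)", "gdpr": "Art 5(1)(c)", "position_documented": True},
    {"ai_act": "Art 10(5)", "gdpr": "Art 9(1)", "position_documented": False},
    {"ai_act": "Art 12", "gdpr": "Art 5(1)(e)", "position_documented": False},
    {"ai_act": "Art 27", "gdpr": "Art 35", "position_documented": True},
]

gap_list = [row for row in rows if not row["position_documented"]]
for row in gap_list:
    print(f"90-day priority: {row['ai_act']} vs {row['gdpr']}")
```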
The Organisational Problem Behind the Legal One
The deeper issue is structural. In most European enterprises, the DPO reports to Legal or Compliance. The AI governance function — if it exists at all — reports to the CTO, the CDO, or a newly created "Head of AI." These two functions share an executive sponsor only at the C-suite level, and in practice they operate on different cadences, different risk vocabularies, and different audit cycles.
The AI Act does not fix this. It creates a parallel governance obligation that will, by default, be staffed and managed separately from the GDPR programme. The organisations that merge these functions — or at minimum create a standing joint working group with shared assessment templates and a unified risk register — will spend less, move faster, and produce artefacts that are internally consistent.
So what? The combined DPIA-FRIA template from Collision 4 is the forcing function. You cannot fill it out without the DPO and the AI governance lead in the same room. Start there. The organisational alignment follows the assessment alignment.
What the EDPB Is Signalling
The January 2026 Joint Opinion on the Digital Omnibus contains several markers of where the EDPB expects the joint guidelines to land. Three stand out:
First, the EDPB expressed caution about recognising legitimate interest (GDPR Article 6(1)(f)) as a default legal basis for AI-related processing. This signals that organisations relying on legitimate interest for training data will need robust Legitimate Interest Assessments — not boilerplate.
Second, the EDPB flagged gaps in DPA involvement in EU-level regulatory sandboxes. This suggests the data protection authorities intend to be active participants in AI Act enforcement, not observers.
Third, the EDPB questioned how the GDPR cooperation mechanism interacts with AI Act market surveillance. This means cross-border AI deployments face potential dual investigations — from the lead supervisory authority under GDPR and from the market surveillance authority under the AI Act.
So what? If you are deploying a high-risk AI system across multiple EU member states, your incident response plan needs to account for two parallel regulatory channels. The DPO handles the GDPR authority; the AI governance lead handles market surveillance. If these are the same person, they need double the bandwidth. If they are different people, they need a shared playbook.
Next Steps
This week: Put the cross-reference table in front of your DPO and AI governance lead. Identify which rows have no documented position. That is your gap list.
This month: Draft a combined DPIA-FRIA template and pilot it against your highest-risk AI system. If you cannot complete the FRIA section, that tells you your Article 6 classification is incomplete.
This quarter: Establish a standing DPO–AI governance working group that meets at minimum monthly with a shared risk register. When the EDPB-Commission joint guidelines publish, you want a team that can respond in days, not a project that takes weeks to staff.
Builder Spotlight
QALA — Data Governance at the Source, Before the AI Touches It
Profiling teams building for the European AI reality.
The company: Qala AG, Zurich, Switzerland
What they do: Real-time data governance and observability platform that embeds compliance directly into code, APIs, and data pipelines — before data reaches the AI training layer.
Why now: Article 10 compliance starts with knowing what data you have, where it flows, and how it is classified. Qala automates that map.
Qala was founded in 2024 by David Scott Turner, Carl Strempel, and Bruno Soares — veterans of highly regulated industries who previously co-founded Imburse Payments (acquired by a US insurance software provider). The team raised €1.7 million in pre-seed funding led by QBIT Capital and Haatch, with participation from Backbone Ventures, ROI Ventures, and SICTIC-affiliated angels. The platform takes a "shift-left" approach: instead of governing data after it lands in a warehouse or a training pipeline, Qala maps data flows and lineage in real time, classifies sensitive data through AI, enforces policies automatically, and generates continuous audit trails.
For enterprise teams facing the Article 10–GDPR collision described in today's Brief, the upstream problem is always the same: you cannot document that your training dataset is both representative (AI Act) and minimised (GDPR) if you do not know what personal data is in the dataset in the first place. Qala sits at exactly that layer — the point where data enters the pipeline, before classification decisions are made. If you are running the "Do now" from Collision 1 and discovering that your DPO and AI governance lead have different views on dataset scope, a tool like Qala is where the shared truth lives.
Learn more: Qala
This Week in Tech
Digital Omnibus Trilogue Targets April 28 for Political Agreement
Trilogue negotiations on the Digital Omnibus are underway, with late April identified in recent parliamentary and Council process documents as a key political window. Parliament and Council are broadly aligned on fixed dates and major provisions, but open divergences remain on AI Office competence scope and the Article 5 nudifier formulation. The Cypriot Council Presidency is pushing for a political agreement by May 2026 — well before the August 2 general application date. If they miss the window, the original AI Act timeline applies without amendment.
Why it matters: If the Omnibus passes before August, the Annex III high-risk deadline moves to December 2027. If it does not, the original August 2026 date holds. Either way, the GDPR obligations described in today's edition are already enforceable. The Omnibus changes the AI Act timeline, not the GDPR one.
Piraeus Bank and Accenture Launch AI Hub Powered by Anthropic
Piraeus Bank and Accenture announced a dedicated AI Hub in Greece, powered by Anthropic's Claude, designed to accelerate AI transformation across operations, customer experience, risk, and compliance. This is the first dedicated banking AI hub in Greece and one of the first in Southern Europe to name a specific foundation model partner at launch.
Why it matters: For European banking CISOs watching the AI Act–GDPR intersection, Piraeus is an early mover in a regulated sector. The choice of Anthropic — a vendor that markets governance posture as a differentiator — signals that model selection in financial services is increasingly a compliance decision, not just a capability one. Watch for how they handle the DPIA-FRIA dual assessment for credit and risk models.
PwC Study: AI Governance Leaders Capture 75% of Economic Gains
PwC's 2026 AI Performance Study found that 20% of companies capture three-quarters of AI's economic gains. The distinguishing factor is not model sophistication — it is governance maturity. AI leaders are 1.7x more likely to have a Responsible AI framework and 1.5x more likely to have a cross-functional AI governance board than the rest of the market.
Why it matters: This is the data point to drop into your next board presentation on AI governance investment. The argument is no longer "we need governance to avoid fines." It is "the companies winning at AI invest more in governance, not less." The GDPR–AI Act integration described in today's edition is exactly the kind of cross-functional governance infrastructure the top quintile is building.
Source: PwC — 2026 AI Performance Study
What to read now?
AI data governance — overlaps between the AI Act and the GDPR — Taylor & Francis (Feb 2026)
The academic paper driving this edition. Provision-by-provision mapping of Article 10 requirements against GDPR obligations, with a novel interpretation framework for resolving the tensions. Dense but essential reading for anyone writing the documented positions recommended above.

EDPB-EDPS Joint Opinion 1/2026 on the Digital Omnibus — EDPB (Jan 2026)
Read sections 3.2 (legitimate interest) and 4.1 (sandbox DPA involvement) specifically. These signal where the forthcoming joint guidelines will land on the contested questions.

Key Issue 6: Interplay with GDPR — EU AI Act portal
The clearest single-page summary of where the two frameworks intersect. Updated regularly. Bookmark it alongside the Legislative Train tracker for the Omnibus.

AI Data Retention Strategy for GDPR & EU AI Act Compliance — TechGDPR
Practical guidance on the retention tension (Collision 2 in today's Brief). Includes a decision tree for separating system-level documentation from personal data within that documentation.
That’s it for this week.
The AI Act and GDPR are not parallel tracks. They are a single regulatory surface with two legislative sources, two sets of authorities, and two enforcement timelines. The organisations that build integrated governance now — shared assessments, shared risk registers, shared teams — will spend less and move faster than the ones that discover the contradictions during an audit.
Until next Thursday, João
OnAbout.AI delivers strategic AI analysis to enterprise technology leaders. European governance lens. Vendor-agnostic. Actionable.
If this landed in your inbox from a forward — subscribe here to get the full picture every week.



