
Privacy by design is a technical problem first

A lotus seed pod (Nelumbo nucifera) - @andywson

Nearly every organisation I look at today has a privacy by design process. It's normally a section in their DPIA (Data Protection Impact Assessment), the structured risk analysis you're expected to complete before launching anything that handles personal data: a few paragraphs describing how the project will "implement appropriate technical and organisational measures." The language tracks the regulation. The boxes are ticked in the right places. And yet when I look at the actual systems, the architecture often tells a different story.

Privacy by design is the most widely documented and least consistently implemented obligation in UK data protection. This isn't just my personal observation; it's what the benchmark data shows, and what the ICO's enforcement record confirms.

The gap between the document and the system

The GRC Solutions GDPR Benchmark Report 2025, analysing over 60 UK organisations, found that privacy by design scored lowest of all nine compliance areas assessed. Not just in one sector, but right across the board. Even the technology sector, the highest-performing overall, still only reached "developing" maturity. The DQM GRC benchmark the year before put it at 4.0 out of 10.

The ICO's enforcement actions tell the same story from the other end. When 23andMe was fined £2.31 million in June 2025, the centrepiece of the case was the absence of mandatory multi-factor authentication: an elementary architectural decision that should have been made at the start of the system design. The ICO found its absence constituted a "direct infringement" of the security obligations under Articles 5(1)(f) and 32. Advanced Computer Software Group, DPP Law, Reddit, PSNI: the same pattern repeats. The organisations hadn't failed to produce compliant privacy policies; the systems had failed to reflect the policies they produced.

What a technical assessment actually looks like

The legal obligation is straightforward. UK GDPR Article 25 requires controllers to implement appropriate technical and organisational measures at the design stage and throughout the life of the processing. The Data Use and Access Act 2025 hasn't changed this; it's added a children's dimension via Section 81, but the core obligation is exactly what it was. The challenge isn't interpretation, it's implementation.

Here's where the risk lies. A DPO with a legal, non-technical background reviews the DPIA document, checks the policy statements, and assesses them against the regulation. A DPO with a technical background can also look at the architecture underneath. These are different activities, and they identify different problems.

When I assess privacy by design, I'm able to look at the database schema, not just the categories of data listed in the DPIA, but the actual table structure. Are there "shadow fields," extra columns added for future use that collect data nobody documented? Is the user ID a global primary key linked across marketing, billing, and analytics, or are there ephemeral tokens preventing easy cross-database profile reconstruction?
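One pattern that prevents easy cross-database profile reconstruction is deriving a separate, deterministic token per downstream system instead of propagating a single global user ID. Here is a minimal sketch of that idea, assuming a keyed HMAC over an internal ID; the `SECRET_KEY`, user IDs, and context names are all illustrative placeholders, not any real system's design:

```python
import hmac
import hashlib

# Illustrative placeholder only; a real key would live in a KMS, never in code.
SECRET_KEY = b"example-secret-do-not-use"

def context_token(user_id: str, context: str) -> str:
    """Deterministic pseudonym scoped to one context (e.g. 'analytics').

    The same user gets a stable token within a context, but tokens from
    different contexts cannot be joined without the key.
    """
    msg = f"{context}:{user_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

# The same user yields unlinkable identifiers in different systems:
t_marketing = context_token("user-42", "marketing")
t_analytics = context_token("user-42", "analytics")
```

Because the derivation is keyed, an analyst holding only the marketing and analytics extracts cannot re-link them into a single profile, while the controller can still honour deletion requests by recomputing the tokens.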

I'm looking at the API layer. Your mobile app might only display a customer's first name, but if the API response returns the entire user object (date of birth, hashed password, the lot), that's a data minimisation failure happening at the system level that a privacy policy review will never catch. The fix is filtering at the API layer so the client never receives data it doesn't need.
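The fix described above amounts to projecting the record onto an explicit allow-list before serialisation. A minimal sketch, with hypothetical field names:

```python
# A full record as it might sit in the database (illustrative fields only).
FULL_USER_RECORD = {
    "id": 42,
    "first_name": "Amira",
    "date_of_birth": "1990-04-01",
    "password_hash": "…",
}

# The only field this particular endpoint actually needs.
PUBLIC_FIELDS = {"first_name"}

def user_profile_response(record: dict) -> dict:
    """Return only allow-listed fields; everything else never leaves the server."""
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

response = user_profile_response(FULL_USER_RECORD)
```

The allow-list (rather than a deny-list) is the important design choice: a new column added to the table is private by default until someone deliberately exposes it.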

Beyond the password policy

I will look at access controls. If your developers have a role that gives them SELECT * access to a production-mirrored staging database, your DPIA's statement about "appropriate access controls" is describing a reality that doesn't exist. And I'm able to look at the infrastructure itself: not just the DPIA's assurance that "data is encrypted at rest," but whether your cloud storage is using default provider keys or customer-managed keys with rotation and audit logging.
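The access-control point can be made concrete with a least-privilege check: roles map to the tables they may read, and a developer role should fail for PII tables mirrored from production. This is a sketch under assumed role and table names, not any particular RBAC product:

```python
# Hypothetical grant map: each role lists the tables it may SELECT from.
ROLE_GRANTS = {
    "developer": {"feature_flags", "error_logs"},
    "billing_service": {"invoices", "customers"},
}

def can_select(role: str, table: str) -> bool:
    """True only if the role has an explicit grant for the table."""
    return table in ROLE_GRANTS.get(role, set())

# A developer querying mirrored customer PII should be refused:
allowed = can_select("developer", "customers")
```

A technical review checks that the grants enforced in the database actually match a map like this; the DPIA only checks that the map is written down.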

None of this requires exotic skills. It requires a DPO who can read an architecture diagram, examine a database schema, and ask the development team technical questions rather than relying on a summarised document of how things work.

Where the documentation differs from reality

Two patterns come up repeatedly, situations where the documentation looks correct but the system tells a different story.

The logical delete. The DPIA states that users can delete their accounts and data is removed within 30 days. The system flips an is_deleted boolean flag, hides the profile from the UI, and leaves the data in the table indefinitely. It's still being ingested by analytics. It's still in the backups. A legal review sees the "Delete Account" button working and moves on. A technical review asks to see the automated purge scripts, and discovers they don't exist.
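The purge script the technical review asks for is not complicated. A minimal sketch using an in-memory SQLite table with hypothetical column names: rows soft-deleted more than 30 days ago are permanently removed, rather than left behind an `is_deleted` flag indefinitely:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER, email TEXT, is_deleted INTEGER, deleted_at TEXT)"
)

# Seed data: one old soft-delete, one recent soft-delete, one active user.
old = (datetime.now(timezone.utc) - timedelta(days=45)).isoformat()
recent = (datetime.now(timezone.utc) - timedelta(days=5)).isoformat()
conn.execute("INSERT INTO users VALUES (1, 'a@example.com', 1, ?)", (old,))
conn.execute("INSERT INTO users VALUES (2, 'b@example.com', 1, ?)", (recent,))
conn.execute("INSERT INTO users VALUES (3, 'c@example.com', 0, NULL)")

def purge(conn: sqlite3.Connection, retention_days: int = 30) -> None:
    """Hard-delete rows whose soft-delete timestamp is past the retention window."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=retention_days)).isoformat()
    conn.execute(
        "DELETE FROM users WHERE is_deleted = 1 AND deleted_at < ?", (cutoff,)
    )

purge(conn)
remaining = [row[0] for row in conn.execute("SELECT id FROM users ORDER BY id")]
```

In a real system the same job would also have to cover the analytics pipeline and backup retention; purging the primary table alone still leaves the data the DPIA claims is gone.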

The "anonymised" export. The DPIA says only anonymised aggregate data is shared with marketing partners. The actual export contains SHA-256 hashed email addresses. To a non-technical reviewer, a hash looks like random characters; it looks anonymous. To someone who understands the technology, a hash is a pseudonym, not a truly anonymised value. If the salt is weak (or absent), those hashes are reversible via rainbow tables. Hashing is not anonymisation, and the re-identification risk of that dataset may be substantial. But the distinction is invisible if you don't understand how hashing works.
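The re-identification risk is easy to demonstrate. With an unsalted hash, anyone holding a candidate list of addresses can recompute the hashes and reverse the "anonymised" export by simple lookup (the addresses below are illustrative):

```python
import hashlib

def sha256_hex(s: str) -> str:
    """Unsalted SHA-256, as found in the hypothetical export."""
    return hashlib.sha256(s.encode()).hexdigest()

# The "anonymised" dataset shared with a partner: hashed emails only.
exported = {sha256_hex("alice@example.com")}

# An attacker's dictionary attack over known or guessable addresses:
candidates = ["bob@example.com", "alice@example.com"]
reidentified = [email for email in candidates if sha256_hex(email) in exported]
```

Because the hash function is public and deterministic, the export is only as private as the attacker's candidate list is incomplete, which is why the hash is a pseudonym, not anonymisation.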

The question your board should be asking

The legal requirement for privacy by design hasn't changed. Article 25 says what it's always said. The enforcement trend, though, has shifted noticeably: the ICO's average fine rose from £150,000 in 2024 to over £900,000 in 2025, and the cases that drew the largest penalties all centred on systems that were not designed with protection built in from the start.

The question shouldn't just be "have we done the DPIA?" It should be "have we verified how our systems implement the measures described in our DPIA?" If your DPO only reviews the documents without verifying the underlying architecture, you are carrying risk not in your documents but in your actual technical protections, the protections that really matter.

Privacy by design is a legal obligation. But it's a technical problem. And it needs someone who understands the systems to solve it.