
Data Protection in Financial Services: Managing the Conflict Between Two Regulators


Of the many regulatory challenges organisations face, the biggest challenge for data protection governance in financial services is not knowing the regulation itself, but managing the conflict between two regulators whose expectations routinely pull in opposite directions.

The ICO (Information Commissioner’s Office) enforces the UK GDPR (General Data Protection Regulation) and the Data Protection Act 2018, whereas the FCA (Financial Conduct Authority) enforces its own rules on systems and controls, record-keeping and operational resilience – rules that regularly relate to the same type of personal data as those enforced by the ICO.

Having developed and implemented risk management systems in regulated financial environments throughout my career, I see the same pattern play out across financial services organisations: data protection is typically viewed and implemented as a compliance issue, not as an architectural one.

While it is very important to understand what the regulatory rules say regarding the use of personal data, the larger effort lies in creating systems that meet the requirements of both regulators at once – one regulator that wants organisations to collect and keep data, and another that wants them to minimise and delete it.

Two Regulators, Two Clocks, One Incident

One of the greatest areas of conflict occurs when a financial organisation experiences a data breach. As most major UK-based investment banks and large financial organisations operate in both the UK and EU markets, a data breach creates two independent reporting requirements with different timelines. Specifically, notification to the ICO under the UK GDPR must occur within 72 hours of becoming aware of the breach, whereas notification to the EU Single Entry Point under the Digital Omnibus Regulation must occur within 96 hours.

An incident response plan that takes only one of these regulatory requirements into consideration will clearly be inadequate to address the other. Organisations develop plans that comply with the ICO's reporting requirements yet contain no provision for the EU reporting requirements – or vice versa.

The effort therefore lies in creating a single incident response plan that addresses both regulatory requirements and includes mechanisms to identify and escalate incidents against both the UK and EU reporting timelines. What may appear to be a compliance issue is actually an operational one.

The plan will require testing, the personnel involved will need to understand which regulatory clock they are working against, and the documentation will need to provide sufficient evidence to support both reporting requirements from the time of identification of the incident.
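
As a minimal sketch of what such a dual-clock mechanism might look like, the calculation below derives both notification deadlines from a single awareness timestamp. The 72-hour and 96-hour windows mirror the figures quoted above, and the regulator labels are illustrative, not official identifiers.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical dual-clock deadline table. The windows mirror the
# figures quoted in this article; assumptions for illustration only.
REPORTING_WINDOWS = {
    "ICO (UK GDPR)": timedelta(hours=72),
    "EU Single Entry Point": timedelta(hours=96),
}

def notification_deadlines(awareness_time: datetime) -> dict:
    """Return each regulator's notification deadline, keyed by name."""
    return {regulator: awareness_time + window
            for regulator, window in REPORTING_WINDOWS.items()}

# Example: breach discovered at 09:00 UTC on 2 March 2026.
aware = datetime(2026, 3, 2, 9, 0, tzinfo=timezone.utc)
for regulator, deadline in notification_deadlines(aware).items():
    print(f"{regulator}: notify by {deadline:%Y-%m-%d %H:%M %Z}")
```

Anchoring both deadlines to the same awareness timestamp is the point: the two clocks start together, so the plan can escalate against whichever one expires first.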

Additionally, this dual-jurisdictional tension exists throughout all aspects of data protection governance, not just incident response. The FCA requires financial services organisations to retain specified records for seven years or more in accordance with SYSC 9 and the Money Laundering Regulations. Conversely, the ICO requires organisations to delete personal data when it is no longer needed. Both requirements rest on valid legal authority. The question therefore becomes how an organisation's systems reconcile the conflicting requirements of the two regulators.

Data Retention Conflict

In virtually all sectors, deleting data is conceptually simple. In the financial sector, however, it is technically complex. A financial organisation cannot simply delete a customer's personal data if it is required by law to maintain the customer's historical transactions for regulatory purposes. Equally, it cannot retain all historical customer data indefinitely simply because a regulator may one day request access to a seven-year-old trade.

As such, organisations that successfully manage this requirement build tiered retention into their technical architecture. Active data – including all personally identifiable information supporting live CRM, trading and client relationship activity – lives in operational systems. When a regulatory hold is applied, the personally identifiable elements are removed and the core audit trail is moved to immutable storage. Once the regulatory retention requirement is satisfied, the encryption keys protecting the data are destroyed through cryptographic erasure, rendering the data irretrievable without the physical records themselves ever being destroyed.
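
A toy sketch of the cryptographic-erasure step is shown below. The XOR-keystream cipher stands in for real encryption purely for illustration; the point is the structure: ciphertext sits in immutable storage and is never deleted, while "deletion" destroys only the per-record key.

```python
import hashlib
import secrets

class TieredRecordStore:
    """Illustrative cryptographic erasure (toy cipher -- not for
    production use). Ciphertext is write-once and never deleted;
    erasure destroys only the per-record key."""

    def __init__(self) -> None:
        self._immutable = {}  # record_id -> ciphertext (write-once)
        self._keys = {}       # record_id -> key (destroyed on erasure)

    @staticmethod
    def _keystream(key: bytes, length: int) -> bytes:
        # Deterministic stream derived from the key, block by block.
        blocks, counter = b"", 0
        while len(blocks) < length:
            blocks += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return blocks[:length]

    def archive(self, record_id: str, audit_record: bytes) -> None:
        key = secrets.token_bytes(32)
        stream = self._keystream(key, len(audit_record))
        self._immutable[record_id] = bytes(a ^ b for a, b in zip(audit_record, stream))
        self._keys[record_id] = key

    def read(self, record_id: str) -> bytes:
        key = self._keys[record_id]  # raises KeyError after erasure
        ciphertext = self._immutable[record_id]
        stream = self._keystream(key, len(ciphertext))
        return bytes(a ^ b for a, b in zip(ciphertext, stream))

    def cryptographic_erase(self, record_id: str) -> None:
        """Retention satisfied: destroy the key, keep the ciphertext."""
        del self._keys[record_id]
```

The design choice this captures is that the immutable store never needs a delete path at all, which is exactly what regulators expect of an audit trail.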

Designing Systems that Satisfy Both Regulators Simultaneously

Creating technical designs that meet both regulatory requirements is challenging. The successful design of these systems requires a thorough understanding of how data flows through core banking systems, CRM systems, risk management systems, and compliance systems. Through my direct experience working with these types of systems, I am confident that the gap between what organisations document in their retention policies and what their systems actually do is significant. This gap cannot be closed through policy alone. The solution lies in building retention and deletion requirements into the technical architecture of the systems.

Systems that Meet Regulatory Requirements

While Role-Based Access Control (RBAC) is commonly used across many industries, it is insufficient for organisations operating in financial services. To ensure that access to sensitive data is appropriately controlled, organisations need Attribute-Based Access Control (ABAC). ABAC limits the visibility of data to users based on their current functional mandates, not simply their job titles. A developer working on the risk platform should not have access to the vulnerability notes in the CRM, regardless of their level of administrative access to the underlying databases. Likewise, a fraud investigator should see only the specific personal data relevant to their investigation, and only for its duration.
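
A minimal sketch of an ABAC-style decision follows, assuming a hypothetical mandate format of `"<mandate_type>:<case_id>"` and illustrative resource names. Access turns on the requester's active mandates; the role attribute deliberately plays no part in the decision.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user_id: str
    role: str                  # job title -- deliberately ignored below
    active_mandates: frozenset # e.g. {"fraud_investigation:case-4411"}
    resource: str              # e.g. "crm.vulnerability_notes"

# Illustrative policy table: resource -> mandate type required to see it.
POLICIES = {
    "crm.vulnerability_notes": "vulnerable_customer_support",
    "fraud.case_data": "fraud_investigation",
}

def is_permitted(req: AccessRequest) -> bool:
    """Grant access only if an active mandate matches the policy;
    unlisted resources are denied by default."""
    required = POLICIES.get(req.resource)
    if required is None:
        return False
    return any(m.split(":")[0] == required for m in req.active_mandates)
```

A risk-platform developer with full database privileges but no mandate is denied; a fraud investigator holding `fraud_investigation:case-4411` is permitted for the case data, and loses access the moment the mandate is withdrawn.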

The technical method for achieving this is Dynamic Data Masking (DDM) at the API layer. Personal data resides in the database in its original form. Each time a query is initiated, the access logic evaluates who is making the request, why they are making it, and whether their current mandate authorises the access. In my experience, this is where the largest gap between stated policy and actual practice exists. Financial institutions consistently state that access to sensitive data is restricted by job function or role; the systems frequently do not support that assertion.
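
A hypothetical API-layer masking step could look like the following, with the full mandate evaluation reduced to a single boolean for brevity; the field names are illustrative, not taken from any real schema.

```python
MASK = "***"
# Illustrative PII column names -- real schemas will differ.
PII_FIELDS = {"name", "date_of_birth", "account_number"}

def serve_query(rows: list, requester_authorised: bool) -> list:
    """Dynamic data masking applied at read time: the database holds
    the original values, and each query result is masked unless the
    caller's current mandate authorises unmasked access."""
    if requester_authorised:
        return [dict(row) for row in rows]
    return [
        {field: (MASK if field in PII_FIELDS else value)
         for field, value in row.items()}
        for row in rows
    ]
```

Because masking happens per query rather than per copy of the data, the same stored record can serve an authorised investigator unmasked and an analytics job masked, with no second dataset to govern.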

DUAA 2025 Changes

The Data Use and Access Act 2025 (DUAA), effective from February 2026, made several key changes affecting the data protection landscape for financial services organisations.

Firstly, the DUAA created a new recognised legitimate interest (RLI) basis that permits processing for crime prevention without the need for a full balancing test. For fraud detection and anti-money laundering efforts, this removes the previous administrative burden of performing a full Legitimate Interest Assessment for processing that the organisation was already legally obligated to undertake.

Secondly, the DUAA made a fundamental shift in the rules on automated decision-making. Prior to the DUAA, credit scoring, insurance pricing, and algorithmic fraud detection using standard personal data could only be performed subject to a prior balancing test confirming that the processing was fair and lawful. The DUAA created an exception to this rule and permits these types of processing to proceed by default, provided that the organisation implements certain mandatory safeguards: informing the individual of the automated decision-making, enabling the individual to make representations, providing genuine human intervention, and permitting the decision to be contested.
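
One way to make those four safeguards auditable is to record them explicitly against each automated decision. The sketch below uses its own labels for the safeguards, not statutory wording, and all field names are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    """Record of one automated decision plus the safeguards evidenced.
    Safeguard labels are this sketch's own, not statutory wording."""
    subject_id: str
    outcome: str
    subject_informed: bool = False         # individual told of the ADM
    representations_enabled: bool = False  # individual can make representations
    human_reviewer: Optional[str] = None   # named reviewer = genuine intervention
    contest_channel: Optional[str] = None  # route to contest the decision

    def missing_safeguards(self) -> list:
        """Return the safeguards not yet evidenced for this decision."""
        missing = []
        if not self.subject_informed:
            missing.append("inform the individual")
        if not self.representations_enabled:
            missing.append("enable representations")
        if self.human_reviewer is None:
            missing.append("genuine human intervention")
        if self.contest_channel is None:
            missing.append("contestation route")
        return missing
```

Blocking release of any decision whose `missing_safeguards()` list is non-empty turns the legal conditions into an enforceable system check rather than a policy statement.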

Automated decisions based on special category data (health, biometric data, and so on) remain permissible only with the express consent of the individual. Health and life insurance underwriting remain typical examples of processing subject to this restriction.

DUAA Non-Changes

Equally important to the changes made by the DUAA are the items that were not changed. Specifically, the DPO (Data Protection Officer) responsibilities remain the same, and the DPIA (Data Protection Impact Assessment) requirements remain the same. Several commentaries have continued to confuse the DUAA with its predecessor, the DPDI Bill, which proposed to replace DPOs with Senior Responsible Individuals and relax DPIA requirements. These provisions were ultimately dropped. The DPO role and the independence of that role remain unchanged.

The Important Question All Financial Services Firms Must Be Asking

The single most important data protection question that all financial services organisations must be asking in 2026 is not “are we GDPR compliant?” It is whether anyone in the organisation understands both the FCA’s expectations and the ICO’s requirements well enough to recognise when they conflict.

SMCR Does Not Provide a Specific Prescribed Responsibility for Data Protection

Many people believe that the Senior Managers and Certification Regime (SMCR) established a specific prescribed responsibility for data protection. This is incorrect. SMCR establishes accountability through the overall responsibility principle and the duty of responsibility. Therefore, someone must be responsible for data protection. Moreover, that person must be able to understand the technical systems sufficiently to assess whether the controls are actually functioning.

When I enter a financial services organisation, the first thing I want to see is not the Privacy Policy. Rather, it is the data map and the AI Register. Are analysts copying and pasting personal data into untested AI tools to summarise Suspicious Activity Reports (SARs)? Is the vulnerability flagging logic in the CRM able to differentiate between a late payment and a mental health concern, and is the customer aware of how their data is being categorised? When a customer exercises their right to erasure, can the system correctly remove personal details, while retaining the historical records required by the FCA?

These are not simply questions of compliance with laws and regulations. They are questions of the technical architecture of the organisation's systems. In a dual-regulated environment, they are the questions that establish whether the data protection governance functions of the organisation operate in practice or simply appear to function on paper.