AI, accountability and what comes next for Financial Services

The ICO’s latest move just made your data strategy a boardroom issue. Here’s why.
On 10th June 2025, the UK’s Information Commissioner’s Office (ICO) quietly issued an update that could have wide-reaching implications for financial services. Just days before the Data (Use and Access) Act 2025 received Royal Assent, the ICO confirmed it will publish a new statutory code of practice for AI and automated decision-making—aimed at clarifying transparency, fairness, and accountability when using personal data in AI models.
This is a development we welcome. Not because it adds another compliance hurdle, but because it provides much-needed clarity. And in financial services, clarity isn’t just compliance—it’s confidence. It empowers FS leaders to innovate responsibly and act decisively, without fear of crossing a regulatory line they didn’t see coming.
What’s the Act About?
The Data (Use and Access) Act 2025 is the UK’s long-awaited post-Brexit update to data protection law. It aims to reduce red tape while maintaining EU adequacy and supporting innovation—particularly in sectors like FS where data use is complex, regulated, and high-stakes.
Key features include:
- A new legal basis for “recognised legitimate interests”
- Changes to Subject Access Requests (SARs)
- New rules around cookies, biometrics, and profiling
- A framework for digital identity
- An updated model for international data transfers
- Provisions for public interest research and innovation
Why the 10th June Announcement Matters, Especially for Financial Services
The ICO’s commitment to publish a statutory AI code of practice—expected within 6–12 months—signals a shift from theory to enforcement. And financial services companies are firmly in scope.
Expect guidance on:
- How personal data is used in AI models for scoring, profiling, underwriting, and customer segmentation
- Explaining AI decisions—particularly in lending, fraud detection, and customer service
- Mitigating bias in automated decision-making
- Transparent communications with customers, regulators, and auditors
This isn’t just about getting ahead of AI regulation. It’s about protecting the trust you’ve built, reducing reputational and legal risk, and using data more intelligently across your business.
What Financial Services Leaders Should Do Now
At Beyond: Putting Data To Work, we help FS businesses put data to work responsibly—ensuring compliance, driving growth, and reinforcing trust. Here’s what we recommend:
- Audit Your AI Models
Which models are currently influencing lending decisions, pricing, fraud flags, or communications? What’s the governance around them?
- Check Your Documentation
Could you explain how these models work—clearly, and to a regulator? Can you demonstrate fairness, accountability, and traceability?
- Review Privacy Notices and Customer Consents
Are you telling customers the truth about how their data is used? Are your disclosures aligned with what’s actually happening behind the scenes?
- Prioritise Data Quality and Segmentation
Good decisions come from good data. That means clean, accurate, well-governed data foundations—especially as scrutiny intensifies.
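To make "demonstrate fairness" concrete: one common starting point is to compare outcome rates across customer groups, for example the disparate impact ratio used in many fairness reviews. The sketch below is illustrative only—the data, group labels, and function names are hypothetical, and a real audit would cover far more than a single metric.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Per-group approval rates from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group approval rate over the highest (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

# Hypothetical lending decisions: (customer group, approved?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True),
]
rates = approval_rates(decisions)   # A: 0.75, B: 0.50
ratio = disparate_impact_ratio(rates)  # 0.50 / 0.75 ≈ 0.67
```

A low ratio does not prove unlawful bias, and a high one does not prove fairness—but producing numbers like these on demand, with the governance trail behind them, is the kind of traceability the upcoming code is expected to ask for.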
AI in Financial Services: Opportunity or Exposure?
AI holds huge potential for FS firms—from hyper-personalisation to smarter risk modelling. But without the right safeguards, it also opens the door to unintended bias, opaque decisions, and customer mistrust.
The ICO’s upcoming code won’t remove those risks. But it will give you a roadmap to manage them—if you're ready.
Let’s Put Your Data to Work—The Right Way
We’ve spent years helping FS organisations use data and AI to gain competitive advantage—ethically, effectively, and with full regulatory awareness.
If you’re reviewing your AI governance, or want to benchmark your data readiness, get in touch. We’re here to help you turn uncertainty into action—and regulation into an edge.