Issue #6 – The AI Policy Pulse: Balancing Risk, Trust & Progress
Introduction
Welcome to this edition of The Trusted AI Bulletin, where we break down the latest shifts in AI policy, regulation, and the evolving landscape of AI adoption.
This week, we explore the debate over AI risk assessments in the EU, as MEPs push back against a proposal that could exempt major tech firms from stricter oversight. We also examine the UK’s latest strategy for regulating AI in financial services and how businesses are navigating the complexities of AI adoption—balancing innovation with compliance. Meanwhile, in government, outdated infrastructure threatens to stall progress, underscoring the need for practical transformation strategies.
With AI becoming ever more embedded in critical systems, the focus is shifting to how organisations can create real value with AI while ensuring responsible governance. From regulatory battles to real-world implementation challenges, these stories highlight the urgent need for a balanced approach—one that drives adoption, fuels transformation, and keeps accountability at its core.
100 AI Use Cases in Financial Services
AI adoption in financial services is no longer a question of if, but where and how. As firms move beyond experimentation, the focus is shifting toward practical, high-impact use cases that can drive real operational and strategic value. From front-office customer engagement to back-office automation, the opportunities to embed AI across the business are expanding rapidly.
But with so many possibilities, the challenge lies in identifying where AI can deliver meaningful outcomes — and doing so in a way that’s scalable, compliant, and aligned with the firm’s broader objectives. That’s where a clear view of proven, emerging use cases becomes essential.
Over the coming weeks, we’ll be exploring the 100 AI Use Cases we have identified as shaping the future of financial services. For each, we’ll look at the models involved, the data required, key vendors operating in the space, risk considerations, and examples of where adoption is already underway. The goal is to help senior leaders cut through the noise and focus on the AI opportunities that matter — now and next.
Key Highlights of the Week
1. MEPs Criticise EU’s Shift Towards Voluntary AI Risk Assessments
A coalition of Members of the European Parliament (MEPs) has expressed significant concern over the European Commission’s proposal to make certain AI risk assessment provisions voluntary, particularly those affecting general-purpose AI systems like ChatGPT and Copilot.
This move could exempt major tech companies from mandatory evaluations of their systems for issues such as discrimination or election interference. The MEPs argue that such a change undermines the AI Act’s foundational goals of safeguarding fundamental rights and democracy.
This development highlights the ongoing tension between regulatory bodies and technology firms, especially from the United States, regarding the balance between innovation and ethical oversight in AI governance. The outcome of this debate will be pivotal in shaping the future landscape of AI regulation within the European Union.
Source: Dutch News link
2. UK Financial Regulator Launches Strategy to Balance Risk and Foster Economic Growth
The FCA has launched a new five-year strategy focused on boosting trust, supporting innovation, and improving outcomes for consumers across UK financial services.
By committing to becoming a more data-led, tech-savvy regulator, the FCA aims to strike a better balance between risk and growth—an approach that holds significant implications for the governance of emerging technologies like AI.
Its emphasis on smarter regulation, financial crime prevention, and inclusive consumer support signals a shift toward more agile, forward-looking oversight. For those navigating evolving AI regulations, this strategy reinforces the FCA’s intent to create a regulatory environment that fosters responsible innovation.
Source: FCA link
3. Public Accounts Committee Warns of AI Rollout Challenges Amid Legacy Infrastructure
The UK government’s ambitious plans to integrate AI across public services are at risk due to outdated IT infrastructure, poor data quality, and a shortage of skilled personnel.
A report by the Public Accounts Committee (PAC) highlights that over 20 legacy IT systems remain unfunded for necessary upgrades, with nearly a third of central government systems deemed obsolete as of 2024.
Despite intentions to drive economic growth through AI adoption, these foundational weaknesses pose significant challenges. The PAC also raises concerns about persistent digital skills shortages and uncompetitive civil service pay rates, which hinder the recruitment and retention of necessary talent.
Addressing these issues is crucial to ensure that AI initiatives are effectively implemented, fostering public trust and delivering the anticipated benefits of technological advancement.
Source: The Guardian link
Featured Articles
1. Why the UK’s Light-Touch AI Approach Might Not Be Enough
AI regulation in the UK is developing at a cautious pace, with the government opting for a principles-based, sector-led approach rather than comprehensive legislation. While this flexible model aims to foster innovation and reduce regulatory burdens, it risks creating a fragmented landscape where inconsistent standards could undermine public trust and accountability.
The article highlights that regulators often lack the technical expertise and resources to effectively oversee AI, raising concerns about how well current frameworks can keep pace with rapid technological advancements.
Meanwhile, businesses are calling for greater clarity and coherence, especially those operating across borders and facing stricter regimes like the EU AI Act. The UK’s strategy, though well-intentioned, may fall short in addressing the systemic risks posed by AI if coordination and enforcement mechanisms remain weak. For those focused on AI governance, the message is clear: without sharper oversight and alignment, the UK could lag in both trust and competitiveness.
Source: ICAEW link
2. Bridging the AI Knowledge Gap: A Foundation for Responsible Innovation
In an era where artificial intelligence is reshaping everything from financial services to public policy, understanding how AI works is becoming essential—not just for technologists, but for everyone.
As AI systems increasingly influence the decisions we see, the products we use, and even the jobs we do, being AI-literate is no longer a nice-to-have, but a societal imperative. The CFTE AI Literacy White Paper explores why foundational knowledge of AI is critical for individuals, businesses, and governments alike, arguing that AI should be treated as a core component of digital literacy.
What’s particularly compelling is the focus on inclusion—ensuring that access to AI knowledge isn’t limited to a technical elite but extended across sectors and demographics. Without widespread AI literacy, regulatory and governance efforts risk being outpaced by innovation.
This makes the paper especially relevant to those shaping or responding to emerging AI regulations and frameworks. It’s both a call to action and a roadmap for building a more informed, resilient society in the age of intelligent systems.
Source: CFTE link
3. AI in 2025: From Reasoning Machines to Multimodal Intelligence
The year ahead promises significant advances in artificial intelligence, particularly in areas like reasoning, frontier models, and multimodal capabilities. Large language models are evolving to exhibit more sophisticated forms of human-like reasoning, enhancing their utility across sectors from healthcare to finance.
At the same time, so-called frontier models—exceptionally large and powerful systems—are setting new benchmarks in tasks like image generation and complex decision-making. Multimodal AI, which integrates text, image, and audio inputs, is maturing rapidly and could redefine how machines interpret and respond to the world.
These developments underscore the urgency for updated governance frameworks that can keep pace with AI’s expanding scope and impact. As capabilities grow, so too does the need for greater regulatory clarity and ethical oversight.
Source: Morgan Stanley link
Industry Insights
Case Study: Building a Trustworthy Data Foundation for Responsible AI
Capital One, a major retail bank and credit card provider, has positioned itself at the forefront of responsible AI by investing in a robust, AI-ready data ecosystem. Operating in a highly regulated industry where trust and accuracy are vital, the company recognised early on that scalable, ethical AI requires more than just advanced algorithms—it demands a disciplined approach to data governance and transparency.
In recent years, Capital One has overhauled its data infrastructure to align with its long-term AI vision, focusing on quality, accessibility, and accountability across the entire data lifecycle.
To support this transformation, Capital One implemented a suite of Responsible AI practices, including standardised metadata tagging, active data lineage tracking, and embedded governance controls across cloud-native platforms. These efforts are supported by cross-functional teams that bring together AI researchers, compliance professionals, and data engineers to operationalise fairness, explainability, and bias mitigation.
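The practices described above — standardised metadata tagging and data lineage tracking — can be illustrated with a minimal sketch. All names and fields below are hypothetical; this is an assumption-laden illustration of the general pattern, not Capital One’s actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetMetadata:
    """Standardised metadata tag attached to a dataset (hypothetical schema)."""
    name: str
    owner: str
    sensitivity: str          # e.g. "public", "internal", "pii"
    quality_checked: bool = False

@dataclass
class LineageRecord:
    """One lineage step: which input datasets produced which output."""
    output: str
    inputs: list[str]
    transform: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class LineageLog:
    """Append-only lineage log, so reviewers can trace any dataset to its sources."""
    def __init__(self) -> None:
        self.records: list[LineageRecord] = []

    def record(self, output: str, inputs: list[str], transform: str) -> None:
        self.records.append(LineageRecord(output, inputs, transform))

    def upstream(self, dataset: str) -> set[str]:
        """Recursively collect every source dataset feeding `dataset`."""
        sources: set[str] = set()
        for rec in self.records:
            if rec.output == dataset:
                for inp in rec.inputs:
                    sources.add(inp)
                    sources |= self.upstream(inp)
        return sources

# Tag a raw dataset and record two pipeline steps (all names illustrative)
raw_meta = DatasetMetadata("raw_transactions", owner="data-eng",
                           sensitivity="pii", quality_checked=True)
log = LineageLog()
log.record("features_v1", ["raw_transactions"], "clean_and_join")
log.record("fraud_scores", ["features_v1", "customer_profiles"], "score_model")
print(sorted(log.upstream("fraud_scores")))
# → ['customer_profiles', 'features_v1', 'raw_transactions']
```

The point of embedding controls like these at the infrastructure level, rather than bolting them on later, is that every model output can be traced back to governed, quality-checked source data when regulators or internal reviewers ask.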
The results are tangible: Capital One has accelerated the deployment of customer-facing AI solutions—such as fraud detection and credit risk models—while ensuring they meet internal and regulatory standards. By prioritising responsible data management as the foundation for AI, the company is not only enhancing trust with regulators and customers but also driving innovation with confidence.
Key Takeaways:
1. Data governance first: Ethical AI starts with well-governed, high-quality data.
2. Cross-functional collaboration: Aligning compliance, engineering and AI teams is key to operationalising responsibility.
3. Built-in controls, not bolt-ons: Embedding governance into AI systems from the outset enhances both trust and speed to market.
Sources: Forbes link, Capital One link
Upcoming Events
1. Webinar: D&A Leaders: Preparing Your Data for AI Integration – 2 April 2025, 3:00 am BST
Gartner’s upcoming webinar, “D&A Leaders, Ready Your Data for AI,” focuses on equipping data and analytics professionals with strategies to prepare organisational data for effective artificial intelligence integration. The session will cover best practices for data quality management, governance frameworks, and aligning data strategies with AI objectives. Attendees will gain actionable insights to ensure their data assets are primed for AI-driven initiatives, enhancing decision-making and business outcomes.
Register now: Gartner link
2. In-Person Event: AI Breakfast & Roundtable – From AI Proof of Concept to Scalable Enterprise Adoption – 23 April 2025
Leading Point is hosting an exclusive AI Breakfast & Roundtable, bringing together AI leaders from top financial institutions, including banks, insurance firms, and buy-side institutions. This intimate, high-level discussion will explore the challenges and opportunities in scaling AI beyond proof of concept to enterprise-wide adoption.
Key discussion points include overcoming implementation barriers, aligning AI initiatives with business objectives, and best practices for AI success in banking, insurance, and investment management. This event offers a unique opportunity to connect with industry peers and gain strategic insights on embedding AI as a core driver of business value.
Want to be a part of the conversation?
If you are an executive with AI responsibilities in business, risk & compliance, or data, contact Rajen Madan or Thushan Kumaraswamy to get a seat at the table.
3. In-Person Event: Smarter Clouds, Stronger Businesses – AI-Driven Innovation, Efficiency and Resilience – 24 April 2025
The upcoming IDC, HPE and TCS roundtable, Smarter Clouds, Stronger Businesses, explores how enterprises can drive innovation and resilience by aligning AI strategies with modern cloud architectures. With a focus on agility, scalability, and performance, the agenda covers best practices for adopting AI-enabled infrastructure, building secure and future-ready cloud environments, and reducing complexity across hybrid ecosystems. Industry experts will share insights on turning cloud investments into long-term business value—enabling organisations to stay competitive in an increasingly data-driven world.
Register now: IDC link
4. In-Person Event: Risk & Compliance in Financial Services – 29 April 2025
The 9th Annual Risk & Compliance in Financial Services Conference brings together senior professionals from firms such as Aviva, Invesco, Lloyds Banking Group and NatWest. This year’s agenda focuses on emerging challenges and innovations in the sector—from the use of AI to enhance compliance and operational resilience, to navigating evolving regulations like DORA and Consumer Duty. With expert-led panels on financial crime, cyber risk, and ESG reporting, attendees can expect forward-looking insights tailored for today’s risk environment.
Register now: Financial IT link
Conclusion
The developments this week reinforce a crucial reality: effective AI governance is about more than setting rules—it’s about ensuring accountability, trust, and long-term resilience. Whether it’s the EU’s regulatory crossroads, the FCA’s push for a more agile oversight model, or the challenges of AI integration in the public sector, one thing is clear: the success of AI depends on the frameworks we build today.
As AI capabilities expand, so too must our approach to regulation, ethics, and education. The road ahead demands collaboration between policymakers, businesses, and technologists to create systems that not only foster innovation but also safeguard society.
We’ll be back in two weeks with more insights. Until then, let’s continue driving the conversation on responsible AI.

Rajen Madan

Thushan Kumaraswamy