Issue #7 – Scaling AI Responsibly: From Compliance to Competitive Edge
Introduction
In this edition of The Trusted AI Bulletin, we examine the shifting centre of gravity for AI within financial services firms. The newly released 2025 AI Index Report highlights not only the global acceleration in AI development but also the widening gaps in regulatory preparedness and organisational readiness.
Our Co-Founder Thushan Kumaraswamy opens with a perspective on the need for business-led AI ownership — a view increasingly echoed across the industry. As firms move from experimentation to enterprise adoption, clarity around governance, accountability, and value realisation is no longer optional. This issue explores what that shift looks like in practice.
Executive Perspective: Where Should AI Responsibility Live?
Our Co-Founder Thushan Kumaraswamy comments:
“The 2025 AI Index Report raises some interesting challenges regarding Responsible AI (RAI) in business. The use of AI in financial services firms requires cooperation between multiple departments, but ownership of AI remains fragmented. Currently, it sits with information security or data teams, but it really needs to be owned by the business.
It is the business that is paying for the AI systems to be developed and adopted. It is the business that owns the data used by the AI systems. It is the business that (hopefully!) sees the value realised.
I am starting to see more “Head of Responsible AI” noise in financial services firms now, with Lloyds Banking Group hiring for one in Jan 2025, but there are still not many, and it remains unclear whether these roles are data/tech-related or part of the business.
I get that AI, both from a technical perspective and an operational one, is new for many business leaders, and they struggle to keep up with the daily barrage of innovations. This is where a “Head of AI” should sit: to advise the business on what is possible with AI, to work with data, technology, and infosec teams to ensure that AI systems are used safely, and to ensure that the ROI of AI at least meets expectations.
Specialists can advise on a temporary basis, but in the long-term this must be an in-house role and team, supported by the board, and given the necessary authority to stop AI developments at any stage if they pose uncontrolled risks to the firm or will not deliver the required return.”
100 AI Use Cases in Financial Services – #1 Chatbot
In last week’s edition, we introduced our series on the 100 AI use cases reshaping financial services — focusing on how firms can move from experimentation to scalable, high-impact adoption.
We kick off the series with one of the most widely adopted and visible AI use cases: chatbots. In financial services, chatbots are transforming how firms interact with customers — delivering faster support, tailored advice, and improved satisfaction. But the benefits come with real risks around data privacy, regulatory compliance, and fairness.
AI Highlights of the Week
1. New AI Index Report Charts Rising Global Stakes—and Regulatory Gaps
The 2025 AI Index Report has just been released by Stanford University, offering a comprehensive snapshot of global trends in artificial intelligence. This year’s report underscores the intensifying race between nations, with the United States still leading in the development of top AI models but China rapidly catching up, especially in research output and patent filings.
The report draws attention to the soaring cost of training cutting-edge models—OpenAI’s GPT-4 is estimated to have cost $78 million—raising questions about who can afford to innovate at this scale. Notably, AI regulation is on the rise: U.S. AI-related laws have grown from just one in 2016 to 25 in 2023, reflecting the increasing pressure on governments to keep pace with technological advancement.
As AI systems become more powerful and embedded in daily life, the findings stress the urgent need for thoughtful, coordinated governance that can balance innovation with accountability. With AI’s trajectory showing no signs of slowing, the report serves as a timely reminder that regulatory frameworks must evolve just as swiftly.
Source: 2025 AI Index Report link
2. Brussels Bets Big on AI to Regain Tech Edge and Counter U.S. Tariffs
As the European Union grapples with the ripple effects of American tariffs, Brussels is preparing a major policy shift aimed at transforming Europe into an “AI Continent.” A draft strategy, to be unveiled this week, reveals plans to streamline regulations, reduce compliance burdens, and create a more innovation-friendly environment for AI development.
This charm offensive is a direct response to mounting criticism from Big Tech and global AI leaders, who argue that the EU’s rigid regulatory framework, including the AI Act, is stifling competitiveness. Central to the strategy are massive investments in computing infrastructure — including five AI “gigafactories” — and ambitious targets to boost AI skills among 100 million Europeans by 2030.
The push also seeks to reduce dependence on U.S.-based cloud providers by tripling Europe’s data centre capacity. With only 13 percent of European firms currently adopting AI, the plan signals a timely recalibration of Europe’s approach to AI governance — one that recognises the urgent need to lead, not lag, in the global AI race.
Source: Politico link
3. Standard Chartered Embraces Generative AI to Revolutionise Global Operations
Standard Chartered is set to deploy its Generative AI tool, SC GPT, across 41 markets, aiming to enhance operational efficiency and client engagement among its 70,000 employees. This strategic move is expected to boost productivity, personalise sales and marketing efforts, automate software engineering tasks, and refine risk management processes.
A more tailored version is in development to leverage the bank’s proprietary data for bespoke problem-solving, while local teams are encouraged to adapt SC GPT to address specific market needs, including digital marketing and customer services. This initiative underscores Standard Chartered’s commitment to responsibly harnessing AI, reflecting a broader trend in the financial sector towards integrating advanced technologies.
As AI governance and regulations evolve, such proactive adoption highlights the importance of balancing innovation with ethical considerations in the banking industry.
Source: Finextra link
4. Navigating the AI Revolution: Ensuring Responsible Innovation in UK Financial Services
The integration of AI into financial services is revolutionising the sector, enhancing operations from algorithmic trading to personalised customer interactions. However, this rapid adoption introduces significant regulatory challenges, particularly concerning financial stability and consumer protection.
The UK’s Financial Conduct Authority (FCA) has yet to implement comprehensive AI regulations, leading to ambiguity in compliance and oversight. Unregulated AI-driven activities, such as algorithmic trading, could exacerbate market volatility, while biased AI models in credit scoring may disadvantage vulnerable consumers.
To address these issues, financial institutions should proactively enhance AI governance frameworks, prioritising transparency, bias mitigation, and robust cybersecurity measures. Engaging with policymakers to establish clear, forward-thinking regulations is crucial to balance innovation with economic stability.
As AI continues to redefine financial services, the UK’s ability to implement effective governance will determine its leadership in this evolving landscape.
Source: HM Strategy link
Industry Insights
Case study – Allianz: Scaling Responsible AI Across Global Insurance Operations
Allianz, one of the world’s largest insurers, is taking a leading role in translating Responsible AI principles into real-world practice across its global operations. With nearly 160,000 employees and a presence in more than 70 countries, the company has moved beyond AI experimentation to embed ethical safeguards into scalable AI deployment.
In 2024, Allianz joined the European Commission’s AI Pact, aligning its roadmap with the EU AI Act and signalling its intent to not just comply, but lead on AI governance.
At the core of Allianz’s approach is a practical, organisation-wide AI Risk Management Framework developed in-house. This framework governs all AI and machine learning initiatives, from document processing to customer service automation, with defined roles for model owners, risk teams, and compliance functions.
Key initiatives include:
- An AI Impact Assessment Tool used early in development to flag risks such as discriminatory outcomes, low explainability, or overreliance on sensitive data.
- The Enterprise Knowledge Assistant (EKA), a GenAI-powered tool now used by thousands of service agents to cut resolution times and improve consistency across 10+ countries.
- A strict model registration process and “human-in-the-loop” policy to ensure that critical decisions — like claims rejection or fraud detection — are always overseen by a human.
- Mandatory training for AI Product Owners, with oversight from a central AI governance board embedded in Group Compliance.
These measures are not theoretical. They have enabled Allianz to scale nearly 400 GenAI use cases while maintaining regulatory confidence, internal accountability, and public trust. For Allianz, AI governance is more than risk mitigation — it’s what allows innovation to scale responsibly, without compromising on customer fairness or institutional integrity.
Sources: Allianz link 1, Allianz link 2, WE Forum PDF link
Upcoming Events
1. Gartner Expert Q&A: Practical Guidance on Adapting to the EU AI Act – 14 April 2025
This webinar offers valuable insights for businesses navigating the new EU AI regulations. Industry experts will provide actionable advice on how to ensure compliance and unlock opportunities within the evolving AI landscape. It’s a must-attend for anyone keen to stay ahead of regulatory changes and ensure their AI strategies are future-proof.
Register now: Gartner link
2. In-Person Event: AI Breakfast & Roundtable – From AI Proof of Concept to Scalable Enterprise Adoption – 23 April 2025
Leading Point is hosting an exclusive AI Breakfast & Roundtable, bringing together AI leaders from top financial institutions, including banks, insurance firms, and buy-side institutions. This intimate, high-level discussion will explore the challenges and opportunities in scaling AI beyond proof of concept to enterprise-wide adoption.
Key discussion points include overcoming implementation barriers, aligning AI initiatives with business objectives, and best practices for AI success in banking, insurance, and investment management. This event offers a unique opportunity to connect with industry peers and gain strategic insights on embedding AI as a core driver of business value.
Want to be a part of the conversation?
If you are an executive with AI responsibilities in business, risk & compliance, or data, contact Rajen Madan or Thushan Kumaraswamy to get a seat at the table.
3. In-Person Event: The AI in Business Conference – 15 May 2025
This in-person event offers a unique opportunity to hear from industry leaders across various sectors, providing real-world insights into AI implementation and strategy. Attendees will benefit from a rich agenda of expert sessions and have the chance to network with like-minded professionals, building lasting connections while tackling common challenges in AI.
Plus, the event is co-located with the Digital Transformation Conference, allowing platinum ticket holders to access a broader range of content, deepening their understanding of AI’s role in digital business transformation.
Register now: AI Business Conference link
Conclusion
The themes emerging across this issue point to a maturing AI agenda in financial services: from clearer governance models and responsible scaling, to regulatory recalibration and infrastructure investment. What’s clear is that AI can no longer be treated as a peripheral capability — it must be embedded within core business strategy, with the right controls in place from the outset.
As organisations seek to balance innovation with oversight, the ability to operationalise Responsible AI at scale will define not only compliance readiness but also competitive advantage.
In our next issue, we’ll continue the ‘100 AI Use Cases’ series with a focus on AI in Investment Research — examining how firms are using AI to enhance insight generation, improve analyst productivity, and navigate the risks of model-driven decision-making.

Rajen Madan

Thushan Kumaraswamy