The Trusted AI Bulletin #4

Issue #4 – Regulating AI: Balancing Innovation, Risk, and Global Influence

Introduction

Welcome to this edition of The Trusted AI Bulletin, where we explore the latest shifts in AI governance, regulation, and adoption. This week, we examine the UK’s evolving AI policy, the growing tensions between Big Tech and European regulators, and the strategic choices shaping AI’s future. With governments reassessing their regulatory approaches and businesses navigating complex compliance landscapes, the conversation around responsible AI is more urgent than ever.

AI adoption requires firms to focus on key capabilities, baseline their AI maturity, and articulate AI risks more effectively. Discussions with executives highlight a gap between the C-suite and AI leads, making governance alignment a critical success factor.

Whether you’re a policymaker, business leader, or AI enthusiast, our curated insights will help you stay informed on the key trends shaping the future of AI.

 


Key Highlights of the Week

1. UK Postpones AI Regulation to Align with US Policies

The UK government has postponed its anticipated AI regulation bill, originally slated for release before Christmas, now expected in the summer. This delay aims to align the UK's AI policies with the deregulatory stance of President Trump's administration, which has recently dismantled previous AI safety measures. Ministers express concern that premature regulation could deter AI businesses from investing in the UK, especially as the US adopts a more laissez-faire approach.

This strategic shift underscores the UK's intent to remain competitive in the global AI landscape, particularly against the backdrop of the EU's stricter regulatory proposals. However, this move has sparked debate over the balance between fostering innovation and ensuring ethical AI development.

Our take: This is a critical moment for businesses to take a proactive approach to AI governance rather than waiting for regulatory clarity. Firms must self-regulate by adopting strong AI controls and risk frameworks to ensure ethical and responsible AI deployment.

Source: The Guardian link

 

2. Big Tech vs Brussels: Silicon Valley Ramps Up Fight Against EU AI Rules

Silicon Valley’s biggest players, led by Meta, are intensifying their efforts to weaken the EU’s stringent AI and digital market regulations—this time with backing from the Trump administration. Lobbyists see an opportunity to pressure Brussels into softening enforcement of the AI Act and Digital Markets Act, with Meta outright refusing to sign up to the EU’s upcoming AI code of practice. The European Commission insists it will uphold its rules, but its recent decision to drop the AI Liability Directive suggests some willingness to compromise. If European regulators waver, it could set a dangerous precedent, emboldening Big Tech to dictate the terms of global AI governance.

 

Source: FT link

 

3. AI Safety Institute Rebrands, Drops Bias Research

The UK government has rebranded its AI Safety Institute, now called the AI Security Institute, shifting its focus away from AI bias and free speech concerns. Instead, the institute will prioritise cybersecurity threats, fraud prevention, and other high-risk AI applications. This move aligns the UK's AI policy more closely with the U.S. and has sparked debate over whether deprioritising bias research could have unintended societal consequences.

Our take: Bias and fairness remain core AI governance challenges. Firms need to go beyond regulatory mandates and build internal frameworks that address bias and transparency, ensuring trust in AI applications.

Should AI regulation focus solely on security threats, or is ignoring bias a step backward in responsible AI governance?

 

Source: UK Gov link

 


Featured Articles

1. UK AI Regulation Must Balance Innovation and Responsibility

The UK government’s approach to AI regulation will play a crucial role in shaping economic growth. The challenge lies in ensuring AI is safe, fair, and reliable without imposing rigid constraints that could stifle innovation. A risk-based, principles-driven framework—similar to the EU’s AI Act—offers a way forward, allowing adaptability while maintaining accountability. The real test will be whether regulation fosters trust and responsible AI use or becomes an obstacle to progress. Governance should encourage businesses to integrate ethical AI practices, not just comply with rules.

Our take: Striking this balance will be key to ensuring AI drives long-term economic and technological advancement. Firms shouldn’t wait for regulatory clarity. Assessing AI risks, implementing governance frameworks, and ensuring transparency now will give organisations a competitive edge.

 

Source: The Times link

 

2. Addressing Data and Expertise Gaps in AI Integration

In the rapidly evolving landscape of artificial intelligence, organisations face significant hurdles in adoption, notably concerns about data accuracy and bias, with nearly half of respondents expressing such apprehensions. Additionally, 42% of enterprises report insufficient proprietary data to effectively customise AI models, underscoring the need for robust data strategies. A similar percentage highlights a lack of generative AI expertise, pointing to a critical skills gap that must be addressed.

Moreover, financial justification remains a challenge, as organisations struggle to quantify the return on investment for AI initiatives. These challenges are particularly pertinent in the context of AI governance and regulation, emphasising the necessity for comprehensive frameworks to ensure ethical and effective AI deployment.

Source: IBM link

 

3. Global AI Compliance Made Easy – A Must-Have Tracker for AI Governance

The Global AI Regulation Tracker, developed by Raymond Sun, is a powerful, interactive tool that keeps you ahead of the curve on AI laws, policies, and regulatory updates worldwide. With a dynamic world map, in-depth country profiles, and a live AI newsfeed, it provides a one-stop resource for navigating the complex and evolving AI governance landscape. Updated regularly, it ensures you never miss a critical regulatory shift that could impact your business or compliance strategy. Stay informed, stay compliant, and turn AI regulation into a competitive advantage.

Source: Techie Ray link

 

4. Breaking Down Barriers: Strategies for Successful AI Adoption

Artificial intelligence holds immense promise for revolutionising business operations, yet a staggering 80% of AI initiatives fall short of expectations. This high failure rate often stems from challenges such as subpar data quality, organisational resistance, and a lack of robust leadership support.

To navigate these obstacles, companies must prioritise comprehensive data management, foster a culture open to change, and ensure active engagement from leadership. Moreover, aligning AI projects with clear business objectives and investing in employee training are pivotal steps towards realising AI's full potential. Without addressing these critical areas, organisations risk squandering resources and missing out on the transformative benefits AI offers.

Source: Forbes link

 


Industry Insights

Case Study: AXA’s Ethical AI Integration – Boosting Efficiency and Trust in Insurance

AXA, a global insurance leader, has strategically integrated Artificial Intelligence (AI) into its operations to enhance efficiency and uphold ethical standards. By implementing a dedicated AI governance team comprising actuaries, data scientists, privacy specialists, and business experts, AXA ensures responsible AI adoption across its services. This team focuses on creating transparent AI models, safeguarding data privacy, and maintaining human oversight in AI-driven decisions.

A practical application of this strategy is evident in AXA UK's deployment of 13 software bots within their claims departments, which, over six months, saved approximately 18,000 personnel hours and yielded around £140,000 in productivity gains. This initiative not only streamlines repetitive tasks but also reinforces AXA's commitment to ethical AI practices, setting a benchmark for the insurance industry.

Key Outcomes of AI Governance at AXA:

* Operational Efficiency: The introduction of AI bots has significantly reduced manual processing time, enhancing overall productivity.
* Ethical AI Deployment: Establishing a robust governance framework ensures AI applications are transparent, fair, and aligned with societal responsibilities.
* Enhanced Customer Service: Automation of routine tasks allows employees to focus on more complex customer needs, improving service quality.

 

Sources: Cap Gemini link, AXA link

 


Upcoming Events

1. Webinar: Augmenting Private Equity Expertise With AI – 6 March 2025

This event aims to explore practical strategies for private equity firms to integrate artificial intelligence, enhancing expertise and uncovering new value sources. Discussions will focus on AI's role in competitive deal sourcing, transforming due diligence processes, and bolstering risk management. As AI continues to reshape the financial landscape, this webinar offers timely insights into aligning technology strategies with business objectives, ensuring AI-driven value creation throughout the investment lifecycle.

Register now: FT Live link

 

2. Webinar: CIOs, Set the Right AI Strategy in 2025 – 7 March 2025

In this upcoming webinar, Chief Information Officers will gain insights into formulating effective AI strategies that yield measurable outcomes. The session aims to equip CIOs with the tools to navigate the complexities of AI implementation, ensuring alignment with organisational goals and compliance with emerging AI regulations. As AI continues to reshape industries, understanding its governance and regulatory landscape becomes imperative for IT leaders.

Register now: Gartner link

 

3. In-Person Event: AI UK 2025, Alan Turing Institute – 17–18 March 2025

This in-person event brings together experts to explore the latest advancements in artificial intelligence, governance, and regulation. A key highlight of the event is the panel discussion, Advancing AI Governance Through Standards, taking place on 18 March 2025.

Led by the AI Standards Hub, the session will delve into recent developments in AI assurance, global standardisation efforts, and strategies for fostering inclusivity in AI governance. As AI regulations continue to evolve, this discussion offers valuable insights into building a robust AI assurance ecosystem and ensuring responsible AI deployment.

Register now: Turing Institute link

 


Conclusion

As AI governance takes centre stage, the challenge remains—how do we drive innovation while ensuring transparency, fairness, and accountability? This issue underscores the importance of strategic regulation, ethical AI adoption, and proactive leadership in shaping a future where AI works for businesses and society alike. AI governance is shifting, but businesses can’t afford to wait. AI risks require more effort to understand, firms need to baseline their AI capabilities, and governance gaps between leadership and AI teams must be bridged.

With AI’s influence growing across industries, the need for informed decision-making has never been greater. Whether it’s policymakers refining regulations or organisations refining their AI strategies, the key takeaway is clear: responsible AI isn’t just about compliance—it’s about long-term success.

We’ll be back in two weeks with more insights—until then, let’s continue shaping a responsible AI future together.

 


The Trusted AI Bulletin #3

Issue #3 – Global AI Crossroads: Ethics, Regulation, and Innovation

Introduction

Welcome to this week’s edition of The Trusted AI Bulletin, where we explore the latest developments, challenges, and opportunities in the rapidly evolving world of AI governance. From global ethical debates to regulatory updates and industry innovations, this week’s highlights underscore the critical importance of balancing innovation with responsibility.

As AI continues to transform industries and societies, the need for robust governance frameworks has never been more urgent. For many organisations, this means not just keeping pace with regulatory change but also taking practical steps—such as bringing key teams together to assess AI usage, ensuring leadership is informed on emerging risks, and building governance frameworks that can evolve alongside innovation.

Join us as we delve into key stories shaping the future of AI governance and examine how organisations and nations are navigating this complex landscape.

 


Key Highlights of the Week

1. UK and US Withhold Support for Global AI Ethics Pact

At the AI Action Summit in Paris, the UK and US refused to sign a joint declaration on ethical and transparent AI, which was backed by 61 countries, including China and EU nations. The UK cited concerns over a lack of "practical clarity" and insufficient focus on security, while the US objected to language around "inclusive and sustainable" AI. Both governments stressed the need for further discussions on AI governance that align with their national interests. Critics and AI experts warn that this decision is a missed opportunity for democratic nations to take the lead in shaping AI governance, potentially allowing other global powers to set the agenda.

Source: The Times link

 

2. New PRA Letter Outlines 2025 Expectations for UK Banks

The Prudential Regulation Authority (PRA) has issued a letter outlining its 2025 supervisory priorities for UK banks, focusing on risk management, governance, and resilience. With ongoing market volatility, AI adoption, and geopolitical uncertainty, firms are expected to strengthen their risk frameworks and controls.

Liquidity and funding will also be under scrutiny, as the Bank of England shifts to a new reserve management approach. Meanwhile, banks must demonstrate by March 2025 that they can maintain operations during severe disruptions.

Notably, the Basel 3.1 timeline has been pushed to 2027, giving firms more time to adjust. However, regulatory focus on AI, cyber risks, and data management is set to increase, with further updates expected later this year.

 

Source: PRA PDF link

 

3. G42 and Microsoft Launch the Middle East’s First Responsible AI Initiative

G42 and Microsoft have jointly established the Responsible AI Foundation, the first of its kind in the Middle East, aiming to promote ethical AI standards across the Middle East and Global South. Supported by the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), the foundation will focus on advancing responsible AI research and developing governance frameworks that consider cultural diversity. Inception, a G42 company, will lead the programme, while Microsoft plans to expand its AI for Good Lab to Abu Dhabi. This initiative underscores a commitment to ensuring AI technologies are safe, fair, and aligned with societal values.

Source: G42.ai link

 


Featured Articles

1. AI is Advancing Fast—Why Isn’t Governance Keeping Up?

Artificial intelligence is evolving at breakneck speed, reshaping industries and daily life, yet a clear governance framework is still missing. Effective policies must be based on scientific reality, not speculation, to address real-world challenges without stifling progress. Striking the right balance between innovation and regulation is crucial, especially as AI’s impact grows. Open access to AI models is key to driving research and ensuring future breakthroughs aren’t limited to a select few. With AI set to transform everything from healthcare to energy, the question remains—can governance keep pace?

 

Source: FT link

 

2. The EU AI Act: What High-Risk AI Systems Must Get Right

The EU AI Act imposes stringent obligations on high-risk AI systems, requiring organisations to implement risk management frameworks, ensure data governance, and maintain transparency. CIOs and CDOs must oversee compliance, ensuring human oversight, proper documentation, and clear communication when AI is in use.

A key focus is ensuring AI systems are explainable and auditable, enabling regulators and stakeholders to understand how decisions are made. Non-compliance carries significant financial and operational risks, making early alignment with regulatory requirements essential.

With enforcement approaching, businesses must integrate these rules into their AI strategies to maintain trust, mitigate risks, and drive responsible innovation. To stay ahead, organisations should conduct internal audits, update governance policies, and invest in staff training to embed compliance across AI initiatives. Proactive action now will determine competitive advantage in an AI-regulated future.

Source: A&O Shearman link

 

3. Building a Data-Driven Culture: Four Essential Pillars for Success

While many organisations collect vast amounts of data, few truly unlock its transformative potential. Success lies in mastering four critical elements: leadership commitment to champion data use, fostering data literacy across teams, ensuring data is accessible and integrated, and establishing trust through robust governance. Without these pillars, even the most data-rich organisations risk inefficiency and missed opportunities. A strong data-driven culture isn’t just about tools—it’s about embedding these principles into the fabric of your organisation.

 

Source: MIT Sloan link

 


Industry Insights

Case Study: Ocado’s Approach to Responsible AI Governance

Ocado Group has embedded AI across its operations, from optimising warehouse logistics to enhancing customer experiences. However, as AI adoption scales, so do the risks—unintended biases, unpredictable decision-making, and regulatory challenges. To navigate this, Ocado has placed responsible AI governance at the heart of its strategy, ensuring its models remain transparent, fair, and reliable.

A key component of its AI governance strategy is its Responsible AI Framework, built around five key principles: Fairness, Transparency, Governance, Robustness, and Impact. This structured approach ensures AI systems are rigorously tested to prevent bias, remain explainable, and function as intended across complex operations.

One tangible success of this framework is Ocado’s real-time AI-powered monitoring, which has led to £100,000 in annual cost savings by automatically detecting and resolving system anomalies. With AI observability tools tracking over 100 microservice applications within its Ocado Smart Platform (OSP), the company can proactively address inefficiencies, minimising downtime and enhancing system reliability.

AI governance ensures Ocado’s AI models remain resilient and accountable, reducing risks associated with unpredictable AI behaviour. By embedding responsible AI principles into its operations, Ocado continues to optimise efficiency, prevent costly errors, and align with evolving regulatory expectations around AI.

 

Sources: Ocado Group link, Ocado Group link (CDO interview)

 


Upcoming Events

1. In-Person Event: Microsoft AI Tour - 5 March 2025

The Microsoft AI Tour in London is an event for professionals looking to explore the transformative potential of artificial intelligence. Featuring expert-led sessions, interactive workshops, and live demonstrations, it offers a unique opportunity to dive into the latest AI innovations and their real-world applications. Whether you're looking to expand your knowledge, network with industry leaders, or discover how AI can drive impact, this event is an invaluable experience for anyone invested in the future of technology.

Register now: MS link

 

2. In-Person Event: IRM UK Data Governance Conference – 17–20 March 2025

The Data Governance, AI Governance & Master Data Management Conference Europe is scheduled for 17–20 March 2025 in London. This four-day event offers five focused tracks, covering topics such as data quality, MDM strategies, and AI ethics. The conference features practical case studies from leading organisations, providing attendees with actionable insights into effective data management practices.

Key sessions include “Navigating the Intersection of Data Governance and AI Governance” and “How Master Data Management can Enable AI Adoption”. Participants will also have opportunities to connect with over 250 data professionals during dedicated networking sessions.

Register now: IRM UK link

 

3. Webinar: Strategies and solutions for unlocking value from unstructured data - 27 March 2025

Discover how to harness the untapped potential of unstructured data in this insightful webinar. The session will explore practical strategies and innovative solutions to extract actionable insights from data sources like emails, documents, and multimedia. Attendees will gain valuable knowledge on overcoming challenges in data management, leveraging advanced technologies, and driving business value from previously underutilised information.

Register now: A-Team Insight link

 


Conclusion

As we wrap up this edition of The Trusted AI Bulletin, it’s clear that the journey toward ethical and effective AI governance is both challenging and essential. From the UK and US withholding support for a global AI ethics pact to Ocado’s pioneering approach to responsible AI, the stories this week highlight the diverse perspectives and strategies shaping the future of AI.

While progress is being made, the road ahead demands collaboration, innovation, and a shared commitment to ensuring AI benefits all of humanity. For organisations looking to act now, investing in education, cross-functional AI collaboration, and a clear governance roadmap will be key to staying competitive in an AI-regulated future.

Stay tuned for more updates, and let’s continue working together to build a future where AI is not only powerful but also fair, transparent, and accountable.

 


The Trusted AI Bulletin #2

Issue #2 – AI Investment, Ethics & Compliance Trends

Introduction

Welcome to this edition of The Trusted AI Bulletin, where we bring you the latest developments in enterprise AI risk management, adoption, and ethical AI practices.

This week, we examine how tech giants are investing billions into AI innovation, the growing global alignment on AI safety, and why passive data management is no longer viable in an AI-driven world.

With AI becoming an integral part of finance, healthcare, and other critical industries, strong governance frameworks are essential to ensure trust, transparency, and long-term success.

From real-world case studies like DBS Bank’s AI journey to upcoming industry events, this issue gives you insights to help you stay ahead in the evolving AI landscape.

 


Key Highlights of the Week

1. Stargate: America's $500 Billion AI Power Play

The United States unveiled the Stargate Project, a $500 billion initiative over the next four years to establish the world's most extensive AI infrastructure and secure global dominance in the field. Led by OpenAI, Oracle, and SoftBank, with backing from President Trump, the project plans to build 20 massive data centres across the U.S., starting with a 1 million-square-foot facility in Texas. Beyond advancing AI capabilities, Stargate is also a strategic move to attract global investment capital, potentially limiting China’s access to AI funding. With its $500 billion commitment far surpassing China’s $186 billion AI infrastructure spending to date, the U.S. is making a bold play to corner the market and maintain its technological edge.

Source: Forbes link

 

2. China’s AI Firms Align with Global Safety Commitments, Signalling Convergence in Governance

Chinese AI companies, including DeepSeek, are rapidly advancing in the global AI race, with their latest models rivalling top Western counterparts. In a notable shift, 17 Chinese firms have signed AI safety commitments similar to those adopted by Western companies, signalling a growing alignment on governance principles. This convergence highlights the potential for international collaboration on AI safety, despite ongoing geopolitical competition. As AI development accelerates, upcoming forums like the Paris AI Action Summit may play a crucial role in shaping global AI governance.

Source: Carnegie Endowment research link

 

3. Lloyds Banking Group Expands AI Leadership with New Head of Responsible AI

Lloyds Banking Group has appointed Magdalena Lis as its new Head of Responsible AI, reinforcing its commitment to ethical AI development. With over 15 years of experience, including advisory roles for the UK Government and leadership at Toyota Connected Europe, Lis will focus on ensuring AI safeguards while advancing innovation. This move follows the appointment of Dr. Rohit Dhawan as Director of AI and Advanced Analytics in 2024, as Lloyds continues to grow its AI Centre of Excellence, now comprising over 200 specialists. As AI reshapes banking, Lloyds aims to balance technological advancement with responsible implementation.

 

Source: FF News link

 


Featured Articles

1. Why Passive Data Management No Longer Works in the AI Era

The days of passive data management are over—AI-driven organisations need a proactive approach to governance. Chief Data Officers (CDOs) must ensure that data is high-quality, well-structured, and compliant to fully unlock AI’s potential. This means implementing automation, real-time monitoring, and stronger governance frameworks to mitigate risks while enhancing decision-making. Without these measures, businesses risk falling behind in an increasingly AI-powered world. The article explores how CDOs can take control of their data strategy to drive innovation and maintain regulatory compliance.


Source: Medium link

2. How Governance & Privacy Can Safeguard AI Development

As AI adoption accelerates, so do concerns over data exposure, compliance failures, and reputational damage. Informatica warns that without strong governance and privacy policies, organisations risk losing control over sensitive information. Proactive data management, human oversight, and clear accountability are crucial to ensuring AI is both powerful and responsible. Businesses must not only understand the data fuelling their AI models but also implement safeguards to prevent unintended consequences. In an AI-driven world, those who neglect governance may find themselves facing serious risks.

Source: A-Team Insight link

3. AI Literacy: The Key to Staying Ahead in an AI-Driven World

AI is transforming industries, but do your teams truly understand how to use it responsibly? Without proper AI literacy, businesses risk compliance failures, biased decision-making, and missed opportunities. A well-designed AI training programme helps employees navigate regulations, mitigate risks, and unlock AI’s full potential. From assessing knowledge gaps to tailoring content for different roles, the right approach ensures AI is used strategically and ethically. As AI continues to evolve, organisations that prioritise education will be better equipped to adapt and thrive.

Source: IAPP link

 


Industry Insights

Case Study: DBS Bank - AI Success Rooted in Robust Governance Framework

Harvard Business School’s recent case study on DBS Bank highlights the critical role of AI governance in executing a successful AI strategy. Headquartered in Singapore, DBS embarked on a multi-year digital transformation under CEO Piyush Gupta in 2014, incorporating AI to enhance business value and customer experience. As AI adoption scaled, DBS developed its P-U-R-E framework—emphasising Purposeful, Unsurprising, Respectful, and Explainable AI—to ensure ethical and responsible AI deployment. This governance-first approach has been instrumental in managing risks while maximising AI’s potential across banking operations.

In 2022, DBS began exploring Generative AI (Gen AI) use cases, adapting its governance frameworks to balance innovation with emerging risks. By leveraging its existing AI capabilities, the bank continues to integrate AI responsibly while maintaining regulatory compliance and trust.

Key Outcomes of AI Governance at DBS:
o Economic Impact: DBS anticipates its AI initiatives will generate over £595 million in economic benefits by 2025, following consecutive years of doubling impact.
o Enhanced Customer Experience: AI-driven hyper-personalised prompts assist customers in making better investment and financial planning decisions.
o Employee Development: AI supports employees with tailored career and upskilling roadmaps, fostering long-term career growth.

Source: DBS Bank news link

 


Upcoming Events

1. Webinar: Transforming Banking with GenAI – 13 February 2025

Join the Financial Times for a webinar exploring the transformative potential of Generative AI (GenAI) in the banking sector. Industry leaders will discuss the latest GenAI applications, including synthetic data and self-supervised learning, and provide strategies for navigating the rapidly evolving AI landscape. Key topics include revolutionising core banking operations, building robust data strategies, and reskilling workforces for future challenges.

Register now: FT link

2. Webinar: AI Maturity & Roadmap: Accelerate Your Journey to AI Excellence – 27 February 2025

Gartner is hosting a webinar focusing on assessing AI maturity and exploring the transformative potential of AI within organisations. The session will utilise Gartner's AI maturity assessment and roadmap tools to outline key practices across seven workstreams essential for achieving AI success at scale. Attendees will gain insights into managing and prioritising activities to harness AI's full potential.

Register now: Gartner link

3. Webinar: What Do CIOs Really Care About? – 13 March 2025

Join IDC for an insightful webinar exploring the evolving priorities of Chief Information Officers in the digital era. The session will delve into how CIOs are balancing innovation with pragmatism, transitioning from traditional IT management to strategic leadership roles that drive business transformation. Attendees will gain perspectives on aligning technology initiatives with organisational goals and the critical role of CIOs in today's rapidly changing technological landscape.

Register now: IDC link

 


Conclusion

Implementing Trusted AI isn’t just a regulatory requirement—it’s a business imperative. As organisations integrate AI into critical decision-making, ensuring trust, transparency, and compliance will define long-term success. By staying informed on evolving policies, adopting strong governance frameworks, and fostering ethical AI practices, businesses can harness AI’s full potential while managing risks.

We’d love to hear your thoughts! Join the conversation, share your perspectives, and stay engaged with us as we navigate the future of responsible AI together.

See you in the next issue!

Rajen Madan

Thushan Kumaraswamy


The Trusted AI Bulletin #1

Issue #1 – AI Advancements and Regulatory Shifts

Introduction

Welcome to the inaugural edition of The Trusted AI Bulletin! As artificial intelligence continues to reshape industries, the importance of robust risk management, deployment processes, transparency, and ethical oversight of AI cannot be overstated.

At Leading Point, our mission is to help those responsible for implementing AI in enterprises deliver trusted, rapid AI innovations while removing the blockers – be it uncertainty around AI value, lack of trust in AI outputs, or poor user adoption.

This newsletter is your bi-weekly guide to staying informed, inspired, and ahead of the curve as you navigate the challenges of AI deployment and realise the opportunity of AI in your enterprise.


Key Highlights of the Week

1. AI Innovations in Financial Services

The UK’s AI sector continues to grow, attracting £200 million in daily private investment since July 2024, with notable contributions like CoreWeave’s £1.75 billion data centre investment. These advancements underscore the transformative potential of AI in sectors such as financial services. From cutting-edge AI models to emerging data infrastructure, staying ahead of these innovations is essential for leaders navigating this rapidly evolving space.

Source: UK Government link

2. UK AI Action Plan

The UK government has officially approved a sweeping AI action plan aimed at establishing a robust economic and regulatory framework for artificial intelligence. The plan focuses on ensuring AI is developed safely and responsibly, with a strong emphasis on promoting innovation while addressing potential risks. Key priorities include creating clear guidelines for AI governance, fostering collaboration between government and industry, and ensuring the UK remains a global leader in AI development. This action plan marks a significant step towards creating a balanced approach to AI regulation.

Source: Artificial Intelligence News link

3. Tech Nation to launch London AI Hub

Brent Hoberman’s Founders Forum announced the London AI Hub in collaboration with European AI group Merantix, Onfido and Quench.ai founder Husayn Kassai, and flexible office provider Techspace. The initiative aims to bring together a fragmented sector. Hoberman said the hub would act as a “physical nucleus for meaningful collaboration across founders, investors, academics, policymakers and innovators”.

Source: UK Tech News link


Featured Articles

1. 10 AI Strategy Questions Every CIO Must Answer

Artificial intelligence is transforming industries, and CIOs play a key role in aligning AI initiatives with business objectives. The article outlines 10 critical questions that every CIO must answer to ensure successful AI strategy, from building governance frameworks to implementing ethical AI.

Source: CIO.com link

2. AI Regulations, Governance, and Ethics for 2025

The global landscape for AI regulation is evolving rapidly, with regions adopting diverse approaches to governance and ethics. In the UK, a traditionally light-touch, pro-innovation approach is now shifting toward proportionate legislation focused on managing risks from advanced AI models. With upcoming proposals and the UK AI Safety Institute’s pivotal role in global risk research, the country aims to balance innovation with safety.

Source: Dentons link


Industry Insights

Case Study: Mastercard

Mastercard’s commitment to ethical AI governance acts as a core part of its innovation strategy. Recognising the potential risks of AI, Mastercard developed a comprehensive framework to ensure its AI systems align with corporate values, societal expectations, and regulatory standards. This approach highlights the growing importance of AI governance in fostering trust, minimising risks, and enabling responsible innovation.

Key elements of Mastercard’s AI governance strategy include:

o Transparency and accountability: Regular audits and cross-functional oversight ensure AI systems operate fairly and responsibly.

o Ethical principles in practice: AI systems are designed to uphold fairness, privacy, and security, balancing innovation with societal and corporate responsibilities.

This case underscores how robust AI governance can help organisations navigate the complexities of AI deployment while maintaining trust and ethical integrity.

Source: IMD link


Upcoming Events

1. Webinar: A CISO Guide to AI and Privacy – 21 January 2025

Explore how to develop effective AI policies aligned with industry best practices and emerging regulations in this insightful webinar. Maryam Meseha and Janelle Hsia will discuss ethical AI use, stakeholder collaboration, and balancing business risks with opportunities. Learn how AI can enhance cybersecurity and drive innovation while maintaining compliance and trust.

Register now: Brighttalk link

2. The Data Advantage – Smarter Investments in Private Markets – 28 January 2025

This event, run by Leading Point, focuses on the transformative role of data and technology in private markets, bringing together investors, data professionals, and market leaders to explore smarter investment strategies. Key discussions will cover leveraging data-driven insights, integrating advanced analytics, and enhancing decision-making processes to maximise returns in private markets.

Register now: Eventbrite link

3. The Data Management Summit 2025 – 20 March 2025

The Data Management Summit London is a premier event bringing together data leaders, regulators, and technology innovators to discuss the latest trends and challenges in data management, particularly in financial services. Key topics include data governance, ESG data, cloud strategies, and leveraging AI and advanced analytics to drive innovation while maintaining regulatory compliance. It’s an excellent opportunity to network and learn from industry leaders.

Register now: A-Team Insight link


Conclusion

As AI continues to transform industries, the need for operating-level clarity and adoption of AI becomes ever more pressing. By staying informed about the latest advancements, regulatory changes, and best practices in AI implementation, enterprises can navigate this landscape effectively and responsibly. We encourage you to engage with this content, share your insights, and join the conversation in our upcoming events and discussions.

Stay informed, stay responsible!

Rajen Madan

Thushan Kumaraswamy


Accelerating AI Success

Accelerating AI Success: The Role of Data Enablement in Financial Services

Introduction

The webinar, held on 10 October 2024, focused on accelerating AI success and the foundational role of data enablement in financial services. Leading Point Founder & CEO, Rajen Madan, introduced the topic and the panel of four executives: Joanne Biggadike (Schroders), Nivedh Iyer (Danske Bank), Paul Barker (HSBC), and Meredith Gibson (Leading Point).

Rajen explained that data enablement involves "creating and harnessing data assets, making them super accessible and well managed, and embedding them into operational decision-making processes." He outlined the evolution of data management in the industry, describing three waves:

1️⃣ Focus on big warehouses and governance

2️⃣ Making data more pervasive and accessible

3️⃣ The opportunity now – emphasis on value extraction, embedding data insights in operational processes and decision-making, and transforming with AI

 

Data Governance and AI Governance

The panellists discussed the evolving role of data governance and its relationship to AI governance. Joanne Biggadike, Head of Data Governance at Schroders, noted the increasing importance of data governance: "Everybody's realising in order to move forward, especially with AI and generative AI, you really need your data to be reliable and you need to understand it."

She emphasised that while data governance and AI governance are separate, they are complementary. Biggadike stressed the importance of knowing data sources and having human oversight in AI processes: "We need a touch point. We need a human in the loop. We need to be able to review what we're coming out with as our outcomes, because we want to make sure that we're not coming out with the wrong output because the data's incorrect, or because the data's biased."

Paul Barker, Head of Data and Analytics Governance at HSBC, cautioned against creating new silos for AI governance: "We've been doing model risk management for 30 years. We've been doing third party management for 30 years. We've been doing data governance for a very long time. So I think... it's about trying not to create a new silo."

 

Data Quality and AI Adoption

Nivedh Iyer, Head of Data Management at Danske Bank, highlighted the importance of data quality in AI adoption: "AI in the space of data management, if I say core aspects of data management like governance, quality, lineage is still in the process of adoption... One of the main challenges for AI adoption is how comfortable we are... on the quality of the data we have because Gen AI or AI for that matter depends on good quality data."

Iyer also mentioned the emergence of innovative solutions in data quality management, particularly from fintech providers.

 

Cultural Shift and Technical Capabilities

Paul Barker emphasised the dual challenges of cultural shift and technical capabilities in data management: "There is a historic tendency to keep all the data secret... When you start with that as your DNA, it's then very difficult to move to a data democratisation culture where we're trying to surface data for the non-data professional."

Regarding technical capabilities, Barker noted the challenges faced by large, complex organisations compared to start-ups: "You can look at an organisation that's the scale and complexity of say HSBC... compared to a start-up organisation that literally starts its data architecture with a blank piece of paper and can build that Model Bank."

From a technical standpoint, large organisations face unique challenges in integrating various data sources across multiple markets and operating models, compared with smaller start-ups that can build their data architecture from scratch. There has been progress with technical solutions that can address some of these interoperability challenges.

 

Legal and Regulatory Aspects

Meredith Gibson, Data & Regulatory Lawyer with Leading Point, speaking from a legal perspective, highlighted the evolving regulatory landscape: "As the banks and other financial institutions... become more complex and more interested in data... so does the roadmap for how you control that change has morphed with deeper understanding by regulators and increased requirements."

She also raised concerns about data ownership in the context of AI and large language models: "Programmers have always done a copy and paste, which was fine until you end up with large language models where actually I'm not sure that people do know where their information and their data comes from."

The panel highlighted the tension between banks’ desire for autonomy in managing their data and regulators’ need for standardisation to monitor activities effectively. There are several standardisation initiatives, including ISO, the LEI, and the EU AI Act. Lineage is crucial for getting AI-ready. Questions of who owns the data, who controls it, how it is used, and what obligations attach to that usage become central.

 

Leading Point’s Data Enablement Framework

Data is readily accessible, well-managed, and used to drive decision-making and innovation.

Data Strategy & Data Architecture

By having a clear data strategy that is aligned with the business strategy, you can reach better decisions more quickly. Using insights from your data provides more confidence that the business actions you are taking are justified.

Having an agreed cross-business data architecture supports accelerated IT development and adoption of new products and solutions, by defining data standards, data quality, and data governance.

Data Catalogue & Data Virtualisation

Having a data catalogue is more than just implementing a tool like Collibra. It is important to define what that business data means at a logical level and how that is represented in the physical attributes.

A typical way to consolidate data is with a data warehouse, but that is a complex undertaking that requires migration from data sources into the warehouse with the associated additional storage costs. Data virtualisation simplifies data integration, standardisation, federation, and transformation without increasing data storage costs.
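
For illustration, the sketch below shows one way a logical business term could be recorded alongside the physical attributes that represent it. It is a minimal Python example; the class, field names, owner, and attribute paths are illustrative assumptions and not the schema or API of any particular catalogue tool.

```python
# Minimal sketch of a logical-to-physical catalogue mapping.
# All business terms, owners, and attribute paths are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class CatalogueEntry:
    business_term: str                  # logical name agreed with the business
    definition: str                     # what the term means, independent of systems
    owner: str                          # accountable data owner
    physical_attributes: list = field(default_factory=list)  # where the data physically lives

catalogue = [
    CatalogueEntry(
        business_term="Customer Risk Rating",
        definition="Internal rating assigned to a customer at onboarding and periodic review.",
        owner="Head of Client Risk",
        physical_attributes=["crm.customers.risk_rating", "dw.dim_customer.risk_band"],
    ),
]

for entry in catalogue:
    print(f"{entry.business_term} ({entry.owner}) -> {', '.join(entry.physical_attributes)}")
```

Even a lightweight record like this makes the logical-to-physical link explicit, which is the part that tooling alone does not solve.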

 

The Future of Data Enablement

The panellists discussed how data enablement needs to evolve to accommodate AI and other emerging technologies.

Joanne Biggadike suggested that while core principles of data governance remain useful, they need to adapt: "I think what they need to do is to make sure that they're not a blocker for AI, because AI is innovative and it actually means that sometimes you don't know everything that you might already need to know when you're doing day-to-day data governance."

Paul Barker noted the need for more dynamic governance processes: "We are now in the 21st century, but a lot of data governance is still based on a sort of 19th, early 20th century... form a committee, write a paper, have a six week period of consultation."

We need data governance by design. Financial institutions have been good at deploying the SDLC, with controlled, well-governed releases and checkpoints. We need to embed AI and data governance as part of the SDLC.
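
As one way of making governance by design tangible, the sketch below shows a simple pre-release governance gate that could run as an SDLC checkpoint. The required artefacts and field names are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch of a pre-release governance gate for an AI change (e.g. run in CI).
# The required artefacts below are illustrative assumptions, not a prescribed standard.
REQUIRED_ARTEFACTS = {"model_owner", "data_sources", "bias_review_date", "sign_off"}

def governance_gate(release_metadata: dict) -> list:
    """Return the governance artefacts missing from a release; an empty list means the gate passes."""
    return sorted(REQUIRED_ARTEFACTS - release_metadata.keys())

missing = governance_gate({"model_owner": "Credit Risk", "data_sources": ["dw.loans"]})
if missing:
    # In a real pipeline this would fail the build and route the change back for review.
    print(f"Release blocked - missing artefacts: {', '.join(missing)}")
```
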
Data lineage should not be a one-off solution; it should be right-sized to the requirement, i.e. coarse- or fine-grained. Chasing detailed lineage across the complexity of large organisations’ infrastructures will take years and deliver little ROI. Pragmatism is required.

A focus on data ethics, as AI and ML become more widely used, is as much a training and skills-development requirement as a technical one: understanding what terms and conditions underpin the service, client conduct, the usage of PII data, and the overall value of building customer trust.

On data ownership, rather than the theoretical question of “who is to blame” when there are data quality issues, firms should focus on creating transparency around accountability and establishing clear chains of communication. Ownership can naturally align to domain data sets; for instance, the CFO should own financial data. Central to ownership is establishing escalation points: “Who can I reach out to change something? Who is best placed to provide future integration?”

The climate impact of AI infrastructure is potentially significant, and firms need to factor it into their deployment plans. There will be innovation in data centres, and firms will gain clarity on the end state over time. Many organisations have gone through costly initiatives to move to the cloud, and due to AI and security concerns some are now bringing workloads back on-premises; this needs to be worked through.

We need to start thinking of AI as another tool that can accelerate and re-imagine processes, making them more effective and efficient. It is not an innovation by itself, and we should approach any AI adoption by asking what business problem we are looking to solve.

 

Challenges and Opportunities

The panellists identified several challenges and opportunities in the data and AI space:

1️⃣ Balancing innovation with governance and risk management

2️⃣ Ensuring data quality and reliability for AI applications

3️⃣ Adapting governance frameworks to be more agile and responsive

4️⃣ Addressing data ownership and privacy concerns in the age of AI

5️⃣ Bridging the gap between traditional data management practices and emerging technologies

 

Conclusion

The webinar highlighted the critical role of data enablement in accelerating AI success in financial services. The panellists stressed the need for robust data governance, high-quality data, and a cultural shift towards data democratisation. They also noted the importance of adapting existing governance frameworks to accommodate AI and other emerging technologies, rather than creating new silos.

As organisations continue to navigate the complex landscape of data and AI, they must balance innovation with risk management, ensure data quality and reliability, and address legal and ethical concerns. The future of data governance in financial services will likely involve more dynamic, agile processes that are embedded in business and operations and allow firms to keep pace with rapidly evolving technologies while maintaining the necessary controls and oversight. An overall pragmatic and principled approach is the best way forward for organisations.

 

Download the report

Leading Point - Webinar - Data Enablement for AI - Summary

 


AI Under Scrutiny

Why AI risk & governance should be a focus area for financial services firms

 

Introduction

As financial services firms increasingly integrate artificial intelligence (AI) into their operations, the imperative to focus on AI risk & governance becomes paramount. AI offers transformative potential, driving innovation, enhancing customer experiences, and streamlining operations. However, with this potential comes significant risks that can undermine the stability, integrity, and reputation of financial institutions. This article delves into the critical importance of AI risk & governance for financial services firms, providing a detailed exploration of the associated risks, regulatory landscape, and practical steps for effective implementation. Our goal is to persuade financial services firms to prioritise AI governance to safeguard their operations and ensure regulatory compliance.

 

The Growing Role of AI in Financial Services

AI adoption in the financial services industry is accelerating, driven by its ability to analyse vast amounts of data, automate complex processes, and provide actionable insights. Financial institutions leverage AI for various applications, including fraud detection, credit scoring, risk management, customer service, and algorithmic trading. According to a report by McKinsey & Company, AI could potentially generate up to $1 trillion of additional value annually for the global banking sector.

 

Applications of AI in Financial Services

1 Fraud Detection and Prevention: AI algorithms analyse transaction patterns to identify and prevent fraudulent activities, reducing losses and enhancing security.

2 Credit Scoring and Risk Assessment: AI models evaluate creditworthiness by analysing non-traditional data sources, improving accuracy and inclusivity in lending decisions.

3 Customer Service and Chatbots: AI-powered chatbots and virtual assistants provide 24/7 customer support, while machine learning algorithms offer personalised product recommendations.

4 Personalised Financial Planning: AI-driven platforms offer tailored financial advice and investment strategies based on individual customer profiles, goals, and preferences, enhancing client engagement and satisfaction.

 

Potential Benefits of AI

The benefits of AI in financial services are manifold, including increased efficiency, cost savings, enhanced decision-making, and improved customer satisfaction. AI-driven automation reduces manual workloads, enabling employees to focus on higher-value tasks. Additionally, AI's ability to uncover hidden patterns in data leads to more informed and timely decisions, driving competitive advantage.

 

The Importance of AI Governance

AI governance encompasses the frameworks, policies, and practices that ensure the ethical, transparent, and accountable use of AI technologies. It is crucial for managing AI risks and maintaining stakeholder trust. Without robust governance, financial services firms risk facing adverse outcomes such as biased decision-making, regulatory penalties, reputational damage, and operational disruptions.

 

Key Components of AI Governance

1 Ethical Guidelines: Establishing ethical principles to guide AI development and deployment, ensuring fairness, accountability, and transparency.

2 Risk Management: Implementing processes to identify, assess, and mitigate AI-related risks, including bias, security vulnerabilities, and operational failures.

3 Regulatory Compliance: Ensuring adherence to relevant laws and regulations governing AI usage, such as data protection and automated decision-making.

4 Transparency and Accountability: Promoting transparency in AI decision-making processes and holding individuals and teams accountable for AI outcomes.

 

Risks of Neglecting AI Governance

Neglecting AI governance can lead to several significant risks:

1 Embedded bias: AI algorithms can unintentionally perpetuate biases if trained on biased data or if developers inadvertently incorporate them. This can lead to unfair treatment of certain groups and potential violations of fair lending laws.

2 Explainability and complexity: AI models can be highly complex, making it challenging to understand how they arrive at decisions. This lack of explainability raises concerns about transparency, accountability, and regulatory compliance.

3 Cybersecurity: Increased reliance on AI systems raises cybersecurity concerns, as hackers may exploit vulnerabilities in AI algorithms or systems to gain unauthorised access to sensitive financial data.

4 Data privacy: AI systems rely on vast amounts of data, raising privacy concerns related to the collection, storage, and use of personal information.

5 Robustness: AI systems may not perform optimally in certain situations and are susceptible to errors. Adversarial attacks can compromise their reliability and trustworthiness.

6 Impact on financial stability: Widespread adoption of AI in the financial sector can have implications for financial stability, potentially amplifying market dynamics and leading to increased volatility or systemic risks.

7 Underlying data risks: AI models are only as good as the data that supports them. Incorrect or biased data can lead to inaccurate outputs and decisions.

8 Ethical considerations: The potential displacement of certain roles due to AI automation raises ethical concerns about societal implications and firms' responsibilities to their employees.

9 Regulatory compliance: As AI becomes more integral to financial services, there is an increasing need for transparency and regulatory explainability in AI decisions to maintain compliance with evolving standards.

10 Model risk: The complexity and evolving nature of AI technologies mean that their strengths and weaknesses are not yet fully understood, potentially leading to unforeseen pitfalls in the future.

 

To address these risks, financial institutions need to implement robust risk management frameworks, enhance data governance, develop AI-ready infrastructure, increase transparency, and stay updated on evolving regulations specific to AI in financial services.

The consequences of inadequate AI governance can be severe. Financial institutions that fail to implement proper risk management and governance frameworks may face significant financial penalties, reputational damage, and regulatory scrutiny. The EU AI Act, for instance, provides for fines of up to €35 million or 7% of global annual turnover for the most serious breaches. Beyond regulatory consequences, poor AI governance can lead to biased decision-making, privacy breaches, and erosion of customer trust, all of which can have long-lasting impacts on a firm's operations and market position.

 

Regulatory Requirements

The regulatory landscape for AI in financial services is evolving rapidly, with regulators worldwide introducing guidelines and standards to ensure the responsible use of AI. Compliance with these regulations is not only a legal obligation but also a critical component of building a sustainable and trustworthy AI strategy.

 

Key Regulatory Frameworks

1 General Data Protection Regulation (GDPR): The European Union's GDPR imposes strict requirements on data processing and the use of automated decision-making systems, ensuring transparency and accountability.

2 Financial Conduct Authority (FCA): The FCA in the UK has issued guidance on AI and machine learning, emphasising the need for transparency, accountability, and risk management in AI applications.

3 Federal Reserve: The Federal Reserve in the US has provided supervisory guidance on model risk management, highlighting the importance of robust governance and oversight for AI models.

4 Monetary Authority of Singapore (MAS): MAS has introduced guidelines for the ethical use of AI and data analytics in financial services, promoting fairness, ethics, accountability, and transparency (FEAT).

5 EU AI Act: This new act aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

 

Importance of Compliance

Compliance with regulatory requirements is essential for several reasons:

1 Legal Obligation: Financial services firms must adhere to laws and regulations governing AI usage to avoid legal penalties and fines.

2 Reputational Risk: Non-compliance can damage a firm's reputation, eroding trust with customers, investors, and regulators.

3 Operational Efficiency: Regulatory compliance ensures that AI systems are designed and operated according to best practices, enhancing efficiency and effectiveness.

4 Stakeholder Trust: Adhering to regulatory standards builds trust with stakeholders, demonstrating a commitment to responsible and ethical AI use.

 

Identifying AI Risks

AI technologies pose several specific risks to financial services firms that must be identified and mitigated through effective governance frameworks.

 

Bias and Discrimination

AI systems can reflect and reinforce biases present in training data, leading to discriminatory outcomes. For instance, biased credit scoring models may disadvantage certain demographic groups, resulting in unequal access to financial services. Addressing bias requires rigorous data governance practices, including diverse and representative training data, regular bias audits, and transparent decision-making processes.
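To make this tangible, the sketch below shows what a very basic bias audit could look like: comparing approval rates across demographic groups in a small, made-up credit decision dataset and flagging large gaps for investigation. The column names, data, and the 'four-fifths' threshold are illustrative assumptions, not a prescribed methodology.

```python
# Minimal bias-audit sketch: compare credit approval rates across groups.
# Column names ("group", "approved") and the 80% rule-of-thumb threshold
# are illustrative assumptions, not a prescribed methodology.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()   # "disparate impact" style ratio

print(rates)
print(f"Approval-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # common 'four-fifths' rule of thumb
    print("Potential adverse impact - investigate before deployment")
```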

 

Security Risks

AI systems are vulnerable to various security threats, including cyberattacks, data breaches, and adversarial manipulations. Cybercriminals can exploit vulnerabilities in AI models to manipulate outcomes or gain unauthorised access to sensitive financial data. Ensuring the security and integrity of AI systems involves implementing robust cybersecurity measures, regular security assessments, and incident response plans.

 

Operational Risks

AI-driven processes can fail or behave unpredictably under certain conditions, potentially disrupting critical financial services. For example, algorithmic trading systems can trigger market instability if not responsibly managed. Effective governance frameworks include comprehensive testing, continuous monitoring, and contingency planning to mitigate operational risks and ensure reliable AI performance.

 

Compliance Risks

Failure to adhere to regulatory requirements can result in significant fines, legal consequences, and reputational damage. AI systems must be designed and operated in compliance with relevant laws and regulations, such as data protection and automated decision-making guidelines. Regular compliance audits and updates to governance frameworks are essential to ensure ongoing regulatory adherence.

 

Benefits of Effective AI Governance

Implementing robust AI governance frameworks offers numerous benefits for financial services firms, enhancing risk management, trust, and operational efficiency.

 

Risk Mitigation

Effective AI governance helps identify, assess, and mitigate AI-related risks, reducing the likelihood of adverse outcomes. By implementing comprehensive risk management processes, firms can proactively address potential issues and ensure the safe and responsible use of AI technologies.

 

Enhanced Trust and Transparency

Transparent and accountable AI practices build trust with customers, regulators, and other stakeholders. Clear communication about AI decision-making processes, ethical guidelines, and risk management practices demonstrates a commitment to responsible AI use, fostering confidence and credibility.

 

Regulatory Compliance

Adhering to governance frameworks ensures compliance with current and future regulatory requirements, minimising legal and financial repercussions. Robust governance practices align AI development and deployment with regulatory standards, reducing the risk of non-compliance and associated penalties.

 

Operational Efficiency

Governance frameworks streamline the development and deployment of AI systems, promoting efficiency and consistency in AI-driven operations. Standardised processes, clear roles and responsibilities, and ongoing monitoring enhance the effectiveness and reliability of AI applications, driving operational excellence.

 

Case Studies

Several financial services firms have successfully implemented AI governance frameworks, demonstrating the tangible benefits of proactive risk management and responsible AI use.

 

JP Morgan Chase

JP Morgan Chase has established a comprehensive AI governance structure that includes an AI Ethics Board, regular audits, and robust risk assessment processes. The AI Ethics Board oversees the ethical implications of AI applications, ensuring alignment with the bank's values and regulatory requirements. Regular audits and risk assessments help identify and mitigate AI-related risks, enhancing the reliability and transparency of AI systems.

 

ING Group

ING Group has developed an AI governance framework that emphasises transparency, accountability, and ethical considerations. The framework includes guidelines for data usage, model validation, and ongoing monitoring, ensuring that AI applications align with the bank's values and regulatory requirements. By prioritising responsible AI use, ING has built trust with stakeholders and demonstrated a commitment to ethical and transparent AI practices.

 

HSBC

HSBC has implemented a robust AI governance framework that focuses on ethical AI development, risk management, and regulatory compliance. The bank's AI governance framework includes a dedicated AI Ethics Committee, comprehensive risk management processes, and regular compliance audits. These measures ensure that AI applications are developed and deployed responsibly, aligning with regulatory standards and ethical guidelines.

 

Practical Steps for Implementation

To develop and implement effective AI governance frameworks, financial services firms should consider the following actionable steps:

 

Establish a Governance Framework

Develop a comprehensive AI governance framework that includes policies, procedures, and roles and responsibilities for AI oversight. The framework should outline ethical guidelines, risk management processes, and compliance requirements, providing a clear roadmap for responsible AI use.

 

Create an AI Ethics Board

Form an AI Ethics Board or committee to oversee the ethical implications of AI applications and ensure alignment with organisational values and regulatory requirements. The board should include representatives from diverse departments, including legal, compliance, risk management, and technology.

 

Implement Specific AI Risk Management Processes

Conduct regular risk assessments to identify and mitigate AI-related risks. Implement robust monitoring and auditing processes to ensure ongoing compliance and performance. Risk management processes should include bias audits, security assessments, and contingency planning to address potential operational failures.

 

Ensure Data Quality and Integrity

Establish data governance practices to ensure the quality, accuracy, and integrity of data used in AI systems. Address potential biases in data collection and processing, and implement measures to maintain data security and privacy. Regular data audits and validation processes are essential to ensure reliable and unbiased AI outcomes.
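As a simple illustration of the kind of automated checks this implies, the sketch below runs duplicate, completeness and plausibility tests on a small, invented customer table. The fields and tolerances are assumptions for illustration; real checks would reflect a firm's own data standards.

```python
# Minimal data-quality sketch: null, duplicate and range checks on a
# hypothetical customer dataset. Fields and tolerances are illustrative only.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [101, 102, 102, 104],
    "age":         [34, None, 29, 212],          # one missing, one implausible
    "country":     ["GB", "GB", "FR", "DE"],
})

issues = []
if customers["customer_id"].duplicated().any():
    issues.append("duplicate customer_id values")
if customers["age"].isna().any():
    issues.append("missing age values")
if not customers["age"].dropna().between(18, 120).all():
    issues.append("age values outside plausible range")

print("Data-quality issues:", issues or "none found")
```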

 

Invest in Training and Awareness

Provide training and resources for employees to understand AI technologies, governance practices, and their roles in ensuring ethical and responsible AI use. Ongoing education and awareness programs help build a culture of responsible AI use, promoting adherence to governance frameworks and ethical guidelines.

 

Engage with Regulators and Industry Bodies

Stay informed about regulatory developments and industry best practices. Engage with regulators and industry bodies to contribute to the development of AI governance standards and ensure alignment with evolving regulatory requirements. Active participation in industry forums and collaborations helps stay ahead of regulatory changes and promotes responsible AI use.

 

Conclusion

As financial services firms continue to embrace AI, the importance of robust AI risk & governance frameworks cannot be overstated. By proactively addressing the risks associated with AI and implementing effective governance practices, firms can unlock the full potential of AI technologies while safeguarding their operations, maintaining regulatory compliance, and building trust with stakeholders. Prioritising AI risk & governance is not just a regulatory requirement but a strategic imperative for the sustainable and ethical use of AI in financial services.

 

References and Further Reading

  1. McKinsey & Company. (2020). The AI Bank of the Future: Can Banks Meet the AI Challenge?
  2. European Union. (2018). General Data Protection Regulation (GDPR).
  3. Financial Conduct Authority (FCA). (2019). Guidance on the Use of AI and Machine Learning in Financial Services.
  4. Federal Reserve. (2020). Supervisory Guidance on Model Risk Management.
  5. JP Morgan Chase. (2021). AI Ethics and Governance Framework.
  6. ING Group. (2021). Responsible AI: Our Approach to AI Governance.
  7. Monetary Authority of Singapore (MAS). (2019). FEAT Principles for the Use of AI and Data Analytics in Financial Services.

 

For further reading on AI governance and risk management in financial services, consider the following resources:

- "Artificial Intelligence: A Guide for Financial Services Firms" by Deloitte

- "Managing AI Risk in Financial Services" by PwC

- "AI Ethics and Governance: A Global Perspective" by the World Economic Forum


Helping ARX, a cyber-security FinTech with interim COO services to scale-up their delivery

We were engaged by ARX to provide an interim COO as they were gaining traction in the market and needed to scale their operations to support their new clients. We used our financial services delivery experience to take on UX/UI design, redesign their operational processes for scale, and act as a delivery partner for their supply chain resilience solution.

Due to our efforts, ARX were able to meet their client demand with an improved product and more efficient sales & go-to-market approach.


Increasing data product offerings by profiling 80k terms at a global data provider

“Through domain & technical expertise Leading Point have been instrumental in the success of this project to analyse and remediate 80k industry terms. LP have developed a sustainable process, backed up by technical tools, allowing the client to continue making progress well into the future. I would have no hesitation recommending LP as a delivery partner to any firm who needs help untangling their data.”

PM at Global Market Data Provider


AI in Insurance - Article 1 - A Catalyst for Innovation

How insurance companies can use the latest AI developments to innovate their operations

The emergence of AI

The insurance industry is undergoing a profound transformation driven by the relentless advance of artificial intelligence (AI) and other disruptive technologies. A significant change in business thinking is gaining pace: Applied AI is being recognised for its potential to drive top-line growth, not merely as a cost-cutting tool.

The adoption of AI is poised to reshape the insurance industry, enhancing operational efficiencies, improving decision-making, anticipating challenges, delivering innovative solutions, and transforming customer experiences.

The move from data-driven to AI-driven operations is bringing about a paradigm shift in how insurance companies collect, analyse, and utilise data to make informed decisions and enhance customer experiences. By analysing vast amounts of data, including historical claims records, market forces, and external factors (such as global events like hurricanes and regional conflicts), AI can assess risk with speed and accuracy, giving insurance companies a clear view of their position in the market.

Data vs AI approaches

This data-driven approach has enabled insurance companies to improve their underwriting accuracy, optimise pricing models, and tailor products to specific customer needs. However, the limitations of traditional data analytics methods have become increasingly apparent in recent years.

These methods often struggle to capture the complex relationships and hidden patterns within large datasets. They are also slow to adapt to rapidly-changing market conditions and emerging risks. As a result, insurance companies are increasingly turning to AI to unlock the full potential of their data and drive innovation across the industry.

AI algorithms, powered by machine learning and deep learning techniques, can process vast amounts of data far more efficiently and accurately than traditional methods. They can connect disparate datasets, identify subtle patterns, correlations & anomalies that would be difficult or impossible to detect with human analysis.

By leveraging AI, insurance companies can gain deeper insights into customer behaviour, risk factors, and market trends. This enables them to make more informed decisions about underwriting, pricing, product development, and customer service and gain a competitive edge in the ever-evolving marketplace.

Top 5 opportunities

1. Enhanced Risk Assessment

AI algorithms can analyse a broader range of data sources, including social media posts and weather patterns, to provide more accurate risk assessments. This can lead to better pricing and reduced losses.

2. Personalised Customer Experiences

AI can create personalised customer experiences, from tailored product recommendations to proactive risk mitigation guidance. This can boost customer satisfaction and loyalty.

3. Automated Claims Processing

AI can automate routine claims processing tasks, for example, by reviewing claims documentation and providing investigation recommendations, thus reducing manual efforts and improving efficiency. This can lead to faster claims settlements and lower operating costs.

4. Fraud Detection and Prevention

AI algorithms can identify anomalies and patterns in claims data to detect and prevent fraudulent activities. This can protect insurance companies from financial losses and reputational damage.

5. Predictive Analytics

AI can be used to anticipate future events, such as customer churn or potential fraud. This enables insurance companies to take proactive measures to prevent negative outcomes.
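As a rough illustration of the predictive-analytics idea, the sketch below fits a logistic regression to a synthetic churn dataset. The features, data and coefficients are invented purely to show the shape of such a model, not how any insurer actually does it.

```python
# Minimal churn-prediction sketch on synthetic data.
# Features and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
tenure_years   = rng.uniform(0, 10, n)
claims_count   = rng.poisson(1.0, n)
premium_change = rng.normal(0.05, 0.1, n)     # relative premium increase

# Synthetic "churn" signal: short tenure and big premium rises drive churn.
logits = -1.0 - 0.3 * tenure_years + 0.4 * claims_count + 5.0 * premium_change
churned = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

X = np.column_stack([tenure_years, claims_count, premium_change])
model = LogisticRegression().fit(X, churned)

# Score a hypothetical policyholder: 1 year tenure, 2 claims, 20% premium rise.
print("Churn probability:", model.predict_proba([[1.0, 2, 0.20]])[0, 1])
```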

 

Adopting AI in Insurance

The adoption of AI in the insurance industry is not without its challenges. Insurance companies must address concerns about data quality, data privacy, transparency, and potential biases in AI algorithms. They must also ensure that AI is integrated seamlessly into their existing systems and processes.

Despite these challenges, AI presents immense opportunities. Insurance companies that embrace AI-driven operations will be well-positioned to gain a competitive edge, enhance customer experiences, and navigate the ever-changing risk landscape.

The shift from data-driven to AI-driven operations is a transformative force in the insurance industry. AI is not just a tool for analysing data; it is a catalyst for innovation and a driver of change. Insurance companies that harness the power of AI will be at the forefront of this transformation, shaping the future of insurance and delivering exceptional value to their customers.

 

Download the PDF article here.


The Challenges of Data Management

John Macpherson on The Challenges of Data Management

 

 

I often get asked, what are the biggest trends impacting the Financial Services industry? Through my position as Chair of the Investment Association Engine, I have unprecedented access to the key decision-makers in the industry, as well as constant connectivity with the ever-expanding Fintech ecosystem, which has helped me stay at the cutting edge of the latest trends.

So, when I get asked, ‘what is the biggest trend that financial services will face?’, my answer for the past few years has remained the same: data.

During my time as CEO of BMLL, big data rose to prominence and developed into a multi-billion-dollar problem across financial services. I remember well an early morning interview I gave to CNBC around 5 years ago, where the facts were starkly presented. Back then, data was doubling every three years globally, but at an even faster pace in financial markets.

Firms are struggling under the weight of this data

The use of data is fundamental to a company's operations, but firms are finding it difficult to get a handle on this problem. The pace of this increase has left many smaller and mid-sized IM/AM firms in a quandary. Their ability to access, manage and use multiple data sources alongside their own data, market data, and any alternative data sources is sub-optimal at best. Most core data systems are not architected to handle the volume and pace of change required, with manual reviews and inputs creating unnecessary bottlenecks. These issues, among a host of others, mean risk management systems cannot cope. Modernised core data systems are imperative where real-time insights are currently lost to fragmented and slow-moving information.

Around half of all financial services data goes unmanaged and ungoverned; this “dark data” poses a security and regulatory risk, as well as a huge opportunity.

While data analytics, big data, AI, and data science have historically been the key sub-trends, these have been joined by data fabric (as an industry standard), analytical ops, data democratisation, and a shift from big data to smaller and wider data.

Operating models hold the key to data management

modellr™ dashboard

Governance is paramount to using this data in an effective, timely, accurate and meaningful way. Operating models are the true gauge as to whether you are succeeding.

Much can be achieved with the relatively modest budget and resources firms have, provided they invest in the best operating models around their data.

Leading Point is a firm I have been getting to know over several years now. Their data intelligence platform, modellr™, is the first truly digital operating model. modellr™ harvests a company’s existing data to create a living operating model, digitising the change process and enabling quicker, smarter decision-making. By digitising the process, they’re removing the historically slow and laborious consultative approach. Access to all the information in real time is proving transformative for smaller and medium-sized businesses.

True transparency around your data, understanding it and its consumption, and then enabling data products to support internal and external use cases, is very much available.

Different firms are at very different places on their maturity curve. Longer-term investment in data architecture, be it data fabric or data mesh, will provide the technical backbone to harness ML/AI and analytics.

Taking control of your data

Recently I was talking to a large investment bank that Leading Point had been brought in to help. The bank was looking to transform its client data management and associated regulatory processes, such as KYC and anti-financial crime.

They were investing heavily in sourcing, validating, normalising, remediating, and distributing over 2,000 data attributes. This was costing the bank a huge amount of time, money, and resources. But despite this investment, their environment and change processes had become too complicated to have any chance of success. The process results were haphazard, with poor controls and little understanding of why results were missing.

Leading Point was brought in to help and decided on a data minimisation approach. They profiled and analysed the data, despite it being spread across regions and divisions. Quickly, 2,000 data attributes were narrowed down to fewer than 200 critical ones for the consuming functions. This allowed the bank's regulatory and reporting processes to come to life, with clear data quality measurement and ownership processes. It also allowed the bank to significantly reduce the complexity of its data and improve its usability, meaning that multiple business owners were able to produce rapid and tangible results.

I was speaking to Rajen Madan, the CEO of Leading Point, and we agreed that in a world of ever-growing data, data minimisation is often key to maximising success with data!

Elsewhere, Leading Point has seen benefits unlocked from unifying data models and working on ontologies, standards, and taxonomies. Their platform, modellr™, is enabling many firms to link their data, define common aggregations, and support knowledge graph initiatives, allowing firms to deliver more timely, accurate and complete reporting, as well as insights into their business processes.

The need for agile, scalable, secure, and resilient tech infrastructure is more imperative than ever. Firms’ own legacy ways of handling this data are singularly the biggest barrier to their growth and technological innovation.

If you see a digital operating model as anything other than a must-have, then you are missing out. It’s time for a serious re-think.

Words by John Macpherson — Board advisor at Leading Point, Chair of the Investment Association Engine

 

John was recently interviewed about his role at Leading Point, and the key trends he sees affecting the financial services industry. Watch his interview here


Artificial Intelligence: The Solution to the ESG Data Gap?

The Power of ESG Data

It was Warren Buffett who said, “It takes twenty years to build a reputation and five minutes to ruin it”, and that is the reality all companies face on a daily basis. An effective set of ESG (Environment, Social & Governance) policies has never been more crucial. However, it is being hindered by difficulties in effectively collecting and communicating ESG data points, as well as a lack of standardisation in how such data is reported. As a result, the ESG space is being revolutionised by Artificial Intelligence, which can find, analyse and summarise this information.
 

There is increasing public and regulatory pressure on firms to ensure their policies are sustainable, and on investors to take such policies into account when making investment decisions. The issue for investors is how to know which firms are good ESG performers and which are not. The majority of information dominating research and ESG indices comes from company-reported data. However, with little regulation surrounding this, responsible investors are plagued by unhelpful data gaps and “Greenwashing”. This is when a firm uses favourable data points and convoluted wording to appear more sustainable than it is in reality. It may even leave out data points that reflect badly on it. For example, firms such as Shell have been accused of using the word ‘sustainable’ in their mission statement whilst providing little evidence to support their claims (1).

Could AI be the complete solution?

AI could be the key to helping investors analyse the mountain of ESG data that is yet to be explored, both structured and unstructured. Historically, AI has proven successful at extracting relevant information from data sources such as news articles, but it also offers new and exciting opportunities. Consider the transcripts of board meetings from a Korean firm: AI could be used to translate and examine such data using techniques such as sentiment analysis. Does the CEO seem passionate about ESG issues within the company? Are they worried about a human-rights investigation being undertaken against them? This is a task that would be labour-intensive, to say the least, for analysts to complete manually.
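As a toy illustration of that sentiment-analysis step, the sketch below scores a couple of invented (already translated) meeting excerpts with a generic pre-trained sentiment model. A production setup would also need translation, ESG-specific models and careful validation; the excerpts here are made up.

```python
# Minimal sentiment-analysis sketch over (already translated) meeting excerpts.
# The sentences are invented; a real pipeline would add translation and
# ESG-specific models. Requires the Hugging Face "transformers" package.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

excerpts = [
    "We are fully committed to reducing emissions across our supply chain.",
    "The ongoing human-rights investigation is a serious concern for the board.",
]

for text, result in zip(excerpts, classifier(excerpts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```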

 

In addition, AI offers an opportunity for investors to not only act responsibly, but also align their ESG goals to a profitable agenda. For example, algorithms are being developed that can connect specific ESG indicators to financial performance and can therefore be used by firms to identify the risk and reward of certain investments. 

 

Whilst AI offers numerous opportunities with regard to ESG investing, it is not without fault. Firstly, AI requires enormous amounts of computing power and, hence, energy. For example, in 2018 OpenAI found that the computational power used to train the largest AI models had been doubling every 3.4 months since 2012 (2). With the majority of the world’s energy coming from non-renewable sources, it is not difficult to spot the contradiction in motives here. We must also consider whether AI is being used to its full potential; when simply used to scan company-published data, AI could actually reinforce issues such as “Greenwashing”. Further, the issue of fake news and unreliable sources still plagues such methods, and a lot of work has to go into ensuring these sources do not feature in the algorithms used.

 

When speaking with Dr Thomas Kuh, Head of Index at leading ESG data and AI firm Truvalue Labs™, he outlined the difficulties surrounding AI but noted that since it enables human beings to make more intelligent decisions, it is surely worth having in the investment process. In fact, he described the application of AI to ESG research as ‘inevitable’ as long as it is used effectively to overcome the shortcomings of current research methods. For instance, he emphasised that AI offers real time information that traditional sources simply cannot compete with. 

 A Future for AI?

According to a 2018 survey from Greenwich Associates (3), only 17% of investment professionals currently use AI as part of their process; however, 40% of respondents stated they would increase budgets for AI in the future. As an area where investors are seemingly unsatisfied with traditional data sources, ESG is likely to see more than its fair share of this increase. Firms such as BNP Paribas (4) and Ecofi Investissements (5) are already exploring AI opportunities and many firms are following suit. We at Leading Point see AI inevitably becoming integral to an effective responsible investment process and intend to be at the heart of this revolution. 

 

AI is by no means the judge, jury and executioner when it comes to ESG investing and depends on those behind it, constantly working to improve the algorithms, as well as the analysts using it to make more informed decisions. AI does, however, have the potential to revolutionise what a responsible investment means and help reallocate resources towards firms that will create a better future.

[1] The problem with corporate greenwashing

[2] AI and Compute

[3] Could AI Displace Investment Bank Research?

[4] How AI could shape the future of investment banking

[5] How AI Can Help Find ESG Opportunities

 

"It takes twenty years to build a reputation and five minutes to ruin it"

 

AI offers an opportunity for investors to not only act responsibly, but also align their ESG goals to a profitable agenda

Environmental Social Governance (ESG) & Sustainable Investment

Client propositions and products in data driven transformation in ESG and Sustainable Investing. Previous roles include J.P. Morgan, Morgan Stanley, and EY.

 

Upcoming blogs:

This is the second in a series of blogs that will explore the ESG world: its growth, its potential opportunities and the constraints that are holding it back. We will explore the increasing importance of ESG and how it affects business leaders, investors, asset managers, regulatory actors and more.

 

 

Riding the ESG Regulatory Wave: In the third part of our Environmental, Social and Governance (ESG) blog series, Alejandra explores the implementation challenges of ESG regulations hitting EU Asset Managers and Financial Institutions.

Is it time for VCs to take ESG seriously? In the fourth part of our Environmental, Social and Governance (ESG) blog series, Ben explores the current research on why startups should start implementing and communicating ESG policies at the core of their business.

Now more than ever, businesses are understanding the importance of having well-governed and socially-responsible practices in place. A clear understanding of your ESG metrics is pivotal in order to communicate your ESG strengths to investors, clients and potential employees.

By using our cloud-based data visualisation platform to bring together relevant metrics, we help organisations gain a standardised view and improve their ESG reporting and portfolio performance. Our live ESG dashboard can be used to scenario-plan, map out ESG strategy and tell the ESG story to stakeholders.

AI helps with the process of ingesting, analysing and distributing data, as well as offering predictive abilities and assessing trends in the ESG space. Leading Point is helping our AI startup partners adapt their technology to pursue this new opportunity, implementing these solutions at investment firms and supporting them with the use of the technology and data management.

We offer a specialised and personalised service based on firms’ ESG priorities.  We harness the power of technology and AI to bridge the ESG data gap, avoiding ‘greenwashing’ data trends and providing a complete solution for organisations.

Leading Point's AI-implemented solutions decrease the time and effort needed to monitor current/past scandals of potential investments. Clients can see the benefits of increased output, improved KPIs and production of enhanced data outputs.

We implement ESG regulations and provide operational support to improve ESG metrics for banks and other financial institutions: ensuring compliance by benchmarking and disclosing ESG information, collecting in-depth data to satisfy corporate reporting requirements, supporting appropriate investment and risk management decisions, and making disclosures to clients and fund investors.

 


LIBOR Transition - Preparation in the Face of Adversity

LIBOR TRANSITION IN CONTEXT

What is it? The FCA will no longer require banks to submit quotes to the London Interbank Offered Rate (LIBOR) – LIBOR will be unsupported by regulators come 2021, and therefore, unreliable

Requirement: Firms need to transition away from LIBOR to alternative overnight risk-free rates (RFRs)

Challenge: Updating the risk and valuation processes to reflect RFR benchmarks and then reviewing the millions of legacy contracts to remove references to IBOR

Implementation timeline: Expected in Q4 2021

 

HOW LIBOR MAY IMPACT YOUR BUSINESS

Front office: New issuance and trading products to support capital, funding, liquidity, pricing, hedging

Finance & Treasury: Balance sheet valuation and accounting, asset, liability and liquidity management

Risk Management: New margin, exposure, counterparty risk models, VaR, time series, stress and sensitivities

Client outreach: Identification of in-scope contracts, client outreach and repapering to renegotiate current exposure

Change management: F2B data and platform changes to support all of the above

 

WHAT YOU NEED TO DO

Plug in to the relevant RFR and trade association working groups, understand internal advocacy positions vs. discussion outcomes

Assess, quantify and report LIBOR exposure across jurisdictions, businesses and products

Remediate data quality and align product taxonomies to ensure integrity of LIBOR exposure reporting

Evaluate potential changes to risk and valuation models; differences in accounting treatment under an alternative RFR regime

Define list of in-scope contracts and their repapering approach; prepare for client outreach
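To make the exposure-assessment step concrete, the sketch below flags contracts that reference an IBOR in a hypothetical contract register and totals notional by rate and maturity. The field names and figures are assumptions for illustration only.

```python
# Minimal LIBOR-exposure sketch: flag contracts referencing an IBOR and
# aggregate notional by rate and maturity. Field names are illustrative.
import pandas as pd

contracts = pd.DataFrame({
    "contract_id":    ["C1", "C2", "C3", "C4"],
    "reference_rate": ["USD LIBOR 3M", "SONIA", "GBP LIBOR 6M", "EURIBOR 3M"],
    "notional_usd":   [50_000_000, 20_000_000, 10_000_000, 35_000_000],
    "maturity":       pd.to_datetime(["2023-06-30", "2025-01-15",
                                      "2022-03-31", "2024-09-30"]),
})

ibor_mask = contracts["reference_rate"].str.contains("LIBOR|EURIBOR", regex=True)
exposure = (contracts[ibor_mask]
            .assign(matures_after_2021=lambda d: d["maturity"] > "2021-12-31")
            .groupby(["reference_rate", "matures_after_2021"])["notional_usd"]
            .sum())

print(exposure)
```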

“[Firms should be] moving to contracts which do not rely on LIBOR and will not switch reference rates at an unpredictable time”

Andrew Bailey, CEO,
Financial Conduct Authority (FCA)

“Identification of areas of no-regret spending is critical in this initial phase of delivery so as to give a head start to implementation”

Rajen Madan, CEO,
Leading Point FM

 

BENCHMARK TRANSITION KEY FACTS
  • Market Exposure - Total IBOR market exposure >$370TN; 80% represented by USD LIBOR & EURIBOR
  • Tenor - The 3-month tenor by volume is the most widely referenced rate in all currencies (followed by the 6-month tenor)
  • Derivatives - OTC and exchange traded derivatives represent > $300TN (80%) of products referencing IBORs
  • Syndicated Loans - 97% of syndicated loans in the US market, with outstanding volume of approximately $3.4TN, reference USD LIBOR. 90% of syndicated loans in the euro market, with outstanding volume of approximately $535BN, reference EURIBOR
  • Floating Rate Notes (FRNs) - 84% of FRNs in the US market, with outstanding volume of approximately $1.5TN, reference USD LIBOR. 70% of FRNs in the euro market, with outstanding volume of approximately $2.6TN, reference EURIBOR
  • Business Loans - 30%-50% of business loans in the US market, with outstanding volume of approximately $2.9TN, reference USD LIBOR. 60% of business loans in the euro market, with outstanding volume of approximately $5.8TN, reference EURIBOR

*(“IBOR Global Benchmark Survey 2018 Transition Roadmap”, ISDA, AFME, ICMA, SIFMA, SIFMA AM, February 2018)

 


Data Innovation, Uncovered

 

Leading Point Financial Markets recently partnered with selected tech companies to present innovative solutions to a panel of SMEs and an audience of senior FS executives and practitioners, covering five use-cases in which Leading Point is helping financial institutions. The panel discussed the solutions’ feasibility within these use-cases and their potential for firms, followed by a lively debate between panellists and attendees.

EXECUTIVE SUMMARY

“There is an opportunity to connect multiple innovation solutions to solve different, but related, business problems”

  • 80% of data is relatively untapped in organisations. The more familiar the datasets, the better data can be used
  • On average, an estimated £84 million (expected to be a gross underestimate) is wasted each year on the increasing risk and delivery burden arising from policies and regulations
  • Staying innovative, while staying true to privacy data is a fine line. Solutions exist in the marketplace to help
  • Is there effective alignment between business and IT? Panellists insisted there is a significant gap, but business architecture can be a successful bridge between the business and IT by driving the right kinds of change
  • There is a huge opportunity to blend these solutions to provide even more business benefits

CLIENT DATA LIFECYCLE (TAMR)

  • Tamr uses machine learning to combine, consolidate and classify disparate data sources with potential to improve customer segmentation analytics
  • To achieve the objective of a 360-degree view of the customer requires merging external datasets with internal ones in an appropriate and efficient manner, for example integrating ‘Politically Exposed Persons’ lists or sanctions ‘blacklists’ (a minimal matching sketch follows this list)
  • Knowing what ‘good’ looks like is a key challenge. This requires defining your comfort level, in terms of precision and probability based approaches, versus the amount of resource required to achieve those levels
  • Another challenge is convincing Compliance that machines are more accurate than individuals
  • To convince the regulators, it is important to demonstrate that you are taking a ‘joined up’ approach across customers, transactions, etc. and the rationale behind that approach
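As flagged above, here is a stripped-down illustration of screening customer names against an external watchlist using token-sorted string similarity from Python's standard library. This is not Tamr's machine-learning approach; the names, normalisation rules and threshold are invented.

```python
# Minimal name-screening sketch: match customer names against a watchlist
# using token-sorted string similarity. This is NOT Tamr's method; names,
# threshold and normalisation rules are invented for illustration.
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    tokens = name.lower().replace(",", " ").replace(".", " ").split()
    return " ".join(sorted(tokens))

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

customers = ["Jonathan A. Smythe", "Maria Lopez", "ACME Trading Ltd"]
watchlist = ["Smythe, Jonathan", "Acme Trading Limited"]

THRESHOLD = 0.75  # illustrative cut-off for manual review
for customer in customers:
    for listed in watchlist:
        score = similarity(customer, listed)
        if score >= THRESHOLD:
            print(f"Review: '{customer}' ~ '{listed}' (score {score:.2f})")
```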

LEGAL DOCS TO DATA (iManage)

  • iManage locates, categorises & creates value from all your contractual content
  • Firms hold a vast amount of legal information in unstructured formats - Classifying 30,000,000 litigation documents manually would take 27 years
  • However, analysing this unstructured data and converting it to structured digital data allows firms to conduct analysis and repapering exercises with much more efficiency
  • It is possible to a) codify regulations & obligations b) compare them as they change and c) link them to company policies & contracts – this enables complete traceability
  • For example, you can use AI to identify parties, dates, clauses & conclusions held within ISDA contract forms, reports, loan application contracts, accounts and opinion pieces
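By way of illustration only (not iManage's actual technology), the sketch below pulls a date and the contracting parties out of a snippet of contract text with regular expressions, hinting at how unstructured legal documents can be turned into structured data. The text and patterns are invented.

```python
# Minimal contract-extraction sketch: pull dates and party names out of a
# snippet of contract text with regular expressions. This is NOT how iManage
# works; the text and patterns are illustrative only.
import re

text = ("This ISDA Master Agreement is dated 12 March 2019 and is entered "
        "into between ALPHA BANK PLC and BETA CAPITAL LLP.")

dates   = re.findall(r"\b\d{1,2} (?:January|February|March|April|May|June|July|"
                     r"August|September|October|November|December) \d{4}\b", text)
parties = re.findall(r"between (.+?) and (.+?)\.", text)

print("Dates found:  ", dates)
print("Parties found:", parties[0] if parties else None)
```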

DATA GOVERNANCE (Io-Tahoe)

  • Io-Tahoe LLC is a provider of ‘smart’ data discovery solutions that go beyond traditional metadata, leveraging machine learning and AI to uncover implied, critical and often unknown relationships within the data itself
  • Io-Tahoe interrogates any structured/semi-structured data (both schema and underlying data) and identifies and classifies related data elements to determine their business criticality
  • Pockets of previously hidden sensitive data can be uncovered, enabling better compliance with data protection regulations such as GDPR (a simple illustrative sketch follows this list)
  • Any and all data analysis is performed on copies of the data, held wherever the information security teams of the client firms deem it safe
  • Once data elements are understood, they can be defined & managed and used to drive data governance management processes
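The sketch below gives a rough feel for the discovery idea referenced above: scanning a small in-memory table for values that look like emails or UK National Insurance numbers. The patterns, data and majority rule are invented and are not how Io-Tahoe's product works.

```python
# Minimal sensitive-data discovery sketch: scan columns for values that look
# like emails or UK National Insurance numbers. Patterns and data are
# illustrative only; this is not how Io-Tahoe's product works.
import re

PATTERNS = {
    "email":   re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "uk_nino": re.compile(r"^[A-Z]{2}\d{6}[A-D]$"),
}

table = {
    "ref":     ["A1", "A2", "A3"],
    "contact": ["jane.doe@example.com", "j.smith@example.org", "n/a"],
    "code":    ["QQ123456C", "ZZ987654A", "QQ111111B"],
}

for column, values in table.items():
    for label, pattern in PATTERNS.items():
        hits = sum(bool(pattern.match(str(v))) for v in values)
        if hits / len(values) >= 0.5:          # simple majority rule
            print(f"Column '{column}' looks like {label} ({hits}/{len(values)} match)")
```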

FINANCIAL CRIME (Ayasdi)

  • Ayasdi augments the AML process with intelligent segmentation, typologies and alert triage. Their topological data analysis capabilities provide a formalised and repeatable way of applying hundreds of combinations of different machine learning algorithms to a data set to find out the relationships within that data
  • For example, Ayasdi was used to build reason-based elements into predictive models to track, analyse and predict complaint patterns over the next day, month and year
  • As a result, the transaction and customer data provided by a call centre was used effectively to reduce future complaints and generate business value
  • Using Ayasdi, a major FS firm was able to achieve more than a 25% reduction in false positives and achieved savings of tens of millions of dollars - but there is still a lot more that can be done

DATA MONETISATION (Privitar)

  • Privitar’s software solution allows the safe use of sensitive information enabling organisations to extract maximum data utility and economic benefit
  • The sharp increase in data volume and usage in FS today has brought two competing dynamics: Data protection regulation aimed at protecting people from the misuse of their data and the absorption of data into tools/technologies such as machine learning
  • However, as more data is made available, the harder it is to protect the privacy of the individual through data linkage
  • Privitar’s tools are capable of removing a large amount of risk from this tricky area, allowing people to exchange data much more freely through anonymisation (a rough illustrative sketch follows this list)
  • Privitar allows for open data for innovation and collaboration, whilst also acting in the best interest of customers’ privacy
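As trailed above, the sketch below is a rough illustration of privacy-preserving data sharing: pseudonymising identifiers with a salted hash and generalising dates of birth and postcodes before data is shared. This is not Privitar's product; the salt handling and field choices are simplified assumptions, and a real deployment would add key management and k-anonymity style checks.

```python
# Minimal anonymisation sketch: salted-hash pseudonyms plus generalised
# attributes. This is NOT Privitar's product; salt handling and fields are
# simplified for illustration (a real deployment needs key management,
# k-anonymity checks, and policy controls).
import hashlib

SALT = "replace-with-a-secret-salt"   # illustrative; manage secrets properly

def pseudonymise(value: str) -> str:
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def generalise_birth_year(year: int, band: int = 10) -> str:
    start = (year // band) * band
    return f"{start}-{start + band - 1}"

records = [
    {"customer_id": "CUST-0001", "birth_year": 1987, "postcode": "EC2A 4PS"},
    {"customer_id": "CUST-0002", "birth_year": 1993, "postcode": "M1 3LD"},
]

for r in records:
    shared = {
        "pseudo_id":     pseudonymise(r["customer_id"]),
        "birth_band":    generalise_birth_year(r["birth_year"]),
        "postcode_area": r["postcode"].split()[0],   # coarsen location
    }
    print(shared)
```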

SURVEY RESULTS

  • Encouragingly, over 97% of participants who responded confirmed the five use cases presented were relevant to their respective organisations
  • Nearly 50% of all participants who responded stated they would consider using the tech solutions presented
  • 70% of responders believe their firms would be likely to adopt one of the solutions
  • Only 10% of participants who responded believed the solutions were not relevant to their respective firms
  • Approximately 30% of responders thought they would face difficulties in taking on a new solution