The Trusted AI Bulletin #7

Issue #7 – Scaling AI Responsibly: From Compliance to Competitive Edge

Introduction

In this edition of The Trusted AI Bulletin, we examine the shifting centre of gravity for AI within financial services firms. The newly released 2025 AI Index Report highlights not only the global acceleration in AI development but also the widening gaps in regulatory preparedness and organisational readiness.

Our Co-Founder Thushan Kumaraswamy opens with a perspective on the need for business-led AI ownership — a view increasingly echoed across the industry. As firms move from experimentation to enterprise adoption, clarity around governance, accountability, and value realisation is no longer optional. This issue explores what that shift looks like in practice.


Executive Perspective: Where Should AI Responsibility Live?

Our Co-Founder Thushan Kumaraswamy comments:

"The 2025 AI Index Report raise some interesting challenges regarding Responsible AI (RAI) in business. The use of AI in financial services firms requires cooperation between multiple departments, but ownership of AI remains fragmented. Currently, it sits with information security or data teams, but it really needs to be owned by the business.

It is the business that is paying for the AI systems to be developed and adopted. It is the business that owns the data used by the AI systems. It is the business that (hopefully!) sees the value realised.

I am starting to see more “Head of Responsible AI” noise in financial services firms now with Lloyds Banking Group hiring in Jan 2025, but still not that many and it remains unclear if these kinds of roles are data/tech-related or part of the business.

I get that AI, both from a technical perspective and an operational one, is new for many business leaders, and they struggle to keep up with the daily barrage of innovations. This is where a “Head of AI” should sit: to advise the business on what is possible with AI, to work with data, technology, and infosec teams to ensure that AI systems are used safely, and to ensure that the ROI of AI is at least what is expected.

Specialists can advise on a temporary basis, but in the long-term this must be an in-house role and team, supported by the board, and given the necessary authority to stop AI developments at any stage if they pose uncontrolled risks to the firm or will not deliver the required return."


100 AI Use Cases in Financial Services - #1 Chatbot

In last week’s edition, we introduced our series on the 100 AI use cases reshaping financial services — focusing on how firms can move from experimentation to scalable, high-impact adoption.

We kick off the series with one of the most widely adopted and visible AI use cases: chatbots. In financial services, chatbots are transforming how firms interact with customers — delivering faster support, tailored advice, and improved satisfaction. But the benefits come with real risks around data privacy, regulatory compliance, and fairness.
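
To make the data-privacy point concrete, here is a minimal, hypothetical sketch (not drawn from any specific vendor or firm) of one common safeguard: masking obvious personal identifiers in a customer message before it is passed to a chatbot model. The patterns and labels are assumptions for illustration only.

    import re

    # Hypothetical illustration: masking obvious personal data before a customer
    # message is sent to an external chatbot model. Patterns and labels are
    # assumptions for this sketch, not a production-grade redaction policy.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "UK_SORT_CODE": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
        "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(message: str) -> str:
        """Replace likely personal identifiers with placeholder tags."""
        for label, pattern in PATTERNS.items():
            message = pattern.sub(f"[{label}]", message)
        return message

    print(redact("My card 4111 1111 1111 1111 was charged twice, email me at jo@example.com"))
    # -> "My card [CARD_NUMBER] was charged twice, email me at [EMAIL]"

In practice, firms typically layer controls like this with access restrictions, audit logging, and human escalation paths.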


AI Highlights of the Week

1. New AI Index Report Charts Rising Global Stakes—and Regulatory Gaps

The 2025 AI Index Report has just been released by Stanford University, offering a comprehensive snapshot of global trends in artificial intelligence. This year’s report underscores the intensifying race between nations, with the United States still leading in the development of top AI models but China rapidly catching up, especially in research output and patent filings.

The report draws attention to the soaring cost of training cutting-edge models—OpenAI’s GPT-4 is estimated to have cost $78 million—raising questions about who can afford to innovate at this scale. Notably, AI regulation is on the rise: U.S. AI-related laws have grown from just one in 2016 to 25 in 2023, reflecting the increasing pressure on governments to keep pace with technological advancement.

As AI systems become more powerful and embedded in daily life, the findings stress the urgent need for thoughtful, coordinated governance that can balance innovation with accountability. With AI's trajectory showing no signs of slowing, the report serves as a timely reminder that regulatory frameworks must evolve just as swiftly.

Source: 2025 AI Index Report link

2. Brussels Bets Big on AI to Regain Tech Edge and Counter U.S. Tariffs

As the European Union grapples with the ripple effects of American tariffs, Brussels is preparing a major policy shift aimed at transforming Europe into an “AI Continent.” A draft strategy, to be unveiled this week, reveals plans to streamline regulations, reduce compliance burdens, and create a more innovation-friendly environment for AI development.

This charm offensive is a direct response to mounting criticism from Big Tech and global AI leaders, who argue that the EU’s rigid regulatory framework, including the AI Act, is stifling competitiveness. Central to the strategy are massive investments in computing infrastructure — including five AI “gigafactories” — and ambitious targets to boost AI skills among 100 million Europeans by 2030.

The push also seeks to reduce dependence on U.S.-based cloud providers by tripling Europe’s data centre capacity. With only 13 percent of European firms currently adopting AI, the plan signals a timely recalibration of Europe's approach to AI governance — one that recognises the urgent need to lead, not lag, in the global AI race.

Source: Politico link

3. Standard Chartered Embraces Generative AI to Revolutionise Global Operations

Standard Chartered is set to deploy its Generative AI tool, SC GPT, across 41 markets, aiming to enhance operational efficiency and client engagement among its 70,000 employees. This strategic move is expected to boost productivity, personalise sales and marketing efforts, automate software engineering tasks, and refine risk management processes.

A more tailored version is in development to leverage the bank's proprietary data for bespoke problem-solving, while local teams are encouraged to adapt SC GPT to address specific market needs, including digital marketing and customer services. This initiative underscores Standard Chartered's commitment to responsibly harnessing AI, reflecting a broader trend in the financial sector towards integrating advanced technologies.

As AI governance and regulations evolve, such proactive adoption highlights the importance of balancing innovation with ethical considerations in the banking industry.

Source: Finextra link

4. Navigating the AI Revolution: Ensuring Responsible Innovation in UK Financial Services

The integration of AI into financial services is revolutionising the sector, enhancing operations from algorithmic trading to personalised customer interactions. However, this rapid adoption introduces significant regulatory challenges, particularly concerning financial stability and consumer protection.

The UK's Financial Conduct Authority (FCA) has yet to implement comprehensive AI regulations, leading to ambiguity in compliance and oversight. Unregulated AI-driven activities, such as algorithmic trading, could exacerbate market volatility, while biased AI models in credit scoring may disadvantage vulnerable consumers.

To address these issues, financial institutions should proactively enhance AI governance frameworks, prioritising transparency, bias mitigation, and robust cybersecurity measures. Engaging with policymakers to establish clear, forward-thinking regulations is crucial to balance innovation with economic stability.

As AI continues to redefine financial services, the UK's ability to implement effective governance will determine its leadership in this evolving landscape.

Source: HM Strategy link


Industry Insights

Case Study: Allianz – Scaling Responsible AI Across Global Insurance Operations

Allianz, one of the world’s largest insurers, is taking a leading role in translating Responsible AI principles into real-world practice across its global operations. With nearly 160,000 employees and a presence in more than 70 countries, the company has moved beyond AI experimentation to embed ethical safeguards into scalable AI deployment.

In 2024, Allianz joined the European Commission’s AI Pact, aligning its roadmap with the EU AI Act and signalling its intent to not just comply, but lead on AI governance.

At the core of Allianz’s approach is a practical, organisation-wide AI Risk Management Framework developed in-house. This framework governs all AI and machine learning initiatives, from document processing to customer service automation, with defined roles for model owners, risk teams, and compliance functions.

Key initiatives include:

  • An AI Impact Assessment Tool used early in development to flag risks such as discriminatory outcomes, low explainability, or overreliance on sensitive data.
  • The Enterprise Knowledge Assistant (EKA), a GenAI-powered tool now used by thousands of service agents to cut resolution times and improve consistency across 10+ countries.
  • A strict model registration process and “human-in-the-loop” policy to ensure that critical decisions — like claims rejection or fraud detection — are always overseen by a human (see the illustrative sketch after this list).
  • Mandatory training for AI Product Owners, with oversight from a central AI governance board embedded in Group Compliance.
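
As a minimal illustration of the human-in-the-loop pattern referenced above, the sketch below routes low-confidence or high-impact recommendations to a person. The field names and threshold are assumptions for this example, not details of Allianz's framework.

    # Hypothetical sketch of a human-in-the-loop gate for a simple claims workflow;
    # names and thresholds are illustrative, not Allianz's implementation.
    from dataclasses import dataclass

    @dataclass
    class ClaimDecision:
        claim_id: str
        model_recommendation: str   # e.g. "approve" or "reject"
        confidence: float           # model's self-reported confidence, 0..1

    def route(decision: ClaimDecision) -> str:
        """Escalate critical or low-confidence outcomes to a human reviewer."""
        if decision.model_recommendation == "reject" or decision.confidence < 0.8:
            return "human_review"   # a person makes or confirms the final call
        return "auto_process"       # routine approvals can proceed automatically

    print(route(ClaimDecision("C-123", "reject", 0.95)))  # -> human_review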

These measures are not theoretical. They have enabled Allianz to scale nearly 400 GenAI use cases while maintaining regulatory confidence, internal accountability, and public trust. For Allianz, AI governance is more than risk mitigation — it’s what allows innovation to scale responsibly, without compromising on customer fairness or institutional integrity.

Sources: Allianz link 1, Allianz link 2, WE Forum PDF link


Upcoming Events

1. Gartner Expert Q&A: Practical Guidance on Adapting to the EU AI Act – 14 April 2025

This webinar offers valuable insights for businesses navigating the new EU AI regulations. Industry experts will provide actionable advice on how to ensure compliance and unlock opportunities within the evolving AI landscape. It's a must-attend for anyone keen to stay ahead of regulatory changes and ensure their AI strategies are future-proof.

Register now: Gartner link

2. In-Person Event: AI Breakfast & Roundtable – From AI Proof of Concept to Scalable Enterprise Adoption – 23 April 2025

Leading Point is hosting an exclusive AI Breakfast & Roundtable, bringing together AI leaders from top financial institutions, including banks, insurance firms, and buy-side institutions. This intimate, high-level discussion will explore the challenges and opportunities in scaling AI beyond proof of concept to enterprise-wide adoption.

Key discussion points include overcoming implementation barriers, aligning AI initiatives with business objectives, and best practices for AI success in banking, insurance, and investment management. This event offers a unique opportunity to connect with industry peers and gain strategic insights on embedding AI as a core driver of business value.

Want to be a part of the conversation?

If you are an executive with AI responsibilities in business, risk & compliance, or data, contact Rajen Madan or Thushan Kumaraswamy to get a seat at the table.

3. In-Person Event: The AI in Business Conference - 15 May 2025

This in-person event offers a unique opportunity to hear from industry leaders across various sectors, providing real-world insights into AI implementation and strategy. Attendees will benefit from a rich agenda of expert sessions and have the chance to network with like-minded professionals, building lasting connections while tackling common challenges in AI.

Plus, the event is co-located with the Digital Transformation Conference, allowing platinum ticket holders to access a broader range of content, deepening their understanding of AI’s role in digital business transformation.

Register now: AI Business Conference link


Conclusion

The themes emerging across this issue point to a maturing AI agenda in financial services: from clearer governance models and responsible scaling, to regulatory recalibration and infrastructure investment. What’s clear is that AI can no longer be treated as a peripheral capability — it must be embedded within core business strategy, with the right controls in place from the outset.

As organisations seek to balance innovation with oversight, the ability to operationalise Responsible AI at scale will define not only compliance readiness but also competitive advantage.

In our next issue, we’ll continue the ‘100 AI Use Cases’ series with a focus on AI in Investment Research — examining how firms are using AI to enhance insight generation, improve analyst productivity, and navigate the risks of model-driven decision-making.


The Trusted AI Bulletin #6

Issue #6 – The AI Policy Pulse: Balancing Risk, Trust & Progress

Introduction

Welcome to this edition of The Trusted AI Bulletin, where we break down the latest shifts in AI policy, regulation, and the evolving landscape of AI adoption.

This week, we explore the debate over AI risk assessments in the EU, as MEPs push back against a proposal that could exempt major tech firms from stricter oversight. We also examine the UK’s latest strategy for regulating AI in financial services and how businesses are navigating the complexities of AI adoption—balancing innovation with compliance. Meanwhile, in government, outdated infrastructure threatens to stall progress, underscoring the need for practical transformation strategies.

With AI becoming ever more embedded in critical systems, the focus is shifting to how organisations can create real value with AI while ensuring responsible governance. From regulatory battles to real-world implementation challenges, these stories highlight the urgent need for a balanced approach—one that drives adoption, fuels transformation, and keeps accountability at its core.


100 AI Use Cases in Financial Services

AI adoption in financial services is no longer a question of if, but where and how. As firms move beyond experimentation, the focus is shifting toward practical, high-impact use cases that can drive real operational and strategic value. From front-office customer engagement to back-office automation, the opportunities to embed AI across the business are expanding rapidly.

But with so many possibilities, the challenge lies in identifying where AI can deliver meaningful outcomes — and doing so in a way that’s scalable, compliant, and aligned with the firm’s broader objectives. That’s where a clear view of proven, emerging use cases becomes essential.

Over the coming weeks, we’ll be exploring the 100 AI use cases we have identified as shaping the future of financial services. For each, we’ll look at the models involved, the data required, key vendors operating in the space, risk considerations, and examples of where adoption is already underway. The goal is to help senior leaders cut through the noise and focus on the AI opportunities that matter — now and next.


Key Highlights of the Week

1. MEPs Criticise EU's Shift Towards Voluntary AI Risk Assessments

A coalition of Members of the European Parliament (MEPs) has expressed significant concern over the European Commission's proposal to make certain AI risk assessment provisions voluntary, particularly those affecting general-purpose AI systems like ChatGPT and Copilot.

This move could exempt major tech companies from mandatory evaluations of their systems for issues such as discrimination or election interference. The MEPs argue that such a change undermines the AI Act's foundational goals of safeguarding fundamental rights and democracy.

This development highlights the ongoing tension between regulatory bodies and technology firms, especially from the United States, regarding the balance between innovation and ethical oversight in AI governance. The outcome of this debate will be pivotal in shaping the future landscape of AI regulation within the European Union.​

Source: Dutch News link

2. UK Financial Regulator Launches Strategy to Balance Risk and Foster Economic Growth

The FCA has launched a new five-year strategy focused on boosting trust, supporting innovation, and improving outcomes for consumers across UK financial services.

By committing to becoming a more data-led, tech-savvy regulator, the FCA aims to strike a better balance between risk and growth—an approach that holds significant implications for the governance of emerging technologies like AI.

Its emphasis on smarter regulation, financial crime prevention, and inclusive consumer support signals a shift toward more agile, forward-looking oversight. For those navigating evolving AI regulations, this strategy reinforces the FCA’s intent to create a regulatory environment that fosters responsible innovation.

Source: FCA link

3. Public Accounts Committee Warns of AI Rollout Challenges Amid Legacy Infrastructure

The UK government's ambitious plans to integrate AI across public services are at risk due to outdated IT infrastructure, poor data quality, and a shortage of skilled personnel.

A report by the Public Accounts Committee (PAC) highlights that over 20 legacy IT systems remain unfunded for necessary upgrades, with nearly a third of central government systems deemed obsolete as of 2024.

Despite intentions to drive economic growth through AI adoption, these foundational weaknesses pose significant challenges. The PAC also raises concerns about persistent digital skills shortages and uncompetitive civil service pay rates, which hinder the recruitment and retention of necessary talent.

Addressing these issues is crucial to ensure that AI initiatives are effectively implemented, fostering public trust and delivering the anticipated benefits of technological advancement.

Source: The Guardian link


Featured Articles

1. Why the UK’s Light-Touch AI Approach Might Not Be Enough

AI regulation in the UK is developing at a cautious pace, with the government opting for a principles-based, sector-led approach rather than comprehensive legislation. While this flexible model aims to foster innovation and reduce regulatory burdens, it risks creating a fragmented landscape where inconsistent standards could undermine public trust and accountability.

The article highlights that regulators often lack the technical expertise and resources to effectively oversee AI, raising concerns about how well current frameworks can keep pace with rapid technological advancements.

Meanwhile, businesses are calling for greater clarity and coherence, especially those operating across borders and facing stricter regimes like the EU AI Act. The UK’s strategy, though well-intentioned, may fall short in addressing the systemic risks posed by AI if coordination and enforcement mechanisms remain weak. For those focused on AI governance, the message is clear: without sharper oversight and alignment, the UK could lag in both trust and competitiveness.

Source: ICAEW link

2. Bridging the AI Knowledge Gap: A Foundation for Responsible Innovation

In an era where artificial intelligence is reshaping everything from financial services to public policy, understanding how AI works is becoming essential—not just for technologists, but for everyone.

As AI systems increasingly influence the decisions we see, the products we use, and even the jobs we do, being AI-literate is no longer a nice-to-have, but a societal imperative. The CFTE AI Literacy White Paper explores why foundational knowledge of AI is critical for individuals, businesses, and governments alike, arguing that AI should be treated as a core component of digital literacy.

What’s particularly compelling is the focus on inclusion—ensuring that access to AI knowledge isn't limited to a technical elite but extended across sectors and demographics. Without widespread AI literacy, regulatory and governance efforts risk being outpaced by innovation.

This makes the paper especially relevant to those shaping or responding to emerging AI regulations and frameworks. It’s both a call to action and a roadmap for building a more informed, resilient society in the age of intelligent systems.

Source: CFTE link

3. AI in 2025: From Reasoning Machines to Multimodal Intelligence

The year ahead promises significant advances in artificial intelligence, particularly in areas like reasoning, frontier models, and multimodal capabilities. Large language models are evolving to exhibit more sophisticated forms of human-like reasoning, enhancing their utility across sectors from healthcare to finance.

At the same time, so-called frontier models—exceptionally large and powerful systems—are setting new benchmarks in tasks like image generation and complex decision-making. Multimodal AI, which integrates text, image, and audio inputs, is maturing rapidly and could redefine how machines interpret and respond to the world.

These developments underscore the urgency for updated governance frameworks that can keep pace with AI’s expanding scope and impact. As capabilities grow, so too does the need for greater regulatory clarity and ethical oversight.

Source: Morgan Stanley link


Industry Insights

Case Study: Building a Trustworthy Data Foundation for Responsible AI

Capital One, a major retail bank and credit card provider, has positioned itself at the forefront of responsible AI by investing in a robust, AI-ready data ecosystem. Operating in a highly regulated industry where trust and accuracy are vital, the company recognised early on that scalable, ethical AI requires more than just advanced algorithms—it demands a disciplined approach to data governance and transparency.

In recent years, Capital One has overhauled its data infrastructure to align with its long-term AI vision, focusing on quality, accessibility, and accountability across the entire data lifecycle.

To support this transformation, Capital One implemented a suite of Responsible AI practices, including standardised metadata tagging, active data lineage tracking, and embedded governance controls across cloud-native platforms. These efforts are supported by cross-functional teams that bring together AI researchers, compliance professionals, and data engineers to operationalise fairness, explainability, and bias mitigation.
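
As a hedged illustration of what standardised metadata tagging can look like in practice, the sketch below defines a simple dataset record carrying ownership, sensitivity, lineage, and a governance sign-off flag. The schema and field names are hypothetical, not Capital One's actual platform.

    # Illustrative sketch only: one way standardised metadata tagging and lineage
    # tracking can be represented; field names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class DatasetMetadata:
        name: str
        owner: str                       # accountable business owner
        sensitivity: str                 # e.g. "public", "internal", "restricted"
        lineage: list = field(default_factory=list)   # upstream source datasets
        approved_for_ml: bool = False    # governance sign-off before model training

    credit_features = DatasetMetadata(
        name="credit_risk_features_v3",
        owner="retail-credit-analytics",
        sensitivity="restricted",
        lineage=["bureau_feed_raw", "core_banking_accounts"],
        approved_for_ml=True,
    )
    print(credit_features.sensitivity)  # -> restricted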

The results are tangible: Capital One has accelerated the deployment of customer-facing AI solutions—such as fraud detection and credit risk models—while ensuring they meet internal and regulatory standards. By prioritising responsible data management as the foundation for AI, the company is not only enhancing trust with regulators and customers but also driving innovation with confidence.

Key Takeaways:
1. Data governance first: Ethical AI starts with well-governed, high-quality data.
2. Cross-functional collaboration: Aligning compliance, engineering and AI teams is key to operationalising responsibility.
3. Built-in controls, not bolt-ons: Embedding governance into AI systems from the outset enhances both trust and speed to market.

Sources: Forbes link, Capital One link


Upcoming Events

1. Webinar: D&A Leaders: Preparing Your Data for AI Integration – 2 April 2025, 3:00 am BST

Gartner's upcoming webinar, "D&A Leaders, Ready Your Data for AI," focuses on equipping data and analytics professionals with strategies to prepare organisational data for effective artificial intelligence integration. The session will cover best practices for data quality management, governance frameworks, and aligning data strategies with AI objectives. Attendees will gain actionable insights to ensure their data assets are primed for AI-driven initiatives, enhancing decision-making and business outcomes.

Register now: Gartner link

2. In-Person Event: AI Breakfast & Roundtable – From AI Proof of Concept to Scalable Enterprise Adoption – 23 April 2025

Leading Point is hosting an exclusive AI Breakfast & Roundtable, bringing together AI leaders from top financial institutions, including banks, insurance firms, and buy-side institutions. This intimate, high-level discussion will explore the challenges and opportunities in scaling AI beyond proof of concept to enterprise-wide adoption.

Key discussion points include overcoming implementation barriers, aligning AI initiatives with business objectives, and best practices for AI success in banking, insurance, and investment management. This event offers a unique opportunity to connect with industry peers and gain strategic insights on embedding AI as a core driver of business value.

Want to be a part of the conversation?

If you are an executive with AI responsibilities in business, risk & compliance, or data, contact Rajen Madan or Thushan Kumaraswamy to get a seat at the table.

3. In-Person Event: Smarter Clouds, Stronger Businesses – AI-Driven Innovation, Efficiency and Resilience – 24 April 2025

The upcoming IDC, HPE and TCS roundtable, Smarter Clouds, Stronger Businesses, explores how enterprises can drive innovation and resilience by aligning AI strategies with modern cloud architectures. With a focus on agility, scalability, and performance, the agenda covers best practices for adopting AI-enabled infrastructure, building secure and future-ready cloud environments, and reducing complexity across hybrid ecosystems. Industry experts will share insights on turning cloud investments into long-term business value, enabling organisations to stay competitive in an increasingly data-driven world.

Register now: IDC link

4. In-Person Event: Risk & Compliance in Financial Services - 29 April 2025

The 9th Annual Risk & Compliance in Financial Services Conference brings together senior professionals from firms such as Aviva, Invesco, Lloyds Banking Group and NatWest. This year’s agenda focuses on emerging challenges and innovations in the sector—from the use of AI to enhance compliance and operational resilience, to navigating evolving regulations like DORA and Consumer Duty. With expert-led panels on financial crime, cyber risk, and ESG reporting, attendees can expect forward-looking insights tailored for today’s risk environment.

Register now: Financial IT link


Conclusion

The developments this week reinforce a crucial reality: effective AI governance is about more than setting rules—it’s about ensuring accountability, trust, and long-term resilience. Whether it’s the EU’s regulatory crossroads, the FCA’s push for a more agile oversight model, or the challenges of AI integration in the public sector, one thing is clear: the success of AI depends on the frameworks we build today.

As AI capabilities expand, so too must our approach to regulation, ethics, and education. The road ahead demands collaboration between policymakers, businesses, and technologists to create systems that not only foster innovation but also safeguard society.

We’ll be back in two weeks with more insights. Until then, let’s continue driving the conversation on responsible AI.


The Trusted AI Bulletin #5

Issue #5 – AI at the Edge: Governing the Future of Innovation

Introduction

Welcome to this week’s edition of The Trusted AI Bulletin, where we unpack the latest developments in AI governance, regulation, and adoption.

This week, we’re diving into OpenAI’s push for federal AI regulations, the launch of new compliance standards for bank-fintech partnerships, and the stark warnings from Turing Award winners about the unsafe deployment of AI models. As governments and businesses grapple with the dual demands of innovation and accountability, the conversation around responsible AI is reaching a critical inflection point.

The rapid evolution of AI is forcing a reckoning: how do we balance the need for speed and competitiveness with the imperative to build safeguards that protect society? From the financial sector’s embrace of AI-driven tools to IKEA’s leadership in ethical AI governance, the stories this week highlight both the opportunities and the risks of this transformative technology.

 


Key Highlights of the Week

1. OpenAI Appeals to White House for Unified AI Regulations Amidst State-Level Disparities

OpenAI has formally requested the White House to intervene against a patchwork of state-level AI regulations, advocating for a cohesive federal framework to govern artificial intelligence. This move underscores the company's concern that disparate state laws could stifle innovation and create compliance challenges.

Notably, OpenAI's Chief Global Affairs Officer, Chris Lehane, has highlighted the urgency of accelerating AI policy under the current administration, shifting from merely advocating regulation to actively promoting policies that bolster AI growth and maintain the U.S.'s competitive edge over nations like China.

In a 15-page set of policy suggestions released on Thursday, OpenAI argued that the hundreds of AI-related bills currently pending across the U.S. risk undercutting America's technological progress at a time when it faces renewed competition from China. The company proposed that the administration consider providing relief for AI companies from state rules in exchange for voluntary access to their models.

Source: Bloomberg link

 

2. CFES Unveils New Standards to Strengthen Compliance in Bank-Fintech Partnerships

The Coalition for Financial Ecosystem Standards (CFES) announced in a press release this week the launch of a new industry framework aimed at strengthening compliance and risk management in bank-fintech partnerships. The STARC framework, comprising 54 standards, sets a benchmark for key areas such as anti-money laundering (AML), third-party risk, and operational compliance, providing financial institutions with a structured rating system to assess their maturity.

To support adoption, CFES has also established an Advisory Board featuring key industry players like the Independent Community Bankers of America (ICBA) and the American Fintech Council (AFC). With regulators increasing scrutiny on fintech partnerships, these standards could play an important role in helping firms navigate compliance without stifling innovation.

As artificial intelligence continues to reshape financial services, frameworks like STARC offer a structured approach to ensuring transparency and accountability.

Source: Press release PDF link, CFES Standards link

 

3. Turing Award Winners Warn Over Unsafe Deployment of AI Models

AI pioneers Andrew Barto and Richard Sutton have strongly criticised the industry’s reckless approach to deploying AI models, warning that companies are prioritising speed and profit over responsible engineering. They argue that releasing untested AI systems to millions without safeguards is a dangerous practice, likening it to building a bridge and testing it by sending people across. Their work, which underpins major advancements in machine learning, has fuelled the rise of AI powerhouses such as OpenAI and Google DeepMind.

The pair, who have been awarded the 2024 Turing Award for their foundational contributions to artificial intelligence, have expressed serious concerns that AI development is being driven by business incentives rather than a focus on safety. Barto criticised the industry’s approach, stating, “Releasing software to millions of people without safeguards is not good engineering practice,” while Sutton dismissed the idea of artificial general intelligence (AGI) as mere “hype.” As AI investment reaches unprecedented levels, their warnings highlight the growing tensions between rapid technological advancement and the urgent need for stronger governance and regulatory oversight.

Source: FT link

 


Featured Articles

1. How Artificial Intelligence is Shaping the Future of Banking and Finance

The financial services sector is experiencing a significant transformation through the integration of artificial intelligence (AI), with investments projected to escalate from $35 billion in 2023 to $97 billion by 2027, reflecting a compound annual growth rate of 29%.
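
A quick arithmetic check (ours, not the article's) confirms the quoted figures are internally consistent: growing from $35 billion in 2023 to $97 billion by 2027 implies roughly a 29% compound annual growth rate.

    # Sanity check of the quoted growth figures; the inputs come from the article,
    # the calculation is ours.
    start, end, years = 35, 97, 4          # $bn in 2023, $bn in 2027, elapsed years
    cagr = (end / start) ** (1 / years) - 1
    print(f"{cagr:.0%}")  # -> 29%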

Leading institutions such as Morgan Stanley and JPMorgan Chase have introduced AI-driven tools to enhance operational efficiency and client services. In the immediate term, AI co-pilots are streamlining workflows, while always-on AI web crawlers and automation of unstructured data tasks are providing real-time insights and reducing manual processes.

Looking ahead, AI's potential to revolutionise risk management and customer experience through the use of synthetic data is becoming increasingly evident. Fintech companies are at the forefront of this evolution, democratising AI capabilities and enabling smaller financial institutions to compete effectively. This rapid AI adoption underscores the urgency for robust AI governance and regulatory frameworks to ensure ethical implementation and maintain public trust.

Source: Forbes link

 

2. Mandatory AI Governance: Gartner Predicts Worldwide Regulatory Adoption by 2027

According to Gartner's research, by 2027, AI governance is expected to become a mandatory component of national regulations worldwide. This projection underscores the escalating concerns surrounding data security and the imperative for robust governance frameworks in the rapidly evolving AI landscape.

Notably, Gartner anticipates that over 40% of AI-related data breaches could stem from cross-border misuse of generative AI, highlighting the critical need for cohesive ethical governance. The absence of such frameworks may result in organisations failing to realise the anticipated value of their AI initiatives.

This development signals a pivotal shift towards more stringent AI oversight, emphasising the necessity for organisations to proactively adopt comprehensive governance strategies to mitigate risks and ensure compliance with forthcoming regulatory standards.

Source: CDO Magazine link

 

3. Balancing Control and Collaboration: Five Essential Layers of AI Sovereignty

The concept of AI sovereignty extends far beyond data localisation or regulatory compliance, requiring a multi-layered approach to ensure true independence.

Five key layers define AI sovereignty: legal and regulatory control, resource and technical independence, operational autonomy, cognitive sovereignty over AI models and algorithms, and cultural influence in shaping public perception and ethical norms. Each layer plays a crucial role in balancing national or organisational control with global collaboration, ensuring AI aligns with strategic interests while maintaining adaptability.

Without a structured approach to sovereignty, reliance on external AI infrastructure and governance could pose significant risks to security, competitiveness, and ethical oversight. As AI regulations evolve, this framework highlights the need for a proactive, layered strategy to navigate the complexities of AI governance effectively.

Source: Anthony Butler link

 


Industry Insights

Case Study: IKEA’s responsible AI governance

As AI becomes increasingly embedded in business operations, IKEA has taken a proactive and structured approach to AI governance, ensuring ethical and responsible deployment. Recognising the potential risks of AI alongside its benefits, IKEA introduced its first digital ethics policy in 2019, laying the foundation for responsible AI development.

By 2021, the company had established a dedicated AI governance framework, with a multidisciplinary team overseeing compliance, risk management, and ethical considerations. This governance model ensures that AI is used transparently, fairly, and in alignment with business goals.

Key areas of focus include enhancing employee productivity, optimising supply chains, and improving customer experiences—all while maintaining strict ethical standards. Additionally, IKEA’s AI literacy programme is designed to empower employees with the skills needed to navigate AI responsibly, reinforcing the company’s commitment to human-centric innovation.

Key Takeaways:
1. AI Governance as a Business Imperative: Rather than treating AI governance as a regulatory checkbox, IKEA integrates responsible AI principles into its core business strategy. This ensures that AI-driven innovations align with ethical considerations and organisational priorities.
2. Proactive Regulatory Compliance: IKEA’s commitment to responsible AI extends to early compliance with the EU AI Act. As a signatory of the AI Pact, the company is ahead of regulatory requirements, demonstrating leadership in ethical AI governance.
3. Empowering Employees Through AI Education: Understanding that responsible AI usage starts with people, IKEA has launched an AI literacy programme to train 30,000 employees in 2024. This initiative fosters a culture of accountability and awareness, reducing risks associated with AI adoption.

By prioritising governance, education, and ethical AI integration, IKEA is setting a benchmark for responsible AI adoption in the retail sector, ensuring that technological advancements serve both business needs and societal good.

 

Sources: CIO Dive link, Global Loyalty Organisation link

 


Upcoming Events

1. In-Person Event: AI for CFOs - Minimise Risk to Maximise Returns - 25 March 2025

On March 25th, 2025, The Economist is hosting the AI for CFOs event in London, focusing on how finance leaders can leverage artificial intelligence to enhance corporate performance. Attendees will explore AI's role in delivering real-time insights, improving forecasting accuracy, automating compliance, and strengthening data security. This event offers a valuable opportunity to connect with industry experts and discover actionable strategies for integrating AI into financial operations.

Register now: The Economist link

 

2. Webinar: Strategies and Solutions for Unlocking Value from Unstructured Data - 27 March 2025

A-Team Insight’s upcoming webinar, Strategies and Solutions for Unlocking Value from Unstructured Data, will explore how firms can harness the vast potential of unstructured data—emails, customer feedback, and other text-based information—to drive smarter decision-making and gain a competitive edge. Industry experts will share practical approaches to extracting insights, improving operational efficiency, and uncovering new business opportunities. If you're looking to turn your organisation’s unstructured data into a valuable asset, this session is not to be missed.

Register now: A-Team Insight link

 

3. Webinar: Five Essential Tips for Successful AI Adoption - 15 April 2025

This webinar focuses on the critical role of data quality in AI success. As businesses rush to integrate AI, experts will discuss why clean, structured, and well-governed data must be a top priority to avoid AI becoming a liability. The session will cover key topics such as data governance, security, privacy, ethical considerations, and how to maximise AI ROI. Attendees will gain executive-level strategies to ensure AI delivers meaningful business impact.

Register now: CIO Dive link

 

4. In-Person Event: AI Breakfast & Roundtable – From AI Proof of Concept to Scalable Enterprise Adoption – 23 April 2025

Leading Point is hosting an exclusive AI Breakfast & Roundtable, bringing together AI leaders from top financial institutions, including banks, insurance firms, and buy-side institutions. This intimate, high-level discussion will explore the challenges and opportunities in scaling AI beyond proof of concept to enterprise-wide adoption.

Key discussion points include overcoming implementation barriers, aligning AI initiatives with business objectives, and best practices for AI success in banking, insurance, and investment management. This event offers a unique opportunity to connect with industry peers and gain strategic insights on embedding AI as a core driver of business value.

Want to be a part of the conversation?

If you are an executive with AI responsibilities in business, risk & compliance, or data, contact Rajen Madan or Thushan Kumaraswamy to get a seat at the table.

 


Conclusion

The stories this week underscore a critical truth: AI governance isn’t just about compliance—it’s about building trust. From OpenAI’s push for federal oversight to IKEA’s ethical framework, the focus is shifting from rapid adoption to responsible deployment. The warnings from Turing Award winners Barto and Sutton are a stark reminder: innovation without safeguards is a risk we can’t afford.

As AI’s influence grows, the challenge is clear—businesses and policymakers must act now to bridge governance gaps, prioritise transparency, and ensure AI serves society as much as it drives progress. The future of AI depends on the choices we make today.

We’ll be back in two weeks with more insights. Until then, let’s keep pushing for a future where AI works for everyone.

 

 


The Trusted AI Bulletin #4

Issue #4 – Regulating AI: Balancing Innovation, Risk, and Global Influence

Introduction

Welcome to this edition of The Trusted AI Bulletin, where we explore the latest shifts in AI governance, regulation, and adoption. This week, we examine the UK’s evolving AI policy, the growing tensions between Big Tech and European regulators, and the strategic choices shaping AI’s future. With governments reassessing their regulatory approaches and businesses navigating complex compliance landscapes, the conversation around responsible AI is more urgent than ever.

AI adoption requires firms to focus on key capabilities, baseline their AI maturity, and articulate AI risks more effectively. Discussions with executives highlight a gap between the C-suite and AI leads, making governance alignment a critical success factor.

Whether you’re a policymaker, business leader, or AI enthusiast, our curated insights will help you stay informed on the key trends shaping the future of AI.

 


Key Highlights of the Week

1. UK Postpones AI Regulation to Align with US Policies

The UK government has postponed its anticipated AI regulation bill, originally slated for release before Christmas, now expected in the summer. This delay aims to align the UK's AI policies with the deregulatory stance of President Trump's administration, which has recently dismantled previous AI safety measures. Ministers express concern that premature regulation could deter AI businesses from investing in the UK, especially as the US adopts a more laissez-faire approach.

This strategic shift underscores the UK's intent to remain competitive in the global AI landscape, particularly against the backdrop of the EU's stricter regulatory proposals. However, this move has sparked debate over the balance between fostering innovation and ensuring ethical AI development.

Our take: This is a critical moment for businesses to take a proactive approach to AI governance rather than waiting for regulatory clarity. Firms must self-regulate by adopting strong AI controls and risk frameworks to ensure ethical and responsible AI deployment.

Source: The Guardian link

 

2. Big Tech vs Brussels: Silicon Valley Ramps Up Fight Against EU AI Rules

Silicon Valley’s biggest players, led by Meta, are intensifying their efforts to weaken the EU’s stringent AI and digital market regulations—this time with backing from the Trump administration. Lobbyists see an opportunity to pressure Brussels into softening enforcement of the AI Act and Digital Markets Act, with Meta outright refusing to sign up to the EU’s upcoming AI code of practice. The European Commission insists it will uphold its rules, but its recent decision to drop the AI Liability Directive suggests some willingness to compromise. If European regulators waver, it could set a dangerous precedent, emboldening Big Tech to dictate the terms of global AI governance.

 

Source: FT link

 

3. AI Safety Institute Rebrands, Drops Bias Research

The UK government has rebranded its AI Safety Institute, now called the AI Security Institute, shifting its focus away from AI bias and free speech concerns. Instead, the institute will prioritise cybersecurity threats, fraud prevention, and other high-risk AI applications. This move aligns the UK's AI policy more closely with the U.S. and has sparked debate over whether deprioritising bias research could have unintended societal consequences.

Our take: Bias and fairness remain core AI governance challenges. Firms need to go beyond regulatory mandates and build internal frameworks that address bias and transparency, ensuring trust in AI applications.

Should AI regulation focus solely on security threats, or is ignoring bias a step backward in responsible AI governance?

 

Source: UK Gov link

 


Featured Articles

1. UK AI Regulation Must Balance Innovation and Responsibility

The UK government’s approach to AI regulation will play a crucial role in shaping economic growth. The challenge lies in ensuring AI is safe, fair, and reliable without imposing rigid constraints that could stifle innovation. A risk-based, principles-driven framework—similar to the EU’s AI Act—offers a way forward, allowing adaptability while maintaining accountability. The real test will be whether regulation fosters trust and responsible AI use or becomes an obstacle to progress. Governance should encourage businesses to integrate ethical AI practices, not just comply with rules.

Our take: Striking this balance will be key to ensuring AI drives long-term economic and technological advancement. Firms shouldn’t wait for regulatory clarity. Assessing AI risks, implementing governance frameworks, and ensuring transparency now will give organisations a competitive edge.

 

Source: The Times link

 

2. Addressing Data and Expertise Gaps in AI Integration

In the rapidly evolving landscape of artificial intelligence, organisations face significant hurdles in adoption, notably concerns about data accuracy and bias, with nearly half of respondents expressing such apprehensions. Additionally, 42% of enterprises report insufficient proprietary data to effectively customise AI models, underscoring the need for robust data strategies. A similar percentage highlights a lack of generative AI expertise, pointing to a critical skills gap that must be addressed.

Moreover, financial justification remains a challenge, as organisations struggle to quantify the return on investment for AI initiatives. These challenges are particularly pertinent in the context of AI governance and regulation, emphasising the necessity for comprehensive frameworks to ensure ethical and effective AI deployment.

Source: IBM link

 

3. Global AI Compliance Made Easy – A Must-Have Tracker for AI Governance

The Global AI Regulation Tracker, developed by Raymond Sun, is a powerful, interactive tool that keeps you ahead of the curve on AI laws, policies, and regulatory updates worldwide. With a dynamic world map, in-depth country profiles, and a live AI newsfeed, it provides a one-stop resource for navigating the complex and evolving AI governance landscape. Updated regularly, it ensures you never miss a critical regulatory shift that could impact your business or compliance strategy. Stay informed, stay compliant, and turn AI regulation into a competitive advantage.

Source: Techie Ray link

 

4. Breaking Down Barriers: Strategies for Successful AI Adoption

Artificial intelligence holds immense promise for revolutionising business operations, yet a staggering 80% of AI initiatives fall short of expectations. This high failure rate often stems from challenges such as subpar data quality, organisational resistance, and a lack of robust leadership support.

To navigate these obstacles, companies must prioritise comprehensive data management, foster a culture open to change, and ensure active engagement from leadership. Moreover, aligning AI projects with clear business objectives and investing in employee training are pivotal steps towards realising AI's full potential. Without addressing these critical areas, organisations risk squandering resources and missing out on the transformative benefits AI offers.

Source: Forbes link

 


Industry Insights

Case Study: AXA's Ethical AI Integration: Boosting Efficiency and Trust in Insurance

AXA, a global insurance leader, has strategically integrated Artificial Intelligence (AI) into its operations to enhance efficiency and uphold ethical standards. By implementing a dedicated AI governance team comprising actuaries, data scientists, privacy specialists, and business experts, AXA ensures responsible AI adoption across its services. This team focuses on creating transparent AI models, safeguarding data privacy, and maintaining human oversight in AI-driven decisions.

A practical application of this strategy is evident in AXA UK's deployment of 13 software bots within their claims departments, which, over six months, saved approximately 18,000 personnel hours and yielded around £140,000 in productivity gains. This initiative not only streamlines repetitive tasks but also reinforces AXA's commitment to ethical AI practices, setting a benchmark for the insurance industry.

Key Outcomes of AI Governance at AXA:

* Operational Efficiency: The introduction of AI bots has significantly reduced manual processing time, enhancing overall productivity.
* Ethical AI Deployment: Establishing a robust governance framework ensures AI applications are transparent, fair, and aligned with societal responsibilities.
* Enhanced Customer Service: Automation of routine tasks allows employees to focus on more complex customer needs, improving service quality.

 

Sources: Cap Gemini link, AXA link

 


Upcoming Events

1. Webinar: Augmenting Private Equity Expertise With AI – 6 March 2025

This event aims to explore practical strategies for private equity firms to integrate artificial intelligence, enhancing expertise and uncovering new value sources. Discussions will focus on AI's role in competitive deal sourcing, transforming due diligence processes, and bolstering risk management. As AI continues to reshape the financial landscape, this webinar offers timely insights into aligning technology strategies with business objectives, ensuring AI-driven value creation throughout the investment lifecycle.

Register now: FT Live link

 

2. Webinar: CIOs, Set the Right AI Strategy in 2025 – 7 March 2025

In this upcoming webinar, Chief Information Officers will gain insights into formulating effective AI strategies that yield measurable outcomes. The session aims to equip CIOs with the tools to navigate the complexities of AI implementation, ensuring alignment with organisational goals and compliance with emerging AI regulations. As AI continues to reshape industries, understanding its governance and regulatory landscape becomes imperative for IT leaders.

Register now: Gartner link

 

3. In-Person Event: AI UK 2025, Alan Turing Institute – 17–18 March 2025

This in-person event brings together experts to explore the latest advancements in artificial intelligence, governance, and regulation. A key highlight of the event is the panel discussion, Advancing AI Governance Through Standards, taking place on 18 March 2025.

Led by The AI Standard Hub, the session will delve into recent developments in AI assurance, global standardisation efforts, and strategies for fostering inclusivity in AI governance. As AI regulations continue to evolve, this discussion offers valuable insights into building a robust AI assurance ecosystem and ensuring responsible AI deployment.

Register now: Turing Institute link

 


Conclusion

As AI governance takes centre stage, the challenge remains—how do we drive innovation while ensuring transparency, fairness, and accountability? This issue underscores the importance of strategic regulation, ethical AI adoption, and proactive leadership in shaping a future where AI works for businesses and society alike. AI governance is shifting, but businesses can’t afford to wait. AI risks require more effort to understand, firms need to baseline their AI capabilities, and governance gaps between leadership and AI teams must be bridged.

With AI’s influence growing across industries, the need for informed decision-making has never been greater. Whether it’s policymakers refining regulations or organisations refining their AI strategies, the key takeaway is clear: responsible AI isn’t just about compliance—it’s about long-term success.

We’ll be back in two weeks with more insights—until then, let’s continue shaping a responsible AI future together.

 


The Trusted AI Bulletin #3

Issue #3 – Global AI Crossroads: Ethics, Regulation, and Innovation

Introduction

Welcome to this week’s edition of The Trusted AI Bulletin, where we explore the latest developments, challenges, and opportunities in the rapidly evolving world of AI governance. From global ethical debates to regulatory updates and industry innovations, this week’s highlights underscore the critical importance of balancing innovation with responsibility.

As AI continues to transform industries and societies, the need for robust governance frameworks has never been more urgent. For many organisations, this means not just keeping pace with regulatory change but also taking practical steps—such as bringing key teams together to assess AI usage, ensuring leadership is informed on emerging risks, and building governance frameworks that can evolve alongside innovation.

Join us as we delve into key stories shaping the future of AI governance and examine how organisations and nations are navigating this complex landscape.

 


Key Highlights of the Week

1. UK and US Withhold Support for Global AI Ethics Pact

At the AI Action Summit in Paris, the UK and US refused to sign a joint declaration on ethical and transparent AI, which was backed by 61 countries, including China and EU nations. The UK cited concerns over a lack of "practical clarity" and insufficient focus on security, while the US objected to language around "inclusive and sustainable" AI. Both governments stressed the need for further discussions on AI governance that align with their national interests. Critics and AI experts warn that this decision is a missed opportunity for democratic nations to take the lead in shaping AI governance, potentially allowing other global powers to set the agenda.

Source: The Times link

 

2. New PRA Letter Outlines 2025 Expectations for UK Banks

The Prudential Regulation Authority (PRA) has issued a letter outlining its 2025 supervisory priorities for UK banks, focusing on risk management, governance, and resilience. With ongoing market volatility, AI adoption, and geopolitical uncertainty, firms are expected to strengthen their risk frameworks and controls.

Liquidity and funding will also be under scrutiny, as the Bank of England shifts to a new reserve management approach. Meanwhile, banks must demonstrate by March 2025 that they can maintain operations during severe disruptions.

Notably, the Basel 3.1 timeline has been pushed to 2027, giving firms more time to adjust. However, regulatory focus on AI, cyber risks, and data management is set to increase, with further updates expected later this year.

 

Source: PRA PDF link

 

3. G42 and Microsoft Launch the Middle East’s First Responsible AI Initiative

G42 and Microsoft have jointly established the Responsible AI Foundation, the first of its kind in the Middle East, aiming to promote ethical AI standards across the Middle East and Global South. Supported by the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), the foundation will focus on advancing responsible AI research and developing governance frameworks that consider cultural diversity. Inception, a G42 company, will lead the programme, while Microsoft plans to expand its AI for Good Lab to Abu Dhabi. This initiative underscores a commitment to ensuring AI technologies are safe, fair, and aligned with societal values.

Source: G42.ai link

 


Featured Articles

1. AI is Advancing Fast—Why Isn’t Governance Keeping Up?

Artificial intelligence is evolving at breakneck speed, reshaping industries and daily life, yet a clear governance framework is still missing. Effective policies must be based on scientific reality, not speculation, to address real-world challenges without stifling progress. Striking the right balance between innovation and regulation is crucial, especially as AI’s impact grows. Open access to AI models is key to driving research and ensuring future breakthroughs aren’t limited to a select few. With AI set to transform everything from healthcare to energy, the question remains—can governance keep pace?

 

Source: FT link

 

2. The EU AI Act: What High-Risk AI Systems Must Get Right

The EU AI Act imposes stringent obligations on high-risk AI systems, requiring organisations to implement risk management frameworks, ensure data governance, and maintain transparency. CIOs and CDOs must oversee compliance, ensuring human oversight, proper documentation, and clear communication when AI is in use.

A key focus is ensuring AI systems are explainable and auditable, enabling regulators and stakeholders to understand how decisions are made. Non-compliance carries significant financial and operational risks, making early alignment with regulatory requirements essential.

With enforcement approaching, businesses must integrate these rules into their AI strategies to maintain trust, mitigate risks, and drive responsible innovation. To stay ahead, organisations should conduct internal audits, update governance policies, and invest in staff training to embed compliance across AI initiatives. Proactive action now will determine competitive advantage in an AI-regulated future.

Source: A&O Shearman link

 

3. Building a Data-Driven Culture: Four Essential Pillars for Success

While many organisations collect vast amounts of data, few truly unlock its transformative potential. Success lies in mastering four critical elements: leadership commitment to champion data use, fostering data literacy across teams, ensuring data is accessible and integrated, and establishing trust through robust governance. Without these pillars, even the most data-rich organisations risk inefficiency and missed opportunities. A strong data-driven culture isn’t just about tools—it’s about embedding these principles into the fabric of your organisation.

 

Source: MIT Sloan link

 


Industry Insights

Case Study: Ocado’s Approach to Responsible AI Governance

Ocado Group has embedded AI across its operations, from optimising warehouse logistics to enhancing customer experiences. However, as AI adoption scales, so do the risks—unintended biases, unpredictable decision-making, and regulatory challenges. To navigate this, Ocado has placed responsible AI governance at the heart of its strategy, ensuring its models remain transparent, fair, and reliable.

A key component of its AI governance strategy is its Responsible AI Framework, built around five key principles: Fairness, Transparency, Governance, Robustness, and Impact. This structured approach ensures AI systems are rigorously tested to prevent bias, remain explainable, and function as intended across complex operations.

One tangible success of this framework is Ocado’s real-time AI-powered monitoring, which has led to £100,000 in annual cost savings by automatically detecting and resolving system anomalies. With AI observability tools tracking over 100 microservice applications within its Ocado Smart Platform (OSP), the company can proactively address inefficiencies, minimising downtime and enhancing system reliability.

AI governance ensures Ocado’s AI models remain resilient and accountable, reducing risks associated with unpredictable AI behaviour. By embedding responsible AI principles into its operations, Ocado continues to optimise efficiency, prevent costly errors, and align with evolving regulatory expectations around AI.

 

Sources: Ocado Group link; Ocado Group link (CDO interview)

 


Upcoming Events

1. In-Person Event: Microsoft AI Tour - 5 March 2025

The Microsoft AI Tour in London is an event for professionals looking to explore the transformative potential of artificial intelligence. Featuring expert-led sessions, interactive workshops, and live demonstrations, it offers a unique opportunity to dive into the latest AI innovations and their real-world applications. Whether you're looking to expand your knowledge, network with industry leaders, or discover how AI can drive impact, this event is an invaluable experience for anyone invested in the future of technology.

Register now: MS link

 

2. In-Person Event: IRM UK Data Governance Conference - 17-20 March

The Data Governance, AI Governance & Master Data Management Conference Europe is scheduled for 17–20 March 2025 in London. This four-day event offers five focused tracks, covering topics such as data quality, MDM strategies, and AI ethics. The conference features practical case studies from leading organisations, providing attendees with actionable insights into effective data management practices.

Key sessions include “Navigating the Intersection of Data Governance and AI Governance” and “How Master Data Management can Enable AI Adoption”. Participants will also have opportunities to connect with over 250 data professionals during dedicated networking sessions.

Register now: IRM UK link

 

3. Webinar: Strategies and solutions for unlocking value from unstructured data - 27 March 2025

Discover how to harness the untapped potential of unstructured data in this insightful webinar. The session will explore practical strategies and innovative solutions to extract actionable insights from data sources like emails, documents, and multimedia. Attendees will gain valuable knowledge on overcoming challenges in data management, leveraging advanced technologies, and driving business value from previously underutilised information.

Register now: A-Team Insight link

 


Conclusion

As we wrap up this edition of The Trusted AI Bulletin, it’s clear that the journey toward ethical and effective AI governance is both challenging and essential. From the UK and US withholding support for a global AI ethics pact to Ocado’s pioneering approach to responsible AI, the stories this week highlight the diverse perspectives and strategies shaping the future of AI.

While progress is being made, the road ahead demands collaboration, innovation, and a shared commitment to ensuring AI benefits all of humanity. For organisations looking to act now, investing in education, cross-functional AI collaboration, and a clear governance roadmap will be key to staying competitive in an AI-regulated future.

Stay tuned for more updates, and let’s continue working together to build a future where AI is not only powerful but also fair, transparent, and accountable.

 


The Trusted AI Bulletin #2

Issue #2 – AI Investment, Ethics & Compliance Trends

Introduction

Welcome to this edition of The Trusted AI Bulletin, where we bring you the latest developments in enterprise AI risk management, adoption, and ethical AI practices.

This week, we examine how tech giants are investing billions into AI innovation, the growing global alignment on AI safety, and why passive data management is no longer viable in an AI-driven world.

With AI becoming an integral part of finance, healthcare, and other critical industries, strong governance frameworks are essential to ensure trust, transparency, and long-term success.

From real-world case studies like DBS Bank’s AI journey to upcoming industry events, this issue gives you insights to help you stay ahead in the evolving AI landscape.

 


Key Highlights of the Week

1. Stargate: America's $500 Billion AI Power Play

The United States unveiled the Stargate Project, a $500 billion initiative over the next four years to establish the world's most extensive AI infrastructure and secure global dominance in the field. Led by OpenAI, Oracle, and SoftBank, with backing from President Trump, the project plans to build 20 massive data centres across the U.S., starting with a 1 million-square-foot facility in Texas. Beyond advancing AI capabilities, Stargate is also a strategic move to attract global investment capital, potentially limiting China’s access to AI funding. With its $500 billion commitment far surpassing China’s $186 billion AI infrastructure spending to date, the U.S. is making a bold play to corner the market and maintain its technological edge.

Source: Forbes link

 

2. China’s AI Firms Align with Global Safety Commitments, Signalling Convergence in Governance

Chinese AI companies, including DeepSeek, are rapidly advancing in the global AI race, with their latest models rivalling top Western counterparts. In a notable shift, 17 Chinese firms have signed AI safety commitments similar to those adopted by Western companies, signalling a growing alignment on governance principles. This convergence highlights the potential for international collaboration on AI safety, despite ongoing geopolitical competition. As AI development accelerates, upcoming forums like the Paris AI Action Summit may play a crucial role in shaping global AI governance.

Source: Carnegie Endowment research link

 

3. Lloyds Banking Group Expands AI Leadership with New Head of Responsible AI

Lloyds Banking Group has appointed Magdalena Lis as its new Head of Responsible AI, reinforcing its commitment to ethical AI development. With over 15 years of experience, including advisory roles for the UK Government and leadership at Toyota Connected Europe, Lis will focus on ensuring AI safeguards while advancing innovation. This move follows the appointment of Dr. Rohit Dhawan as Director of AI and Advanced Analytics in 2024, as Lloyds continues to grow its AI Centre of Excellence, now comprising over 200 specialists. As AI reshapes banking, Lloyds aims to balance technological advancement with responsible implementation.

 

Source: FF News link

 


Featured Articles

1. Why Passive Data Management No Longer Works in the AI Era

The days of passive data management are over—AI-driven organisations need a proactive approach to governance. Chief Data Officers (CDOs) must ensure that data is high-quality, well-structured, and compliant to fully unlock AI’s potential. This means implementing automation, real-time monitoring, and stronger governance frameworks to mitigate risks while enhancing decision-making. Without these measures, businesses risk falling behind in an increasingly AI-powered world. The article explores how CDOs can take control of their data strategy to drive innovation and maintain regulatory compliance.

Image source: Image generated using ImageFX

Source: Medium link

2. How Governance & Privacy Can Safeguard AI Development

As AI adoption accelerates, so do concerns over data exposure, compliance failures, and reputational damage. Informatica warns that without strong governance and privacy policies, organisations risk losing control over sensitive information. Proactive data management, human oversight, and clear accountability are crucial to ensuring AI is both powerful and responsible. Businesses must not only understand the data fuelling their AI models but also implement safeguards to prevent unintended consequences. In an AI-driven world, those who neglect governance may find themselves facing serious risks.

Source: A-Team Insight link

3. AI Literacy: The Key to Staying Ahead in an AI-Driven World

AI is transforming industries, but do your teams truly understand how to use it responsibly? Without proper AI literacy, businesses risk compliance failures, biased decision-making, and missed opportunities. A well-designed AI training programme helps employees navigate regulations, mitigate risks, and unlock AI’s full potential. From assessing knowledge gaps to tailoring content for different roles, the right approach ensures AI is used strategically and ethically. As AI continues to evolve, organisations that prioritise education will be better equipped to adapt and thrive.

Source: IAPP link

 


Industry Insights

Case Study: DBS Bank - AI Success Rooted in Robust Governance Framework

Harvard Business School’s recent case study on DBS Bank highlights the critical role of AI governance in executing a successful AI strategy. Headquartered in Singapore, DBS embarked on a multi-year digital transformation under CEO Piyush Gupta in 2014, incorporating AI to enhance business value and customer experience. As AI adoption scaled, DBS developed its P-U-R-E framework—emphasising Purposeful, Unsurprising, Respectful, and Explainable AI—to ensure ethical and responsible AI deployment. This governance-first approach has been instrumental in managing risks while maximising AI’s potential across banking operations.

In 2022, DBS began exploring Generative AI (Gen AI) use cases, adapting its governance frameworks to balance innovation with emerging risks. By leveraging its existing AI capabilities, the bank continues to integrate AI responsibly while maintaining regulatory compliance and trust.

Key Outcomes of AI Governance at DBS:
o Economic Impact: DBS anticipates its AI initiatives will generate over £595 million in economic benefits by 2025, following consecutive years of doubling impact.
o Enhanced Customer Experience: AI-driven hyper-personalised prompts assist customers in making better investment and financial planning decisions.
o Employee Development: AI supports employees with tailored career and upskilling roadmaps, fostering long-term career growth.

Sources: DBS Bank news link

 


Upcoming Events

1. Webinar: Transforming Banking with GenAI – 13 February 2025

Join the Financial Times for a webinar exploring the transformative potential of Generative AI (GenAI) in the banking sector. Industry leaders will discuss the latest GenAI applications, including synthetic data and self-supervised learning, and provide strategies for navigating the rapidly evolving AI landscape. Key topics include revolutionising core banking operations, building robust data strategies, and reskilling workforces for future challenges.

Register now: FT link

2. Webinar: AI Maturity & Roadmap: Accelerate Your Journey to AI Excellence – 27 February 2025

Gartner is hosting a webinar focusing on assessing AI maturity and exploring the transformative potential of AI within organisations. The session will utilise Gartner's AI maturity assessment and roadmap tools to outline key practices across seven workstreams essential for achieving AI success at scale. Attendees will gain insights into managing and prioritising activities to harness AI's full potential.

Register now: Gartner link

3. Webinar: What Do CIOs Really Care About? – 13 March 2025

Join IDC for an insightful webinar exploring the evolving priorities of Chief Information Officers in the digital era. The session will delve into how CIOs are balancing innovation with pragmatism, transitioning from traditional IT management to strategic leadership roles that drive business transformation. Attendees will gain perspectives on aligning technology initiatives with organisational goals and the critical role of CIOs in today's rapidly changing technological landscape.

Register now: IDC link

 


Conclusion

Implementing Trusted AI isn’t just a regulatory requirement—it’s a business imperative. As organisations integrate AI into critical decision-making, ensuring trust, transparency, and compliance will define long-term success. By staying informed on evolving policies, adopting strong governance frameworks, and fostering ethical AI practices, businesses can harness AI’s full potential while managing risks.
We’d love to hear your thoughts! Join the conversation, share your perspectives, and stay engaged with us as we navigate the future of responsible AI together.

See you in the next issue!

Rajen Madan

Thushan Kumaraswamy


The Trusted AI Bulletin #1

Issue #1 – AI Advancements and Regulatory Shifts

Introduction

Welcome to the inaugural edition of The Trusted AI Bulletin! As artificial intelligence continues to reshape industries, the importance of robust risk management, deployment processes, transparency and ethical oversight of AI cannot be overstated.

At Leading Point our mission is to help those responsible for implementing AI in enterprises deliver trusted, rapid AI innovations while removing the blockers – be it uncertainty around AI value, lack of trust with AI outputs or user adoption.

This newsletter is your bi-weekly guide to staying informed, inspired, and ahead of the curve in navigating the challenges of AI deployment and realising the opportunity of AI in your enterprise.


Key Highlights of the Week

1. AI Innovations in Financial Services

The UK’s AI sector continues to grow, attracting £200 million in daily private investment since July 2024, with notable contributions like CoreWeave’s £1.75 billion data centre investment. These advancements underscore the transformative potential of AI in sectors such as financial services. From cutting-edge AI models to emerging data infrastructure, staying ahead of these innovations is essential for leaders navigating this rapidly evolving space.

Source: UK Government link

2. UK AI Action Plan

The UK government has officially approved a sweeping AI action plan aimed at establishing a robust economic and regulatory framework for artificial intelligence. The plan focuses on ensuring AI is developed safely and responsibly, with a strong emphasis on promoting innovation while addressing potential risks. Key priorities include creating clear guidelines for AI governance, fostering collaboration between government and industry, and ensuring the UK remains a global leader in AI development. This action plan marks a significant step towards creating a balanced approach to AI regulation.

Source: Artificial Intelligence News link

3. Tech Nation to launch London AI Hub

Brent Hoberman’s Founder’s Forum announced the London AI Hub in collaboration with European AI group Merantix, Onfido and Quench.ai founder Husayn Kassai, and flexible office provider Techspace. The initiative aims to bring together a fragmented sector. Hoberman said the hub would act as a “physical nucleus for meaningful collaboration across founders, investors, academics, policymakers and innovators”.

Source: UK Tech News link


Featured Articles

1. 10 AI Strategy Questions Every CIO Must Answer

Artificial intelligence is transforming industries, and CIOs play a key role in aligning AI initiatives with business objectives. The article outlines 10 critical questions that every CIO must answer to ensure successful AI strategy, from building governance frameworks to implementing ethical AI.

Source: CIO.com link

2. AI Regulations, Governance, and Ethics for 2025

The global landscape for AI regulation is evolving rapidly, with regions adopting diverse approaches to governance and ethics. In the UK, a traditionally light-touch, pro-innovation approach is now shifting toward proportionate legislation focused on managing risks from advanced AI models. With upcoming proposals and the UK AI Safety Institute’s pivotal role in global risk research, the country aims to balance innovation with safety.

Source: Dentons link


Industry Insights

Case Study: Mastercard

Mastercard’s commitment to ethical AI governance acts as a core part of its innovation strategy. Recognising the potential risks of AI, Mastercard developed a comprehensive framework to ensure its AI systems align with corporate values, societal expectations, and regulatory standards. This approach highlights the growing importance of AI governance in fostering trust, minimising risks, and enabling responsible innovation.

Key elements of Mastercard’s AI governance strategy include:

o Transparency and accountability: Regular audits and cross-functional oversight ensure AI systems operate fairly and responsibly.

o Ethical principles in practice: AI systems are designed to uphold fairness, privacy, and security, balancing innovation with societal and corporate responsibilities.

This case underscores how robust AI governance can help organisations navigate the complexities of AI deployment while maintaining trust and ethical integrity.

Source: IMD link


Upcoming Events

1. Webinar: A CISO Guide to AI and Privacy – 21 January 2025

Explore how to develop effective AI policies aligned with industry best practices and emerging regulations in this insightful webinar. Maryam Meseha and Janelle Hsia will discuss ethical AI use, stakeholder collaboration, and balancing business risks with opportunities. Learn how AI can enhance cybersecurity and drive innovation while maintaining compliance and trust.

Register now: Brighttalk link

2. The Data Advantage – Smarter Investments in Private Markets – 28 January 2025

This event, run by Leading Point, focuses on the transformative role of data and technology in private markets, bringing together investors, data professionals, and market leaders to explore smarter investment strategies. Key discussions will cover leveraging data-driven insights, integrating advanced analytics, and enhancing decision-making processes to maximise returns in private markets.

Register now: Eventbrite link

3. The Data Management Summit 2025 – 20 March 2025

The Data Management Summit London is a premier event bringing together data leaders, regulators, and technology innovators to discuss the latest trends and challenges in data management, particularly in financial services. Key topics include data governance, ESG data, cloud strategies, and leveraging AI and advanced analytics to drive innovation while maintaining regulatory compliance. It’s an excellent opportunity to network and learn from industry leaders.

Register now: A-Team Insight link


Conclusion

As AI continues to transform industries, the need for operating-level clarity and AI adoption becomes ever more pressing. By staying informed about the latest advancements, regulatory changes, and best practices in AI implementations, enterprises can navigate this landscape effectively and responsibly. We encourage you to engage with this content, share your insights, and join the conversation in our upcoming events and discussions.

Stay informed, stay responsible!

Rajen Madan

Thushan Kumaraswamy


Accelerating AI Success

Accelerating AI Success: The Role of Data Enablement in Financial Services

Introduction

The webinar, held on 10 October 2024, focused on accelerating AI success and the foundational role of data enablement in financial services. Leading Point Founder & CEO, Rajen Madan, introduced the topic and the panel of four executives: Joanne Biggadike (Schroders), Nivedh Iyer (Danske Bank), Paul Barker (HSBC), and Meredith Gibson (Leading Point).

Rajen explained that data enablement involves "creating and harnessing data assets, making them super accessible and well managed, and embedding them into operational decision-making processes." He outlined the evolution of data management in the industry, describing three waves:

1️⃣ Focus on big warehouses and governance

2️⃣ Making data more pervasive and accessible

3️⃣ The opportunity now – emphasis on value extraction, embedding data insights in operational processes and decision-making, and transforming with AI

 

Data Governance and AI Governance

The panellists discussed the evolving role of data governance and its relationship to AI governance. Joanne Biggadike, Head of Data Governance at Schroders, noted the increasing importance of data governance: "Everybody's realising in order to move forward, especially with AI and generative AI, you really need your data to be reliable and you need to understand it."

She emphasised that while data governance and AI governance are separate, they are complementary. Biggadike stressed the importance of knowing data sources and having human oversight in AI processes: "We need a touch point. We need a human in the loop. We need to be able to review what we're coming out with as our outcomes, because we want to make sure that we're not coming out with the wrong output because the data's incorrect, or because the data's biased."

Paul Barker, Head of Data and Analytics Governance at HSBC, cautioned against creating new silos for AI governance: "We've been doing model risk management for 30 years. We've been doing third party management for 30 years. We've been doing data governance for a very long time. So I think... it's about trying not to create a new silo.”

 

Data Quality and AI Adoption

Nivedh Iyer, Head of Data Management at Danske Bank, highlighted the importance of data quality in AI adoption: "AI in the space of data management, if I say core aspects of data management like governance, quality, lineage is still in the process of adoption... One of the main challenges for AI adoption is how comfortable we are... on the quality of the data we have because Gen AI or AI for that matter depends on good quality data."

Iyer also mentioned the emergence of innovative solutions in data quality management, particularly from fintech providers.

 

Cultural Shift and Technical Capabilities

Paul Barker emphasised the dual challenges of cultural shift and technical capabilities in data management: "There is a historic tendency to keep all the data secret... When you start with that as your DNA, it's then very difficult to move to a data democratisation culture where we're trying to surface data for the non-data professional."

Regarding technical capabilities, Barker noted the challenges faced by large, complex organisations compared to start-ups: "You can look at an organisation that's the scale and complexity of say HSBC... compared to a start-up organisation that literally starts its data architecture with a blank piece of paper and can build that Model Bank."

From a technical standpoint, large organisations face unique challenges in integrating various data sources across multiple markets and op models compared to smaller startups that can build their data architecture from scratch. There has been progress with technical solutions that can address some of these interoperability challenges.

 

Legal and Regulatory Aspects

Meredith Gibson, Data & Regulatory Lawyer with Leading Point, speaking from a legal perspective, highlighted the evolving regulatory landscape: "As the banks and other financial institutions... become more complex and more interested in data... so does the roadmap for how you control that change has morphed with deeper understanding by regulators and increased requirements."

She also raised concerns about data ownership in the context of AI and large language models: "Programmers have always done a copy and paste, which was fine until you end up with large language models where actually I'm not sure that people do know where their information and their data comes from."

The panel highlighted the tension between banks' desire for autonomy in managing their data and regulators' need for standardisation to monitor activities effectively. There are several standardisation initiatives, including ISO standards, the LEI, and the EU AI Act. Lineage is crucial for getting AI-ready: who owns the data, who controls it, and information on data usage and obligations become central.

 

Leading Point’s Data Enablement Framework

Data is readily accessible, well-managed, and used to drive decision-making and innovation.

Data Strategy & Data Architecture

By having a clear data strategy and one that is aligned with the business strategy, you can reach better decisions quicker. Using insights from your data provides more confidence that the business actions you are taking are justified.

Having an agreed cross-business data architecture supports accelerated IT development and adoption of new products and solutions, by defining data standards, data quality, and data governance.

Data Catalogue & Data Virtualisation

Having a data catalogue is more than just implementing a tool like Collibra. It is important to define what that business data means at a logical level and how that is represented in the physical attributes.

A typical way to consolidate data is with a data warehouse, but that is a complex undertaking that requires migration from data sources into the warehouse with the associated additional storage costs. Data virtualisation simplifies data integration, standardisation, federation, and transformation without increasing data storage costs.
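To make the catalogue and virtualisation ideas concrete, here is a minimal Python sketch with entirely hypothetical system, table, and column names: a catalogue entry maps a logical business term to its physical attributes, and a "virtual view" federates the sources at read time rather than copying them into a warehouse.

```python
# Minimal sketch: a logical catalogue entry mapped to physical attributes,
# plus a "virtual view" that federates two sources at read time.
# All system, table, and column names here are hypothetical.

CATALOGUE = {
    "client_country_of_risk": {            # logical business term
        "definition": "Country driving the client's risk classification",
        "physical": [
            {"system": "crm", "table": "clients", "column": "risk_country"},
            {"system": "kyc", "table": "kyc_cases", "column": "country_of_risk"},
        ],
    }
}

# Stand-ins for two physical sources that would normally be separate databases.
SOURCES = {
    ("crm", "clients"): [{"client_id": 1, "risk_country": "GB"}],
    ("kyc", "kyc_cases"): [{"client_id": 2, "country_of_risk": "SG"}],
}

def virtual_view(logical_term: str) -> list[dict]:
    """Resolve a logical term via the catalogue and union the physical sources,
    standardising the column name, without materialising a warehouse copy."""
    rows = []
    for mapping in CATALOGUE[logical_term]["physical"]:
        for record in SOURCES[(mapping["system"], mapping["table"])]:
            rows.append({"client_id": record["client_id"],
                         logical_term: record[mapping["column"]]})
    return rows

print(virtual_view("client_country_of_risk"))
```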

 

The Future of Data Enablement

The panellists discussed how data enablement needs to evolve to accommodate AI and other emerging technologies.

Joanne Biggadike suggested that while core principles of data governance remain useful, they need to adapt: "I think what they need to do is to make sure that they're not a blocker for AI, because AI is innovative and it actually means that sometimes you don't know everything that you might already need to know when you're doing day-to-day data governance."

Paul Barker noted the need for more dynamic governance processes: "We are now in the 21st century, but a lot of data governance is still based on a sort of 19th, early 20th century... form a committee, write a paper, have a six week period of consultation."

We need data governance by design. Financial institutions have been good with deploying SDLC, controlled and well-governed releases with checkpoints. We need to embed AI and data governance as part of the SDLC.
Data lineage should not be a one-off solution; it should be right-sized to the requirement, i.e. coarse- or fine-grained. Chasing detailed lineage across the complexity of large organisations' infrastructures will take years and there will be no ROI. Pragmatism is required.

Focus on data ethics, as AI and ML become more widely used, is as much a training and skills-development requirement as a technical one: understanding the terms and conditions that underpin services, client conduct, the usage of PII data, and the overall value of building customer trust.

On data ownership, rather than the theoretical question of "who is to blame" when there are data quality issues, firms should focus on creating transparency around accountability and establishing clear chains of communication. Ownership can naturally align to domain data sets; for instance, the CFO should own financial data. Central to ownership is establishing escalation points: "Who can I reach out to to change something? Who is best placed to provide future integration?"

The climate impact of AI infrastructure is potentially significant, and firms need to factor this into their deployments. There will be innovation in data centres, but firms also need clarity on the end state: many organisations have gone through costly initiatives to move to the cloud and, due to AI and security concerns, are now bringing some of it back on-prem. This needs to be worked through.

We need to start thinking of AI as another tool that can accelerate and re-imagine processes, making them more effective and efficient. It is not an innovation by itself, and we should approach any AI adoption by asking what business problem we are looking to solve.

 

Challenges and Opportunities

The panellists identified several challenges and opportunities in the data and AI space:

1️⃣ Balancing innovation with governance and risk management

2️⃣ Ensuring data quality and reliability for AI applications

3️⃣ Adapting governance frameworks to be more agile and responsive

4️⃣ Addressing data ownership and privacy concerns in the age of AI

5️⃣ Bridging the gap between traditional data management practices and emerging technologies

 

Conclusion

The webinar highlighted the critical role of data enablement in accelerating AI success in financial services. The panellists stressed the need for robust data governance, high-quality data, and a cultural shift towards data democratisation. They also noted the importance of adapting existing governance frameworks to accommodate AI and other emerging technologies, rather than creating new silos.

As organisations continue to navigate the complex landscape of data and AI, they must balance innovation with risk management, ensure data quality and reliability, and address legal and ethical concerns. The future of data governance in financial services will likely involve more dynamic, agile processes that are embedded in business and operations and allow firms to keep pace with rapidly evolving technologies while maintaining the necessary controls and oversight. An overall pragmatic and principled approach is the best way forward for organisations.

 

Download the report

Leading Point - Webinar - Data Enablement for AI - Summary

 


Securing the GenAI Future

Securing the Future with GenAI Data Access Controls

Why you need data access controls for your GenAI systems

As generative AI (GenAI) rapidly transforms industries, the need for stringent data access controls is becoming a critical security priority. According to IBM's Cost of a Data Breach 2024 report [1], a staggering 46% of breaches involve customer personal data, a concerning statistic as GenAI models increasingly process sensitive, proprietary, and personal data. With their ability to generate content and insights at scale, GenAI systems pose new challenges for securing data pipelines, making it essential for organisations to adopt granular, adaptive access controls to mitigate risks while harnessing the full potential of these powerful tools.

 

Objectives of GenAI access controls

The objective is to control access to a GenAI-based application using the principle of least privilege: a user can only use appropriate prompts to interact with a GenAI application, which in turn can only access data approved for that user's level to provide inference that is evaluated for appropriateness. This may involve any combination of control features, such as data classification and categorisation, role definition, role-resource mapping, attribute-based permissioning, data masking, and encryption at rest and in transit.
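As a hedged illustration of that objective (the roles, classification levels, and prompt screen below are invented for the sketch, not a reference implementation), a GenAI gateway might apply least privilege to both the prompt and the data the model is allowed to draw on:

```python
# Minimal sketch of least-privilege checks around a GenAI request.
# Roles, classifications, and the prompt screen are illustrative assumptions.

ROLE_MAX_CLASSIFICATION = {          # highest data classification a role may see
    "customer_service_agent": "internal",
    "credit_analyst": "confidential",
}
CLASSIFICATION_ORDER = ["public", "internal", "confidential", "restricted"]

def may_access(role: str, data_classification: str) -> bool:
    """True if the role's ceiling is at or above the data's classification."""
    ceiling = ROLE_MAX_CLASSIFICATION.get(role, "public")
    return CLASSIFICATION_ORDER.index(data_classification) <= CLASSIFICATION_ORDER.index(ceiling)

def handle_request(role: str, prompt: str, requested_data_classification: str) -> str:
    # 1. Screen the prompt (a real system would use approved templates / intent detection).
    if "export all customer records" in prompt.lower():
        return "Blocked: prompt outside the user's approved purpose."
    # 2. Check that the role is entitled to the data the model would need.
    if not may_access(role, requested_data_classification):
        return "Blocked: data classification exceeds the user's entitlement."
    # 3. Only now would the gateway call the model with the approved, filtered context.
    return "Allowed: request forwarded to the model with approved context."

print(handle_request("customer_service_agent", "Summarise this complaint", "internal"))
print(handle_request("customer_service_agent", "Summarise credit exposure", "confidential"))
```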

 

Access control through roles and attributes

Role-based access control (RBAC) and the related attribute-based access control (ABAC) are methods of regulating access to computer or network resources based on the individual roles or other attributes of users within an organisation. RBAC ensures that only authorised individuals can access specific resources, performing only the actions necessary for their roles. ABAC uses multiple attributes to determine access to resources (of which role can be one).

The benefit of RBAC is its simplicity; users do not need to manage or remember specific permissions as their role automatically determines their access. This facilitates changes in user roles and enhances security and compliance. 

There are, however, challenges even at an organisational level in implementing a robust role-based access system. These challenges result from:

1️⃣ Role complexity, with too many roles and an intricate role hierarchy

2️⃣ Role confusion, where it is unclear which role is appropriate for a particular user or task

3️⃣ Role definitions losing accuracy over time, leading to outdated and inconsistent access controls

4️⃣ Managing joiner/mover/leaver (JML) processes and ensuring alignment with RBAC

5️⃣ Frequent changes in dynamic environments

Additionally, more fine-grained access control will be necessary as we delve into the complexities of AI application deployments involving ever-changing AI architectures. This is where extending RBAC with attributes to develop an entitlement and permissioning system becomes an immediate necessity, as sketched below.
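A minimal sketch of that extension, assuming invented attribute names (department, purpose, data sensitivity) and an illustrative policy, shows how attributes layer on top of a basic role check:

```python
# Illustrative ABAC-style policy: role plus attributes drive the decision.
# Attribute names and the rule itself are assumptions for the sketch, not a standard.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str
    department: str
    purpose: str          # e.g. "customer_support", "model_training"
    data_sensitivity: str # e.g. "public", "pii", "restricted"

def is_permitted(req: AccessRequest) -> bool:
    # Baseline RBAC: only these roles may use the GenAI application at all.
    if req.role not in {"analyst", "support_agent", "data_scientist"}:
        return False
    # Attribute rules layered on top of the role check.
    if req.data_sensitivity == "restricted":
        return False                                   # never via GenAI in this sketch
    if req.data_sensitivity == "pii":
        return req.department == "operations" and req.purpose == "customer_support"
    return True                                        # non-sensitive data: role check is enough

print(is_permitted(AccessRequest("support_agent", "operations", "customer_support", "pii")))   # True
print(is_permitted(AccessRequest("data_scientist", "research", "model_training", "pii")))      # False
```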

 

Specific challenges for implementing RBAC for GenAI

GenAI applications are those that use large language models (LLMs) to generate natural language texts or perform natural language understanding tasks.

LLMs are powerful tools that can enable various scenarios such as content creation, summarisation, translation, question answering, and conversational agents. However, LLMs also pose significant security challenges that need to be addressed by developers and administrators of GenAI applications. These challenges include:

1️⃣ Protecting the confidentiality and integrity of the data used to train and query the LLMs

2️⃣ Ensuring the availability and reliability of the LLMs and their services

3️⃣ Preventing the misuse or abuse of the LLMs by malicious actors or unintended users

4️⃣ Monitoring and auditing the LLMs' outputs and behaviours for quality, accuracy, and compliance

5️⃣ Managing the ethical and social implications of the LLMs' outputs and impacts

An effective GenAI access control requires a deep understanding of the AI system architecture and a precise identification and definition of the target features and objects accessible by AI users. These target features must be governed individually with entitlements, ensuring users and system resources can access, operate on, and deliver information in line with the entitlements defined.

While RBAC is well understood in enterprise security, implementing it for GenAI systems can, however, be challenging:

1️⃣ Inherent missing access controls: GenAI applications do not inherently have integrated RBAC features, which can lead to a host of data privacy and security issues

2️⃣ Unstructured input: Inputs to Gen AI applications are usually unstructured. Requests (prompts) are usually in natural language, unlike the highly structured API calls of applications where identity-based policies are easier to implement

3️⃣ Natural language output: Typical outcomes from a Gen AI application are natural text that can contain any kind of information in response to the request (prompt). These outcomes may contain sensitive information, which may be in the form of code or unstructured text

4️⃣ Model's inherent structure: AI models are inherently complex and sometimes monolithic. Controlling access to specific parts of the model is complex

5️⃣ Extensibility: Advanced techniques like soft prompts & fine-tuning can extend a model’s existing functionality and are candidates for control

Given the impracticality of deploying multiple models, each trained specifically for individual roles in the RBAC system, access should not be binary. It should also consider additional variables such as hyper-parameters, used to control model behaviour.

The requirement to share the same model across multiple users with different access levels requires a more holistic view of access controls, taking into consideration the inputs, the outputs and, in the case of retrieval-augmented generation (RAG) based models, the AI agents.

 

Pre-requisites for a successful RBAC/ABAC implementation

A successful RBAC implementation for GenAI requires a foundation of key pre-requisites. Organisations must first define clear roles, ensuring each role has well-articulated permissions tailored to data sensitivity and usage within the GenAI framework. Comprehensive data classification is crucial, enabling more granular control over who accesses specific datasets.

Additionally, regular audits and monitoring processes should be established to prevent privilege creep and ensure compliance. Lastly, cross-departmental collaboration is vital, as security, IT, and AI teams must align on policies to effectively manage the unique risks posed by GenAI systems while maintaining operational efficiency.

·   Role definition: agree on role definitions and the role hierarchy
·   Access mapping: mapping of roles and resources to data access entitlements
·   Data classification: classification and categorisation of data
·   Data labelling: labelling raw data based on privacy, security and sensitivity
·   Data curation: creation of training and validation datasets and embeddings, incorporating data classification and privacy elements
·   IAM (Identity & Access Management): roles aligned to IAM and managed via JML processes
·   RBAC: resource role-based policies for controlling access
·   ABAC: resource attribute-based policies for controlling access

 

Implementation approach

Implementing RBAC for AI applications (GenAI, RAG-based, AI models) requires an understanding of the AI architecture in play and is best approached in layers that closely follow the AI architecture components. GenAI models can generate diverse content, including language output, audio, images, and even video.

Our focus initially will be on LLMs, large language models that generate natural language output. This creates a scope boundary and provides a view of the threat surface to be covered for the LLM application, and of the controls required to ensure data privacy and security.

Subsequent enhancements will investigate how the controls can be extended to cover the additional complexities of handling the multimodal capabilities of GenAI applications.

Securing LLM applications using access controls can be achieved by a layered approach, where each layer is secured from unauthorised access along with core data security such as masking and encryption. The layers are:

Layer 1: End user layer access control

* This layer controls who can access the GenAI tools themselves. It involves defining user roles and permissions to determine which employees can interact with the GenAI applications/ agents

* The focus is on ensuring that only authorised users can use the GenAI tools, thereby preventing unauthorised access to the system itself

Layer 2: AI layer access control

* At this layer, access controls what data and functionality the GenAI model & agents can access based on the permissions of the user making the request

* The AI model respects the user's role and permissions when processing prompts and retrieving information, ensuring that sensitive data is accessed only by users with appropriate clearance

Layer 3: Data layer access control

* At this layer, access controls what data the AI model can access based on the permissions of the user making the request

* The focus here is controlling access to data being used and produced by the model

Layer 4: Infrastructure layer access control

* At this layer, access controls who has access to the infrastructure where AI solution has been deployed

* The focus here is in providing secure access to the GenAI deployment infrastructure

To ensure consistent security throughout the AI lifecycle, RBAC policies should integrate model architecture information, training procedures, data, and logs with the access policies used for inference. Adopting a data-centric approach to designing RBAC policies allows organisations to implement granular policies while treating AI systems as a single entity throughout their life cycles. While role-based access control (RBAC) may set a foundational security baseline for enterprise AI systems, it falls short of the nuanced granularity required for data access by agents, and this is where attribute-based access controls come into play.

Put together, RBAC and ABAC should provide a level of security commensurate with the secure use of GenAI applications.

 

Solution architecture for GenAI data access controls

A layered AI solutions architecture is outlined here:

Layer 1: End user layer access control

End user application: a web application acting as a front end to deliver inference

·   Threats: access to the GenAI application

·   Controls: RBAC, controlling who has access to the web application

AI inputs: GenAI inputs, or "prompts", that instruct the GenAI model

Prompt template

·   Threats: prompt injection

·   Controls: RBAC policy to control access to templates; limiting prompt parameter length; limiting prompts to specific formats; restricting parameter values to a predefined set

Prompt intent detection

·   Threats: adversarial attack

·   Controls: DLP (security policies implemented at inference endpoints to ensure data privacy, sensitivity, and exfiltration are controlled); RBAC (role-based access to who (human) and what (system) can access the API); encrypted traffic (bidirectional encryption to control data exploitation while in transit)

API: connects to the underlying application/model, transferring information bidirectionally

·   Threats: access to the GenAI application

·   Controls: DLP (security policies implemented at inference endpoints to ensure data privacy, sensitivity, and exfiltration are controlled); RBAC (role-based access to who (human) and what (system) can access the API); encrypted traffic (bidirectional encryption to control data exploitation while in transit)

AI outcomes: outcomes from GenAI models (primarily unstructured), analysed and flagged for any potentially harmful or sensitive content, with tagging

·   Threats: harmful or sensitive data leakage

·   Controls: RBAC policies on harmful or sensitive content
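To illustrate the prompt-template controls listed above, the following sketch validates template access and prompt parameters before anything reaches the model; the template names, length limits, formats, and allowed values are all hypothetical.

```python
# Minimal sketch of Layer 1 prompt-template controls: restricting parameters
# to an approved template, a maximum length, a format, and a predefined value set.
# Template names, limits, and allowed values are hypothetical.

import re

TEMPLATES = {
    "account_summary": {
        "template": "Summarise recent activity for account {account_id} over {period}.",
        "params": {
            "account_id": {"max_len": 12, "pattern": r"^[0-9]+$"},
            "period": {"allowed": {"7 days", "30 days", "90 days"}},
        },
        "allowed_roles": {"customer_service_agent", "relationship_manager"},
    }
}

def build_prompt(role: str, template_name: str, **params) -> str:
    spec = TEMPLATES[template_name]
    if role not in spec["allowed_roles"]:
        raise PermissionError("Role is not entitled to this template (RBAC check).")
    for name, rules in spec["params"].items():
        value = str(params[name])
        if "max_len" in rules and len(value) > rules["max_len"]:
            raise ValueError(f"{name} exceeds the permitted length.")
        if "pattern" in rules and not re.match(rules["pattern"], value):
            raise ValueError(f"{name} is not in the permitted format.")
        if "allowed" in rules and value not in rules["allowed"]:
            raise ValueError(f"{name} is not in the predefined value set.")
    return spec["template"].format(**params)

print(build_prompt("customer_service_agent", "account_summary",
                   account_id="12345678", period="30 days"))
```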

 

Layer 2: AI layer access control

LLM model weights: the trained, critical parameters for the model

·   Threats: data/IP theft, model manipulation

·   Controls: ABAC (granular policies to control which resources can access or modify GenAI model weights); data encryption (ensure that all data used in training, both at rest and in transit, is encrypted, protecting against unauthorised access and tampering)

LLM hyperparameters: settings like temperature, input context size, and output size, influencing model behaviour and output

·   Threats: model behaviour manipulation / IP theft

·   Controls: ABAC (granular policies to control which resources can access or modify GenAI model hyperparameters)

RAG LLM: enables access to external data not included in the model during training

·   Threats: data leakage, data corruption, service disruption

·   Controls: ABAC (granular policies by which users are assigned specific roles and permissions to use a particular agent or agent capability, covering agents, tools, and the reader/retriever)

Training: training GenAI models with RBAC features

·   Threats: data poisoning

·   Controls: model training (ensuring RBAC principles are applied during the data preprocessing, tokenisation, and embedding stages; this involves filtering the training data based on the defined access controls and ensuring that the LLM only learns from data appropriate for each user role); adversarial training (a defensive technique that introduces adversarial examples into a model's training data to teach the model to correctly classify these inputs as intentionally misleading)

Vector DB: used to store, index, and retrieve embeddings for use by the AI models

·   Threats: data leakage, service disruption, model inference manipulation

·   Controls: RBAC (policies to ensure that only authorised personnel have access to the database and, even within that group, access levels are differentiated based on roles)

Fine-tuning: models for fine-tuning LLMs (e.g. LoRA) which introduce additional tuning parameters

·   Threats: model behaviour manipulation, IP theft, data leakage

·   Controls: ABAC (granular policies to control which resources can access or modify GenAI model tuning parameters)
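As an illustration of the RAG and vector database controls above, the sketch below filters retrieved chunks by the caller's clearance before they are added to the model's context; the in-memory store, classification labels, and pre-computed similarity scores are stand-ins for a real embedding and retrieval stack.

```python
# Sketch of entitlement-aware retrieval for a RAG pipeline: documents carry a
# classification label, and retrieval filters on the caller's clearance before
# anything is added to the model's context. The store and scores are stand-ins.

RANKED_CLEARANCE = {"public": 0, "internal": 1, "confidential": 2}

VECTOR_STORE = [  # each entry: (document text, classification, similarity stub)
    ("Branch opening hours and contact details.", "public", 0.62),
    ("Internal credit policy thresholds for SME lending.", "confidential", 0.91),
    ("FAQ on card replacement timelines.", "internal", 0.74),
]

def retrieve(query: str, user_clearance: str, top_k: int = 2) -> list[str]:
    """Return the top_k most similar chunks the user is cleared to see.
    The query is unused here because similarity scores are pre-computed stubs."""
    ceiling = RANKED_CLEARANCE[user_clearance]
    candidates = [(score, text) for text, label, score in VECTOR_STORE
                  if RANKED_CLEARANCE[label] <= ceiling]
    candidates.sort(reverse=True)                 # highest similarity first
    return [text for _, text in candidates[:top_k]]

# A support agent with "internal" clearance never receives the confidential chunk.
print(retrieve("lending policy", user_clearance="internal"))
```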

 

Layer 3: Data layer access control

Transactional & analytical data store: data warehouses/lakes for real-time transactional business data (financial and non-financial), analytical data, and slow-changing data (e.g. HR data)

·   Threats: data poisoning

·   Controls: RBAC/ABAC (granular control on data access within the organisation); data encryption (ensure that all data used in training, both at rest and in transit, is encrypted, protecting against unauthorised access and tampering)

Journals data store

·   Threats: data and IP theft, service recovery disruption

·   Controls: RBAC/ABAC (granular control on data access within the organisation)
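A small sketch of a data-layer control, assuming invented field names and roles: sensitive fields are masked according to the caller's entitlement before any record is passed into the GenAI pipeline.

```python
# Illustrative data-layer control: mask sensitive fields based on the caller's
# role before data reaches the GenAI pipeline. Field names and rules are assumed.

SENSITIVE_FIELDS = {"national_id", "salary"}
ROLES_WITH_FULL_VIEW = {"credit_analyst"}

def mask_record(record: dict, role: str) -> dict:
    """Return a copy of the record with sensitive fields masked for other roles."""
    if role in ROLES_WITH_FULL_VIEW:
        return dict(record)
    return {k: ("***" if k in SENSITIVE_FIELDS else v) for k, v in record.items()}

customer = {"name": "A. Client", "national_id": "QQ123456C", "salary": 58000}
print(mask_record(customer, "customer_service_agent"))  # sensitive fields masked
print(mask_record(customer, "credit_analyst"))          # full view
```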

 

Layer 4: Infrastructure layer access control

Infrastructure: development, training, and production are some of the infrastructure environments used to deploy AI inferencing applications

·   Threats: service disruption, privacy and compliance breaches

·   Controls: RBAC/ABAC (granular control on AI infrastructure); data encryption (ensure that all data used in training, both at rest and in transit, is encrypted, protecting against unauthorised access and tampering)

Entitlements management: defining the user entitlements and permissions required to enforce access controls

·   Threats: gaining unauthorised access to any system within the organisation; information theft

·   Controls: RBAC (policies to ensure that only authorised personnel have access to the entitlements management application and data)

 

Conclusion

In conclusion, effectively implementing role-based access control (RBAC) for generative AI is crucial for safeguarding sensitive data while maximising the technology's potential. By establishing clear roles, conducting thorough data classification, and fostering collaboration across teams, organisations can create a robust security framework that mitigates risks associated with GenAI. Regular audits and monitoring will further enhance the system’s resilience against insider threats and compliance breaches. As the landscape of AI continues to evolve, organisations must remain proactive in refining their access control strategies to ensure that innovation does not come at the expense of security and data integrity.

 

References

  1. https://www.ibm.com/reports/data-breach

 

 


Blueprints for Tomorrow

The Evolution of Business Process Modelling in the AI Era

Introduction

In today’s rapidly evolving financial landscape, Business Process Management (BPM) has emerged as a critical driver of operational excellence and competitive advantage. Financial services firms are grappling with increasing regulatory complexities, heightened customer expectations, and the relentless pace of technological change. In this context, BPM offers a systematic approach to streamline processes, enhance efficiency, and foster innovation. By effectively managing and optimising business processes, financial institutions can not only improve their operational performance but also adapt swiftly to market dynamics, ensuring sustained growth and profitability.

 

Process Diagrams are NOT Process Models

Many firms make a limited effort to draw process diagrams in PowerPoint and Visio; PowerPoint, at least, is available for everyone to use in every firm. But the sheer volume of processes that need modelling overwhelms firms trying to do the right thing. There is often no agreed standard for capturing processes. BPMN (Business Process Model and Notation) is commonly used but can be arcane and too detailed. Flowchart formats are also used. But nothing is ever consistent.

A diagram is just a drawing; boxes, lines, and words on a page. A model takes those boxes, lines, and words and adds meaning, context, and relationships. A named activity in one process can be reused in another process. In a process diagram, this is just a copy with no link between the two. In a process model, the model knows that these two activities are the same thing.

Modelling takes a little more effort as you need to choose a modelling tool and standards and set up governance around your process models. But the small amount of time spent doing this will reap huge short and long-term benefits once you start modelling for real.
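To make the diagram-versus-model distinction concrete, here is a small sketch with illustrative process and activity names: because activities are shared objects rather than copied boxes, the model knows that the same activity appears in two processes.

```python
# Sketch of the difference between a diagram and a model: activities are shared
# objects, so reuse is explicit rather than a copy-pasted box. Names are illustrative.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Activity:
    name: str
    owner: str

@dataclass
class Process:
    name: str
    activities: list[Activity] = field(default_factory=list)

verify_identity = Activity("Verify customer identity", owner="KYC team")

onboarding = Process("Client onboarding", [verify_identity,
                                           Activity("Open account", "Operations")])
remortgage = Process("Remortgage application", [verify_identity,
                                                Activity("Assess affordability", "Credit")])

# In a model, the reuse is queryable: the same object appears in both processes.
shared = set(onboarding.activities) & set(remortgage.activities)
print([a.name for a in shared])   # ['Verify customer identity']
```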

 

The Importance of BPM

According to a report by Gartner, organisations that implement BPM effectively can achieve a 20-30% improvement in process efficiency, highlighting its transformative potential in the financial sector. However, many institutions only scratch the surface by stopping at process modelling, missing out on the broader benefits of a fully optimised process landscape.

The benefits of process modelling include:

1️⃣ Transparency: By visualising processes, stakeholders gain a clear understanding of how various activities interconnect, facilitating better communication and collaboration across departments.

2️⃣ Optimisation: Process models help identify inefficiencies, redundancies, and bottlenecks, enabling organisations to implement targeted improvements that enhance performance and reduce costs.

3️⃣ Standardisation: Process modelling ensures consistency in operations, which is essential for maintaining quality and compliance in financial services.

4️⃣ Compliance: Detailed process documentation ensures that all activities adhere to regulatory standards and internal policies, reducing the risk of non-compliance and associated penalties.

5️⃣ Better Decision-Making: Comprehensive process analysis provides valuable insights that inform strategic planning and operational decisions, supporting data-driven management practices.

6️⃣ Training and Onboarding: Well-defined processes make it easier to train new employees and integrate them into the organisation. Some BPM tools enable you to create detailed procedures and manuals based on your existing processes.

7️⃣ Workflow: Once you have your processes modelled, you can use them to execute your processes in a workflow. A workflow controls how the process is executed by your teams. Workflows track metrics on process performance to provide opportunities for improvement based on actual results.

In our work with several financial institutions on operating models and process definition, we consistently observe a recurring challenge: many organisations miss out on realising the next significant wave of benefits from process optimisation. Their efforts often stall in debates over which vendor to select for process mapping and how to implement future processes. This approach limits their ability to unlock the full potential of process improvements. The integration of emerging technologies such as Artificial Intelligence (AI), low-code platforms, and process mining can be transformative, enabling institutions to break free from this cycle and drive more substantial, long-term value.

 

The Future of BPM - Unlocking Potential with Emerging Technologies

Artificial Intelligence (AI) is transforming BPM by introducing advanced capabilities for automation, optimisation, and decision-making. AI encompasses a range of technologies, including machine learning (ML), natural language processing (NLP), and robotic process automation (RPA), that enhance BPM systems' ability to manage and analyse complex, data-intensive tasks.

AI in BPM goes beyond simple automation to include intelligent process automation (IPA), which combines AI with traditional automation to handle more complex tasks. For instance, AI can analyse vast amounts of data to identify inefficiencies and patterns, predict outcomes, and generate actionable insights. This enables financial services firms to streamline operations, reduce manual intervention, and enhance decision-making processes. By integrating AI, organisations can achieve improved process efficiency, enhanced customer experiences, and better compliance.

Process Simulation

Process simulation involves creating virtual models of business processes to predict their performance under various scenarios. AI enhances process simulation by enabling more sophisticated modelling and forecasting. AI algorithms can simulate complex process interactions, predict potential bottlenecks, and evaluate the impact of different changes in real-time.

This proactive approach helps organisations anticipate challenges and make data-driven decisions before implementing changes. By leveraging AI for process simulation, financial services firms can test and refine processes in a controlled environment, leading to more effective and resilient operational strategies.
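For illustration, a very small simulation sketch (the arrival rates, processing times, and staffing levels are invented) shows how a modelled process can be run under different scenarios to expose a bottleneck before any real change is made:

```python
# Tiny Monte Carlo sketch of process simulation: compare average queue time for a
# loan-approval step under two staffing levels. All rates and times are invented.

import random

def simulate(n_cases: int, n_approvers: int, mean_service_mins: float, mean_gap_mins: float) -> float:
    """Average wait (minutes) for n_cases arriving at random intervals."""
    random.seed(7)
    free_at = [0.0] * n_approvers          # time each approver next becomes free
    clock, total_wait = 0.0, 0.0
    for _ in range(n_cases):
        clock += random.expovariate(1 / mean_gap_mins)        # next case arrives
        approver = min(range(n_approvers), key=lambda i: free_at[i])
        start = max(clock, free_at[approver])
        total_wait += start - clock
        free_at[approver] = start + random.expovariate(1 / mean_service_mins)
    return total_wait / n_cases

print("2 approvers:", round(simulate(1000, 2, 25, 15), 1), "min avg wait")
print("3 approvers:", round(simulate(1000, 3, 25, 15), 1), "min avg wait")
```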

Process Mining

Process mining utilises data from event logs to visualise and analyse the actual flow of business processes. AI-powered process mining provides deeper insights by analysing large volumes of data to identify inefficiencies, compliance issues, and areas for improvement.

This technology allows organisations to uncover hidden patterns and deviations from standard procedures, leading to more informed and targeted process optimisations. With a projected compound annual growth rate of around 40% from 2022 to 2028, process mining will play a crucial role in enhancing process visibility and performance.
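A hedged sketch of the core idea behind process mining: given an event log of case, activity, and timestamp, derive where cases spend the most time. The column layout and the toy log are invented; production deployments rely on dedicated tools over millions of events.

```python
# Minimal process-mining sketch: compute average time spent between consecutive
# activities in an event log to surface bottlenecks. The log is a toy example.

from collections import defaultdict
from datetime import datetime

EVENT_LOG = [  # (case_id, activity, timestamp)
    ("c1", "Application received", "2025-01-06 09:00"),
    ("c1", "Credit check",         "2025-01-06 09:40"),
    ("c1", "Approval",             "2025-01-08 11:00"),
    ("c2", "Application received", "2025-01-06 10:15"),
    ("c2", "Credit check",         "2025-01-06 10:35"),
    ("c2", "Approval",             "2025-01-09 16:00"),
]

def average_transition_hours(log):
    """Average elapsed hours between each pair of consecutive activities per case."""
    by_case = defaultdict(list)
    for case, activity, ts in log:
        by_case[case].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))
    totals = defaultdict(lambda: [0.0, 0])
    for events in by_case.values():
        events.sort()
        for (t0, a0), (t1, a1) in zip(events, events[1:]):
            totals[(a0, a1)][0] += (t1 - t0).total_seconds() / 3600
            totals[(a0, a1)][1] += 1
    return {k: round(s / n, 1) for k, (s, n) in totals.items()}

print(average_transition_hours(EVENT_LOG))
# The ('Credit check', 'Approval') transition dominates -> approval is the bottleneck.
```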

Intelligent Automation

Intelligent automation combines technologies such as robotic process automation (RPA), AI, machine learning (ML), and business process management (BPM) to enhance tasks and decision-making across organisations. By integrating AI with RPA, businesses can automate not only routine, repetitive tasks but also more complex processes such as fraud detection, customer onboarding, and regulatory compliance, all in real time. This reduces manual effort, minimises errors, and improves operational efficiency, enabling employees to focus on more strategic activities. As these technologies continue to evolve, intelligent automation will drive greater value, adaptability, and sustained growth for organisations.

 

Case Studies

Case Study 1: HSBC Enhances Customer Service with AI and BPM

A leading global bank, HSBC, integrated AI-powered chatbots into its customer service processes through a robust BPM framework. This integration resulted in a 40% reduction in response time and a 25% increase in customer satisfaction scores. HSBC also utilised process modelling to streamline its loan approval process, reducing approval times from weeks to days and improving overall operational efficiency. (Source: HSBC Annual Report 2021, Deloitte Insights on AI in Financial Services)

Case Study 2: Process Mining at Deutsche Bank

Deutsche Bank leveraged process mining technology to optimise its operational processes across various business units. The bank used Celonis, a leading process mining tool, to analyse millions of transaction records and uncover inefficiencies in their processes. By visualising and understanding their process flows in real-time, Deutsche Bank identified bottlenecks and deviations from standard procedures, leading to significant improvements in efficiency and compliance. For instance, they were able to reduce loan processing times by 15% and improve overall process conformance, resulting in enhanced customer satisfaction and reduced operational costs. (Source: Celonis Implementation at Deutsche Bank)

Case Study 3: Insurance Firm Streamlines Claims Processing with Low-Code Platforms

An international insurance company, Zurich Insurance Group, adopted a BPM solution to revamp its claims processing system. The implementation enabled rapid development and deployment of customised workflows, reducing processing times by 50% and decreasing operational costs by 30%. The enhanced process transparency and automation also led to improved compliance and audit readiness. (Source: Forrester Research on Low-Code Platforms, Zurich Insurance Case Study by Appian)

 

Actionable Steps for Realising the Opportunity with BPM

Financial services firms looking to adopt and enhance BPM should consider the following actionable steps:

1️⃣ Start Small: Don’t try to model everything. Pick one important process that you know is not working. Think about the process at a high level to begin with. Once you have the basic process modelled, you can drill down into more detail. Then you can expand to other high-priority areas.

2️⃣ Work Top-Down: In the process you have chosen, what are the 5-7 most important activities that happen in the process? Once you have captured those in your model, pick each one of those 5-7 and go into the next level of detail.

3️⃣ Define Clear Objectives and KPIs: Establish specific goals for BPM initiatives aligned with overall business strategy and identify key performance indicators to measure success.

4️⃣ Conduct a Comprehensive Process Audit: Begin by mapping and analysing existing processes to identify areas for improvement and prioritise initiatives based on impact and feasibility.

5️⃣ Leverage Appropriate Technologies: Select and implement technologies such as AI, low-code platforms, and cloud solutions that align with organisational needs and capabilities.

6️⃣ Seed with a Skilled Team: Invest in training and developing a team skilled in BPM methodologies and technologies, drawing on in-house staff and partners, and fostering a culture of continuous improvement and innovation.

7️⃣ Adopt an Iterative Development Approach: Embrace rapid prototyping and iterative development to quickly deliver initial versions of new processes. Get these processes into use early, gathering feedback from real-world application, and then refine them based on this feedback. This approach accelerates time to value and ensures that solutions are continuously improved in response to actual user needs and evolving business conditions.

8️⃣ Monitor and Refine Continuously: Regularly review process performance against KPIs and make necessary adjustments to sustain and enhance improvements over time.

 

Conclusion

Business Process Management is not merely a tool for operational efficiency; it is a strategic enabler that empowers financial services firms to navigate complexity, embrace innovation, and achieve sustained competitive advantage. By focusing on the next wave of opportunity with BPM, financial institutions can optimise their processes, integrate emerging technologies, and adapt to the ever-changing market dynamics.

For those interested in delving deeper, we offer access to the results of our extensive analysis of 100 BPM solutions. Our evaluation covered key aspects such as capabilities, technical functionality, and product architecture.

To discuss these insights further or to understand how they can be applied to your organisation, please contact Leading Point co-founder and process specialist Thushan Kumaraswamy, who is available to provide expert guidance and tailored recommendations.

 

References and Further Reading

1️⃣ "Business Process Management: Concepts, Languages, Architectures" by Mathias Weske

2️⃣ Gartner Research Reports on BPM and Emerging Technologies

3️⃣ "The Ultimate Guide to Business Process Management" by BPMInstitute.org

4️⃣ Deloitte’s Insights on "Transforming Financial Services through BPM"

5️⃣ "Process Mining: Data Science in Action" by Wil van der Aalst

 

 


AI Under Scrutiny

Why AI risk & governance should be a focus area for financial services firms

 

Introduction

As financial services firms increasingly integrate artificial intelligence (AI) into their operations, the imperative to focus on AI risk & governance becomes paramount. AI offers transformative potential, driving innovation, enhancing customer experiences, and streamlining operations. However, with this potential comes significant risks that can undermine the stability, integrity, and reputation of financial institutions. This article delves into the critical importance of AI risk & governance for financial services firms, providing a detailed exploration of the associated risks, regulatory landscape, and practical steps for effective implementation. Our goal is to persuade financial services firms to prioritise AI governance to safeguard their operations and ensure regulatory compliance.

 

The Growing Role of AI in Financial Services

AI adoption in the financial services industry is accelerating, driven by its ability to analyse vast amounts of data, automate complex processes, and provide actionable insights. Financial institutions leverage AI for various applications, including fraud detection, credit scoring, risk management, customer service, and algorithmic trading. According to a report by McKinsey & Company, AI could potentially generate up to $1 trillion of additional value annually for the global banking sector.

 

Applications of AI in Financial Services

1 Fraud Detection and Prevention: AI algorithms analyse transaction patterns to identify and prevent fraudulent activities, reducing losses and enhancing security.

2 Credit Scoring and Risk Assessment: AI models evaluate creditworthiness by analysing non-traditional data sources, improving accuracy and inclusivity in lending decisions.

3 Customer Service and Chatbots: AI-powered chatbots and virtual assistants provide 24/7 customer support, while machine learning algorithms offer personalised product recommendations.

4 Personalised Financial Planning: AI-driven platforms offer tailored financial advice and investment strategies based on individual customer profiles, goals, and preferences, enhancing client engagement and satisfaction.

 

Potential Benefits of AI

The benefits of AI in financial services are manifold, including increased efficiency, cost savings, enhanced decision-making, and improved customer satisfaction. AI-driven automation reduces manual workloads, enabling employees to focus on higher-value tasks. Additionally, AI's ability to uncover hidden patterns in data leads to more informed and timely decisions, driving competitive advantage.

 

The Importance of AI Governance

AI governance encompasses the frameworks, policies, and practices that ensure the ethical, transparent, and accountable use of AI technologies. It is crucial for managing AI risks and maintaining stakeholder trust. Without robust governance, financial services firms risk facing adverse outcomes such as biased decision-making, regulatory penalties, reputational damage, and operational disruptions.

 

Key Components of AI Governance

1 Ethical Guidelines: Establishing ethical principles to guide AI development and deployment, ensuring fairness, accountability, and transparency.

2 Risk Management: Implementing processes to identify, assess, and mitigate AI-related risks, including bias, security vulnerabilities, and operational failures.

3 Regulatory Compliance: Ensuring adherence to relevant laws and regulations governing AI usage, such as data protection and automated decision-making.

4 Transparency and Accountability: Promoting transparency in AI decision-making processes and holding individuals and teams accountable for AI outcomes.

 

Risks of Neglecting AI Governance

Neglecting AI governance can lead to several significant risks:

1 Embedded bias: AI algorithms can unintentionally perpetuate biases if trained on biased data or if developers inadvertently incorporate them. This can lead to unfair treatment of certain groups and potential violations of fair lending laws.

2 Explainability and complexity: AI models can be highly complex, making it challenging to understand how they arrive at decisions. This lack of explainability raises concerns about transparency, accountability, and regulatory compliance.

3 Cybersecurity: Increased reliance on AI systems raises cybersecurity concerns, as hackers may exploit vulnerabilities in AI algorithms or systems to gain unauthorised access to sensitive financial data.

4 Data privacy: AI systems rely on vast amounts of data, raising privacy concerns related to the collection, storage, and use of personal information.

5 Robustness: AI systems may not perform optimally in certain situations and are susceptible to errors. Adversarial attacks can compromise their reliability and trustworthiness.

6 Impact on financial stability: Widespread adoption of AI in the financial sector can have implications for financial stability, potentially amplifying market dynamics and leading to increased volatility or systemic risks.

7 Underlying data risks: AI models are only as good as the data that supports them. Incorrect or biased data can lead to inaccurate outputs and decisions.

8 Ethical considerations: The potential displacement of certain roles due to AI automation raises ethical concerns about societal implications and firms' responsibilities to their employees.

9 Regulatory compliance: As AI becomes more integral to financial services, there is an increasing need for transparency and regulatory explainability in AI decisions to maintain compliance with evolving standards.

10 Model risk: The complexity and evolving nature of AI technologies mean that their strengths and weaknesses are not yet fully understood, potentially leading to unforeseen pitfalls in the future.

 

To address these risks, financial institutions need to implement robust risk management frameworks, enhance data governance, develop AI-ready infrastructure, increase transparency, and stay updated on evolving regulations specific to AI in financial services.

The consequences of inadequate AI governance can be severe. Financial institutions that fail to implement proper risk management and governance frameworks may face significant financial penalties, reputational damage, and regulatory scrutiny. The proposed EU AI Act, for instance, outlines fines of up to €30 million or 6% of global annual turnover for non-compliance. Beyond regulatory consequences, poor AI governance can lead to biased decision-making, privacy breaches, and erosion of customer trust, all of which can have long-lasting impacts on a firm's operations and market position.

 

Regulatory Requirements

The regulatory landscape for AI in financial services is evolving rapidly, with regulators worldwide introducing guidelines and standards to ensure the responsible use of AI. Compliance with these regulations is not only a legal obligation but also a critical component of building a sustainable and trustworthy AI strategy.

 

Key Regulatory Frameworks

1 General Data Protection Regulation (GDPR): The European Union's GDPR imposes strict requirements on data processing and the use of automated decision-making systems, ensuring transparency and accountability.

2 Financial Conduct Authority (FCA): The FCA in the UK has issued guidance on AI and machine learning, emphasising the need for transparency, accountability, and risk management in AI applications.

3 Federal Reserve: The Federal Reserve in the US has provided supervisory guidance on model risk management, highlighting the importance of robust governance and oversight for AI models.

4 Monetary Authority of Singapore (MAS): MAS has introduced guidelines for the ethical use of AI and data analytics in financial services, promoting fairness, ethics, accountability, and transparency (FEAT).

5 EU AI Act: This new act aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

 

Importance of Compliance

Compliance with regulatory requirements is essential for several reasons:

1 Legal Obligation: Financial services firms must adhere to laws and regulations governing AI usage to avoid legal penalties and fines.

2 Reputational Risk: Non-compliance can damage a firm's reputation, eroding trust with customers, investors, and regulators.

3 Operational Efficiency: Regulatory compliance ensures that AI systems are designed and operated according to best practices, enhancing efficiency and effectiveness.

4 Stakeholder Trust: Adhering to regulatory standards builds trust with stakeholders, demonstrating a commitment to responsible and ethical AI use.

 

Identifying AI Risks

AI technologies pose several specific risks to financial services firms that must be identified and mitigated through effective governance frameworks.

 

Bias and Discrimination

AI systems can reflect and reinforce biases present in training data, leading to discriminatory outcomes. For instance, biased credit scoring models may disadvantage certain demographic groups, resulting in unequal access to financial services. Addressing bias requires rigorous data governance practices, including diverse and representative training data, regular bias audits, and transparent decision-making processes.
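
As an illustration of what one element of a bias audit might look like in practice, the sketch below compares approval rates across two hypothetical groups and applies a simplified "four-fifths"-style check; the data, group labels, and threshold are assumptions for demonstration only.

```python
from collections import defaultdict

# Illustrative model outcomes: (group, approved) pairs from a hypothetical
# credit-scoring model. Real audits would use production decision logs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in outcomes:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates:", rates)

# Simplified check inspired by the 'four-fifths rule': flag any group whose
# approval rate falls below 80% of the best-performing group's rate.
best_rate = max(rates.values())
flagged = [group for group, rate in rates.items() if rate < 0.8 * best_rate]
print("Groups to investigate:", flagged or "none")
```

A flag of this kind is a prompt for investigation, not proof of unfairness; the appropriate metric and threshold depend on the product, the data, and the applicable regulation.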

 

Security Risks

AI systems are vulnerable to various security threats, including cyberattacks, data breaches, and adversarial manipulations. Cybercriminals can exploit vulnerabilities in AI models to manipulate outcomes or gain unauthorised access to sensitive financial data. Ensuring the security and integrity of AI systems involves implementing robust cybersecurity measures, regular security assessments, and incident response plans.

 

Operational Risks

AI-driven processes can fail or behave unpredictably under certain conditions, potentially disrupting critical financial services. For example, algorithmic trading systems can trigger market instability if not responsibly managed. Effective governance frameworks include comprehensive testing, continuous monitoring, and contingency planning to mitigate operational risks and ensure reliable AI performance.

 

Compliance Risks

Failure to adhere to regulatory requirements can result in significant fines, legal consequences, and reputational damage. AI systems must be designed and operated in compliance with relevant laws and regulations, such as data protection and automated decision-making guidelines. Regular compliance audits and updates to governance frameworks are essential to ensure ongoing regulatory adherence.

 

Benefits of Effective AI Governance

Implementing robust AI governance frameworks offers numerous benefits for financial services firms, enhancing risk management, trust, and operational efficiency.

 

Risk Mitigation

Effective AI governance helps identify, assess, and mitigate AI-related risks, reducing the likelihood of adverse outcomes. By implementing comprehensive risk management processes, firms can proactively address potential issues and ensure the safe and responsible use of AI technologies.

 

Enhanced Trust and Transparency

Transparent and accountable AI practices build trust with customers, regulators, and other stakeholders. Clear communication about AI decision-making processes, ethical guidelines, and risk management practices demonstrates a commitment to responsible AI use, fostering confidence and credibility.

 

Regulatory Compliance

Adhering to governance frameworks ensures compliance with current and future regulatory requirements, minimising legal and financial repercussions. Robust governance practices align AI development and deployment with regulatory standards, reducing the risk of non-compliance and associated penalties.

 

Operational Efficiency

Governance frameworks streamline the development and deployment of AI systems, promoting efficiency and consistency in AI-driven operations. Standardised processes, clear roles and responsibilities, and ongoing monitoring enhance the effectiveness and reliability of AI applications, driving operational excellence.

 

Case Studies

Several financial services firms have successfully implemented AI governance frameworks, demonstrating the tangible benefits of proactive risk management and responsible AI use.

 

JP Morgan Chase

JP Morgan Chase has established a comprehensive AI governance structure that includes an AI Ethics Board, regular audits, and robust risk assessment processes. The AI Ethics Board oversees the ethical implications of AI applications, ensuring alignment with the bank's values and regulatory requirements. Regular audits and risk assessments help identify and mitigate AI-related risks, enhancing the reliability and transparency of AI systems.

 

ING Group

ING Group has developed an AI governance framework that emphasises transparency, accountability, and ethical considerations. The framework includes guidelines for data usage, model validation, and ongoing monitoring, ensuring that AI applications align with the bank's values and regulatory requirements. By prioritising responsible AI use, ING has built trust with stakeholders and demonstrated a commitment to ethical and transparent AI practices.

 

HSBC

HSBC has implemented a robust AI governance framework that focuses on ethical AI development, risk management, and regulatory compliance. The bank's AI governance framework includes a dedicated AI Ethics Committee, comprehensive risk management processes, and regular compliance audits. These measures ensure that AI applications are developed and deployed responsibly, aligning with regulatory standards and ethical guidelines.

 

Practical Steps for Implementation

To develop and implement effective AI governance frameworks, financial services firms should consider the following actionable steps:

 

Establish a Governance Framework

Develop a comprehensive AI governance framework that includes policies, procedures, and roles and responsibilities for AI oversight. The framework should outline ethical guidelines, risk management processes, and compliance requirements, providing a clear roadmap for responsible AI use.

 

Create an AI Ethics Board

Form an AI Ethics Board or committee to oversee the ethical implications of AI applications and ensure alignment with organisational values and regulatory requirements. The board should include representatives from diverse departments, including legal, compliance, risk management, and technology.

 

Implement Specific AI Risk Management Processes

Conduct regular risk assessments to identify and mitigate AI-related risks. Implement robust monitoring and auditing processes to ensure ongoing compliance and performance. Risk management processes should include bias audits, security assessments, and contingency planning to address potential operational failures.

 

Ensure Data Quality and Integrity

Establish data governance practices to ensure the quality, accuracy, and integrity of data used in AI systems. Address potential biases in data collection and processing, and implement measures to maintain data security and privacy. Regular data audits and validation processes are essential to ensure reliable and unbiased AI outcomes.

 

Invest in Training and Awareness

Provide training and resources for employees to understand AI technologies, governance practices, and their roles in ensuring ethical and responsible AI use. Ongoing education and awareness programs help build a culture of responsible AI use, promoting adherence to governance frameworks and ethical guidelines.

 

Engage with Regulators and Industry Bodies

Stay informed about regulatory developments and industry best practices. Engage with regulators and industry bodies to contribute to the development of AI governance standards and ensure alignment with evolving regulatory requirements. Active participation in industry forums and collaborations helps stay ahead of regulatory changes and promotes responsible AI use.

 

Conclusion

As financial services firms continue to embrace AI, the importance of robust AI risk & governance frameworks cannot be overstated. By proactively addressing the risks associated with AI and implementing effective governance practices, firms can unlock the full potential of AI technologies while safeguarding their operations, maintaining regulatory compliance, and building trust with stakeholders. Prioritising AI risk & governance is not just a regulatory requirement but a strategic imperative for the sustainable and ethical use of AI in financial services.

 

References and Further Reading

  1. McKinsey & Company. (2020). The AI Bank of the Future: Can Banks Meet the AI Challenge?
  2. European Union. (2018). General Data Protection Regulation (GDPR).
  3. Financial Conduct Authority (FCA). (2019). Guidance on the Use of AI and Machine Learning in Financial Services.
  4. Federal Reserve. (2020). Supervisory Guidance on Model Risk Management.
  5. JP Morgan Chase. (2021). AI Ethics and Governance Framework.
  6. ING Group. (2021). Responsible AI: Our Approach to AI Governance.
  7. Monetary Authority of Singapore (MAS). (2019). FEAT Principles for the Use of AI and Data Analytics in Financial Services.

 

For further reading on AI governance and risk management in financial services, consider the following resources:

- "Artificial Intelligence: A Guide for Financial Services Firms" by Deloitte

- "Managing AI Risk in Financial Services" by PwC

- "AI Ethics and Governance: A Global Perspective" by the World Economic Forum


Strengthening Information Security

The Combined Power of Identity & Access Management and Data Access Controls

The digital age presents a double-edged sword for businesses. While technology advancements offer exciting capabilities in cloud, data analytics, and customer experience, they also introduce new security challenges. Data breaches are a constant threat, costing businesses an average of $4.45 million per incident, according to a 2023 IBM report (https://www.ibm.com/reports/data-breach), and eroding consumer trust. Traditional security measures often fall short, leaving vulnerabilities for attackers to exploit. These attackers, targeting poorly managed identities and weak data protection, aim to disrupt operations, steal sensitive information, or even hold companies hostage. The impact extends beyond the business itself, damaging customers, stakeholders, and the broader financial market.

In response to these evolving threats, the European Union (EU) has implemented the Digital Operational Resilience Act (DORA) (Regulation (EU) 2022/2554). This regulation focuses on strengthening information and communications technology (ICT) resilience standards in the financial services sector. While designed for the EU, DORA’s requirements offer valuable insights for businesses globally, especially those with operations in the EU or the UK. DORA mandates that financial institutions define, approve, oversee, and be accountable for implementing a robust risk-management framework. This is where identity & access management (IAM) and data access controls (DAC) come in.

The Threat Landscape and Importance of Data Security

Data breaches are just one piece of the security puzzle. Malicious entities also employ malware, phishing attacks, and even exploit human error to gain unauthorised access to sensitive data. Regulatory compliance further emphasises the importance of data security. Frameworks like GDPR and HIPAA mandate robust data protection measures. Failure to comply can result in hefty fines and reputational damage.

Organisations, in a rapidly-evolving hybrid working environment, urgently need to implement or review their information security strategy. This includes solutions that not only reduce the attack surface but also improve control over who accesses what data within the organisation. IAM and DAC, along with fine-grained access provisioning for various data formats, are critical components of a strong cybersecurity strategy.

Keep reading to learn the key differences between IAM and DAC, and how they work in tandem to create a strong security posture.

Identity & Access Management (IAM)

Think of IAM as the gatekeeper to your digital environment. It ensures only authorised users can access specific systems and resources. Here is a breakdown of its core components:

  1. Identity Management (authentication): This involves creating, managing, and authenticating user identities. IAM systems manage user provisioning (granting access), authentication (verifying user identity through methods like passwords or multi-factor authentication [MFA]), and authorisation (determining user permissions). Common identity management practices include:
    • Single Sign-On (SSO): Users can access multiple applications with a single login, improving convenience and security.
    • Multi-Factor Authentication (MFA): An extra layer of security requiring an additional verification factor beyond a password (e.g., fingerprint, security code).
    • Passwordless: A recent usability improvement removes the use of passwords and replaces them with authentication apps and biometrics.
    • Adaptive or Risk-based Authentication: Uses AI and machine learning to analyse user behaviour and adjust authentication requirements in real-time based on risk level.
  2. Access Management (authorisation): Once a user’s identity has been authenticated, access management determines which resources that user can reach. IAM systems apply tailored access policies based on user identities and other attributes, controlling access to applications, data, and other resources.

Advanced IAM concepts like Privileged Access Management (PAM) focus on securing access for privileged users with high-level permissions, while Identity Governance ensures user access is reviewed and updated regularly.
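
A minimal sketch of the two IAM stages described above is shown below: authentication first, then a coarse-grained application-level authorisation check. The user store, role names, and policy table are illustrative assumptions, not a reference to any particular IAM product.

```python
# Illustrative sketch of the two IAM stages described above: authentication
# (who are you?) followed by coarse-grained authorisation (what can you reach?).
USERS = {
    "asmith": {"password_hash": "<hashed>", "mfa_enrolled": True, "roles": {"sales_rep"}},
}
APP_POLICY = {
    "crm": {"sales_rep", "sales_manager"},
    "payments": {"ops_analyst"},
}

def authenticate(username: str, password_ok: bool, mfa_ok: bool) -> bool:
    user = USERS.get(username)
    # Both factors must pass for users enrolled in MFA.
    return bool(user) and password_ok and (mfa_ok or not user["mfa_enrolled"])

def authorise(username: str, application: str) -> bool:
    # Does any of the user's roles unlock this application?
    return bool(USERS[username]["roles"] & APP_POLICY.get(application, set()))

if authenticate("asmith", password_ok=True, mfa_ok=True):
    print("CRM access:", authorise("asmith", "crm"))            # True
    print("Payments access:", authorise("asmith", "payments"))  # False
```

Note that this stops at the application boundary; deciding which records the user may see inside the CRM is the job of the data access controls described next.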

Data Access Control (DAC)

While IAM focuses on user identities and overall system access, DAC takes a more granular approach, regulating access to specific data stored within those systems. Here are some common DAC models:

  • Discretionary Access Control (also DAC): Allows data owners to manage access permissions for other users. While offering flexibility, it can lead to inconsistencies and security risks if not managed properly. One example of this is UNIX files, where an owner of a file can grant or deny other users access.
  • Mandatory Access Control (MAC): Here, the system enforces access based on pre-defined security labels assigned to data and users. This offers stricter control but requires careful configuration.
  • Role-Based Access Control (RBAC): This approach complements IAM RBAC by defining access permissions for specific data sets based on user roles.
  • Attribute-Based Access Control (ABAC): Permissions are granted based on a combination of user attributes, data attributes, and environmental attributes, offering a more dynamic and contextual approach.
  • Encryption: Data is rendered unreadable without the appropriate decryption key, adding another layer of protection.
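
To illustrate how the attribute-based model (ABAC) listed above differs from simple role checks, here is a minimal sketch that combines user, data, and environmental attributes into a single access decision. The attribute names and the rule itself are assumptions chosen for demonstration.

```python
from datetime import time

def abac_allow(user: dict, record: dict, context: dict) -> bool:
    """Illustrative ABAC rule combining user, data, and environmental attributes."""
    same_country = user["country"] == record["country"]
    supports_product = record["product"] in user["products"]
    business_hours = time(7, 0) <= context["local_time"] <= time(19, 0)
    clearance_ok = user["clearance"] >= record["sensitivity"]
    return same_country and supports_product and business_hours and clearance_ok

user = {"country": "UK", "products": {"mortgages"}, "clearance": 2}
record = {"country": "UK", "product": "mortgages", "sensitivity": 2}
context = {"local_time": time(10, 30)}

print(abac_allow(user, record, context))  # True under these assumed attributes
```

Because the decision is computed from attributes at request time, adding a new country or product changes data, not the policy itself.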

IAM vs. DAC: Key Differences and Working Together

While IAM and DAC serve distinct purposes, they work in harmony to create a comprehensive security posture. Here is a summary of the key differences:

  • Description – IAM controls access to applications; DAC controls access to the data held within those applications.
  • Granularity – IAM is broader, managing access to entire systems; DAC is more fine-grained, controlling access to specific data based on user and data attributes.
  • Enforcement – IAM is enforced through identity policies applied to users; DAC is enforced at the data level, either by the system (as in MAC) or by data owners (as in discretionary access control).

Imagine an employee accessing customer data in a CRM system. IAM verifies their identity and grants access to the CRM application. However, DAC determines what specific customer data they can view or modify based on their role (e.g., a sales representative might have access to contact information but not financial details).

Dispelling Common Myths

Several misconceptions surround IAM and DAC. Here is why they are not entirely accurate:

  • Myth 1: IAM is all I need. The most common mistake that organisations make is to conflate IAM and DAC, or worse, assume that if they have IAM, that includes DAC. Here is a hint. It does not.
  • Myth 2: IAM is only needed by large enterprises. Businesses of all sizes must use IAM to secure access to their applications and ensure compliance. Scalable IAM solutions are readily available.
  • Myth 3: More IAM tools equal better security. A layered approach is crucial. Implementing too many overlapping IAM tools can create complexity and management overhead. Focus on choosing the right tools that complement each other and address specific security needs.
  • Myth 4: Data access control is enough for complete security. While DAC plays a vital role, it is only one piece of the puzzle. Strong IAM practices ensure authorised users are accessing systems, while DAC manages their access to specific data within those systems. A comprehensive security strategy requires both.

Tools for Effective IAM and DAC

There are various IAM and DAC solutions available, and the best choice depends on your specific needs. While Active Directory remains a popular IAM solution for Windows-based environments, it may not be ideal for complex IT infrastructures or organisations managing vast numbers of users and data access needs.

Imagine a scenario where your application has 1,000 users and holds sensitive & personal customer information for 1,000,000 customers split across ten countries and five products. Not every user should see every customer record. It might be limited to the country the user works in and the specific product they support. This is the “Principle of Least Privilege.” Applying this principle is critical to demonstrating you have appropriate data access controls.

To control access to this data, you would need to create tens of thousands of AD groups for every combination of country or countries and product or products. This is unsustainable and makes choosing AD groups to manage data access control an extremely poor choice.
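
A quick back-of-envelope calculation, using the assumptions in the scenario above (ten countries, five products), shows where the "tens of thousands" figure comes from if a separate AD group is created for every possible combination:

```python
countries, products = 10, 5

# One AD group per non-empty combination of countries and per non-empty
# combination of products, if group membership is the only available control.
country_combos = 2 ** countries - 1   # 1,023
product_combos = 2 ** products - 1    # 31
print(country_combos * product_combos)  # 31,713 groups

# An attribute-based check needs only two attributes per user (their countries
# and their products) plus a single rule that evaluates them at query time.
```

Even restricting groups to single country-and-product pairs still leaves 50 groups per application to maintain, before accounting for joiners, movers, and leavers.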

The complexity of managing nested AD groups and potential integration challenges with non-Windows systems highlight the importance of carefully evaluating your specific needs when choosing IAM tools. Consider exploring cloud-based IAM platforms or Identity Governance and Administration (IGA) solutions for centralised management and streamlined access control.

Building a Strong Security Strategy

The EU’s Digital Operational Resilience Act (DORA) emphasises strong IAM practices for financial institutions and applies from 17 January 2025. DORA requires financial organisations to define, approve, oversee, and be accountable for implementing robust IAM and data access controls as part of their risk management framework.

Here are some key areas where IAM and DAC can help organisations comply with DORA and protect themselves:

ICT risk management

How IAM helps:
  • Identifies risks associated with unauthorised access or misuse
  • Detects users with excessive permissions or dormant accounts

How DAC helps:
  • Minimises damage from breaches by restricting access to specific data

ICT-related incident reporting

How IAM helps:
  • Provides audit logs for investigating breaches (user activity, login attempts, accessed resources)
  • Helps identify the source of an attack and compromised accounts

How DAC helps:
  • Helps determine the scope of a breach and the potentially affected information

ICT third-party risk management

How IAM helps:
  • Manages access for third-party vendors and partners
  • Grants temporary access with limited permissions, reducing the attack surface

How DAC helps:
  • Restricts access for third-party vendors by limiting their ability to view or modify sensitive data

Information sharing

How IAM helps:
  • Designates which users are authorised to share sensitive information

How DAC helps:
  • Controls access to shared information via roles and rules

Digital operational resilience testing

How IAM helps:
  • Enables testing of IAM controls to identify vulnerabilities
  • Penetration testing simulates attacks to assess the effectiveness of IAM controls

How DAC helps:
  • Ensures data access restrictions are properly enforced and minimises breach impact

Understanding IAM and DAC empowers you to build a robust data security strategy

Use these strategies to leverage the benefits of IAM and DAC combined:

  • Recognise the difference between IAM and DAC, and how they are implemented in your organisation
  • Conduct regular IAM and DAC audits to identify and address vulnerabilities
  • Implement best practices like the Principle of Least Privilege (granting users only the minimum access required for their job function)
  • Regularly review and update user access permissions
  • Educate employees on security best practices (e.g., password hygiene, phishing awareness)

Explore different IAM and DAC solutions based on your specific organisational needs and security posture. Remember, a layered approach that combines IAM, DAC, and other security measures like encryption creates the most effective defence against data breaches and unauthorised access.

Conclusion

By leveraging the combined power of IAM and DAC, you can ensure only the right people have access to the right data at the right time. This fosters trust with stakeholders, protects your reputation, and safeguards your valuable information assets.


Top 5 Trends for MLROs in 2024

Our Financial Crime Practice Lead, Kavita Harwani, recently attended the FRC Leadership Convention at the Celtic Manor, Newport, Wales. This gave us the opportunity to engage with senior leaders in the financial risk and compliance space on the latest best practices, upcoming technology advances, and practical insights.

Criminals are becoming increasingly sophisticated, driving MLROs to innovate their financial crime controls. There is never a quiet time for FRC professionals, but 2024 is proving to be exceptionally busy.
Our view on the top five trends that MLROs need to focus on is presented here.

Top 5 Trends

  1. Minimising costs by using technology to scan the regulatory horizon and identify impacts on your business
  2. Accelerating transaction monitoring & decisioning by applying AI & data analytics
  3. Optimising due diligence with a 360° view of customers
  4. Improving operational efficiency by using machine learning to automate alert handling
  5. Reducing financial crime risk through training and communications programmes

1. Regulatory Compliance and Adaptation

MLROs need to stay abreast of evolving regulatory frameworks and compliance requirements. With regulatory changes occurring frequently, MLROs must ensure their organisations are compliant with the latest anti-money laundering (AML) and counter-terrorist financing (CTF) regulations.

This involves scanning the regulatory horizon, updating policies, procedures, and systems to reflect regulatory updates and adapting swiftly to new compliance challenges.

2. Technology & Data Analytics

MLROs will increasingly leverage advanced technology and data analytics tools to enhance their AML capabilities.

Machine learning algorithms and predictive analytics can help identify suspicious activities more effectively, allowing MLROs to detect and prevent money laundering and financial crime quicker, at lower cost, and with higher accuracy rates.

MLROs must focus on implementing robust AML technologies and optimising data analytics strategies to improve risk detection and decision-making processes.

3. Customer Due Diligence (CDD) and Enhanced Due Diligence (EDD)

MLROs should prioritise strengthening CDD processes to better understand their customers’ risk of committing financial crimes.

Enhanced due diligence is critical for high-risk customers, such as politically exposed persons (PEPs) and high net worth individuals (HNWIs).

MLROs should focus on enhancing risk-based approaches to CDD and EDD, leveraging technology and data analytics to streamline customer onboarding processes while maintaining compliance with regulatory requirements.

4. Transaction Monitoring and Suspicious Activity Reporting

MLROs will continue to refine transaction monitoring systems to effectively identify suspicious activities and generate accurate alerts for investigation.

MLROs should focus on optimising transaction monitoring rules and scenarios to reduce false positives and prioritise high-risk transactions for further review.
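
One simple way to picture this kind of tuning is a risk-scored rule that only raises an alert when several indicators stack up, rather than on any single threshold breach. The weights, thresholds, and jurisdiction codes below are illustrative assumptions, not a recommended calibration.

```python
HIGH_RISK_COUNTRIES = {"XX", "YY"}  # placeholder jurisdiction codes

def score_transaction(txn: dict) -> int:
    """Illustrative risk scoring; the weights and thresholds are assumptions."""
    score = 0
    if txn["amount"] > 10_000:
        score += 2
    if txn["country"] in HIGH_RISK_COUNTRIES:
        score += 3
    if txn["new_beneficiary"]:
        score += 1
    if txn["customer_risk"] == "high":
        score += 2
    return score

transactions = [
    {"id": 1, "amount": 12_500, "country": "XX", "new_beneficiary": True, "customer_risk": "high"},
    {"id": 2, "amount": 9_000, "country": "GB", "new_beneficiary": False, "customer_risk": "low"},
]

ALERT_THRESHOLD = 5  # only the riskiest combinations generate an alert for review
alerts = [t["id"] for t in transactions if score_transaction(t) >= ALERT_THRESHOLD]
print("Transactions to review:", alerts)  # [1]
```

Scoring and prioritising in this way is what allows investigators to spend their time on genuinely high-risk activity instead of clearing a backlog of single-indicator false positives.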

Enhanced collaboration with law enforcement agencies and financial intelligence units will be crucial for timely and accurate suspicious activity reporting. Cross-industry collaboration is an expanding route to quicker insights on bad actors and behaviours.

5. Training and Awareness Programmes

MLROs must invest in comprehensive training and awareness programs to educate employees on AML risks, obligations, and best practices.

Building a strong culture of compliance within the organisation is essential for effective AML risk management.

Additionally, MLROs must promote a proactive approach to AML compliance, encouraging employees to raise concerns and seek guidance when faced with potential AML risks.

Conclusion

The expanded use of technology and data is becoming more evident from our discussions. The latest, ever-accelerating improvements in automation and AI have brought a new set of opportunities to transform legacy manual, people-heavy processes into streamlined, efficient, and effective anti-financial crime departments.

Leading Point has a specialist financial crime team and can help strengthen your operations and meet these challenges in 2024. Reach out to our practice lead Kavita Harwani on kavita@leadingpoint.io to discuss your needs further.


Helping a leading investment bank improve its client on-boarding processes into a single unified operating model

Our client, like many banks, was facing multiple challenges in their onboarding and account opening processes. Scalability and efficiency were two important metrics we were asked to improve. Our senior experts interviewed the onboarding teams to document the current process and recommended a new unified process covering front, middle and back office teams.

We identified and removed key-person dependencies and documented the new process into a key operating manual for global use.


Helping a global investment bank design & execute a client data governance target operating model

Our client needed to evidence control of their 2,000+ client data elements. We were asked to implement a new target operating model for client data governance in six months. Our approach was to identify the core, essential data elements used by the most critical business processes and start governance for these, including data ownership and data quality.

We delivered business capability models, data governance processes, data quality rules & reporting, global support coverage for 100+ critical data elements supporting regulatory reporting and risk.


Helping a global investment bank reduce its residual risk with a target operating model

Our client asked us to provide operating model design & governance expertise for its anti-financial crime (AFC) controls. We reviewed and approved the bank’s AFC target operating model using our structured approach, ensuring designs were compliant with regulations, aligned to strategy, and delivered measurable outcomes.

We delivered clear designs with capability impact maps, process models, and system & data architecture diagrams, enabling change teams to execute the AFC strategy.


Helping a Japanese investment bank to develop & execute their trading front-to-back operating model

Our client wanted to increase their trading efficiency by improving their data sourcing processes and resource efficiency in a multi-year programme. We analysed over 3,500 data feeds from 50 front office systems and over 100 reconciliations to determine how best to optimise their data.

Streamlining their data usage and operational processes is estimated to save them 20-30% costs over the next five years.


Improving a DLT FinTech's operations enabling rapid scaling in target markets

"Leading Point brings a top-flight management team, a reputation for quality and professionalism, and will heighten the value of [our] applications through its extensive knowledge of operations in the financial services sector."

Chief Risk Officer at DLT FinTech


Increasing data product offerings by profiling 80k terms at a global data provider

“Through domain & technical expertise Leading Point have been instrumental in the success of this project to analyse and remediate 80k industry terms. LP have developed a sustainable process, backed up by technical tools, allowing the client to continue making progress well into the future. I would have no hesitation recommending LP as a delivery partner to any firm who needs help untangling their data.”

PM at Global Market Data Provider


Catch the Multi-Cloud Wave

Charting Your Course

The digital realm is a constant current, pulling businesses towards new horizons. Today, one of the most significant tides shaping the landscape is the surge of multi-cloud adoption. But what exactly is driving this trend, and is your organisation prepared to ride the wave?

At its core, multi-cloud empowers businesses to break free from the constraints of a single cloud provider. Imagine cherry-picking the best services from different cloud vendors, like selecting the perfect teammates for a sailing crew. In 2022, 92% of firms either had or were considering a multi-cloud strategy (1). Having a strategy is one thing. Implementing it is a very different story. It takes meticulous planning and preparation. The potential of migrating from a single cloud provider to a multi-cloud environment can be huge if you are dealing with vast volumes of data. This flexibility unlocks a treasure trove of benefits.
(1) Faction – The Continued Growth of Multi-Cloud and Hybrid Infrastructure

 

Top 4 Benefits

1 Unmatched Agility

Respond to ever-changing demands with ease by scaling resources up or down. Multi-cloud lets you ditch the "one-size-fits-all" approach and tailor your cloud strategy to your specific needs, fostering innovation and efficiency

2 Resilience in the Face of the Storm

Don't let cloud downtime disrupt your operations. By distributing your workload across multiple providers, you create a safety net that ensures uninterrupted service even when one encounters an issue.

3 A World of Choice at Your Fingertips

No single cloud provider can be all things to all businesses. Multi-cloud empowers you to leverage the unique strengths of different vendors, giving you access to a diverse array of services and optimising your overall offering.

4 Future-Proofing Your Digital Journey

The tech landscape is a whirlwind of innovation. With multi-cloud, you're not tethered to a single provider's roadmap. Instead, you have the freedom to seamlessly adapt to emerging technologies and trends, ensuring you stay ahead of the curve.

 

Cost Meets the Cloud

Perhaps the most exciting development propelling multi-cloud adoption is the shrinking cost barrier. As cloud providers engage in fierce competition, prices are being driven down, making multi-cloud solutions more accessible for businesses of all sizes. This cost optimisation, coupled with the strategic advantages mentioned earlier, makes multi-cloud an increasingly attractive proposition. However, a word of caution: while the overall trend is towards affordability, navigating the multi-cloud landscape still requires meticulous planning and cost management. Without proper controls and precise resource allocation, you risk increased expenses and potential setbacks. With increased distribution of data comes the increased risk of data leakage: data must be protected not only within each cloud environment but also across the multi-cloud estate, and data monitoring increases in complexity. As data moves between cloud solutions, there may be additional latency risks. These can be mitigated with good risk controls and monitoring.

 

Kicking Off Your Journey

Ditch single-provider limitations and enjoy flexibility, resilience, and a wider range of services to boost your digital transformation, but remember…

Multi-cloud environments can heighten security risks.

Navigate cautiously with proper controls and expert guidance to avoid hidden expenses.

Fierce competition is lowering multi-cloud barriers.

Let Leading Point be your guide, helping you set sail on the multi-cloud journey with confidence and unlock its full potential.

The multi-cloud path isn't without its challenges, but the rewards are undeniable. At Leading Point, we're experts in helping businesses navigate the multi-cloud wave with confidence. Let us help you unlock the full potential of multi-cloud for a more resilient, flexible, and innovative future. So, is your organisation ready to catch the wave? Contact Leading Point today and start your multi-cloud journey!


Unlocking the opportunity of vLEIs

Streamlining financial services workflows with Verifiable Legal Entity Identifiers (vLEIs)

Source: GLEIF

Trust is hard to come by

How do you trust people you have never met in businesses you have never dealt with before? It was difficult 20 years ago and is even more so today. Many checks are needed to verify that the person you are talking to is who you think they are. Do they even work for the business they claim to represent? Failures of these checks manifest themselves every day, with spear phishing incidents hitting the headlines, where an unsuspecting clerk is badgered into making a payment to a criminal’s account by a person claiming to be a senior manager.

With businesses increasing their cross-border business and more remote working, it is getting harder and harder to trust what you see in front of you. How do financial services firms reduce the risk of cybercrime attacks? At a corporate level, there are Legal Entity Identifiers (LEIs) which have been a requirement for regulated financial services businesses to operate in capital markets, OTC derivatives, fund administration or debt issuance.

LEIs are issued by Local Operating Units (LOUs). These are bodies that are accredited by GLEIF (Global Legal Entity Identifier Foundation) to issue LEIs. Examples of LOUs are the London Stock Exchange Group (LSEG) and Bloomberg. However, LEIs only work at a legal entity level for an organisation. LEIs are not used for individuals within organisations.

Establishing trust at this individual level is critical to reducing risk and establishing digital trust is key to streamlining workflows in financial services, like onboarding, trade finance, and anti-financial crime.

This is where Verifiable Legal Entity Identifiers (vLEIs) come into the picture.

 

What is the new vLEI initiative and how will it be used?

Put simply, vLEIs combine the organisation’s identity (the existing LEI), a person, and the role they play in the organisation into a cryptographically-signed package.

GLEIF has been working to create a fully digitised LEI service enabling instant and automated identity verification between counterparties across the globe. This drive for instant automation has been made possible by developments in blockchain technology, self-sovereign identity (SSI) and other decentralised key management platforms (Introducing the verifiable LEI (vLEI), GLEIF website).

vLEIs are secure, digitally-signed credentials and a counterpart of the LEI, which is a unique 20-character alphanumeric ISO-standardised code used to represent a single legal organisation. The vLEI cryptographically encompasses three key elements: the LEI code, the person identification string, and the role string, which together form the digital credential. The GLEIF database and repository provides a breakdown of key information on each registered legal entity, from the registered location and the legal entity name to any other key information pertaining to the registered entity or its subsidiaries; as GLEIF states, this principally answers ‘who is who’ and ‘who owns whom’ (GLEIF eBook: The vLEI: Introducing Digital I.D. for Legal Entities Everywhere, GLEIF Website).
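
As a conceptual sketch only (the production vLEI ecosystem is built on GLEIF's KERI-based verifiable credential infrastructure and governance framework, not the simplified structure shown here), the example below binds an LEI, a person, and a role into one signed package and shows how a counterparty's verification fails the moment any element is tampered with. The key handling and field names are illustrative assumptions, using the `cryptography` package's Ed25519 primitives.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative only -- the real vLEI uses GLEIF's KERI/ACDC stack, not this structure.
issuer_key = Ed25519PrivateKey.generate()

credential = {
    "lei": "5299001EXAMPLE000000",        # made-up 20-character code for illustration
    "person": "Jane Smith",               # hypothetical authorised representative
    "role": "Chief Financial Officer",    # an official organisational role
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# A counterparty verifies the package against the issuer's public key.
public_key = issuer_key.public_key()
try:
    public_key.verify(signature, payload)
    print("Credential verified: organisation, person, and role check out")
except InvalidSignature:
    print("Reject: signature invalid")

# Tampering with any element (LEI, person, or role) breaks verification.
tampered = json.dumps({**credential, "person": "Impostor"}, sort_keys=True).encode()
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampered credential rejected")
```

The point of the sketch is the binding: because the organisation, the individual, and their role are signed together, none of the three can be swapped out without the verification failing.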

In December 2022, GLEIF launched their first vLEI services through proof-of-concept (POC) trials, offering instant digitally verifiable credentials containing the LEI. This supports GLEIF’s goal to “create a standardised, digitised service capable of enabling instant, automated trust between legal entities and their authorised representatives, and the counterparty legal entities and representatives with which they interact” (GLEIF eBook: The vLEI: Introducing Digital I.D. for Legal Entities Everywhere, page 2).

 

“The vLEI has the potential to become one of the most valuable digital credentials in the world because it is the hallmark of authenticity for a legal entity of any kind. The digital credentials created by GLEIF and documented in the vLEI Ecosystem Governance Framework can serve as a chain of trust for anyone needing to verify the legal identity of an organisation or a person officially acting on that organisation’s behalf. Using the vLEI, organisations can rely upon a digital trust infrastructure that can benefit every country, company, and consumers worldwide”,

Karla McKenna, Managing Director GLEIF Americas

 

This new approach for the automated verification of registered entities will benefit many organisations and businesses. It will enhance and speed up regulatory reports and filings, due diligence, e-signatures, client onboarding/KYC, business registration, as well as other wider business scenarios.

Imagine the spear phishing example in the introduction. A spoofed email will not carry a valid vLEI cryptographic signature, so it can be rejected (even automatically), potentially saving thousands of pounds.

 

How do I get a vLEI?

Registered financial entities can obtain a vLEI from a Qualified vLEI Issuer (QVI) organisation to benefit from instant verification, when dealing with other industries or businesses (Get a vLEI: List of Qualified vLEI Issuing Organisations, GLEIF Website).

A QVI organisation is authorised under GLEIF to register, renew or revoke vLEI credentials belonging to any financial entity. GLEIF offers a Qualification Programme where organisations can apply to operate as a QVI, and maintains a list of QVIs on its website.

Source: GLEIF

What is the new ISO 5009:2022 and why is it relevant?

The International Organization for Standardization (ISO) published the ISO 5009 standard in 2022, initially proposed by GLEIF, for the financial services sector. This is a new scheme to address “the official organisation roles in a structured way in order to specify the roles of persons acting officially on behalf of an organisation or legal entity” (ISO 5009:2022, ISO.org).

Both ISO and GLEIF have created and developed this new scheme of combining organisation roles with the LEI to enable digital identity management of credentials. The ISO 5009 scheme offers a standard way to specify organisational roles in two types of LEI-based digital assets: public key certificates with embedded LEIs, as per X.509 (ISO/IEC 9594-8) and as outlined in ISO 17442-2, and digital verifiable credentials such as vLEIs, helping to confirm the authenticity of the role of a person acting on behalf of an organisation (ISO 5009:2022, ISO Website). This will help speed up the validation of person(s) acting on behalf of an organisation, for regulatory requirements and reporting as well as for ID verification, across various business use cases.

Leading Point have been supporting GLEIF in the analysis and implementation of the new ISO 5009 standard, which GLEIF maintains on behalf of ISO as its operating entity. Identifying and defining Official Organisational Roles (OORs) depended on accurate assessments of hundreds of legal documents by Leading Point.

“We have seen first-hand the challenges of establishing identity in financial services and were proud to be asked to contribute to establishing a new standard aimed at solving this common problem. As data specialists, we continuously advocate the benefits of adopting standards. Fragmentation and trying to solve the same problem multiple times in different ways in the same organisation hurts the bottom line. Fundamentally, implementing vLEIs using ISO 5009 roles improves the customer experience, with quicker onboarding, reduced fraud risk, faster approvals, and most importantly, a higher level of trust in the business.”

Rajen Madan (Founder and CEO, Leading Point)

Thushan Kumaraswamy (Founding Partner & CTO, Leading Point)

How can Leading Point assist?

Our team of expert practitioners can assist financial entities to implement the ISO 5009 standard in their workflows for trade finance, anti-financial crime, KYC and regulatory reporting. We are fully equipped to help any organisation that is looking to obtain vLEIs for their senior team and to incorporate vLEIs into their business processes, reducing costs, accelerating new business growth, and preventing financial crime.

 

Glossary of Terms and Additional Information on GLEIF

 

Who is GLEIF?

The Global Legal Entity Identifier Foundation (GLEIF) was established by the Financial Stability Board (FSB) in June 2014 as part of the G20 agenda to endorse a global LEI. GLEIF helps to implement the use of the Legal Entity Identifier (LEI) and is headquartered in Basel, Switzerland.

 

What is an LEI?

A Legal Entity Identifier (LEI) is a unique 20-character alphanumeric code based on the ISO 17442 standard. It uniquely identifies legal entities involved in financial transactions. The structure of an LEI principally answers ‘who is who’ and ‘who owns whom’, as per ISO and GLEIF standards, for entity verification purposes and to improve data quality in financial regulatory reports.

 

How does GLEIF help?

GLEIF not only helps to implement the use of the LEI, it also offers global reference data and a central repository of LEI information via the Global LEI Index on gleif.org – an online, public, open, standardised, high-quality searchable tool for LEIs that includes both historical and current LEI records.

 

What is GLEIF’S Vision?

GLEIF believes that each business involved in financial transactions should be identifiable with a single, unique digital global identifier. GLEIF aims to increase the rate of LEI adoption globally so that the Global LEI Index can include all global financial entities that engage in financial trading activities. GLEIF believes this will encourage market participants to reduce operational costs and burdens and will offer better insight into the global financial markets (Our Vision: One Global Identity Behind Every Business, GLEIF Website).


Séverine Raymond Soulier's Interview with Leading Point


 

 

Séverine Raymond Soulier is the recently appointed Head of EMEA at Symphony.com – the secure, cloud-based communication and content sharing platform. Séverine has over a decade of experience in the investment banking sector, followed by nine years with Thomson Reuters (now Refinitiv), where she headed the Investment and Advisory division for EMEA, leading a team of senior market development managers in charge of Investing and Advisory revenue across the region. Séverine brings a wealth of experience and expertise to Leading Point, helping expand its product portfolio and its reach across international markets.



John Macpherson’s Interview with Leading Point 2022

 

 

John Macpherson is the former CEO of BMLL Technologies and a veteran of the City, having held several MD roles at Citi, Nomura and Goldman Sachs. In recent years John has used his extensive expertise to advise start-ups and FinTechs on challenges ranging from compliance to business growth strategy. John is Deputy Chair of the Investment Association Engine, the trade body and industry voice for over 200 UK investment managers and insurance companies.


Leading Point and P9 Form Collaboration to Accelerate Trade and Transaction Reporting


 

 

Leading Point and Point Nine (P9) will collaborate to streamline and accelerate the delivery of trade and transaction reporting, combining P9’s scalable regulatory solution with Leading Point's data management expertise. This new collaboration will help both firms better serve their clients and provide faster, more efficient reporting.

London, UK, July 22nd, 2022 

 

P9’s in-house proprietary technology is a scalable regulatory solution. It provides best-in-class reporting to buy- and sell-side financial firms, service providers, and corporations such as ED&F Man, FxPro and Schnigge, helping them ensure high-quality, accurate trade and transaction reporting and remain compliant under the EMIR, MiFIR, SFTR, FinfraG, ASIC, CFTC and Canadian regimes.

 

Leading Point, a highly regarded digital transformation company headquartered in London, are specialists in intelligent data solutions. They serve a global client base of capital market institutions, market data providers and technology vendors.  

 

Leading Point are data specialists, who have helped some of the Financial Services industry’s biggest players organise and link their data, as well as design and deliver data-led transformations in global front-to-back trading. Leading Point are experts in getting into the detail of what data is critical to businesses. They deliver automation and re-engineered processes at scale, leveraging their significant financial services domain expertise. 

 

The collaboration will combine the power of P9's knowledge of regulatory reporting, and Leading Point’s expertise in data management and data optimisation. The integration of Leading Point’s services and P9's regulatory technology will enable clients to seamlessly integrate improved regulatory reporting and efficient business processes. 

 

Leading Point will organise and optimise P9’s clients’ data sets, making it feasible for P9’s regulatory software to integrate with client regulatory workflows and reporting. Christina Barbash, Business Development Manager at Point Nine, said: “Creating a network of best-in-breed partners will enable Point Nine to better serve its existing and potential clients in the trade and transaction reporting market.”

 

Andreas Roussos, Partner at Point Nine adds:

“Partnering with Leading Point is a pivotal strategic move for our organization. Engaging with consulting firms will not only give us a unique position in the market, but also allow us to provide more comprehensive service to our clients, making it a game-changer for our organization, our clients, and the industry as a whole.”

 

Dishang Patel, COO and Founding Partner at Leading Point, speaks on the collaboration: 

“We are thrilled to announce that we are collaborating with Point Nine. Their technology and knowledge of regulatory reporting can assist the wider European market. The new collaboration will unlock doors to entirely new transformation possibilities for organisations within the Financial Sector across EMEA.”   

 

The collaboration reflects the growing complexity of financial trading and businesses’ need for more automation in regulatory compliance, with data management front and centre of the approach for optimum client success. In light of this, the two firms have committed to supporting organisations in improving the quality and accuracy of their regulatory reporting across all regimes.

 

About Leading Point 

Leading Point is a digital transformation company with offices in London and Dubai. They are revolutionising the way change is done through their blend of expert services and their proprietary technology, modellr™. 

Find out more at: www.leadingpoint.io   

Contact Dishang Patel, Founding Partner & COO at Leading Point - dishang@leadingpoint.io  

 

About Point Nine 

Point Nine (Limassol, Cyprus), is a dedicated regulatory reporting firm, focusing on the provision of trade and transaction reporting services to legal entities across the globe. Point Nine uses its in-house cutting-edge proprietary technology to provide a best-in-class solution to all customers and regulatory reporting requirements. 

Find out more at: www.p9dt.com    

Contact Head office, Point Nine Data Trust Limited - info@p9dt.com


ESG Operating models hold the key to ESG compliance

John Macpherson on ESG Risk

In my last article, I wrote about the need for an effective operating model in the handling and optimisation of data for financial services firms. But data is only one of several key trends amongst these firms that would benefit from a digital operating model. ESG has risen up the ranks in importance, and reporting on it has become imperative.

 

The Investment Association Engine Program, which I Chair, is designed to identify the most relevant pain points and key themes amongst Asset and Investment Management clients. We do this by searching out FinTech businesses that are already working on solutions to these issues. By partnering with these businesses, we can help our clients overcome their challenges and improve their operations. 

 

While data has been an ever-present issue, ESG has risen to an equal standing of importance over the last couple of years. Differing regulatory jurisdictions and expectations worldwide have left SME firms struggling to comply and implement in a new paradigm of environmental, social and governance protocols.

 

ESG risk is different to anything we have experienced before and does not fit into neat categories such as operational risk. The depth and breadth of data and models required for firms to make informed strategic decisions varies widely based on the specific issue at hand (e.g., supply chain, reputation, climate change goals). Firms need to carefully consider their own position and objectives when determining how much analysis is needed.

According to S&P Global, sustainable debt issuance reached a record level in 2021 and is expected to increase further in the coming years. With this growth comes increased scrutiny and heightened concern about so-called ‘greenwashing’, where companies falsely claim to be environmentally friendly; market participants need to manage that growth in a way that addresses these concerns.

 

Investors, regulators and the public in general are keen to challenge large companies’ ESG goals and results. These challenges vary widely, but the biggest seen on a regular basis range from human rights to social unrest and climate change. As organisations begin to decarbonise their operations, they face the initially overlooked challenge of creating a credible near-term plan that will enable them to reach their long-term sustainability goals.

 

Investor pressure on climate change has historically focussed on the Energy sector. Now central banks are trying to incorporate climate risk as a stress testing feature for all Financial Services firms. 

Source: S&P Global 

Operating models hold the key to ESG transition and compliance. Having an operating model for how each of the firm’s functions intersects with ESG requires new processes, new data, and new reporting techniques. This needs to be pulled together across the enterprise, so firms have a process that can be substantiated.

 

Before firms worry about ESG scores from their market data providers, they would do well to look closely at their own operating model and framework. In this way, they can then pull in the data required from the marketplace and use it in anger. 

 

Leading Point is a FinTech business I am proud to be supporting. Their operating model system, modellr™, describes how financial services businesses work: from the products and services offered to the key processes, people, data, and technology used to deliver value to their customers. This digital representation of how the business works is crucial to show which areas ESG will impact and how the firm can adapt in the most effective way.

 

Rajen Madan, CEO at Leading Point: 

“In many ways, the transition to ESG is exposing the acute gap in firms of not being able to have meaningful dialogue with the plethora of data they already have, and need, to further add to for ESG”.  

 

modellr™ harvests a company’s existing data to create a living dashboard, whilst also digitising the change process and enabling quicker and smarter decision-making. Access to all the information, from internal and external sources, in real time is proving transformative for SME-sized businesses.

 

Thushan Kumaraswamy, Chief Solutions Officer at Leading Point:  

“ESG is already one of the biggest drivers of transformation in financial services and is only going to get bigger. Firms need to identify the impact on their business, choose the right change option, execute the strategy, and measure the improvements. The mass of ESG frameworks adds to the confusion of what to report and how. Tools such as modellr bring clarity and purpose to the ESG imperative.” 

 

While most firms will look to sustainability officers for guidance on matters around ESG, Leading Point are providing these officers, and less qualified team members, with the tools to make informed decisions now, and in the future. We have established exactly what these firms need to succeed – a digital operating model. 

 

Words by John Macpherson — Board advisor at Leading Point and Chair of the Investment Association Engine 

 


The Challenges of Data Management

John Macpherson on The Challenges of Data Management

 

 

I often get asked, what are the biggest trends impacting the Financial Services industry? Through my position as Chair of the Investment Association Engine, I have unprecedented access to the key decision-makers in the industry, as well as constant connectivity with the ever-expanding Fintech ecosystem, which has helped me stay at the cutting edge of the latest trends.

So, when I get asked ‘what is the biggest trend that financial services will face?’, for the past few years my answer has remained the same: data.

During my time as CEO of BMLL, big data rose to prominence and developed into a multi-billion-dollar problem across financial services. I remember well an early morning interview I gave to CNBC around 5 years ago, where the facts were starkly presented. Back then, data was doubling every three years globally, but at an even faster pace in financial markets.

Firms are struggling under the weight of this data

The use of data is fundamental to a company’s operations, but firms are finding it difficult to get a handle on this problem. The pace of this increase has left many smaller and mid-sized IM/AM firms in a quandary. Their ability to access, manage and use multiple data sources alongside their own data, market data, and any alternative data sources is sub-optimal at best. Most core data systems are not architected to address the volume and pace of change required, with manual reviews and inputs creating unnecessary bottlenecks. These issues, among a host of others, mean risk management systems cannot cope. Modernised core data systems are imperative where real-time insights are currently lost to fragmented and slow-moving information.

Around half of all financial services data goes unmanaged and ungoverned; this “dark data” poses a security and regulatory risk, as well as a huge opportunity.

While data analytics, big data, AI, and data science are historically the key sub-trends, these have been joined by data fabric (as an industry standard), analytical ops, data democratisation, and a shift from big data to smaller and wider data.

Operating models hold the key to data management

modellr™ dashboard

Governance is paramount to using this data in an effective, timely, accurate and meaningful way. Operating models are the true gauge as to whether you are succeeding.

Much can be achieved with the relatively modest budget and resources firms have, provided they invest in the best operating models around their data.

Leading Point is a firm I have been getting to know over several years now. Their data intelligence platform, modellr™, is the first truly digital operating model. modellr™ harvests a company’s existing data to create a living operating model, digitising the change process and enabling quicker, smarter decision-making. By digitising the process, they are removing the historically slow and laborious consultative approach. Access to all the information in real time is proving transformative for smaller and medium-sized businesses.

True transparency around your data, understanding it and its consumption, and then enabling data products to support internal and external use cases, is very much available.

Different firms are at very different places on their maturity curve. Longer-term investment in data architecture, be it data fabric or data mesh, will provide the technical backbone to harvest ML/ AI and analytics.

Taking control of your data

Recently I was talking to a large investment bank for whom Leading Point had been brought in to help. The bank was looking to transform its client data management and associated regulatory processes such as KYC, and Anti-financial crime.

They were investing heavily in sourcing, validating, normalising, remediating, and distributing over 2,000 data attributes. This was costing the bank a huge amount of time, money, and resources. But, despite the changes, their environment and change processes had become too complicated to have any chance of success. The process results were haphazard, with poor controls and no understanding of what was missing from the results.

Leading Point was brought in to help and decided on a data minimisation approach. Working across regions and divisions, they profiled and analysed the data and quickly narrowed 2,000 data attributes down to fewer than 200 critical ones for the consuming functions. This allowed the financial, regulatory, and reporting processes to come to life, with clear data quality measurement and ownership processes. It also allowed the institution to significantly reduce the complexity of its data and improve its usability, meaning that multiple business owners were able to produce rapid and tangible results.
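A minimal sketch of what that profiling and minimisation step could look like, assuming hypothetical profiling outputs (a usage table mapping each attribute to the functions that consume it, and a fill-rate table); the thresholds and column names are illustrative, not the actual criteria used on the engagement.

```python
import pandas as pd

def critical_data_elements(usage: pd.DataFrame,
                           quality: pd.DataFrame,
                           min_consumers: int = 2,
                           min_fill_rate: float = 0.95) -> list:
    """Illustrative data minimisation step.

    usage   : columns ['attribute', 'consuming_function'] - which functions use which attribute
    quality : columns ['attribute', 'fill_rate']          - how well populated each attribute is
    Returns the attributes worth treating as critical data elements."""
    consumers = usage.groupby("attribute")["consuming_function"].nunique()
    fill = quality.set_index("attribute")["fill_rate"].reindex(consumers.index).fillna(0.0)
    critical = consumers[(consumers >= min_consumers) & (fill >= min_fill_rate)]
    return sorted(critical.index)
```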

I was speaking to Rajen Madan, the CEO of Leading Point, and we agreed that in a world of ever-growing data, data minimisation is often key to maximising success with data!

Elsewhere, Leading Point has seen benefits unlocked from unifying data models and working on ontologies, standards, and taxonomies. Their platform, modellr™, is enabling many firms to link their data, define common aggregations, and support knowledge graph initiatives, allowing firms to deliver more timely, accurate and complete reporting, as well as insights on their business processes.

The need for agile, scalable, secure, and resilient tech infrastructure is more imperative than ever. Firms’ legacy ways of handling this data are the single biggest barrier to their growth and technological innovation.

If you see a digital operating model as anything other than a must-have, then you are missing out. It’s time for a serious re-think.

Words by John Macpherson — Board advisor at Leading Point, Chair of the Investment Association Engine

 

John was recently interviewed about his role at Leading Point, and the key trends he sees affecting the financial services industry. Watch his interview here


Leading Point Shortlisted For Data Management Insight Awards

Leading Point has been shortlisted for A-Team’s Data Management Insight Awards.

Data Management Insight Awards, now in their seventh year, are designed to recognise leading providers of data management solutions, services and consultancy within capital markets.

Leading Point has been nominated for four categories:

  1. Most Innovative Data Management Provider
  2. Best Data Analytics Solution Provider
  3. Best Proposition for AI, Machine Learning, Data Science
  4. Best Consultancy in Data Management

 

Areas of Outstanding Service & Innovation

Leading Form Index: A data readiness assessment, created by Leading Point FM, which measures firms’ data capabilities and their capacity to transform across 24 unique areas. This allows participating firms to understand the maturity of their information assets, the potential to apply new tech (AI, DLT) and benchmark with peers.

Chief Risk Officer Dashboard: Management Information Dashboard that specifies, quantifies, and visualises risks arising from firms’ non-financial, operational, fraud, financial crime, and cyber risks.

Leading Point FM ‘Think Fast’ Application: The application provides the ability to input use cases and solution journeys and helps visualise process, systems and data flows, as well as target state definition and KPIs. This allows business change and technology teams to quickly define and initiate change management.

Anti-Financial Crime Solution: Data centric approach combined with Artificial Intelligence technology reimagines and optimises AML processes to reduce volumes of client due diligence, reduce overall risk exposure, and provide the roadmap to AI-assisted automation.

Treasury Optimisation Solution: Data content expertise leveraging cutting edge DLT & Smart Contract technology to bridge intracompany data silos and enable global corporates to access liquidity and efficiently manage finance operations.

Digital Repapering Solution: Data centric approach to sourcing, management and distribution of unstructured data combined with NLP technology to provide roadmap towards AI assisted repapering and automated contract storage and distribution.

Leading Form Practical Business Design Canvas: A practical business design method to describe your business goals and objectives, change projects, capabilities, operating model, and KPIs to enable a true business-on-a-page view that is captured within hours.

ISO 27001 Certification – Delivery of Information Security Management System (ISMS) & Cyber risk mitigation with a Risk Analysis Tool


What COP26 means for Financial Services


 

 

Many have proclaimed COP26 a failure, with funding falling short, loose wording and non-binding commitments. However, despite the doom and gloom, there was a bright spot: the UK’s finance industry.

Trillions need to be invested to achieve the 1.5 degrees target, but governments alone do not have the funds to achieve this. Alternative sources of finance must be found, and private investment needs to be encouraged on all fronts to ‘go green’. Looking at supply-side energy alone, the IPCC estimates that up to $3.8 trillion needs to be mobilised annually to achieve the transition to net-zero by 2050.

The UK led from the front on green finance, introducing plans to become the world’s first net-zero-aligned financial centre. New Treasury rules for financial institutions listed on the London Stock Exchange mean that companies will have to create and publish net-zero transition plans by 2023, although the full details are yet to be announced. These plans will be evaluated by a new institution but, crucially, are not mandatory; the adjudicators of the transition plans will be investors. Although some argue the regulation could be stronger, just like national climate targets, once institutions publish their alignment with net-zero there is a level of accountability that can be scrutinised and a platform for comparison that encourages competition. Anything stronger could have pushed investment firms into less-regulated exchanges.

Encouragingly, the private sector showed strong engagement, with nearly 500 global financial services firms agreeing to align $130 trillion — around 40% of the world’s financial assets — with the goals set out in the Paris Agreement, including limiting global warming to 1.5 degrees Celsius.

From large multinational companies, to small local businesses, the summit provided greater clarity on how climate policies and regulations will shape the future business environment. The progress made, on phasing out fossil fuel subsidies and coal investments, was a clear signal to the global market about the future viability of fossil fuels. It will now be more difficult to gain funding to expand existing or build new coal mines. Over time, this adjustment will have wider impacts on the funding of other polluting industries.

This new framework will give the private sector the confidence and certainty it needs to invest in green technology and green energy. Renewable energy is already the cheapest form of energy in two-thirds of the world. This reassurance will be crucial in driving the economies of scale we need within the renewable energy industry.

A truly sustainable future is still a long way off. The private sector will still invest in fossil fuels, new regulations will cause challenges, and ESG remains optional; but initial signals from COP26 show that the future of the world is looking green.

 

By Maria King — ESG Associate at Leading Point

 

Who we are:

Leading Point is a fintech specialising in digital operating models. We are revolutionising the way operating models are created and managed through our proprietary technology, modellr™, and expert services delivered by our team of specialists.


GDFM & Leading Point Partnering for Smarter Regulatory Health Management

GDFM and Leading Point are collaborating to deliver innovative and efficient regulatory risk management to clients through the SMART_Dash product, enabling consistent, centralised, accessible regulatory health data that helps responsible and accountable individuals ensure adequate transparency for risk mitigation decision-making and action-taking. This is complemented by a SMART_Board suite for Board-level leadership and a more detailed SMART_Support suite for regulatory reporting teams.

We are delighted that SMART_Dash has been shortlisted in three categories in this year’s prestigious RegTech Insight Awards Europe, which recognise both established solution providers and innovative newcomers, seeking to herald and highlight innovative RegTech solutions across the global financial services industry.

GD Financial Markets Head of Regulatory Compliance Practice and SMART_Dash co-creator Sarah Peaston said: “Centralised, consolidated, consistent regulatory health transparency and tracking is key to identifying and managing regulatory and operating risk. I am delighted that SMART_Dash has been recognised as a new breed of solution that practically assists Managers, Senior Managers and Leadership with managing their regulatory health through the provision of the right information, at the right level, to the right seniority.”

Leading Point CEO Rajen Madan said: “Our vision with SMART_Dash is to accelerate better regulatory risk management approaches and vastly more efficient RegOps. As financial services practitioners we are acutely aware of the time managers spend trying to make sense of their regulatory and operating risk areas from a multitude of inconsistent reports. SMART_Dash enables the shift to an enhanced way of risk management, which creates standardisation and makes reg data work for your business. We are very grateful to the COOs, CROs and CFOs who have contributed to its development and help the industry move forward.”

GDFM and Leading Point are rolling out the SMART_Dash suite to the first set of industry consortium partners progressively in H1 2021, and will thereafter open it to a wider set of institutions.


The Composable Enterprise: Improving the Front-Office User Experience


By Dishang Patel, Fintech & Growth Delivery Partner, Leading Point Financial Markets.

The past six months have by no means been a time of status quo. During this period of uncertainty, standards have been questioned and new ‘norms’ have been formed.

A standout development has been the intensified focus on cloud-based services. Levels of adoption have varied, from those moving to cloud for the first time, to others making cloud their only form of storage and access, and with numerous ‘others’ in between.

One area affected adversely (for those who weren’t ready) but positively (for those who were) is software. ‘Old-school’ software vendors – whose multi-million-pound solutions were traditionally implemented on premise at financial institutions, whether as part of a pure ‘buy’ or broader ‘build’ approach – have worked hard to offer cloud-based services.

The broad shift to working from home (WFH) as a result of the Covid-19 pandemic has tested the end-user experience all the way from front to back offices in financial institutions. Security, ease of access and speed are all high on the agenda in the new world in which we find ourselves.

The digitisation journey

With workforces operating globally, it is difficult to guarantee uniform user experiences and be able to cater for a multitude of needs. To achieve success in this area and to ensure a seamless WFH experience, financial institutions have moved things up a level and worked as hard as software providers to offer cloud-based solutions.

All manner of financial institutions (trading firms, brokerages, asset managers, challenger banks) have been on a digitisation journey to make the online user experience more consistent and reliable.

Composable Enterprise is an approach that those who have worked in a front office environment within financial services may have come across and for many could be the way forward.

 

Composable Enterprise: the way forward

Digitisation can come in many forms: robotic process automation (RPA), operational excellence, implementation of application-based solutions, interoperability and electronification. Interoperability and electronification are two key components of this Composable Enterprise approach.

Interoperability – whether in terms of web services, applications, or both –  is an approach that can create efficiencies on the desktop and deliver improved user experience. It has the potential to deliver business performance benefits, in terms of faster and better decision making with the ultimate potential to uncover previously untapped alpha. It also has two important environmental benefits:

1) Reducing energy spend;

2) Less need for old hardware to be disposed of, delivering the reduced environmental footprint that organisations desire.

Electronification, for most industry players, may represent the final step on the full digitisation journey. According to the Oxford English Dictionary, electronification is the “conversion to or adoption of an electronic mode of operation,” which translates to the front office having all the tools they need to do their jobs to the best of their ability.

The beauty of both interoperability and electronification is that they work just as well in a remote set up as they do in an office environment. This is because a good implementation of both results in maximising an organisation’s ability to use all the tools (trading platforms, market data feeds, CRMs, and so on) at their disposal without needing masses of physical infrastructure.

Because of the lower barriers (such as time and cost) of interoperability, financial institutions should start their digitisation journeys from this component and then embark on a larger and more complicated move to electronification.

Composable Enterprise is about firms being able to choose the best component needed for their business, allowing them to be more flexible and more open in order to adapt to new potential revenue opportunities. In these challenging times, it is no surprise that more and more financial institutions are adding Composable Enterprise as a key item on their spending agenda.

 

 

 

 



Information Security in a New Digital Era


Shifting priorities

 

The 2020 pandemic, subsequent economic turmoil and related social phenomena have paved the way for much-needed global digital transformation and the prioritisation of digital strategies. The rise in digitisation across all businesses, however, has accelerated cyber risk exponentially. With cloud-based attacks rising by 630% between January and April 2020 (1), organisations are now turning their focus to how to benefit from digitisation whilst maintaining sufficiently secure digital environments for their services and clients.

 

A global challenge

 

A new digital setup could easily jeopardise organisations’ cyber safety. With data becoming companies’ most valuable asset, hackers are getting creative with increasingly sophisticated threats and phishing attacks. According to the 2019 Data Breach Investigation Report (2) by Verizon, 32% of all verified data breaches appeared to be phishing.

As data leaks are increasing (3,800 in 2019 alone), so is the cyber skills shortage. According to the MIT Technology Review report (3), there will be 3.5 million unfilled cybersecurity jobs in 2021, a rise of 350%. As a result of Covid-19 and digitised home working, cybersecurity professionals are in high demand to fill the gaps organisations currently face.

 

The way forward

Although tackling InfoSec breaches in the rapidly-evolving digital innovation landscape is not easy, it is essential to keep it as an absolute priority. In our work with regulated sector firms in financial services, pharma and energy as well as with fintechs, we see consistent steps that underpin successful information security risk management. We have created a leaderboard of 10 discussion points for COOs, CIOs and CISOs to keep up with their information security needs:

  • Information Security Standards
    Understand information security standards like NIST, ISO 27001/2 and BIP 0116/7 and put in place processes and controls accordingly. These are good practices to keep a secure digital environment and are vital to include in your risk mitigation strategy. Preventing cyber attacks and data breaches is less costly and less resource-exhaustive than dealing with the damage caused by these attacks. There are serious repercussions of security breaches in terms of cost and reputational damage, yet organisations still only look at the issue after the event. Data shows that firms prefer to take a passive approach to tackle these issues instead of taking steps to prevent them in the first place.
  • Managing security in cloud delivery models
    2020 has seen a rise in the use of SaaS applications to support employee engagement, workflow management and communication. While cloud is still an area in its preliminary stages, cloud adoption is rapidly accelerating. But many firms have initiated cloud migration projects without a firm understanding and design for the future business, customer or end user flows. This is critical to ensuring a good security infrastructure in a multi-cloud operating environment. How does your firm keep up with the latest developments in Cloud Management?
  • Operational resilience
    70% of Operational Risk professionals say that their priorities and focus have changed as a result of Covid-19(4). With less than half of businesses testing their continuity and business-preparedness initiatives(5), Coronavirus served as an eye-opener in terms of revisiting these questions. Did your business continuity plan prove successful? If so, what was the key to its success? How do you define and measure operational resilience in your business? Cross-functional data sets are increasingly vital for informed risk management.
  • Culture
    Cyber risk is not just a technology problem; it is a people problem. You cannot mitigate cyber risks with just technology; embedding the right culture within your team is vital. How do you make sure a cyber-secure company culture is kept up in remote working environments? Does your company already have an information security training plan in place?

 

  • Knowing what data is important
    Data is expanding exponentially – you have to know what you need to protect. Only by defining important data, reducing the signal-to-data noise and aggregating multiple data points can organisations look to protect them. As a firm, what percentage of your data elements are defined with an owner and user access workflow?
  • Speed of innovation means risk
    The speed of innovation is often faster than the speed of safety. As technology and data adoption is rapidly changing, data protection has to keep up as well – there is little point in investing in technology until you really understand your risks and your exposure to those risks. This is increasingly true of new business-tech frameworks, including DLT, AI and Open Banking. When looking at DLT and AI based processes - how do you define the security and thresholds?
  • Master the basics
    80% of UK companies and startups are not Cyber Essentials ready, which shows that the fundamentals of data security are not being dealt with. Larger companies are rigid and not sufficiently agile – more demands are being placed on teams but without sufficient resources and skills development. Large companies cannot innovate if they are not given the freedom to actually adapt. What is the blocker in your firm?
  • Collaborate with startups
    Thousands of innovative startups tackling cyber security currently exist and many more will begin their growth journey over the next few years. Larger businesses need to be more open to collaborating with them to help speed up advancements in the cyber risk space.
  • The right technology can play a key role in efficiency and speed
    We see that the emerging operating models for firms are open-API based, and organisations need to stitch together many point solutions. Technology can help here if deployed correctly: for instance, to join up multiple data sets, to provide transparency of messages crossing in and out of systems, and to execute and detect information security processes and controls with 100x efficiency and speed. This will make a material difference in the new world of financial services.
  • Transparency of your supply chain
    Supply chains are becoming more data-driven than ever with increased number of core operations and IT services being outsourced. Attackers are using weak supplier controls to compromise client networks and dispersed dependencies create increased reliance and risk exposure from entities outside of your direct control. How do you manage the current pressure points of your supplier relationships?

 Next steps

 

Cyber risk (especially regarding data protection) is simultaneously a compliance problem (regulatory risk, legal risk, etc.), an architecture problem (infrastructure, business continuity, etc.), and a business problem (reputational risk, loss of trust, ‘data poisoning’, competitor intelligence, etc.). There are existing risk assessment frameworks for managing operational risk (for example, ORMF) – why not plug in?

Getting the basics right, using industry standards, managing multi-cloud environments and ensuring transparency of your supply chain are good places to start. These are all part of holistic data risk management (HRM).

While each of these issues poses problems on its own, they can be viewed through their inter-relationships: applying a holistic approach allows a coordinated solution to be found that manages these issues efficiently as a whole. The solution lies in taking a more deliberate approach to cyber security and following this 4-step process (a minimal illustrative sketch follows the list below):

 IDENTIFY
 ORGANISE
 ASSIGN
 RESOLVE
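A minimal sketch of how that four-step workflow might be tracked, using hypothetical class and field names; it is illustrative only, not a prescribed implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Stage(Enum):
    IDENTIFY = 1   # risk discovered and described
    ORGANISE = 2   # grouped with related compliance, architecture and business risks
    ASSIGN = 3     # given a named owner
    RESOLVE = 4    # mitigated, accepted or closed with evidence

@dataclass
class CyberRisk:
    description: str
    domain: str                      # e.g. "compliance", "architecture", "business"
    stage: Stage = Stage.IDENTIFY
    owner: Optional[str] = None

    def assign(self, owner: str) -> None:
        self.owner = owner
        self.stage = Stage.ASSIGN

    def resolve(self) -> None:
        if self.owner is None:
            raise ValueError("assign an owner before resolving")
        self.stage = Stage.RESOLVE

# Example: a supply-chain exposure moving through the workflow.
risk = CyberRisk("Outsourced IT provider lacks access controls", "architecture")
risk.stage = Stage.ORGANISE
risk.assign("CISO")
risk.resolve()
```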

 

 

Find out more on Operational Resilience from Leading Point:
https://leadingpointfm.com/operational-resilience-data-infrastructure-and-aconsolidated-risk-view-is-pivotal-to-the-new-rules-on-operational-risk/#_edn2

Find out more on Data Kitchen, a Leading Point initiative:
https://leadingpointfm.com/the-data-kitchen-does-data-need-science/

 

 

(1) https://www.fintechnews.org/the-2020-cybersecurity-stats-you-need-to-know/

(2) https://www.techfunnel.com/information-technology/cyber-security-trends/

(3) https://www.technologyreview.com/2018/10/18/139708/a-cyber-skills-shortage-means-students-are-being-recruited-to-fight-off-hackers/

(4) https://leadingpointfm.com/operational-resilience-data-infrastructure-and-a-consolidated-risk-view-is-pivotal-to-the-new-rules-on-operational-risk/#_edn2

(5) https://securityintelligence.com/articles/these-cybersecurity-trends-could-get-a-boost-in-2020/

 

 

 


Rajen Madan

Founder & CEO

rajen@leadingpoint.io

Delivering Digital FS businesses. Change leader with over 20 years’ experience in helping firms with efficiency, revenue and risk management challenges


Aliz Gyenes

Leading Point

aliz@leadingpoint.io

Data innovation, InfoSec, and investment behaviour research. Helping businesses understand and improve their data strategy via the Leading Point Data Innovation Index.



Artificial Intelligence: The Solution to the ESG Data Gap?

The Power of ESG Data

It was Warren Buffett who said, “It takes twenty years to build a reputation and five minutes to ruin it”, and that is the reality all companies face on a daily basis. An effective set of ESG (Environmental, Social & Governance) policies has never been more crucial. However, it is being hindered by difficulties surrounding the effective collection and communication of ESG data points, as well as a lack of standardisation when it comes to reporting such data. As a result, the ESG space is being revolutionised by Artificial Intelligence, which can find, analyse and summarise this information.
 

There is increasing public and regulatory pressure on firms to ensure their policies are sustainable, and on investors to take such policies into account when making investment decisions. The issue for investors is how to know which firms are good ESG performers and which are not. The majority of information dominating research and ESG indices comes from company-reported data. However, with little regulation surrounding this, responsible investors are plagued by unhelpful data gaps and “greenwashing”. This is when a firm uses favourable data points and convoluted wording to appear more sustainable than it is in reality; it may even leave out data points that reflect badly on it. For example, firms such as Shell have been accused of using the word ‘sustainable’ in their mission statement whilst providing little evidence to support their claims (1).

Could AI be the complete solution?

AI could be the key to helping investors analyse the mountain of ESG data that is yet to be explored, both structured and unstructured. Historically, AI has proven able to extract relevant information from data sources including news articles, but it also offers new and exciting opportunities. Consider the transcripts of board meetings from a Korean firm: AI could be used to translate and examine such data using techniques such as sentiment analysis. Does the CEO seem passionate about ESG issues within the company? Are they worried about a human rights investigation being undertaken against them? This is a task that would be labour-intensive, to say the least, for analysts to complete manually.
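As a rough illustration of the sentiment analysis step described above, the sketch below scores a couple of invented, already-translated transcript excerpts with a generic pre-trained model via the Hugging Face transformers pipeline; a production ESG workflow would use purpose-built models and a translation step rather than this default.

```python
from transformers import pipeline  # Hugging Face transformers

# A generic pre-trained sentiment model stands in for a purpose-built ESG model;
# the excerpts below are invented and assumed to be already translated.
classifier = pipeline("sentiment-analysis")

excerpts = [
    "We are fully committed to reaching our emissions targets ahead of schedule.",
    "The human rights investigation poses a material risk to the group this year.",
]

for excerpt, result in zip(excerpts, classifier(excerpts)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {excerpt}")
```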

 

In addition, AI offers an opportunity for investors to not only act responsibly, but also align their ESG goals to a profitable agenda. For example, algorithms are being developed that can connect specific ESG indicators to financial performance and can therefore be used by firms to identify the risk and reward of certain investments. 

 

Whilst AI offers numerous opportunities with regards to ESG investing, it is not without fault. Firstly, AI takes enormous amounts of computing power and, hence, energy. For example, in 2018, OpenAI found the level of computational power used to train the largest AI models has been doubling every 3.4 months since 2012 (2). With the majority of the world’s energy coming from non-renewable sources, it is not difficult to spot the contradiction in motives here. We must also consider whether AI is being used to its full potential; when simply used to scan company published data, AI could actually reinforce issues such as “Greenwashing”. Further, the issue of fake news and unreliable sources of information still plagues such methods and a lot of work has to go into ensuring these sources do not feature in algorithms used. 

 

When speaking with Dr Thomas Kuh, Head of Index at leading ESG data and AI firm Truvalue Labs™, he outlined the difficulties surrounding AI but noted that since it enables human beings to make more intelligent decisions, it is surely worth having in the investment process. In fact, he described the application of AI to ESG research as ‘inevitable’ as long as it is used effectively to overcome the shortcomings of current research methods. For instance, he emphasised that AI offers real time information that traditional sources simply cannot compete with. 

 A Future for AI?

According to a 2018 survey from Greenwich Associates (3), only 17% of investment professionals currently use AI as part of their process; however, 40% of respondents stated they would increase budgets for AI in the future. As an area where investors are seemingly unsatisfied with traditional data sources, ESG is likely to see more than its fair share of this increase. Firms such as BNP Paribas (4) and Ecofi Investissements (5) are already exploring AI opportunities and many firms are following suit. We at Leading Point see AI inevitably becoming integral to an effective responsible investment process and intend to be at the heart of this revolution. 

 

AI is by no means judge, jury and executioner when it comes to ESG investing; its value depends on those behind it constantly working to improve the algorithms, and on the analysts using it to make more informed decisions. AI does, however, have the potential to revolutionise what responsible investment means and help reallocate resources towards firms that will create a better future.

[1] The problem with corporate greenwashing

[2] AI and Compute

[3] Could AI Displace Investment Bank Research?

[4] How AI could shape the future of investment banking

[5] How AI Can Help Find ESG Opportunities

 

"It takes twenty years to build a reputation and five minutes to ruin it"

 


Environmental Social Governance (ESG) & Sustainable Investment

Client propositions and products in data driven transformation in ESG and Sustainable Investing. Previous roles include J.P. Morgan, Morgan Stanley, and EY.

 

Upcoming blogs:

This is the second in a series of blogs that will explore the ESG world: its growth, its potential opportunities and the constraints that are holding it back. We will explore the increasing importance of ESG and how it affects business leaders, investors, asset managers, regulatory actors and more.

 

 

Riding the ESG Regulatory Wave: In the third part of our Environmental, Social and Governance (ESG) blog series, Alejandra explores the implementation challenges of ESG regulations hitting EU Asset Managers and Financial Institutions.

Is it time for VCs to take ESG seriously? In the fourth part of our Environmental, Social and Governance (ESG) blog series, Ben explores the current research on why startups should start implementing and communicating ESG policies at the core of their business.

Now more than ever, businesses are understanding the importance of having well-governed and socially-responsible practices in place. A clear understanding of your ESG metrics is pivotal in order to communicate your ESG strengths to investors, clients and potential employees.

By using our cloud-based data visualisation platform to bring together relevant metrics, we help organisations gain a standardised view and improve their ESG reporting and portfolio performance. Our live ESG dashboard can be used to plan scenarios, map out ESG strategy and tell the ESG story to stakeholders.

AI helps with the process of ingesting, analysing and distributing data as well as offering predictive abilities and assessing trends in the ESG space.  Leading Point is helping our AI startup partnerships adapt their technology to pursue this new opportunity, implementing these solutions into investment firms and supporting them with the use of the technology and data management.

We offer a specialised and personalised service based on firms’ ESG priorities.  We harness the power of technology and AI to bridge the ESG data gap, avoiding ‘greenwashing’ data trends and providing a complete solution for organisations.

Leading Point's AI-implemented solutions decrease the time and effort needed to monitor current/past scandals of potential investments. Clients can see the benefits of increased output, improved KPIs and production of enhanced data outputs.

We implement ESG regulations and provide operational support to improve ESG metrics for banks and other financial institutions: ensuring compliance by benchmarking and disclosing ESG information, collecting in-depth data to satisfy corporate reporting requirements, supporting appropriate investment and risk management decisions, and making disclosures to clients and fund investors.

 


Regulatory Risk: Getting away from Whack-a-Mole

Senior Management is under more pressure than ever to demonstrate compliance and risk-sensitive decision making - but the process by which they do it is straining under the sheer number and weight of obligations to manage.

36% of fines handed out by the FCA over the last 3 years - over a third - have been for failings related to management and control (PRIN 3)*. With an average penalty of £24 million, firms cannot afford to be lax in this area. Transparency of their firm’s systems and controls continues to be vital for leaders at Board level and within Senior Management Functions, to ensure that their business is compliant and within risk tolerances.

Increasingly, during the ongoing pandemic, regulators expect comprehensive, responsible, and tangible governance and control to be operated by regulated firms. Creating transparency of firms’ regulatory activity across the business is paramount, not just for leaders at Board level and in Senior Management Functions (SMFs), but also in the supporting infrastructure within Compliance, Operations, Technology, Finance, Legal, and HR.

In their recent Joint Statement for Firms, the UK regulators outlined that firms must:

“Develop and implement mitigating actions and processes to ensure that they continue to operate an effective control environment: in particular, addressing any key reporting and other controls on which they have placed reliance historically, but which may not prove effective in the current environment. .. Consider how they will secure reliable and relevant information, on a continuing basis, in order to manage their future operations.”**

Joint statement by the Financial Conduct Authority (FCA), Financial Reporting Council (FRC) and Prudential Regulation Authority (PRA), 26th March 2020

‘Securing reliable and relevant information’ is harder than it sounds. The information required for this is frequently cobbled together in PowerPoint, Excel or other tools from a wide variety of disparate sources. This is inefficient and time intensive, and is subject to inconsistencies. Information may be out of date by the time it is produced, and often does not meet the level of detail required by the various audiences. 

More than that, Senior Managers lack a consolidated view of their regulatory risk across their business. This is difficult to achieve given the number of areas they need to monitor, ongoing regulatory change, and the pace of digital transformation. Managers are often spending more time piecing together a picture of their overall regulatory ‘health’ and fighting fires than they are developing the business.

Compliance issues become like Whack-A-Mole, as soon as one gets whacked, another one pops up, and then another. Senior Management are effectively blindfolded holding the ‘mole hammer’ and have to ask a business analyst or a compliance officer “are there any moles today?” and “what do I hit?”. 

These regulatory moles are not common-or-garden business-problem moles. There may be hundreds of moles to whack at any given time. As a result, managers need the ability to triage the reports of mole sightings to decide which is most pressing. Which is most likely to ruin their lawn? Is it the Sanctions Breach mole, the Data Protection mole or the Transaction Reporting mole?

Not only are there many of them - you need to keep records of which ones you’ve whacked and why. At some point you’ll need to evidence why you didn’t whack the Sanctions Breach mole immediately and provide the context for that decision. If you fail to whack enough of them, or the right ones, your business could be fined, or worse, you personally could end up in court.

This is a much more pressing issue due to the level of personal accountability, and broadened personal liability, introduced by the Senior Managers and Certification Regime (SM&CR). The SM&CR, which came into force on 9th December 2019, overhauled the Approved Persons Regime for individuals working in UK financial services firms, placing more stringent requirements on senior managers to take responsibility for their firms’ activities through a ‘Duty of Responsibility’ to take ‘reasonable steps’ to prevent or stop regulatory breaches.

As the FCA Handbook states in their “Specific guidance on individual conduct rules” (COCON 4.2) addressed to Senior Managers: “SC2: You must take reasonable steps to ensure that the business of the firm for which you are responsible complies with the relevant requirements and standards of the regulatory system.”***

We believe that one of these ‘Reasonable Steps’ is having appropriate reporting to achieve a clear view of the ‘Regulatory Health’ of their business and their risk points. Firms and Senior Managers need the ability to:

  1. Capture key regulatory risk metrics
  2. Link them to the appropriate compliance monitoring data
  3. Put those risk metrics into context across the business
  4. Generate a consolidated view of the business’ regulatory health and risk points
  5. Make it accessible & easily understandable to the relevant managers
  6. Make it ‘persistent’ over time and allow ‘point-in-time’ views of risk levels

A solution that can a) take existing and live compliance data, b) isolate the risk metrics that really ‘matter’, and c) present them in context across regulations and business areas is what Senior Managers need to build a picture of their overall risk.
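To make the data requirement concrete, below is a minimal, hypothetical sketch (in Python) of how compliance monitoring readings might be rolled up into a consolidated ‘regulatory health’ view. The field names, thresholds and RAG logic are illustrative assumptions only - this is not a description of SMART_Dash or of any particular firm’s data.

# Illustrative sketch only: field names, thresholds and RAG logic are assumptions.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class RiskMetric:
    regulation: str      # e.g. "Transaction Reporting"
    business_area: str   # e.g. "Markets"
    value: float         # latest reading from compliance monitoring
    amber: float         # agreed tolerance thresholds
    red: float
    as_of: str           # reading date, enabling 'point in time' views

def rag_status(m: RiskMetric) -> str:
    # Translate a raw metric into a simple red/amber/green status
    if m.value >= m.red:
        return "RED"
    return "AMBER" if m.value >= m.amber else "GREEN"

def regulatory_health(metrics: list[RiskMetric]) -> dict:
    # Keep the worst status seen for each (business area, regulation) pair
    order = {"GREEN": 0, "AMBER": 1, "RED": 2}
    view = defaultdict(lambda: "GREEN")
    for m in metrics:
        key = (m.business_area, m.regulation)
        if order[rag_status(m)] > order[view[key]]:
            view[key] = rag_status(m)
    return dict(view)

metrics = [
    RiskMetric("Transaction Reporting", "Markets", value=120, amber=100, red=200, as_of="2020-06-30"),
    RiskMetric("Sanctions Screening", "Retail", value=3, amber=5, red=10, as_of="2020-06-30"),
]
print(regulatory_health(metrics))
# {('Markets', 'Transaction Reporting'): 'AMBER', ('Retail', 'Sanctions Screening'): 'GREEN'}

The point of the sketch is the shape of the data, not the tooling: once metrics carry their own thresholds, business area and date, the consolidated and point-in-time views listed above fall out of a simple roll-up.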

Senior Management should know where the regulatory moles are - without having to ask. Rather than reviewing reams of documentation, such a solution would give managers a more holistic and focused view of regulatory risk across their business, as well as saving the time and resource spent creating, managing, and reviewing PowerPoints. Knowing what to look for is half the battle, after all.

Don’t let the moles ruin your lawn.

 

References

1. Leading Point analysis of FCA fines related to PRIN 3 Management and control: “A firm must take reasonable care to organise and control its affairs responsibly and effectively, with adequate risk management systems.” FCA Principles for Business https://www.handbook.fca.org.uk/handbook/PRIN/2/?view=chapter

 

2. https://www.bankofengland.co.uk/-/media/boe/files/prudential-regulation/publication/2020/joint-statement-on-covid-19.pdf?la=en&hash=28F9AC9E45681F3DC65B90B36B5C92075048955F

 

3. “Specific guidance on individual conduct rules” (COCON 4.2) addressed to Senior Managers: https://www.handbook.fca.org.uk/handbook/COCON/4/2.html

On July 14th, experts from banks, hedge funds and market infrastructure providers will discuss how financial institutions can create transparency and insights from their regulatory risk data, and Leading Point will introduce their new industry-leading regulatory risk data system SMART_Dash.

Panellists will discuss:

- The challenges of internal regulatory oversight that all financial services firms are facing

- How businesses can create a consolidated view of their regulatory risk

- The ways that regulatory monitoring data can be more accessible

- An introduction to SMART_Dash, a revolutionary tool providing regulatory risk reassurance

*Regulatory Risk, not moles

Join our webinar to learn more about how to create transparency and insights from regulatory risk data

 

 

 

Senior Management are effectively blindfolded holding the ‘mole hammer’ and have to ask a business analyst or a compliance officer “are there any moles today?” and “what do I hit?”.

 

36% of fines handed out by the FCA over the last 3 years - over a third - have been for failings related to management and control (PRIN 3).

 

"[Firms must] Consider how they will secure reliable and relevant information, on a continuing basis, in order to manage their future operations."

 

"firms need to ensure that their cloud-based operating models are not only safe and secure, but address the capabilities required for operational resilience testing. Investment in frameworks and data analytics that can support these capabilities are essential"

 

Thushan Kumaraswamy
Head of Solutions

Architecture lead with over 20 years’ experience helping the world’s biggest financial services providers in capital markets, banking and buy-side to deliver practical business transformations in client data, treasury, sales, operations, finance and risk functions, and major firm-wide efficiency initiatives. Mastery in business and technical architecture, with significant experience in end-to-end design, development and maintenance of mission critical systems in his early career. Specialities – business and technical architecture leadership, data warehousing, capital markets, wealth management, private banking.

 

 

Rajen Madan
Founder & CEO

Change leader with over 20 years’ experience in helping financial markets with their toughest business challenges in data, operating model transformation in sales, CRM, Ops, Data, Finance & MI functions, and delivery of complex compliance, front-to-back technology implementations. Significant line experience. Former partner in management consulting leading client solution development, delivery and P&L incl. Accenture. Specialities – Operating Models, Data Assets, Compliance, Technology Partnerships & Solutions in Capital Markets, Market Infrastructure, Buy-Side, Banking & Insurance.

 

 


What if business operations could be more like Lego?

Financial services (FS) professionals from 30+ organisations tuned in to our inaugural webinar last week, “What if business operations could be more like Lego?”, to hear about the challenges that COOs and Heads of Change face in changing their business operating models and how we might break through the barriers. A summary of key takeaways from the discussion is presented below. See the webinar recording here

 

The importance of ‘Know Your Operating Model’

FS firms are under renewed pressure to rethink their operating models; competitive pressure, raised consumer expectations, and continuous regulatory requirements mean constant operating model re-think and change. Yet most firms are stuck with theoretical target operating models that lack a plan, a way to measure performance and progress, or a business case. As a result, only 25% of investors are confident that strategic digital transformation will be effective.**

Innovation is hindered as firms struggle to overcome significant technical debt in order to implement new technology (e.g. automation, AI, cloud) while much of their budget is tied up in high operating costs. Indeed, 80% of technology spend in organisations is focused on legacy systems and processes, only 20% of analytics insights deliver business outcomes, and 80% of AI projects “remain alchemy, run by wizards”.***

Insufficient business understanding means lost opportunities, wasteful spend and risk: if you don’t understand your business well enough, you expose yourself to both.

 

The barriers to business understanding

Firms’ current approaches to business operations and change are not fit for purpose.

Insight Gap in the Boardroom: Experts with specialist toolkits are needed to structure and interpret most business information. Management’s understanding of the business is often directly related to the ability of their analytical teams to explain it to them. Most firms are still stuck with an overload of information without insights, without the right questions being asked.

Cultural Challenge: Many execs still think in terms of headcount and empire building rather than outcomes, capabilities, and clients.

Misaligned metrics: Metrics are too focused on P&L, costs and bonuses, and not enough on holistic organisation metrics, proof points and stories.

Complexity makes it difficult to act… Most enterprises suffer from excessively complicated operating models where the complexity of systems, policies, processes, controls, data and their accompanying activities make it difficult to act.

…and difficult to explain: Substantiating decisions to stakeholders, regulators or investors is an ongoing struggle, for both new and historic decisions.

If you can't measure it, you can't manage it: Inconsistent change initiatives without performance metrics compound errors of the past and mean opportunities for efficiency gains go unseen.

How can we break through these barriers?

Business insight comes from context, data and measurement: How the building blocks of the business fit together and interact is essential to the ‘what’ and ‘how’ of change, and measurement is key to drive transparency and improved behaviours.

Operating model dashboards are essential: Effective executives either have extremely strong dashboards underpinning their decisions or have long-standing experience at the firm across multiple functions and get to “know” their operating model innately. This is a key gap in most firms. 50% of attendees chose improved metrics & accessibility of operating model perspectives as priority areas to invest in.

Less is more: Senior managers should not be looking at more than 200 data points to run and change their business. Focusing on the core and essential metrics is necessary to cut through the noise.

The operating model data exists, it should now be harvested: The data you need probably already exists in PowerPoint presentations, Excel spreadsheets and workflow tools, but firms have historically struggled to harvest this data and automate the gathering process (a minimal illustrative sketch of the harvesting idea follows after these takeaways). We demonstrated how operating model data can be collected and used to create insights for improved decision-making using the modellr platform.

Culture change is central: Culture was voted by attendees as the #1 area to invest in, in order to improve business decision-making. Organisational culture is a key barrier to operating model change. A culture that incentivises crossing business silos and transparency will create benefits across the enterprise.

Client-driven: Clients are driving firms to more real-time processing along with the capability to understand much more information. Approaches that combine human intelligence with machine intelligence are already feasible and moving into the mainstream.

Get comfortable with making decisions with near perfect information: Increasingly executives and firms need to get comfortable with “near perfect” information to make decisions, act and deliver rapid business benefits.
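Picking up the harvesting point above, here is a minimal sketch of what automated gathering could look like, assuming the data sits in Excel workbooks with a 'Processes' sheet and 'Capability', 'Cost' and 'Headcount' columns. The file names, sheet name and column headings are hypothetical; this illustrates the idea rather than describing the modellr platform.

# Hypothetical sketch: file names, sheet names and column headings are assumptions.
import pandas as pd

def harvest_operating_model(paths: list[str]) -> pd.DataFrame:
    # Pull process-level data out of scattered workbooks into a single table
    frames = []
    for path in paths:
        df = pd.read_excel(path, sheet_name="Processes")  # assumed sheet name
        df["source_file"] = path                          # keep lineage for traceability
        frames.append(df)
    combined = pd.concat(frames, ignore_index=True)
    # Simple roll-up: cost and headcount by business capability
    return combined.groupby("Capability")[["Cost", "Headcount"]].sum().reset_index()

# summary = harvest_operating_model(["ops_model_markets.xlsx", "ops_model_retail.xlsx"])
# print(summary.sort_values("Cost", ascending=False).head(10))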

 

Future Topics of Interest

Regulatory Reassurance: Regulators continue to expect comprehensive, responsible and tangible governance and control from Senior Managers. How can firms keep up with their regulatory obligations in a clear and simple way?

Environmental, Social & Governance (ESG): An increasingly popular subject, ESG considers the impact of businesses on the environment and society. ESG metrics are becoming more important to investors and regulators, and firms are looking for consistent ways to measure performance and progress against them.

Operating Model-as-a-Service: As well as managing business operations themselves, firms need to monitor the models that describe those operations; their current state, their target state and the roadmap between the two. Currently, this is often done with expensive PowerPoint presentations that are usually left in cupboards and ignored because they are not “live” documents. Metrics around the operating model can be captured and tracked in a dashboard.

Anti-Financial Crime (AFC): Money laundering, terrorist financing, fraud, sanctions, bribery & corruption; the list of ways to commit financial crime through FS firms grows by the day. How can firms track their AFC risk levels and control effectiveness to see where they need to strengthen?

Information Security: With the huge volume of data that firms now collect, process and store, there are ever more risks to keeping that data secure and private. Regulations like GDPR allow very large fines to be imposed on firms that breach them. Industry standards, such as ISO 27001, help raise standards around information security.

*,**  Oliver Wyman, 2020, The State Of The Financial Services Industry

*** Gartner, 2019, Our Top Data and Analytics Predicts for 2019

 


Operational Resilience: data infrastructure and a consolidated risk view is pivotal to the new rules on operational risk

What have we learnt about Operational Resilience in the last three months?  

The last three months have taken the world - and Financial Services - completely by surprise and further highlighted some major weaknesses in firms’ approaches to operational risk.

In January 2020, infectious disease, or pandemic risk, was not in the top 20 operational risks in Financial Services - a list dominated at the time by cybercrime, data breaches and financial crime.[1] While many firms will have run pandemic scenarios at some point as part of their operational risk scenario analysis programme (probably based on SARS or Ebola), it is becoming increasingly clear that many firms’ business continuity plans were being updated ‘on the fly’ as they moved into crisis management while the pandemic evolved. 70% of Operational Risk professionals say that their priorities and focus have changed as a result of Covid-19.[2]

This is understandable. No-one anticipated a situation of near total remote working that the pandemic has called for – even in extreme scenarios.

Many banks and insurance companies now have up to 90% of their staff working from home and are attempting to manage the plethora of associated impacts and increased risks resulting from this new environment.

Risks such as internal fraud, unauthorised activity, and simple operational errors, mistakes and omissions are increasing as a direct consequence of the reduced monitoring capability caused by remote working. Many indirect risks are also rising, such as cyber criminals taking advantage of new vulnerabilities exposed by remote working.

 

Regulators are re-writing the rulebook on how to manage operational risk

The ability of Financial Services to cope in situations such as this has been an area of regulatory focus for some years now, driven in great part by the parliamentary response to high-profile IT failures such as those at TSB and RBS[3]. Under the banner of ‘Operational Resilience’, regulators are looking at the “ability of firms and the financial sector as a whole to prevent, adapt, respond to, recover, and learn from operational disruptions.”

The Bank of England & FCA released a discussion paper in 2018 on this topic, stating:

“The financial sector needs an approach to operational risk management that includes preventative measures and the capabilities – in terms of people, processes and organisational culture – to adapt and recover when things go wrong.”[4]

Covid-19 is a prime example of things ‘going wrong’.

As a result, regulators are closely monitoring this situation as Covid-19 replaces Brexit as the test case for UK financial services’ ‘Operational Resilience’ rules. How firms manage Covid-19 now will shape the final form of the imminent legislation, as firms’ successes and failures are factored into the final rules due in 2021.

A joint PRA/FCA consultation paper ‘CP29/19 Operational resilience: Impact tolerances for important business services’ released in December 2019[5] breaks down their proposed policy and regulatory requirements to reform operational risk management. Namely:

  1. Identification of Important Business services – A firm or Financial Market Infrastructure (FMI) must identify and document the necessary people, processes, technology, facilities, and information (referred to as resources) required to deliver each of its important business services.
  2. Set impact tolerances for those business services – firms should articulate specific maximum levels of disruption, including time limits within which they will be able to resume the delivery of important business services following severe but plausible disruptions.
  3. Remain within those impact tolerances – scenario testing assesses a firm or FMI’s ability to remain within its impact tolerance for each of its important business services in the event of a severe (or, in the case of FMIs, extreme) but plausible disruption to its operations.

The shift in focus means moving away from tracking individual risks to individual systems and resources towards considering the chain of activities which make up a business service and its delivery. This includes outsourcing and third party risk management, as made clear in a separate consultation paper. [6] As a result, operational risk management will become significantly more data intensive.

To understand business services’ impact tolerances through ongoing testing requires a significant level of infrastructure and data sophistication. Identifying and assessing the criticality of the ‘chain’ of activities involved is a project in itself, but defining, collecting, and reporting on the right metrics on an ongoing basis would require purpose-built infrastructure.
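By way of illustration only, the sketch below shows one way the CP29/19 concepts - important business services, the resources that deliver them, and impact tolerances - could be represented as data so that scenario test results can be checked against tolerances. The class names, fields and figures are our assumptions, not regulatory definitions.

# Illustrative data model only; names, fields and figures are assumptions, not CP29/19 text.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str              # "people" | "process" | "technology" | "facility" | "information"

@dataclass
class ImportantBusinessService:
    name: str                                  # e.g. "Retail payments"
    resources: list[Resource] = field(default_factory=list)
    max_outage_hours: float = 0.0              # impact tolerance: time to resume the service

def within_tolerance(service: ImportantBusinessService, observed_outage_hours: float) -> bool:
    # Scenario test check: did the simulated disruption stay inside the tolerance?
    return observed_outage_hours <= service.max_outage_hours

payments = ImportantBusinessService(
    "Retail payments",
    resources=[Resource("Payments gateway", "technology"), Resource("Payments Ops team", "people")],
    max_outage_hours=4.0,
)
print(within_tolerance(payments, observed_outage_hours=6.5))  # False: tolerance breached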

As they stand, the rules under consultation require firms to produce detailed end-to-end mappings of processes, applications and people, along with new and updated policies, standards and procedures. Testing of operational resilience programmes will require significant effort from firms, depending on the scale and complexity of operations, the testing frequency, and the level of integration required.

Alongside these operational changes, the regulators expect Boards and senior management to consider operational resilience when making strategic decisions. As a result, robust information tools are needed that incorporate metrics such as KRIs, KCIs or KPIs into informed strategic decision making.[7]

 

How firms currently manage their operational risks is undergoing a paradigm shift

Firms’ existing operational risk management is primarily informed by Basel II’s capital requirements legislation[8]. Firms are required to hold Operational Risk Capital (ORC) against aggregate operational risks, calculated largely against quantifiable, historical ‘loss events’ (i.e. how much money was lost, and for what reason) and RCSA[9] scores based on the adequacy of the controls designed to prevent those losses.

Basel II’s more sophisticated, model-based Advanced Measurement Approach (AMA) has been widely criticised as difficult to implement and ineffective – leading many firms to default to the simpler Basic Indicator Approach (BIA) rather than invest in the infrastructure to support the AMA, accepting instead the increased capital charges the BIA entails.
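For context, the BIA referenced above sets the operational risk capital charge as a flat supervisory factor (alpha, 15%) applied to average positive annual gross income over the previous three years, excluding any year in which gross income is negative or zero. A short worked sketch, with made-up income figures:

# Basel II Basic Indicator Approach, with illustrative (made-up) gross income figures.
ALPHA = 0.15  # supervisory factor set by Basel II

def bia_capital(gross_income_3y: list[float]) -> float:
    # Average positive annual gross income over three years, multiplied by alpha.
    # Years with zero or negative gross income are excluded from numerator and denominator.
    positive = [gi for gi in gross_income_3y if gi > 0]
    return ALPHA * sum(positive) / len(positive) if positive else 0.0

print(bia_capital([900.0, 1_100.0, -200.0]))  # 0.15 * (900 + 1100) / 2 = 150.0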

As a result, most operational risk scenarios have been largely event-driven, e.g. what happens if the trade reconciliation system goes down? Firms largely do not attempt to model what would happen if, for example, that system deteriorated by 20%.

This is the key difference in approach between the proposed operational resilience rules and existing frameworks. Where traditional operational risk management is much more siloed and vertical, operational resilience requires a much more holistic, and horizontal, approach internally.

Taking an end-to-end view of the ‘chain’ of activities that make up a service and its associated controls, means tracking the entirety of the inputs and outputs from front to back across business lines, middle and back offices, and 3rd party suppliers and outsourcing (e.g. from sales to execution to settlement).

As a result, analysing the impact of a deterioration in control effectiveness requires data infrastructure and risk management software designed for the purpose, able to incorporate the relevant metrics (e.g. volume, uptime) and track the impact of changes across downstream processes.
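To illustrate the kind of horizontal, front-to-back analysis this implies, the sketch below models a service as a simple chain of dependent activities and shows how a 20% deterioration in one control’s effectiveness flows through to the end-to-end service. The chain, the effectiveness figures and the multiplicative assumption are simplifications for illustration only.

# Simplified illustration: a service modelled as a chain of activities whose
# effectiveness multiplies through front-to-back. All figures are assumptions.
chain = [
    ("Sales capture",        0.99),
    ("Order execution",      0.98),
    ("Trade reconciliation", 0.97),
    ("Settlement",           0.99),
]

def service_effectiveness(activities) -> float:
    # End-to-end effectiveness if every activity must succeed for the service to deliver
    result = 1.0
    for _, effectiveness in activities:
        result *= effectiveness
    return result

baseline = service_effectiveness(chain)
# What if the reconciliation control deteriorates by 20%?
degraded = [(name, eff * 0.8 if name == "Trade reconciliation" else eff) for name, eff in chain]
print(round(baseline, 3), round(service_effectiveness(degraded), 3))
# 0.932 vs 0.745 - a localised deterioration visibly erodes the whole service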

Given that many firms struggle to manage end-to-end business flows on a BAU basis without significant manual manipulation of data - because those flows are so complex and fractured - there will likely be significant challenges in defining and delivering resilience thresholds that meet the regulatory requirements, since the data sets underpinning such thresholds will be equally complex and fractured.

Basel II’s system is now being overhauled with the new Standardised Measurement Approach (SMA) under the Basel III regulations, now due in 2023[10]. As a result, banks will need to ensure their internal loss data is as accurate and robust as possible to substantiate their calculated ORC.

How this system meshes with the operational resilience rules is an open question for the industry. Can they be aligned, or will firms be doomed to operate multiple and potentially conflicting risk frameworks?

 

Movement to the cloud needs purposeful development of operational resilience capabilities

The regulators are clear about how they see the future of Financial Institutions – they should be deeply interconnected with the regulators and be able to provide the data they need ‘on tap’. The move towards more granular, end-to-end views of operational resilience needs to be seen as a continuation of this objective.

According to ORX, the international operational risk management association:

“Risks are becoming more interconnected and traditional operational risk management is not suited to manage them … we have tools, we have tactics, we have value, but that we lack a strategy. We need a strategy to deal with the changing risk horizon, new business models, changing technology and, most of all, new expectations from senior management.”[11]

These are issues the UK regulators understand deeply; however, the Operational Resilience proposals need to be seen in the broader regulatory context. In the UK, the industry spends £4.5 billion on regulatory reporting, but the BoE wants to move towards a more integrated system.

“supervisors now receive more than 1 billion rows of data each month… the amount of data available in regulatory and management reports now exceeds our ability to analyse it using traditional methods.”[12]

As a result, the BoE has tabled proposals to pull data directly from firms’ systems or use APIs to ‘skip the middleman’ and go directly to source[13].

The drive towards innovation and digital transformation means the industry is aggressively moving towards wholesale cloud adoption. As firms such as BlackRock and Lloyds sign strategic partnership deals with Google, Microsoft and other cloud providers, cloud technology is, in 2020, seen as a real, scalable and safe option for Financial Services.

While cloud security is a well-known concern, firms need to ensure that their cloud-based operating models are not only safe and secure, but also address the capabilities required for operational resilience testing. Investment in frameworks and data analytics that can support these capabilities is essential – but should not be limited to purely operational resilience objectives.

Cloud adoption is a huge opportunity for firms to build ‘green field’ infrastructure that can not only support digitisation and business transformation objectives but also support ever-increasing data requirements – regulatory or otherwise. The ability to handle and trace iterative regulatory requirements for new data sets needs to be built into the fabric of firms’ operating models, not just for compliance purposes but also to track the impact of that compliance.

Conclusion

How many firms today have a consolidated view of their anti-financial crime, information security, or other non-financial or compliance risks, the resources devoted to their management, or the management information on tap to support decision making? It is clear firms need the right infrastructure and tools to support the granularity and traceability of these data sets.

Real investment in operational risk data capabilities can yield significant business benefits – not just in the reduction of material risk and future spend on compliance, but as an invaluable source of internal intelligence for resource and business optimisation.

Top-of-the-line risk data positions Financial Institutions to further build out capabilities such as big data analytics, correlation and root cause analysis, and predictive risk intelligence.

However, in the face of the current pandemic, competing challenger institutions, market disruption, and the uncertainties of the future, the ability of firms to provide evidence that they are robust and resilient organisations will give them a real competitive advantage as clients seek resiliency as a core requirement of their banking/FMI partners.

Ultimately, the most important benefit a robust operational resilience framework can give firms is trust – from both customers and regulators.

 

[1] Risk.Net, March 2020, ‘Top 10 operational risks for 2020’ https://www.risk.net/risk-management/7450731/top-10-operational-risks-for-2020

[2] Elena Pykhova, 2020, ‘Operational Risk Management during Covid-19: Have priorities changed?’ https://www.linkedin.com/pulse/operational-risk-management-during-covid-19-have-changed-pykhova/

[3] House of Commons & Treasury Committee, October 2019, ‘IT failures in the Financial Services Sector’ https://publications.parliament.uk/pa/cm201919/cmselect/cmtreasy/224/224.pdf

[4] Bank of England & FCA, 2018, ‘Building the UK financial sector’s operational resilience’ https://www.bankofengland.co.uk/-/media/boe/files/prudential-regulation/discussion-paper/2018/dp118.pdf?la=en&hash=4238F3B14D839EBE6BEFBD6B5E5634FB95197D8A

[5] Bank of England/PRA, December 2019, ‘CP29/19 Operational resilience: Impact tolerances for important business services’ https://www.bankofengland.co.uk/-/media/boe/files/prudential-regulation/consultation-paper/2019/cp2919.pdf

[6] Bank of England/PRA, December 2019, ‘CP30/19 Outsourcing and third party risk management’ https://www.bankofengland.co.uk/-/media/boe/files/prudential-regulation/consultation-paper/2019/cp3019.pdf?la=en&hash=4766BFA4EA8C278BFBE77CADB37C8F34308C97D5

[7] Key Risk Indicators, Key Control Indicators, and Key Performance Indicators respectively.

[8] There are a whole host of regulations that impact operational risk management in a variety of ways such as CPMI-IOSCO Principles for Financial Market Infrastructures, the G7 Fundamental Elements of Cybersecurity for the Financial Sector, the NIST Cybersecurity Framework, ISO 22301, the Business Continuity Institute (BCI) Good Practices Guidelines 2018.

[9] (Risk Control Self Assessment)

[10] Delayed by a year as a result of Covid-19

[11] ORX, September 2019, The ORX Annual Report, https://managingrisktogether.orx.org/sites/default/files/public/downloads/2019/09/theorxannualreportleadingtheway_0.pdf

[12] Bank of England, June 2019, ‘New Economy, New Finance, New Bank: The Bank of England’s response to the van Steenis review on the Future of Finance’ https://www.bankofengland.co.uk/-/media/boe/files/report/2019/response-to-the-future-of-finance-report.pdf?la=en&hash=C4FA7E3D277DC82934050840DBCFBFC7C67509A4#page=11

[13]  Ibid

 

“Risks are becoming more interconnected and traditional operational risk management is not suited to manage them” –

ORX, The operational risk management association

 

 

Taking an end-to-end view of the ‘chain’ of activities that make up a service and its associated controls, means tracking the entirety of the inputs and outputs from front to back across business lines, middle and back offices, and 3rd party suppliers and outsourcing (e.g. from sales to execution to settlement).

 

Given many firms have challenges managing end-to-end business flows on a BAU basis without significant manual manipulation of data as they are so complex and fractured, there will likely be significant challenges around defining and delivering resilience thresholds which meet the regulatory requirements as the data sets underpinning such thresholds will also be complex and fractured.

 

“firms need to ensure that their cloud-based operating models are not only safe and secure, but address the capabilities required for operational resilience testing. Investment in frameworks and data analytics that can support these capabilities are essential”

 

No-one anticipated a situation of near total remote working that the pandemic has called for – even in extreme scenarios.

 

Real investment in operational risk data capabilities can yield significant business benefits – not just in the reduction of material risk and future spend on compliance, but as an invaluable source of internal intelligence for resource and business optimisation.

 

Nick Fry
Reg Change, Data SME, RegTech Propositions

Experienced financial services professional and consultant with 25 years’ experience in the industry. Extensive and varied business knowledge both as a senior manager in BAU and change roles within investment banking operations, and as a project delivery lead, client account manager, practice lead and business developer for consulting firms.

 

 

Alaric Gibson
Reg Change, Data SME, RegTech Propositions

Analyst with expertise in regulatory analysis and implementation, customer reference data management, and data driven transformation & delivery. Has worked for a number of RegTech start-ups within Capital Markets.

 

 


Time to Reset?

We see the varnish from the old oil painting of government, enterprise, business and leadership fade a bit every day. 2020 has already shown us how interconnected our world has become - a true Butterfly Effect. Interconnectivity is not a bad thing. It is the fragility, the brittleness of modern economies that is cause for concern. I believe this is a result of critical imbalances we have allowed to build up, without questioning. Now as the varnish from the old oil painting comes off, we have a once in a decade opportunity to reset and tackle these imbalances. To make bold brush strokes.

Where can we start?

Big Government or Small?

Do we need a Big Government or Small? The term ‘Big Government’ here is not intended to be derogatory. We see national priorities and decisions that don’t match those of the city, the village, or the council. Great plans and budgets that don’t translate into change on the ground. Equally, in the face of this crisis, we see barriers breaking down. A COVID-19 symptom tracker app, which each of us can use, allows a judicious allocation of scarce testing and treatment resources at national and grassroots level. The opportunity is to examine the flow from the national level down to the council. Provide transparency and allow engagement. If it doesn’t exist, it should be created. Direct channels for us citizens to highlight problems, propose solutions, be data-driven and monitor implementation. It is not a question of big government versus small. It is one that works transparently that matters.

Public or Private Sector Enterprise?

A key debate going into 2020 was about which sector provides a better service and is more efficient with resources - private or public sector enterprise? Think about the NHS, Transport, Energy, Manufacturing, Financial Services, Agriculture, Technology and Utilities. Healthy arguments and examples are cited to show the merits of both the public and private sectors. I believe the public-private argument completely misses the point. Whether an enterprise provides a good or poor service, and spends judiciously or not, is not down to public or private ownership. It is down to some key principles: how it is governed, how accountable its team and partners are, whether it knows what good service looks like, and whether it is equipped to provide those services. Enterprises can be funded by either public or private sector resources. The opportunity ahead is in data- and tech-enabled service delivery models - going digital. And public-private collaboration funding models can ignite innovation and value-added services. The key to providing good service is not public or private sector; it is to provide a good service!

 

Role of Business

Businesses are standing out in two ways in these times: those that care about their employees and partners and are doing their bit to help their communities, and those that pretend to. People will remember businesses that care. Those that don't will fall out of favour. That most of our essential "front line" staff in the face of a pandemic are paid low or minimum wages is cowardly. It shows the scale of the imbalances we have allowed to build up and seem to be comfortable with. Colleagues in maintenance, cleaning, nursing, restaurant, retail, agriculture, driving, security, manufacturing and teaching professions, amongst others, need to be compensated fairly. The opportunity here is to go after skewed compensation models, unviable business models and poor productivity with vigour. The tax structures reportedly exploited by big tech and conglomerates are ripe for reform and should become principle-driven. Likewise, business owners with billions calling for government bailouts, or large profitable companies using furlough schemes to offload their responsibilities onto the public, should face the consequences. This is a failure of law and of the will of successive governments. Let us get it right this time. Bashing businesses and entrepreneurs is not the answer. They are born from the risk-reward equation and are the lifeblood of any economy.

Lessons in Leadership

As much as it is tempting to draw leadership lessons from the current pandemic, they are unique to the situation and not one-size-fits-all. But I find the war analogy somewhat flawed. The Chancellor of the Exchequer, Rishi Sunak, said “we will be judged by our capacity for compassion and individual acts of kindness” – does that sound like a war? If anything, the lesson for future leaders is to be that much more focused on ensuring their team’s wellbeing and ensuring they are equipped with relevant resources. Good leaders will understand the importance of the informal and the invisible stuff – collaboration, unconventional thinking, meaningful conversations and problem solving over formal organisation structures. The world we have to navigate is increasingly unpredictable and non-linear; command-and-control team structures and top-down change will not work.

Every day we see concrete examples of what is working in business, government and leadership, and what is not. We can allow 2020 to be a year mired in tragedy - lost lives, lost livelihoods and failed businesses - or we can seize the once-in-a-decade opportunity to reset and create the government, the enterprise, the business and the leaders that we want and have lacked for some time. This is within reach.

What steps do you think will help create better business, government and leaders?

Please feel free to comment and share. Keep well!

Change leader with over 20 years’ experience in helping financial markets with their toughest business challenges in data, operating model transformation in sales, CRM, Ops, Data, Finance & MI functions, and delivery of complex compliance, front-to-back technology implementations. Significant line experience. Former partner in management consulting leading client solution development, delivery and P&L incl. Accenture. Specialities – Operating Models, Data Assets, Compliance, Technology Partnerships & Solutions in Capital Markets, Market Infrastructure, Buy-Side, Banking & Insurance.

"2020 has already shown us how interconnected our world has become - a true Butterfly Effect."

"It is not a question of a big government versus small. It is one that works transparently that matters."

"Businesses are standing out in two ways in these times. Those that care about their employees and partners and are doing their bit to help their communities and those that pretend to."

 

"We can allow 2020 to be one mired in tragedy, lost lives, lost livelihoods and failed businesses or we can seize the once in a decade opportunity to reset and create the government, the enterprise, the business and leaders that we want and have lacked for some time"

 


Reimagining trading platform support: Who's supporting you through turbulent times?

Trading platform support is, and has been, going through some heavy changes. It’s a changing world we live in and, even putting the current situation to one side (we know it’s difficult, but let’s try), it’s worth noting how cost reduction, market consolidation, and changes in approach have altered the landscape for how trading platforms are supported.

Good front line support for trading platform functionality is now more difficult to access and slower to respond, resulting in fewer issues actually being resolved.

Changes in focus from vendors have meant the trading industry has had to come up with, let’s face it, a compromise to ensure their businesses can continue to operate ‘as normal’. There are many new normals across all industries and sectors at present, but the trading world is highly arcane in nature and therefore any change is difficult for traders and salespeople alike. This has translated into moves towards other models, such as ‘Live Chat’ style support, which some find impersonal, with fewer experienced people showing up regularly at client sites.

At the sharp end this can mean less voice support and a reduction in face-to-face support, resulting in declining reassurance for users from regular contact with the ‘floorwalkers’. Some trading platform users have found that trading support has been neglected and their experience has suffered as a consequence.

For instance, a Waters Technology article, published last year, reported one Fidessa user citing difficulties with issue resolution:

“It seems like they’ve lost the ability to distinguish between a general issue and an urgent issue that needs to be resolved because it’s putting our clients at risk. We’ve had some issues that have been sitting with them for months.”

Obviously this is a sub-optimal ongoing predicament to be in. Whether due to cost savings, staff attrition rates or other reasons – the provision of first line support has deteriorated.

Even so, the cost of support to a trading firm remains constant in real terms. In terms of what firms get in return, however, support effectively becomes an added overhead with a diminishing return.

Added to these ongoing, and somewhat reluctantly accepted, concerns, new uncertainties are pushing themselves to the forefront of users’ minds. The big one currently, of course, is the set of changes companies and staff are having to make to their working arrangements in relation to the current climate and the need to maintain a distributed workforce.

Uncertainties around this mean that some in this space now acknowledge a real need for flexibility and better business continuity planning and scalability options (there have been significant spikes in volumes and volatility) in the approach to providing support for users. One just needs to look at the increasing number of LinkedIn or Facebook posts of people attempting to replicate their office desk at home to see the level of impact.

All of the above factors appear to be leading many trading platform users to a dawning realisation that two changes are necessary:

  • A higher degree of self-sufficiency for navigating a platform and making full use of its features.
  • Fast and reliable turnaround for resolving complex issues and being trained in new functionality without the necessity to call upon a fixed cost resource pool.

So what is the obstacle here?

Think about applications like Word or Excel. How many people who regularly use such applications are proficient in just enough to carry out their daily job? Many are probably utilising less than ten percent of what the application offers and are therefore unable to identify avoidable bottlenecks and efficiency gains, no matter how simple to implement – 90% of the potential benefits remain unused, an ‘unknown unknown’.

With such a wealth of functionality offered, knowing what *really* matters requires an understanding of both the application and your specific needs.

The same can be said for trading functionality; untapped opportunities for improved workflows lie undiscovered and unutilised before users’ eyes. Comprehensive support and training in existing and new functionality can pave the way for users to discover that potential – including, dare we say, the opportunity for alpha generation through the speed of use that comes with innate familiarity.

Communication and tailored collaboration with knowledgeable and experienced support teams is essential. Targeted, independent and focused front line support available from experienced outsourced providers presents a viable support proposition for platform users, wherever you sit in the organisation.

At Leading Point we are not only able to react to issues quickly but also know the information you are looking for (often before you need it) that will make a real difference to your daily trading platform experiences. With an innate ability to speak your ‘language’, we can provide seamless communication – all of it underpinned by an always-available service when you and your users need it most.

  • Imagine an innovative trading support experience comprising an equally innovative commercial model enhancing an entire trading platform experience.
  • Imagine the knowledge your users can benefit from through such a collaboration, and the degree to which that benefit is passed on to clients.
  • Imagine, through the unlocking of that untapped potential, your regular users becoming super users.

The time for change is NOW. If you’d like to get in touch, we would be delighted to tell you more about the potential benefits to you and your firm.

 

Untapped opportunities for improved workflows are lying undiscovered and unutilised before users’ eyes.

 

Good front line support for trading platform functionality is now more difficult to access and slower to respond resulting in fewer issues actually being resolved.

 

“It seems like they’ve lost the ability to distinguish between a general issue and an urgent issue that needs to be resolved because it’s putting our clients at risk.”


Legal Risk: Too big to manage?

Arguably, the model by which we manage legal risk in Financial Institutions is no longer fit for purpose.

The current model assumes that regulatory change can be accommodated “off the side of the desk” of the legal department, using outsourced project teams to do the bulk of the work. Not only may this model be inappropriate given the current deluge of regulation and business-generated data; it may actually introduce further risk.

As firms grow and change, they amass an enormous quantity and variety of contracts.  These contracts, coupled with regulations, form an array of legal obligations, which the firm attempts to track. The numbers surrounding regulation and legal data are astronomic:

  • Spending on regulatory compliance is now around 200 to 300 billion US dollars[i]
  • Hundreds of acts are promulgated in the EU alone every year[ii]
  • There are an estimated 50 million words in the UK statute book, with 100,000 words added or changed every month[iii]
  • Around 250 regulatory alerts are issued daily by over 900 regulators globally

And, when firms get into litigation, the figures boggle the mind:

“We’re now working on a case more than twice that size, with 65m [documents], and there’s one on the way with over 100m. It’s impossible to investigate cases like ours without technology.”[iv]

It is not all about the numbers, either. Each piece of new legislation, i.e. new law, is linked in some way with a number of existing laws, so it is not just a matter of treating each one in isolation.[v]

In addition, there are self-made “laws” in the shape of legal agreements (contracts), which set out the respective obligations agreed between the parties entering into the agreement. Both types of law need to be mapped and tracked throughout the contract lifecycle. Data on this flow management is difficult to come by, as many firms do not (or are not able to) collect management information about legal activity.
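As a simple illustration of what mapping and tracking might mean in data terms, the sketch below links obligations to the regulation or contract that creates them, so that a regulatory change can be traced to the obligations it touches. The structure, names and references are hypothetical, not a description of any firm’s system.

# Hypothetical sketch: a minimal obligations register linking laws and contracts.
from dataclasses import dataclass

@dataclass
class Obligation:
    obligation_id: str
    description: str
    source_type: str     # "regulation" or "contract"
    source_ref: str      # e.g. "EMIR Art. 9" or "ISDA CSA"
    owner: str           # business function responsible

register = [
    Obligation("OBL-001", "Exchange variation margin daily", "contract", "ISDA CSA", "Collateral Ops"),
    Obligation("OBL-002", "Report derivative trades T+1", "regulation", "EMIR Art. 9", "Regulatory Reporting"),
]

def impacted_by(change_ref: str, obligations: list[Obligation]) -> list[Obligation]:
    # Naive impact search: which obligations cite the changed rule or agreement?
    return [o for o in obligations if change_ref.lower() in o.source_ref.lower()]

print([o.obligation_id for o in impacted_by("EMIR", register)])  # ['OBL-002']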

 

MANAGING LEGAL RISK IS A HUGE UNDERTAKING

Lawyers, both in-house and in law firms, are working harder than ever before.[vi]

It is difficult to generalise about the way in-house legal departments[vii] within financial services firms are run, but two general themes are discernible. General Counsel (GCs) are expected to run their departments aligned to business strategies, with budgets provided by the Business[viii], and they are expected to manage regulatory and legal risk.

Managing legal risk for a large Financial Institution is a huge undertaking. Ensuring that a firm tracks emerging regulation, operationalises compliance with new law, educates the workforce (and its clients) on compliance, agrees with its clients in writing how their relationship needs to change in response to new law, and ensures that daily business activities are structured to be compliant and are recorded accurately in writing – all this is the management of regulatory and legal risk[ix].

There is no standard definition of legal risk, but it can be defined as ‘the risk of loss to an institution that is primarily caused by’:[x]

  1. a defective transaction;
  2. a claim (including a defence to a claim or counterclaim) being made or some other event occurring that results in a liability for the institution or other loss (for example as a result of the termination of the contract);
  3. failing to take appropriate measures to protect assets (for example intellectual property) owned by the institution;
  4. a change in law.

The repercussions for failure to manage legal risk are many and varied.  One of the tools used by the regulators is to “name and shame” non-compliant firms.  Not only does a firm receive a fine but it is also publicly named in the Final Report[xi] and in the press as having failed to comply with the relevant regulation.

This has a direct impact on a firm’s reputation (hence the term “reputational risk”) - current and prospective clients will ask awkward questions or even leave the firm; the firm may lose credibility in the marketplace; the balance sheet and profitability will be impacted.  It also has an adverse impact on a firm’s ability to attract and retain staff.  Employees may ask awkward questions (in some cases whistle blow), leave the firm, or occasionally be able to claim compensation.

All this is in addition to whatever fine is levied which will have balance sheet and prudential management implications.  The firm may need to hold additional capital against the risk of future failure.  And the regulators, globally, will now be acutely aware of a firm’s failings and will be more watchful.

All four of these pillars of legal risk could potentially be in play in each regulatory change project, i.e. when a new law is introduced or an existing law has changed, because with every regulatory change there is always a document change. This means that as regulation evolves, and contracts continue to be developed, there are a myriad of obligations to manage and analyse.

Each regulatory change project, which is conducted in addition to a lawyer’s usual (BAU) duties, produces a plethora of new documents. Lawyers need to analyse each one to figure out how the introduction of new obligations impacts the old ones.  In addition, every new piece of legislation means more reading, more rethinking of business strategy, resulting in more paperwork.

 

IN-HOUSE LEGAL IS UNDER PRESSURE

Despite the scale and complexity of this task, as well as the negative consequences of getting it wrong, the legal department is generally regarded as a cost centre and may be underfunded.

The current model has the legal department in a more or less successful partnership with the Business providing advice on existing and new activities and projects, advising on existing law and new regulations, documenting the intent between the business and their counterparties, i.e. creating/updating legal agreements, negotiating those contracts, advising on strategy and execution when things go wrong.

The legal department is “paid” for its time by way of a budget provided by the business which covers the salaries of lawyers and support staff.  For more difficult matters, the advice of external counsel is sought – again paid for by the Business.

With budget constraints and cost cutting in firms, legal departments don’t have the staff numbers they used to. Like all other functions, in-house legal departments are under pressure to cut costs and improve efficiency, transparency, user experience and access to data. Sometimes, more junior lawyers have been retained while senior lawyers have been let go, on the basis that external counsel can fill the gap.

If the Business increases its activity level or if there are a number of non-BAU projects then, clearly, these fewer resources are less likely to cope.  This results in slower service to the Business and, sometimes, increased costs as work needs to be outsourced.

The decrease in budget and lawyer numbers are likely to result in increased legal risk because:

  • Delays impact new business as Business may go ahead without legal documentation because they cannot afford to wait. When the deal is finally documented, the documentation may not accurately reflect what was agreed between the parties
  • Tired lawyers make poorer decisions
  • Institutional memory loss as staff leave and legal knowledge pertaining to the Business is lost
  • Increased opportunity costs, as prioritisation means that urgent issues may be addressed while important ones are left unaddressed[xii]
  • Legal tools which might alleviate some of the above are unavailable or poorly understood or unable to be used.

The result is an environment where legal functions spend the highest proportion of time (and budget) reacting to compliance breaches, misconduct, litigation and arbitration, rather than anticipating and preventing risk – leaving the legal department unable to adequately support the business’ needs.

So, either the legal department needs more lawyers to keep up with demand or it needs to figure out how to use the lawyers it has more effectively so that they are not spending their time on low level, repetitive tasks which might more efficiently be done by a legal tool.

The model needs to change.

 

[i] KPMG RegTech – There’s a revolution coming puts the figure at $270bn - https://home.kpmg/content/dam/kpmg/uk/pdf/2018/09/regtech-revolution-coming.pdf

[ii] https://eur-lex.europa.eu/statistics/legislative-acts-statistics.html

[iii] https://gtr.ukri.org/projects?ref=AH%2FL010232%2F1

[iv] Ben Denison, Serious Fraud Office chief technology officer, https://www.ft.com/content/7a990f1a-d067-11e8-9a3c-5d5eac8f1ab4

[v] See, for example, John Sheridan’s visualisation of the interconnectedness of one piece of UK legislation (the Companies, Audit, Investigations and Community Enterprise Act 2004)

[vi] https://www.legalcheek.com/2018/11/revealed-law-firms-average-arrive-and-leave-the-office-times-2018-19/

[viii] Legal is perceived as a cost centre not a revenue generator.  The Business is a catch all term which refers to the revenue generating portions of a financial institution

[ix] Legal risk is a subset of operational risk under Basel II

[x] Cited in Legal risks and risks for lawyers, Herbert Smith Freehills and London School of Economics Regulatory Reform Forum, June 2013

[xi] The paper produced by the FCA setting out the details of the firm’s failings and the fine

[xii] President Eisenhower quoting a college president to the Second Assembly of the World Council of Churches: “This President said, "I have two kinds of problems, the urgent and the important. The urgent are not important, and the important are never urgent."”  https://www.presidency.ucsb.edu/documents/address-the-second-assembly-the-world-council-churches-evanston-illinois

 

legal functions spend the highest proportion of time (and budget) reacting... rather than anticipating risk and prevention

 

“We’re now working on a case ... with 65m [documents], and there’s one on the way with over 100m. It’s impossible to investigate cases like ours without technology.”

 

Despite the scale and complexity of this task, as well as the negative consequences of getting it wrong, the legal department is generally regarded as a cost centre and may be underfunded.

 

either the legal department needs more lawyers to keep up with demand or it needs to figure out how to use the lawyers it has more effectively  

 

in-house legal departments are under pressure to cut costs and improve efficiency, transparency, user experience and access to data.

 


Legal Technology in FS – The need for a new legal services operating model

Law, data, machines – these are not words that historically have had much to do with one another.

However, as the number of laws increases, communications traffic increases, and, as the fabric of the law can be read by machines, the interaction between these words will become ever more important.

90% of data in the world has been created in the last two years – and it’s not slowing down. [1]  As regulation increases, the ability of financial institutions to manage the legal risk flowing from that regulation becomes ever more challenged.  The resources being devoted to this increase every year and lawyers are starting to turn to technology to assist.

Recent research[2] found 82% of General Counsel have introduced various forms of technology into their department but 60% of lawyers don’t understand how that technology could help them.  This, at a time where the pressure on resources (both human and financial) means that there is a real need for technological assistance.

The regulatory environment has imposed an unprecedented burden on firms.  Legal risk has become increasingly complex and difficult to manage but is under-examined and often poorly understood.  Due to the massive technological, political, regulatory and cultural shift over the past 30 years, the model by which we manage legal risk is outdated. This has led to increased fines, customer loss and higher operational costs at the least.

Poor management of data results in missed opportunities and increased costs as businesses rerun regulatory change and other projects.  Effective management and exploitation of legal data could provide new business opportunities in addition to saving costs for business as usual (BAU).  There needs to be a more formalised data flow between Business and Legal, leading to an effective and efficient end-to-end framework.

The in-house legal model needs to change.  Technology can help.

But while the market is saturated with ‘RegTech’ and other legal solutions, these are disparate point solutions that do not address the underlying issues.  Lawyers are reluctant to spend time training machines unless results are proven.  This reluctance has resulted in suboptimal take up of the various solutions.

Machines are best at repetitious, low level tasks.  Much of the law is to do with context, relationships between ideas and situations and nuance at which humans are better.  While the race is on for machines to solve the problem of unstructured data, a tool pointed currently at the unstructured data lake that is ‘legal data’ results in unhelpful returns.

A new legal services operating model is needed to diminish the disjointed nature of legal and business issues.  This new operating model needs to take into account not only new technology, but also the underlying data efficiencies to appropriately assemble and deploy solutions seamlessly across legal and business units.

Firms can gain most value by structuring data to best deploy legal technology.  If firms do not make decisions about these issues now they will find themselves trapped in a never-ending loop of manually adjusting data to achieve the required results.

The hardest part of adopting an “in the round” solution is implementing a framework within the firm that allows the various legal software tools to work optimally. A clear pathway needs to be created to reduce silos, create standards, appoint golden sources and create an enterprise architecture.

Law, data and machines can all work together successfully but it will take vision and hard work.

 

[This is part 1 of a 10 part series where we will consider the role of Legal Technology within Financial Services, how it can and should be applied, and what a ‘utopian’ target operating model for in-house legal departments looks like in FS]

 

[1] Presentation by Dr Joanna Batstone, VP IBM Watson & Cloud Platform, Legal and Technology Procurement 2018 – Thomson Reuters conference 8 November 2018

[2] Legal Technology: Looking past the hype, LexisNexis UK, Autumn 2018

 



Excel Ninjas & Digital Alchemists – Delivering success in Data Science in FS

In February, 150+ data practitioners from financial institutions, FinTech, academia, and professional services joined the Leading Point Data Kitchen community, keen to discuss the meaning and evolving role of Data Science within Financial Services. Many braved the cold, wet weather and made it across for a highly productive session interspersed with good pizza and drinks.

Our expert panellists discussed the “wild” data environment in Financial Services inhabited by “Excel Ninjas”, “Data Wranglers” and “Digital Alchemists”, but agreed that, despite the current state of the art being hindered by legacy infrastructure and data silos, there are a number of ways to find success.

Here is the Data Kitchen’s ‘Recipe’ for delivering success in Data Science in Financial Services:

1. Delivery is key – There is a balance to strike between experimentation and delivery. In commercial environments, especially within financial services, there is a cost of failure. ROI will always be in the minds of senior management, and practitioners need to understand that. This means that data science initiatives will always be under pressure to perform, and there will be limits on the freedom to simply experiment with the data.

2. Understand how to integrate with the business – Understanding what ‘good’ delivery looks like for data science initiatives requires an appreciation of how the business operates and what business problem needs to be solved. Alongside elements of business analysis, a core skill for practitioners is knowing how to ‘blend in’ with the rest of the business – this is essential to communicate how they can help the business and to set expectations. “Data translators” are emerging in businesses in response.

3. Soft skills are important – Without clear articulation of strategy and approach, in language they can understand, executives will often either expect ‘magic’ or be too nervous to fully invest. Without a conduit between management and practitioners, many initiatives will be under-resourced or, possibly worse, significantly over-resourced. Core competencies in stakeholder and expectation management and in project management are needed from data practitioners, and need to be made available to them.

4. Take a product mindset – Successful data science projects should be treated in a similar way to developing an app. Creating it and putting it on the ‘shelf’ is only the beginning of the journey. Next come marketing, promotion, maintenance, and updates. Many firms have rigorous approaches to data quality, governance and the like for client products, but won’t apply them internally. Many of the same metrics used for external products are also applicable internally, e.g. number of active users and adoption rates (a minimal sketch of such metrics appears after this list). Data science projects are only truly successful when everyone is using them the way they were intended.

5. Start small and with the end in mind – Some practitioners find success with ‘mini-contracts’ with the business to define scope and, later, prove that value was delivered on a project. This builds a delivery mindset and creates value exchange.

6. Conduct feasibility assessments (and learn from them) – Feasibility criteria need to be defined that take into account the realities of the business environment, such as:

  • Does the data needed exist?
  • Is the data available and accessible?
  • Is management actively engaged?
  • Are the technology teams available in the correct time windows?

If you run through these steps, even if you don’t follow through with a project, you have learned something – that learning needs to be recorded and communicated for future use. Lessons from close to 100 use cases of data science in financial services and enterprises suggest that implementing toll-gates with entry and exit criteria is becoming a more mature practice in organisations. A minimal sketch of such an entry toll-gate appears after this list.

7. Avoid perfection - Sometimes ‘good’ is ‘good enough’. You can ‘haircut’ a lot of data and still achieve good outcomes. A lot of business data, while definitely not perfect, is being actively used by the business – glaring errors will have been fixed already or been through 2-3 existing filters. You don’t always need to recheck the data.

8. Data doesn’t always need to be ‘wrangled’ – Data scientists spend up to 80% of their time on data cleaning in preparation for analysis, but there are many data cleansing tools now on the market that really work and can save a lot of time (e.g. Trifacta). Enterprises will often have legacy environments and be challenged to connect the dots. They need to look at the data basics: an end-to-end data management process and the right tools for ingestion, normalisation, analysis, distribution, and for embedding outputs as part of improving a business process or delivering insights. A minimal cleaning sketch appears after this list.
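
On the product mindset in point 4, the same adoption metrics used for client-facing products can be tracked for internal data science tools. Here is a minimal sketch, assuming a simple in-memory usage log; the log format, user names and eligible-user count are hypothetical:

    from collections import Counter
    from datetime import date

    # Hypothetical usage log: (user, date) pairs emitted each time the internal tool is used.
    usage_log = [
        ("analyst_a", date(2019, 3, 1)),
        ("analyst_a", date(2019, 3, 4)),
        ("analyst_b", date(2019, 3, 4)),
        ("trader_c",  date(2019, 3, 5)),
    ]
    eligible_users = 10  # number of people the tool was built for (assumed)

    active_users = len({user for user, _ in usage_log})          # distinct users seen
    adoption_rate = active_users / eligible_users                # share of the target audience
    uses_per_user = Counter(user for user, _ in usage_log)       # frequency per user

    print(f"Active users: {active_users}")
    print(f"Adoption rate: {adoption_rate:.0%}")
    print(f"Most frequent user: {uses_per_user.most_common(1)}")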
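
Point 6’s feasibility criteria can be turned into an explicit entry toll-gate. This is a sketch only; the criteria names mirror the list above, and the simple pass/fail rule is an assumption rather than a prescribed standard:

    # Entry toll-gate for a proposed data science initiative (illustrative).
    feasibility = {
        "data_exists": True,            # Does the data needed exist?
        "data_accessible": False,       # Is the data available and accessible?
        "management_engaged": True,     # Is management actively engaged?
        "tech_teams_available": True,   # Are technology teams available in the right window?
    }

    failed = [criterion for criterion, ok in feasibility.items() if not ok]
    if failed:
        # Record the outcome so the learning is kept even if the project stops here.
        print(f"Do not proceed yet; unresolved criteria: {failed}")
    else:
        print("Entry gate passed; proceed to scoping.")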
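
And for point 8, much routine cleaning can be expressed as a short, repeatable pipeline rather than manual wrangling. A minimal pandas sketch, assuming a hypothetical trades extract with typical blemishes (duplicate rows, stray whitespace, thousands separators, a missing counterparty):

    import pandas as pd

    # Hypothetical raw extract with typical blemishes.
    raw = pd.DataFrame({
        "trade_id": ["T1", "T1", "T2", "T3"],
        "counterparty": [" Acme Ltd", "Acme Ltd ", "Beta Plc", None],
        "trade_date": ["2019-03-01", "2019-03-01", "2019-03-01", "2019-03-02"],
        "notional": ["1,000,000", "1,000,000", "250000", "500000"],
    })

    clean = (
        raw.drop_duplicates(subset="trade_id")                   # remove repeated trades
           .assign(
               counterparty=lambda d: d["counterparty"].str.strip(),               # trim whitespace
               trade_date=lambda d: pd.to_datetime(d["trade_date"]),               # parse dates
               notional=lambda d: d["notional"].str.replace(",", "").astype(float),  # numeric notional
           )
           .dropna(subset=["counterparty"])                      # drop rows missing a counterparty
    )
    print(clean)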

Our chefs believed Data Science will evolve positively as a discipline over the next three years, with more clarity on data roles, a better qualification process for data science projects, application of knowledge graphs, better education and cross-pollination of business and data science practitioners, and more measurable outcomes. The lessons from failures are key to making the leap to data-savvy businesses.

Just a quick note to say thank you for your interest in The Data Kitchen!

We had an excellent turnout of practitioners from organisations including: Deutsche Bank, JPMorgan, HSBC, Schroders, Allianz Global Investors, American Express, Capgemini, University of East London, Inmarsat, One corp, Transbank, BMO, IHS Markit, GFT, Octopus Investments, Queen Mary University, and more.

And another Thank You to our wonderful panellists!

  • Peter Krishnan, JP Morgan
  • Ben Ludford, Efficio
  • Louise Maynard-Atem, Experian
  • Jacobus Geluk, Agnos.ai

…And Maître D’ – Rajen Madan, Leading Point FM
We would like to thank our chefs again, and all participants, for sharing plenty of ideas on future topics, games and live solutions.