Issue #5 – AI at the Edge: Governing the Future of Innovation

Introduction

Welcome to this week’s edition of The Trusted AI Bulletin, where we unpack the latest developments in AI governance, regulation, and adoption.

This week, we’re diving into OpenAI’s push for federal AI regulations, the launch of new compliance standards for bank-fintech partnerships, and the stark warnings from Turing Award winners about the unsafe deployment of AI models. As governments and businesses grapple with the dual demands of innovation and accountability, the conversation around responsible AI is reaching a critical inflection point.

The rapid evolution of AI is forcing a reckoning: how do we balance the need for speed and competitiveness with the imperative to build safeguards that protect society? From the financial sector’s embrace of AI-driven tools to IKEA’s leadership in ethical AI governance, the stories this week highlight both the opportunities and the risks of this transformative technology.

 


Key Highlights of the Week

1. OpenAI Appeals to White House for Unified AI Regulations Amidst State-Level Disparities

OpenAI has formally requested the White House to intervene against a patchwork of state-level AI regulations, advocating for a cohesive federal framework to govern artificial intelligence. This move underscores the company’s concern that disparate state laws could stifle innovation and create compliance challenges.

Notably, OpenAI’s Chief Global Affairs Officer, Chris Lehane, has highlighted the urgency of accelerating AI policy under the current administration, shifting from merely advocating regulation to actively promoting policies that bolster AI growth and maintain the U.S.’s competitive edge over nations like China.

In a 15-page set of policy suggestions released on Thursday, OpenAI argued that the hundreds of AI-related bills currently pending across the U.S. risk undercutting America’s technological progress at a time when it faces renewed competition from China. The company proposed that the administration consider providing relief for AI companies from state rules in exchange for voluntary access to their models.

Source: Bloomberg link

 

2. CFES Unveils New Standards to Strengthen Compliance in Bank-Fintech Partnerships

The Coalition for Financial Ecosystem Standards (CFES) announced in a press release this week the launch of a new industry framework aimed at strengthening compliance and risk management in bank-fintech partnerships. The STARC framework, comprising 54 standards, sets a benchmark for key areas such as anti-money laundering (AML), third-party risk, and operational compliance, providing financial institutions with a structured rating system to assess their maturity.

To support adoption, CFES has also established an Advisory Board featuring key industry players like the Independent Community Bankers of America (ICBA) and the American Fintech Council (AFC). With regulators increasing scrutiny on fintech partnerships, these standards could play an important role in helping firms navigate compliance without stifling innovation.

As artificial intelligence continues to reshape financial services, frameworks like STARC offer a structured approach to ensuring transparency and accountability.

Source: Press release PDF link, CFES Standards link

 

3. Turing Award Winners Warn Over Unsafe Deployment of AI Models

AI pioneers Andrew Barto and Richard Sutton have strongly criticised the industry’s reckless approach to deploying AI models, warning that companies are prioritising speed and profit over responsible engineering. They argue that releasing untested AI systems to millions without safeguards is a dangerous practice, likening it to building a bridge and testing it by sending people across. Their work, which underpins major advancements in machine learning, has fuelled the rise of AI powerhouses such as OpenAI and Google DeepMind.

The pair, who have been awarded the 2024 Turing Award for their foundational contributions to artificial intelligence, have expressed serious concerns that AI development is being driven by business incentives rather than a focus on safety. Barto criticised the industry’s approach, stating, “Releasing software to millions of people without safeguards is not good engineering practice,” while Sutton dismissed the idea of artificial general intelligence (AGI) as mere “hype.” As AI investment reaches unprecedented levels, their warnings highlight the growing tensions between rapid technological advancement and the urgent need for stronger governance and regulatory oversight.

Source: FT link

 


Featured Articles

1. How Artificial Intelligence is Shaping the Future of Banking and Finance

The financial services sector is experiencing a significant transformation through the integration of artificial intelligence (AI), with investments projected to escalate from $35 billion in 2023 to $97 billion by 2027, reflecting a compound annual growth rate of 29%.
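For readers who want to check the arithmetic, the quoted growth rate follows directly from the article’s two investment figures via the standard compound annual growth rate formula (the $35bn and $97bn values are from the article; the snippet below is simply a sanity check, not part of the cited research):

```python
# Sanity check: CAGR implied by the article's investment projections.
start, end = 35e9, 97e9      # 2023 and 2027 AI investment estimates (USD)
years = 2027 - 2023          # four-year horizon

# CAGR = (end / start)^(1 / years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints "Implied CAGR: 29.0%"
```

The result matches the 29% figure quoted above.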

Leading institutions such as Morgan Stanley and JPMorgan Chase have introduced AI-driven tools to enhance operational efficiency and client services. In the immediate term, AI co-pilots are streamlining workflows, while always-on AI web crawlers and automation of unstructured data tasks are providing real-time insights and reducing manual processes.

Looking ahead, AI’s potential to revolutionise risk management and customer experience through the use of synthetic data is becoming increasingly evident. Fintech companies are at the forefront of this evolution, democratising AI capabilities and enabling smaller financial institutions to compete effectively. This rapid AI adoption underscores the urgency for robust AI governance and regulatory frameworks to ensure ethical implementation and maintain public trust.

Source: Forbes link

 

2. Mandatory AI Governance: Gartner Predicts Worldwide Regulatory Adoption by 2027

According to Gartner’s research, by 2027, AI governance is expected to become a mandatory component of national regulations worldwide. This projection underscores the escalating concerns surrounding data security and the imperative for robust governance frameworks in the rapidly evolving AI landscape.

Notably, Gartner anticipates that over 40% of AI-related data breaches could stem from cross-border misuse of generative AI, highlighting the critical need for cohesive ethical governance. The absence of such frameworks may result in organisations failing to realise the anticipated value of their AI initiatives.

This development signals a pivotal shift towards more stringent AI oversight, emphasising the necessity for organisations to proactively adopt comprehensive governance strategies to mitigate risks and ensure compliance with forthcoming regulatory standards.

Source: CDO Magazine link

 

3. Balancing Control and Collaboration: Five Essential Layers of AI Sovereignty

The concept of AI sovereignty extends far beyond data localisation or regulatory compliance, requiring a multi-layered approach to ensure true independence.

Five key layers define AI sovereignty: legal and regulatory control, resource and technical independence, operational autonomy, cognitive sovereignty over AI models and algorithms, and cultural influence in shaping public perception and ethical norms. Each layer plays a crucial role in balancing national or organisational control with global collaboration, ensuring AI aligns with strategic interests while maintaining adaptability.

Without a structured approach to sovereignty, reliance on external AI infrastructure and governance could pose significant risks to security, competitiveness, and ethical oversight. As AI regulations evolve, this framework highlights the need for a proactive, layered strategy to navigate the complexities of AI governance effectively.

Source: Anthony Butler link

 


Industry Insights

Case Study: IKEA’s responsible AI governance

As AI becomes increasingly embedded in business operations, IKEA has taken a proactive and structured approach to AI governance, ensuring ethical and responsible deployment. Recognising the potential risks of AI alongside its benefits, IKEA introduced its first digital ethics policy in 2019, laying the foundation for responsible AI development.

By 2021, the company had established a dedicated AI governance framework, with a multidisciplinary team overseeing compliance, risk management, and ethical considerations. This governance model ensures that AI is used transparently, fairly, and in alignment with business goals.

Key areas of focus include enhancing employee productivity, optimising supply chains, and improving customer experiences—all while maintaining strict ethical standards. Additionally, IKEA’s AI literacy programme is designed to empower employees with the skills needed to navigate AI responsibly, reinforcing the company’s commitment to human-centric innovation.

Key Takeaways:
1. AI Governance as a Business Imperative: Rather than treating AI governance as a regulatory checkbox, IKEA integrates responsible AI principles into its core business strategy. This ensures that AI-driven innovations align with ethical considerations and organisational priorities.
2. Proactive Regulatory Compliance: IKEA’s commitment to responsible AI extends to early compliance with the EU AI Act. As a signatory of the AI Pact, the company is ahead of regulatory requirements, demonstrating leadership in ethical AI governance.
3. Empowering Employees Through AI Education: Understanding that responsible AI usage starts with people, IKEA has launched an AI literacy programme to train 30,000 employees in 2024. This initiative fosters a culture of accountability and awareness, reducing risks associated with AI adoption.

By prioritising governance, education, and ethical AI integration, IKEA is setting a benchmark for responsible AI adoption in the retail sector, ensuring that technological advancements serve both business needs and societal good.

 

Sources: CIO Dive link, Global Loyalty Organisation link

 


Upcoming Events

1. In-Person Event: AI for CFOs – Minimise Risk to Maximise Returns – 25 March 2025

On 25 March 2025, The Economist is hosting the AI for CFOs event in London, focusing on how finance leaders can leverage artificial intelligence to enhance corporate performance. Attendees will explore AI’s role in delivering real-time insights, improving forecasting accuracy, automating compliance, and strengthening data security. This event offers a valuable opportunity to connect with industry experts and discover actionable strategies for integrating AI into financial operations.

Register now: The Economist link

 

2. Webinar: Strategies and Solutions for Unlocking Value from Unstructured Data – 27 March 2025

A-Team Insight’s upcoming webinar, Strategies and Solutions for Unlocking Value from Unstructured Data, will explore how firms can harness the vast potential of unstructured data—emails, customer feedback, and other text-based information—to drive smarter decision-making and gain a competitive edge. Industry experts will share practical approaches to extracting insights, improving operational efficiency, and uncovering new business opportunities. If you’re looking to turn your organisation’s unstructured data into a valuable asset, this session is not to be missed.

Register now: A-Team Insight link

 

3. Webinar: Five Essential Tips for Successful AI Adoption – 15 April 2025

This webinar focuses on the critical role of data quality in AI success. As businesses rush to integrate AI, experts will discuss why clean, structured, and well-governed data must be a top priority to avoid AI becoming a liability. The session will cover key topics such as data governance, security, privacy, ethical considerations, and how to maximise AI ROI. Attendees will gain executive-level strategies to ensure AI delivers meaningful business impact.

Register now: CIO Dive link

 

4. In-Person Event: AI Breakfast & Roundtable – From AI Proof of Concept to Scalable Enterprise Adoption – 23 April 2025

Leading Point is hosting an exclusive AI Breakfast & Roundtable, bringing together AI leaders from top financial institutions, including banks, insurance firms, and buy-side institutions. This intimate, high-level discussion will explore the challenges and opportunities in scaling AI beyond proof of concept to enterprise-wide adoption.

Key discussion points include overcoming implementation barriers, aligning AI initiatives with business objectives, and best practices for AI success in banking, insurance, and investment management. This event offers a unique opportunity to connect with industry peers and gain strategic insights on embedding AI as a core driver of business value.

Want to be a part of the conversation?

If you are an executive with AI responsibilities in business, risk & compliance, or data, contact Rajen Madan or Thushan Kumaraswamy to get a seat at the table.

 


Conclusion

The stories this week underscore a critical truth: AI governance isn’t just about compliance—it’s about building trust. From OpenAI’s push for federal oversight to IKEA’s ethical framework, the focus is shifting from rapid adoption to responsible deployment. The warnings from Turing Award winners Barto and Sutton are a stark reminder: innovation without safeguards is a risk we can’t afford.

As AI’s influence grows, the challenge is clear—businesses and policymakers must act now to bridge governance gaps, prioritise transparency, and ensure AI serves society as much as it drives progress. The future of AI depends on the choices we make today.

We’ll be back in two weeks with more insights. Until then, let’s keep pushing for a future where AI works for everyone.

 

 

Rajen Madan
Thushan Kumaraswamy