Transparency key to successful AI adoption for financial services

Adopting AI in financial services decision-making systems without sufficient transparency could undermine public trust, regulatory compliance and risk management, warns a new report from CFA Institute.

The report, Explainable AI in finance: addressing the needs of diverse stakeholders, examines the growing complexity of AI systems such as those used in credit scoring, investment management, insurance underwriting, and fraud detection. It makes the case for “explainable AI”, a class of techniques designed to make AI decision-making transparent, auditable, and aligned with human understanding.

Dr. Cheryll-Ann Wilson, the report’s author and a senior affiliate researcher at CFA Institute, said: “AI systems are no longer working quietly in the background – they are influencing high-stakes financial decisions that affect consumers, markets, and institutions.
“If we can’t explain how these systems work – or worse, if we misunderstand them – we risk creating a crisis of confidence in the very technologies meant to improve financial decision-making.”

The report emphasises that different stakeholders – regulators, risk managers, investment professionals, developers, and clients – require different kinds of explanations. By mapping specific explainability needs to distinct user roles, the study introduces a framework for embedding transparency into AI deployment across the financial value chain.

Among the report’s key recommendations are the development of global standards and benchmarks for measuring the quality of AI explanations, and the promotion of real-time explainability in AI systems used for fast-paced financial decisions.

As regulatory momentum builds, with frameworks like the EU AI Act and the UK’s own regulatory initiatives in development, CFA Institute calls on financial institutions to move proactively. Rhodri Preece, senior head of research at CFA Institute, added: “This is not about slowing down innovation; it’s about implementing it responsibly. We must ensure that AI systems not only perform well but also earn the trust of those who rely on them.”


