CFA Institute Warns: Rising Risks With Opaque AI
Artificial intelligence in finance risks losing public trust unless explainability becomes a priority, concludes a new CFA Institute report.
As artificial intelligence (AI) rapidly reshapes the financial sector, the CFA Institute is sounding the alarm.
Its latest report warns that opaque decision-making systems could undermine trust, compliance, and risk management. The study, "Explainable AI in Finance: Addressing the Needs of Diverse Stakeholders", urges firms to embed transparency into every stage of AI adoption.
"AI Is No Longer Quiet in the Background"
"AI systems are no longer working quietly in the background—they are influencing high-stakes financial decisions that affect consumers, markets, and institutions," said Cheryll-Ann Wilson, CFA, the report's author.
She cautioned that failing to understand or explain these systems risks creating a crisis of confidence in technologies designed to improve financial decision-making.
Tailored Transparency for Stakeholders
The report introduces a framework to meet the diverse needs of regulators, risk managers, developers, and clients.
By mapping different types of explanations to user roles, the CFA Institute makes the case that explainability is not one-size-fits-all but essential for all parts of the financial value chain.
From Ante-Hoc to Post-Hoc Tools
Explainable AI tools come in two forms: "ante-hoc" methods, where systems are designed with transparent rules from the start, and "post-hoc" methods, applied after decisions are made to clarify which data points mattered most.
Both approaches are already shaping areas like credit scoring, risk assessment, and fraud detection.
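To make the distinction concrete, here is a minimal sketch, not drawn from the CFA Institute report, contrasting the two approaches on a synthetic credit-scoring-style dataset using scikit-learn. The feature names and model choices are illustrative assumptions: a logistic regression stands in for an ante-hoc (interpretable-by-design) model, while permutation importance serves as one possible post-hoc explanation of a black-box model.

```python
# Illustrative sketch only: ante-hoc vs post-hoc explainability.
# Feature labels and models are hypothetical, not from the report.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for applicant features (hypothetical labels).
feature_names = ["income", "debt_ratio", "credit_history", "utilization", "age"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Ante-hoc: a linear model whose coefficients can be read directly.
ante_hoc = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, ante_hoc.coef_[0]):
    print(f"[ante-hoc]  {name}: coefficient {coef:+.3f}")

# Post-hoc: a black-box model explained after the fact by measuring how
# much shuffling each feature degrades held-out accuracy.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"[post-hoc]  {name}: importance {imp:.3f}")
```

In the first case transparency is built in from the start; in the second it is reconstructed after the decision, which is the trade-off the report's framework asks firms to weigh for each stakeholder.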
Call for Global Standards
Key recommendations include creating global benchmarks for explainability, tailoring interfaces for both technical and non-technical users, ensuring real-time transparency in fast-moving financial contexts, and investing in human–AI collaboration.
The report also explores advanced approaches such as evaluative AI and neurosymbolic AI, which combines symbolic reasoning with deep learning.
"Not About Slowing Innovation"
"This is not about slowing down innovation; it's about implementing it responsibly," said Rhodri Preece, Senior Head of Research at CFA Institute.
"We must ensure that AI systems not only perform well but also earn the trust of those who rely on them," he added.
Implications for Asia-Pacific
Mary Leung, CFA, Senior Advisor for Capital Markets Policy in APAC, stressed the regional importance: "This report provides a vital roadmap for Asia-Pacific jurisdictions as they accelerate the development of responsible AI governance frameworks. Explainability standards are critical to ensuring fairness, fostering regional trust, and safeguarding consumer interests."