In an industry where transparency is vital, AI raises a critical legal question: can financial institutions trust the new technology?

While the use of artificial intelligence in the financial services industry is vast and varied, its opacity, often referred to as the "Black Box" dilemma, presents significant challenges to an industry where transparency is paramount.

In a commentary published by Reuters, British-American lawyer Joshua Dupuy highlighted the dilemma financial institutions and regulators now confront as they adopt AI for purposes ranging from fighting fraud and streamlining transactions to personalizing financial experiences and predicting trends.

"AI's potential to improve financial markets is vast. [But as] AI wields real power, [it raises] a critical legal question: can we truly trust what we cannot see?" Dupuy said.

Complexities, Legal and Regulatory Challenges

Integrating AI into the financial sector brings significant complexities as well as legal and regulatory challenges. These challenges are further exacerbated by data governance concerns, particularly regarding privacy, cross-border data flows, and the need to comply with regulations such as the General Data Protection Regulation (GDPR).

"These rules influence AI's development and use in finance, dictating how data is handled and ensuring systems respect privacy and data sovereignty," Dupuy said, noting that balancing AI's effectiveness with transparency remains a challenge.

Lack of Transparency

In jurisdictions where data is scarce or guarded by restrictive licensing, AI development risks becoming a "Black Box": opaque and inaccessible to external scrutiny, Dupuy warned.

"This lack of transparency can exacerbate secrecy in AI algorithms, making it difficult to understand, challenge or improve upon their decisions," he added.

AI Algorithms and Data Imbalances

The opacity of AI algorithms and imbalances in their data add further challenges, which Dupuy said could obscure decision-making and perpetuate biases. For example, a single opaque decision by an AI-driven trading algorithm could trigger market fluctuations, while the lack of transparency in credit scoring systems makes it difficult for regulators to carry out oversight. There is also the risk of entrenching biases drawn from historical data.

"These challenges are compounded by the interplay of algorithmic and human biases, making it difficult to assign responsibility for biased outcomes. AI's potential to inadvertently reinforce discriminatory practices, such as redlining or targeting vulnerable groups with unsuitable products, raises significant concerns," Dupuy explained.

Explainable Decisions

Such biases, Dupuy warned, could affect a wide range of decisions, hence the need to weigh AI's benefits against the risks of bias and discrimination.

He also pointed out that industry leaders, such as J.P. Morgan CEO Jamie Dimon, have emphasized the need for AI systems to be "explainable": able not only to make decisions but also to clearly justify them.