«AI Turns Even the Most Foolish Into a Cunning Fraudster»

Artificial intelligence is making life easier for fraudsters: cases are increasing, methods are becoming more sophisticated and scalable. Tobias Thonak, Partner Scaling Data & AI at BearingPoint, explains why the financial industry is structurally at a disadvantage — and how it can still catch up.

Insurance fraud is rising noticeably with the advent of AI technologies. Estimates suggest that losses could already amount to as much as 10 percent of total claims.

For Tobias Thonak, however, this is less a new phenomenon than a structural intensification of a well-known problem: «Insurance fraud has always been relevant — but it is changing dramatically,» he says.

Banks, too, are increasingly becoming targets.

Silo Thinking Makes It Easy for Fraudsters

The key weakness lies not primarily in technology, but within organisations themselves. «The biggest vulnerability is often the lack of integration of data, processes and decision-making logic. Many financial service providers have grown historically and operate in silos.»


Tobias Thonak, Partner Scaling Data & AI at BearingPoint. (Image: provided)

This fragmentation makes effective fraud detection considerably more difficult. Individual cases often appear plausible when viewed in isolation — only by linking different data sources do patterns emerge: «When you have the full picture — such as device information, previous claims or payment flows — you suddenly see connections that were previously invisible.»

Emergency Meeting Convened at Short Notice

Even with technological tools, caution is required. Just a few days ago, a new AI model caused concern at the highest levels: regulators and central banks raised the alarm, while leading Wall Street institutions are already testing the technology.

The model in question is «Mythos» by US developer Anthropic, which, according to authorities, is capable of independently identifying and exploiting vulnerabilities in operating systems and web browsers, as Bloomberg reported.

These capabilities have triggered immediate reactions at the highest political level. US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened leading bank executives for an emergency meeting at short notice. At the centre of the discussions was concern that cyber threats could fundamentally change with the use of advanced AI.

A New Phase in the Cyber Arms Race

Regulators see this as the potential beginning of a new generation of cyberattacks. While AI has so far been used primarily for defence, it is now opening up new opportunities for attackers.

In particular, the ability to autonomously identify and exploit previously unknown vulnerabilities points to an accelerated and far less predictable dynamic in the digital arms race. Experts speak of a structural paradigm shift in cybersecurity.

Banks Test the Technology Despite Significant Risks

At the same time, major financial institutions are pushing ahead with the application of the technology. JPMorgan Chase is considered one of the first institutions involved in testing.

Goldman Sachs, Citigroup, Bank of America and Morgan Stanley are also said to have gained access or are about to do so. Official statements remain largely absent so far.

AI is Democratising Fraud

While financial institutions are upgrading their capabilities, the dynamics on the perpetrator side are shifting even faster. «AI makes the topic accessible,» says Thonak. «People who previously lacked the skills can now carry out high-quality fraud attempts.»

His conclusion is pointed: «AI turns even the most foolish into a cunning fraudster.»

This is no longer limited to organised crime. «In addition to professional gangs, there are increasingly copycats who simply try to see whether it works.»

Particularly critical is the declining reliability of traditional verification mechanisms. «You used to be able to rely on documents — today, you cannot.»

At the same time, processes on the perpetrator side are becoming more professional: «Fraudsters are automating their workflows just like financial service providers and are deliberately exploiting weaknesses in digital claims processes.»

Regulation and Culture Slow Down the Industry

Despite growing pressure, many insurers are reacting only slowly. The reasons are manifold: «It is a combination of legacy technology, regulatory requirements and organisational hurdles.»

In a regulated environment, rapid adjustments are particularly difficult: «You cannot operate in a ‘move fast and break things’ manner like Facebook — changes must be transparent and controlled.»

There are also cultural barriers: «Many employees see automation and AI as a threat to their jobs — this further slows down transformation.»

The Catch-Up Race Has Begun

Nevertheless, Thonak does not see the industry in a structural defensive position: «It is a cat-and-mouse game — but financial service providers are catching up.»

Investment momentum is clearly visible: «Spending on data and AI solutions in the insurance sector is currently growing by more than 30 percent per year.»

What matters is less the technology itself than the operating model: «Successful companies establish cross-functional teams — closely integrating claims, IT, data and compliance.»

This organisational approach enables shorter learning cycles and faster iteration: «You test, receive feedback and continuously improve — instead of running isolated projects for months.»

Switzerland Solid, But Not Immune

In an international comparison, Thonak places Switzerland in the mid-range: «Switzerland is not a hotspot for AI-driven insurance fraud. The market is more trust-based and exhibits stronger governance than other regions.»

In Asian markets, the dynamics are significantly more pronounced.

He has no illusions, however: «The creativity of fraudsters will not stop.»

Nor, for that matter, will that of financial service providers.