Imagine if you could stop rogue trading when it was just the spark of an idea: a stray thought prompted by a trader’s expensive divorce, a big loss at the poker table, or growing disillusionment with the daily grind.

Imagine if, instead of being bogged down in 10,000 emails a day containing words like “fraud”, compliance teams could detect changes in tone and other subtle tics that show a trader’s behaviour is changing.

In a world where a Japanese company has launched artificial intelligence cameras that are designed to predict shoplifting before it happens, it is not so hard to believe that the world’s biggest banks are closing in on advances that will allow them to do the above and more.

Banks have already made major leaps in trader surveillance in the past few years, embracing communication monitoring tools that look for obvious flash phrases and keywords as well as less obvious ones like “let’s take this conversation offline”.
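At their simplest, such tools amount to pattern matching over message traffic. A minimal sketch in Python, using an invented watch list (the phrases below are illustrative, not any vendor’s actual lexicon):

```python
import re

# Illustrative watch list -- production systems use far larger, tuned lexicons.
FLASH_PHRASES = [
    r"\bfraud\b",
    r"\boff the books\b",
    r"let'?s take this (conversation|chat) offline",
]

PATTERNS = [re.compile(p, re.IGNORECASE) for p in FLASH_PHRASES]

def flag_message(text: str) -> list[str]:
    """Return the watch-list patterns that a message matches."""
    return [p.pattern for p in PATTERNS if p.search(text)]

# Matches the "less obvious" phrase cited above.
print(flag_message("Let's take this conversation offline."))
```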

They have set stricter limits on traders’ activities, making it harder for anyone to make the kind of enormous bets that led to one-off losses of as much as $6bn (in JPMorgan’s “London Whale” scandal). Losses and fines in the past decade across the top 13 global banks add up to more than $10bn, according to analytics provider Corlytics.

Now, banks’ efforts are entering a new era, powered by AI and machine learning. “We have opened up the doors of what’s possible,” says Marc Andrews, vice-president of IBM’s Watson Financial Services division, as he outlines its tools that do everything from monitoring conversations for tone to using changes to credit scores in order to predict which traders are likely to go rogue.

About a dozen banks are already deploying Watson, IBM’s AI software, which they use to monitor everyday communications. “We’re looking at their emails and identifying their communication patterns, the tone of their emails in addition to what they’re saying,” says Mr Andrews.

“One of the benefits of . . . applying machine learning is that you’re not implementing specific rules that someone can just work their way around,” he adds. “As people do start changing their behaviour . . . the models will learn over time and will be able to adapt much more quickly.”
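IBM does not publish Watson’s internals, but the underlying idea of per-trader tone baselining can be pictured in a few lines. A toy sketch, substituting a crude word-count score for a real sentiment model (the lexicons and threshold are invented):

```python
from statistics import mean, stdev

# Toy lexicons standing in for a trained sentiment model.
POSITIVE = {"great", "thanks", "glad", "happy", "good"}
NEGATIVE = {"angry", "unfair", "hate", "worthless", "quit"}

def tone_score(text: str) -> float:
    """Net positive-word rate per message (illustrative only)."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

def tone_has_drifted(history: list[str], recent: list[str], k: float = 2.0) -> bool:
    """Flag a trader whose recent tone sits more than k standard
    deviations from their own historical baseline."""
    if len(history) < 2 or not recent:
        return False
    base = [tone_score(m) for m in history]
    mu, sigma = mean(base), stdev(base)
    recent_mu = mean(tone_score(m) for m in recent)
    return sigma > 0 and abs(recent_mu - mu) > k * sigma
```

Baselining against a trader’s own history, rather than a fixed rule, is exactly the adaptability Mr Andrews describes: the threshold moves with the individual instead of being a rule to work around.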

One important outcome is cutting the number of “false positives” banks have to deal with: traditional systems flag hundreds of thousands of potentially suspect messages a month, leaving compliance teams to find a needle in a haystack.

The IBM software differentiates between higher- and lower-risk alerts, allowing banks to cut through the mass they receive. The idea, Mr Andrews says, is to reduce the effort and cost spent on low-risk incidents.
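The triage step can be imagined as scoring each alert and splitting the queue. A hypothetical sketch (the features and weights are invented; a production system would learn them from labelled cases):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    phrase_hit: bool              # watch-list phrase matched
    external_counterparty: bool   # message left the firm
    after_hours: bool             # sent outside trading hours
    trader_risk: float            # 0-1 rating from upstream models

def risk_score(a: Alert) -> float:
    """Hypothetical weighted score; real weights would be learned."""
    return (0.4 * a.phrase_hit
            + 0.2 * a.external_counterparty
            + 0.1 * a.after_hours
            + 0.3 * a.trader_risk)

def triage(alerts: list[Alert], threshold: float = 0.5):
    """Split the queue so analysts work high-risk alerts first."""
    high = [a for a in alerts if risk_score(a) >= threshold]
    low = [a for a in alerts if risk_score(a) < threshold]
    return high, low
```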

Erkin Adylov, whose company Behavox also provides surveillance tools to banks, describes one case where a bank was getting 450,000 alerts per month for “silly things” like a trader asking his wife for a favour.

“When we came in we reduced that by 95 per cent,” he adds.

Using AI to spot potential rogue traders

Mr Andrews says banks are already pulling other metrics into their surveillance systems, including human resources reviews and credit scoring reports, to identify traders with either the motive or the predisposition for rogue activity.

The technology company has separately developed tools that would allow other information, such as public filings, to be integrated into banks’ trader surveillance systems. A court conviction or a sizeable divorce settlement could be used as a red flag. “We’ve had a lot of enquiries about that, but none have started putting it into production,” says Mr Andrews of the newest technology.

“Lots of these firms are very much in that experimental stage . . . trying to figure out which of these techniques are worth investing in.”
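How such signals might feed a red-flag layer is easy to sketch, even though the real models are proprietary. A hypothetical rule set (field names and thresholds invented for illustration; none of this reflects IBM’s actual system):

```python
def external_red_flags(credit_delta: int, hr_rating_dropped: bool,
                       court_conviction: bool, divorce_settlement: bool) -> list[str]:
    """Hypothetical red-flag rules over external data feeds."""
    flags = []
    if credit_delta <= -100:   # sharp fall in credit score
        flags.append("credit deterioration")
    if hr_rating_dropped:
        flags.append("negative HR review")
    if court_conviction:
        flags.append("court conviction on public record")
    if divorce_settlement:
        flags.append("sizeable divorce settlement")
    return flags
```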

Typical costs of less than $10m for rogue trading detection systems are small compared to the multibillion-dollar losses banks have taken on rogue trading, but Mr Andrews says banks are still “price-sensitive” on what they will adopt.

He believes this is partly because there have been no big rogue trading cases in recent years and regulators are currently more focused on requirements such as “know your client” and anti-money laundering practices.


Banks do not ask for solutions that eliminate rogue trading altogether, he says, but do ask questions like: “Can you cut out 20 per cent of our alerts and guarantee there won’t be any events in that bottom 20 per cent?” His answer is always no.

The big question is whether these sophisticated tools, alongside other factors including changes in bank culture, stricter trading limits and regulatory pressure, will lead to an era where rogue trading is an anachronism.

Lex Sokolin, a fintech analyst at research provider Autonomous, says the combination of machine learning tools and so-called “regtech”, which aligns what machines are able to do with regulators’ requirements, makes it likely that large banks will see suspicious internal activity before it hits the market. “[But] no system is foolproof,” he adds.

The head of one large US investment bank says that while rogue trading is seen as less of a live problem than it was a decade ago, it has not gone away and never will — a sentiment echoed privately by other banks. Many decline to speak publicly about it, though, for fear of tempting fate.

“Fundamentally it [rogue trading] is always a concern,” says Eoin Cumiskey, of UK-based financial services advisory firm FSCom.

“The application of technology allows for greater controls to be in place but . . . no one we’ve come across has really said: ‘It’s fine, we have a little black box in the corner, that stamps it out, it’s yesterday’s issue.’ ”