The airy downtown Manhattan headquarters of artificial intelligence pioneer Behavox, featuring sweeping views of New York Harbor, is a long way from its CEO’s birthplace in the former Soviet republic of Kyrgyzstan. “The middle of nowhere” is how Erkin Adylov describes the country of his birth.

Adylov has taken the long journey from Kyrgyzstan—where his doctor mother and engineer father made less than $100 a month—one step at a time. “I didn’t really have many options, so I had to leave,” he says matter-of-factly.

In 2000, Adylov won a scholarship from George Soros’ Open Society Foundation that sent him to a Michigan high school. He got his bachelor’s degree in political science and government from Hawaii Pacific University, then earned a full scholarship to the London School of Economics. Adylov spent 14 years in London, working for Goldman Sachs and hedge fund GLG Partners, among others, as an analyst covering financial institutions.

The challenges facing the financial industry, from increased competition to more stringent regulations and capital requirements to the prevalence of market manipulation and insider trading, gave him the idea of starting a firm that would embrace artificial intelligence. In 2014, Adylov launched Behavox with backing from institutions such as Citigroup, European VC firm Index Ventures and some of his former GLG colleagues. The company now has operations in Singapore, London, New York, Montreal and California. It boasts 100 employees and counts blue-chip financial institutions and hedge funds among its clients.

Worth spoke with Adylov about how artificial intelligence can be used to improve financial institutions, from discovering illicit activity to avoiding the losses of overconfident traders. 

Q: What got you interested in artificial intelligence, and its application to finance?

A: The way people work today is highly digital—we send emails, we’re on the phone, we’re on chat—and that is amplified in financial services, where we no longer go to restaurants to trade stocks, and we certainly don’t write them up on tickets. Everything is electronic, which means the workplace is no longer a place—it’s a digital domain. The way we run companies, the way we motivate people, the way we analyze our people, the way we do everything has to change completely, because when it’s digital it means that you can analyze it using computers. And if you can analyze it using computers, artificial intelligence is able to do all of these amazing things.

That could sound ominous. Where is this headed?

The vision of the future I see in financial services is that probably five or 10 years from now there will be no traders and there will be no sales.

What will there be?

Cyborgs—a hybrid between artificial intelligence and humans. Humans are really good at certain things, like having in-person meetings. Humans are really good at building relationships. Humans are also really good at making decisions, especially when it’s not black and white. If I show you a piece of data and say, here are the earnings, what do you think is going to happen in the future—machines are terrible at doing that. Humans are really good at it because we can imagine and we can make decisions, especially in those gray zones.

Where does artificial intelligence come in?

Machines are really good at understanding large volumes of data. As an example, a machine could crunch through all your emails since you started your email inbox, and it could rank all the people that you’ve ever interacted with and say, “Here are the people that used to be important to you maybe a year ago, two years ago, and you have forgotten about them. Would you like to rekindle the relationship?”
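
A minimal sketch of that contact-ranking idea in Python (the email records, field names and 90-day cutoff below are hypothetical illustrations, not Behavox’s actual model):

```python
from collections import Counter
from datetime import datetime, timedelta

def lapsed_contacts(emails, now, recent_days=90):
    """Rank correspondents who were once frequent but have since gone quiet."""
    cutoff = now - timedelta(days=recent_days)
    historical = Counter(e["contact"] for e in emails if e["sent"] < cutoff)
    recent = Counter(e["contact"] for e in emails if e["sent"] >= cutoff)
    # A contact is "lapsed" if they were active before the window but not in it.
    lapsed = {c: n for c, n in historical.items() if recent[c] == 0}
    return sorted(lapsed.items(), key=lambda kv: kv[1], reverse=True)

emails = [
    {"contact": "alice@fund.example", "sent": datetime(2017, 3, 1)},
    {"contact": "alice@fund.example", "sent": datetime(2017, 5, 2)},
    {"contact": "bob@bank.example", "sent": datetime(2018, 1, 20)},
]
# "Here are the people that used to be important to you... would you like
# to rekindle the relationship?"
print(lapsed_contacts(emails, now=datetime(2018, 2, 1)))  # [('alice@fund.example', 2)]
```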

This future is how far away?

It’s happening right now.

The cyborg future?

The cyborg future, that hybrid is happening right now, as we speak.

Can you talk about how artificial intelligence can help financial institutions deal with their risks in today’s heightened regulatory environment, and also address some of the scandals they’ve been plagued by, like market manipulation or insider trading?

The companies that genuinely care about compliance care about very specific things. One of them is language. If I want to hide something from the compliance team, there are a million different ways I can do that: I could invent a language; I could invent code words. So that’s something that all of our customers are worried about.

Do people actually do that?

Yeah. It’s easy to catch somebody who’s trying to manipulate markets out in the open. And every time they get caught, people say, “Well, gee, they must not be that smart, because clearly all the Bloomberg chats and emails have been monitored, so it was just a matter of time before they got caught.” The thing that really worries institutions is that the people who really want to do damage are going to become more sophisticated in their communication style, so that it’s not as apparent that they’re trying to manipulate markets or do something illegal. The more people get caught in the open, the more likely they are to find devious ways.

Can your firm help monitor those?

Yes. Typically how it works in compliance is that if you have a regulatory issue that you’re worried about—let’s say market abuse, market manipulation—you would put together a list of key words that you would look for. “Let’s take advantage of this client” or “don’t tell them that we took advantage of them.” It could be stuff like that. Or, “let’s fix this rate at these levels.”

So the key words are “take advantage” or “fix”?

There can be specific lexicons of words. The problem with key words is that once I know what compliance is looking for, then I’m probably not going to use them if I’m smart.
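
In code, the key-word approach Adylov describes is essentially pattern matching against a compliance lexicon. A toy Python version (the phrases and message format are illustrative only):

```python
import re

# A small compliance lexicon of the kind described above (illustrative).
LEXICON = [
    r"take advantage of (this|the) client",
    r"don'?t tell them",
    r"fix (this|the|these) rates? at",
]
PATTERN = re.compile("|".join(LEXICON), re.IGNORECASE)

def flag_messages(messages):
    """Return the messages that contain any lexicon phrase."""
    return [m for m in messages if PATTERN.search(m["text"])]

chats = [
    {"id": 1, "text": "Let's fix this rate at these levels."},
    {"id": 2, "text": "Lunch at noon?"},
]
print([m["id"] for m in flag_messages(chats)])  # -> [1]
```

As he notes, the weakness is obvious: anyone who knows what is on the list simply stops using those words.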

And how do people inside the organization know that these are the key words?

They usually pick that up from the press. The more these events get flagged up to the market, the more the market starts thinking, A, they are monitoring it, and B, the probability of me being discovered is now high.

How can you find out the secrets people are using to avoid compliance?

Say the two of us will go to a restaurant, and we’re up to no good. We would pick up the menu in the restaurant and we would say, “If I say to you fried chicken, it means X. And if I say to you salad, it means Y.”

Now that we’ve agreed on that, when we go back to our trading, on the electronic messaging platform I’m going to message you and say, “What do you think about fried chicken?” Now you know exactly what that means, except that when the machine is reading it, or the compliance team is reading it, nobody has a clue what it actually means. So it makes it very difficult to detect, and very difficult to identify what is going on.

The way to think about this is to go back to my point that AI is reading and listening to every single conversation. Chicken, or items of food, happen to be things you generally don’t see in communications a lot. They also happen to be the type of thing that you and I haven’t spoken about before. So if, say, for the last two years we never talked about fried chicken, and all of a sudden we’re talking about it all the time, that would be the type of signal that the artificial intelligence is able to pick up on.

Fried chicken, who knew?

AI then highlights it to a human and says, “Look, these people have never talked about this item, about chicken, in the last two years. Moreover, none of their peers are talking about fried chicken. So why in the hell are these people talking about fried chicken? Would you like to investigate?” And that’s the hybrid between a machine and a human.
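
A rough sketch of that novelty signal in Python (the tokenization, sample data and threshold are hypothetical illustrations, not Behavox’s actual system):

```python
from collections import Counter

def novel_terms(recent_msgs, pair_history, peer_msgs, min_hits=3):
    """Flag terms a pair now uses heavily but never used before,
    and that their peers don't use either."""
    def counts(msgs):
        return Counter(w for m in msgs for w in m.lower().split())
    now, past, peers = counts(recent_msgs), counts(pair_history), counts(peer_msgs)
    return [(t, n) for t, n in now.items()
            if n >= min_hits and past[t] == 0 and peers[t] == 0]

recent = ["what do you think about fried chicken",
          "fried chicken looks strong today",
          "more fried chicken tomorrow"]
history = ["earnings look soft", "client call at 3"]
peers = ["rates desk is quiet", "earnings beat estimates"]
print(novel_terms(recent, history, peers))  # -> [('fried', 3), ('chicken', 3)]
```

A real system would need stemming, stop-word handling and proper peer baselines; the point is the shape of the signal: a sudden spike in a term with no history behind it.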

Isn’t there a way to avoid being caught by using unmonitored communications devices?

If you really wanted to, you could take conversations offline completely, right? That’s exactly one of the things that we look for: indications that you want to take conversations private. We also look for that behavioral change: You used to use the office phone, and you no longer do that after we’ve started recording those calls. Even if the firm isn’t recording your phone conversations, the phone bills and phone logs are still coming through. That’s a trigger point.
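
A minimal sketch of that trigger in Python, using monthly call counts from phone logs (the data shape and drop threshold are assumptions for illustration):

```python
def usage_drop(call_counts, recording_start, drop_ratio=0.5):
    """True if monthly call volume fell sharply once calls were recorded."""
    before = [n for month, n in call_counts if month < recording_start]
    after = [n for month, n in call_counts if month >= recording_start]
    if not before or not after:
        return False
    return (sum(after) / len(after)) < drop_ratio * (sum(before) / len(before))

# Metadata only: no audio needed, just the logs "still coming through."
logs = [("2018-01", 40), ("2018-02", 38), ("2018-03", 5), ("2018-04", 2)]
print(usage_drop(logs, recording_start="2018-03"))  # -> True: a trigger point
```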

It’s very Big Brotherish, isn’t it?

But they deserve it. Financial services institutions are investing somebody else’s money, and they get paid very well. That comes with huge responsibilities. They have a fiduciary duty.

What else can artificial intelligence do in the financial world?

The elephant in the room is what it means five years down the line if you have a powerful analytical tool like this. It completely changes the way you make money and interact with markets. What I’m proposing is that the machine should be analyzing the humans, not the markets. That’s the people analytics, which can flush out the biases that a person has. For example, what everybody deeply wants to understand is: How can you quantify conviction? A human has something called conviction: I see a stock, I’ve done all the work on it and I want to buy it.

So artificial intelligence would measure psychology?

The machine knows you better than you know yourself. By studying your behavior, it starts making those inferences. It’s going to come back to you and say, “You think that this is your conviction, whereas in reality, I know that that’s not true.” Humans are very subjective beasts. They get influenced very easily. The part of the brain that gets triggered when you make money is exactly the same part that gets triggered by drugs like cocaine and heroin. So if I made money today and yesterday and the day before, I am going to feel like God, because I made money three days in a row. The probability of me making wrong decisions at that stage is very high. So if you ask me at that point in time what my conviction is, I’m going to tell you that it’s 10 out of 10 on these stocks.

How does artificial intelligence analyze this God-like behavior?

It measures your output: how much money you are making, or losing. It doesn’t take into account the chemical reactions that are going on in your brain. Based on that, it knows the biases that you’re plagued by.

What do you mean by biases?

When you’re making money every day for the next, say, 50 days, you’re going to feel like God at the end of day 50. I promise you, you’re going to feel like God. That’s a bias.

Right. The dopamine effect.

And I can know that because I’ve analyzed all of your historical data, and every time you behave like this, you lose money. That’s the hybrid approach of using both machines and humans.
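
A hedged sketch of that check in Python (the streak length and P&L series are invented; a real model would be far richer):

```python
def streak_bias(daily_pnl, streak=3):
    """Average P&L on days that follow `streak` consecutive winning days."""
    outcomes = [daily_pnl[i] for i in range(streak, len(daily_pnl))
                if all(p > 0 for p in daily_pnl[i - streak:i])]
    return sum(outcomes) / len(outcomes) if outcomes else None

# If this comes out negative, "feeling like God" has historically cost money.
pnl = [5, 3, 8, -12, 2, 4, 6, -9, 1]
print(streak_bias(pnl))  # -> -10.5
```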

I know some people who would say, “I’m going to make that trade anyway.”

The machine is not going to come to you and tell you that you’re about to lose money. You’re going to feel like God, you’re going to do all those things, and then you’re going to go in and make some trades that will be wrong. Two weeks from now, you’re going to regret it. But the machine already knew at the time that you were going to make a mistake, so it just took the other side of the trade quietly, without telling you.