Fahreen Kurji

On April 6th, Behavox hosted the AI for Compliance & Security Summit in New York to explore the latest advancements in Artificial Intelligence (AI) and its potential to improve compliance and security functions. For a glimpse of the event, you can watch the quick recap here.

At the event, our CEO, Erkin Adylov, demystified how ChatGPT and other Large Language Models (LLMs) work and provided an in-depth overview of what makes Behavox LLM so special. The highlight of the event was a live demo of the Behavox AI, where attendees experienced first-hand how the system operates.

The event was the culmination of a string of major announcements made by Behavox the previous week.

The two key questions addressed during the afternoon were:

  1. How can Compliance Surveillance professionals, who traditionally have been reliant on the lexicon approach, get comfortable with AI-based Risk Policies?
  2. How can Compliance Surveillance professionals represent and explain AI-based Risk Policies to internal stakeholders as well as global regulatory bodies?

In response to these questions, we explained that, to be considered robust and accepted for use in a regulated environment, any AI model – or lexicon scenario – implemented by compliance teams must adhere to the three pillars of model risk management described in the SR 11-7 framework issued by the Federal Reserve. These pillars are:

  • Conceptual soundness – understanding the model’s functioning and being able to explain its fundamental architecture.
  • Outcomes analysis – ensuring that the model’s results align with our expectations (see the sketch after this list).
  • Ongoing monitoring and change management – regularly evaluating models and subjecting any changes to a rigorous evaluation process before implementing them in production.
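To make the outcomes-analysis pillar concrete, here is a minimal sketch of one common check: measuring what share of deliberately "planted" risky messages a risk policy actually alerts on, the same planted True Positive metric referenced later in this post. The function and message IDs below are hypothetical illustrations, not the Behavox evaluation pipeline.

```python
# Minimal, hypothetical sketch of an outcomes-analysis check:
# what fraction of planted risky messages did the risk policy flag?

def planted_true_positive_rate(planted_ids: set[str], alerted_ids: set[str]) -> float:
    """Return the fraction of planted risky messages that were alerted on."""
    if not planted_ids:
        raise ValueError("No planted items to evaluate against")
    detected = planted_ids & alerted_ids  # planted messages the policy caught
    return len(detected) / len(planted_ids)


if __name__ == "__main__":
    # Messages seeded with known risky content (ground truth) -- illustrative IDs.
    planted = {"msg-001", "msg-014", "msg-027", "msg-033"}
    # Messages the risk policy actually alerted on during the test window.
    alerted = {"msg-001", "msg-027", "msg-102", "msg-250"}

    rate = planted_true_positive_rate(planted, alerted)
    print(f"Planted true positives detected: {rate:.0%}")  # -> 50%
```

Tracked over time, a metric like this also feeds the third pillar: a drop in the detection rate after a model or lexicon change is a signal that the change should not be promoted to production.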

If you’re interested in learning more about deploying Compliance Surveillance AI and passing regulatory inspections, you can check out our CEO’s blog, which provides a checklist of the steps required for a successful rollout of AI in Compliance.

It is clear that the quality bar for AI acceptance in financial services is set exceptionally high. The consequence, as we explained at the event, is that simply connecting ChatGPT or another LLM to an archiving solution won’t be accepted by regulators. Moreover, the performance of ChatGPT as a general-purpose AI system falls short of the domain-specific, task-specific Behavox LLM, detecting 18% of planted True Positives compared with 84%, respectively.

In conclusion, Behavox regularly hosts AI 101 Briefings for our customers and partners. If you would like to learn more, or if you are in the process of adopting AI-based Risk Policies, please contact us and visit our events page to sign up for our next summit, seminar, or roundtable near you.