What is a Regulatory AI Agent?
Norm Ai converts regulations into AI agents that can make dynamic compliance determinations.
This enables businesses to complete comprehensive compliance analyses nearly instantaneously, while also empowering humans to understand the basis for a determination and how to specifically improve content to make it compliant.
Through the Norm Regulatory AI Domain Specific Language, we produce a representation that a human can review while at the same time animating the regulation into a live machine implementation.
The Norm Ai Computational Law approach:
1. Ensures our Regulatory AI agents can make highly reliable determinations of whether anything submitted by a user is compliant.
2. Provides interpretable regulation representations and real-time explanations to humans.
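The dual representation described above can be sketched as a decision tree whose nodes pair a human-readable question with an executable check. This is an illustrative sketch only, not Norm Ai's actual DSL or agent internals (which are not public); every name, rule, and check below is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Union

# Hypothetical sketch: each node asks one narrow question about the submitted
# content; leaves carry the final determination plus an explanation a human
# can act on. A real system might back `test` with an LLM call per node.

@dataclass
class Leaf:
    compliant: bool
    explanation: str

@dataclass
class Node:
    question: str                   # shown to humans for interpretability
    test: Callable[[str], bool]     # machine-executable check
    if_yes: Union["Node", Leaf]
    if_no: Union["Node", Leaf]

def evaluate(tree, text):
    """Walk the tree, recording each determination so the final answer
    comes with a step-by-step explanation."""
    trace = []
    while isinstance(tree, Node):
        answer = tree.test(text)
        trace.append((tree.question, answer))
        tree = tree.if_yes if answer else tree.if_no
    return tree, trace

# Toy fragment loosely inspired by marketing-rule concepts (not the real rule):
tree = Node(
    question="Does the material contain a testimonial?",
    test=lambda t: "testimonial" in t.lower(),
    if_yes=Node(
        question="Is the required disclosure present?",
        test=lambda t: "disclosure" in t.lower(),
        if_yes=Leaf(True, "Testimonial includes the required disclosure."),
        if_no=Leaf(False, "Testimonial lacks the required disclosure; add one."),
    ),
    if_no=Leaf(True, "No testimonial; this check does not apply."),
)

result, trace = evaluate(tree, "Client testimonial: great returns!")
# result.explanation says what is wrong; trace shows each determination made.
```

Because each node is a single narrow question, the same tree serves both audiences: a lawyer can read the `question` fields as a plain-language checklist, while the agent executes the `test` functions to reach a determination.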
John shares more, along with case studies of specific regulations, in this interview.
Read the conversation below:
Judy Shaw: Now you and your company have pioneered the concept of regulatory AI. Tell me, how does it work?
John Nay: Good question. It's definitely a new thing, and the way that we think about it is that regulations are extremely complex. So for example, the SEC Marketing Rule: it's a 430-page PDF. Don't quote me on the exact number of pages, but something like that. And that is something that every day, every investment advisor has to make sure that everything they put out into the world is compliant with. And so what we do is we take regulations like that, which are very complicated, very standards- and principles-based, and we turn them into AI agents. So when we say regulatory AI agent, we mean an AI agent that can pursue an action, in this case, assessing whether something is compliant or not with a specific regulation. And the way that we do that is we break down the regulation into all of its different requirements and different nuances, and then we build that into a decision tree. The decision tree is something that represents the regulation legally, but also enables, from the technology perspective, an AI to go very specific in each of its determinations, and then ultimately say at the end of the decision: is this compliant or not with that regulation? And if not, why specifically is it not? And then what can I do to improve? So that's a pattern that we've built out more broadly, not just for the SEC, not just for FINRA, but we're doing this over time. The reason we call it regulatory AI is for regulations more broadly. So this could be anything that a company is subject to. We're, over time, turning that into an AI version of the regulation.
Judy Shaw: And what type of team is required to build this?
John Nay: So as you can imagine, from the nuance of that and the technical legal aspects of it, it does require lawyers on staff. So we have people with law degrees from Harvard and Stanford and Yale. But then, of course, it requires AI engineers and software engineers and even AI researchers. That's part of my background: I have a PhD focused on AI research, but I was always applying the methods at the intersection of AI and law, trying to figure out how to get machine learning systems to better understand law and legal concepts. And then, over time, I recruited a lot of other people to that cause. On the team now, we have AI engineers from Google and Meta and places like that, we have software engineers, we have the lawyers, and then we're now building out a new function called a Legal Engineer. This is a new role that we invented. The way that this works is you are someone that has the legal training or the regulatory training, and you have that domain expertise, but you don't necessarily want to be programming all day, and so you use our tools that we've developed internally to take a regulation or a corporate policy and convert it into a regulatory AI agent by clicking and dragging and dropping. It's a no-code tool. So a legal engineer is someone who can develop the product and develop new regulations without needing to code. What we've built over time is a team that's very interdisciplinary, as you pointed out, across law, AI, software engineering, research, and now legal engineering as well. And then it also requires thinking about this from a bigger picture, so not just in the weeds, but how would the current regulators think about this, and how would, over time, many in the industry think about this. So what we've done is we've built a regulatory advisory board. We have a former SEC Commissioner, a former SEC enforcement lawyer, and other former heads of different agencies.
And so what that allows us to do is to say, let's take a step back and let's look at the whole regulatory landscape. What are the areas of regulation that are most amenable to this approach for now (all will be soon). And then also, how would the current regulators think about the output of what we're doing, and can we begin to collaborate more with them?
Judy Shaw: And John finally, tell me, what’s the 5 to 10 year vision for Norm Ai, and how does that tie in with AI advancements more broadly?
John Nay: Yes, so this actually circles back to the talk we had here at the New York Stock Exchange in February, where we explored two sides of the coin. One was: how do you use AI to encode regulations and make it more efficient and effective to comply with those regulations? But the flip side of that is that as AI gets more advanced and is deployed more broadly, as GPT-5 comes out from OpenAI after GPT-4, and then newer advancements come out from other companies as well, that enables more autonomous deployments of AI. So AI generating marketing from scratch, AI making decisions, talking to clients on your behalf at a bank, for example. As that happens, and it happens in higher- and higher-stakes situations, it's still subject to all the same laws and regulations. So part of our vision is that we will be sitting across from that. Everyone's developing their own AIs, or they're deploying someone else's, but they could leverage us to make sure that the outputs and the proposed actions and proposed content coming out of the AI are compliant with any of the relevant regulations. So we have a two-part vision: how can we make companies and people more compliant, effortlessly, with less work? But also, as AI is deployed more, how do we make it more compliant as well?
Judy Shaw: John it’s been great to talk to you, thanks for joining me on Floor Talk today.
John Nay: Thank you so much for having me!