AI Agents & the Law: NYSE Event Recap
We had the pleasure of hosting an event at the New York Stock Exchange on AI Agents and the Law. After a fireside chat with Lawrence H. Summers (OpenAI Board Member and Harvard Professor), our CEO, John Nay, moderated a panel with the Attorney General of New Jersey, Matthew Platkin, Lane Dilg of OpenAI, Eric Vandevelde of Gibson Dunn, and Dr. Megan Ma of Stanford CodeX.
We explored several aspects of the complex landscape surrounding the rise of AI agents, focusing on AI liability, AI governance, and the potential deployment of AI to improve government and regulatory functions.
Regulatory Regimes for AI Agents
- Eric kicked off the discussion by highlighting the scramble he is witnessing in the regulatory environment. He noted that regulators are anxious not to be perceived as missing the boat this time, and are using existing legal authorities to stake claims over the aspects of AI governance that might fall under their purview.
- From the state perspective, the Attorney General warned against falling into the trap of thinking that established laws shouldn’t apply to new technologies. He also argued that we need to be careful not to codify inflexible laws that may make it difficult to regulate new AI technologies.
- Megan, drawing on her work at Stanford, explored how we are moving from AI-driven companies to an AI-native world. She raised the question of whether we need Chief AI Officers to tie the different pieces of the AI governance puzzle together, given the coming ubiquity of AI agents.
Liability for AI Agent Caused Harms
- Shifting the focus to liability, panelists delved into the complex question of where responsibility for harms caused by AI agents should lie along the stack: from foundation model providers, up the chain to software providers, and ultimately to the companies deploying agents.
- Eric touched on the distinction between criminal and civil law in this context.
- The Attorney General drew a parallel to holding people accountable for harms in other realms, where there can also be complicated supply chains. He emphasized the need for more regulatory guidance, and cautioned against moving too quickly to carve out broad liability rules (or exemptions).
- Megan explored the idea of drafting a “Terms of Service for an AI agent,” highlighting the difficulty of using prosaic contractual mechanisms to govern novel agentic systems that move far beyond traditional deterministic software.
- Panelists also discussed John’s recently published Science article, Artificial Intelligence and Interspecific Law, in which he argues that, rather than attempting to inhibit the development of powerful AI, wrapping increasingly advanced AI in legal (e.g., corporate) form could mitigate AI legal harms by defining targets for legal action and incentivizing insurance obligations.
Supervisory Systems for AI Agents
- The discussion then turned to the role of human oversight in governing AI agents. Megan discussed a Supervisory AI Agent Approach to responsible use of generative AI in the legal profession, where AI guardrails agents can send the highest-profile potential issues to human counsel. Joint research between Stanford and Norm Ai is advancing this idea.
- Eric acknowledged that, even if AI agents are highly efficient, there are some situations where society will not currently accept fully autonomous deployments.
- The Attorney General echoed this sentiment, emphasizing that there are certain interactions where a human’s approval should always be required, especially in law enforcement.
Use of Regulatory AI Agents by the Government and Reducing "Legal Sludge"
- American citizens bear a paperwork burden of more than 10 billion hours a year: a “legal sludge” problem.
- The Attorney General highlighted the large gap in AI expertise between the government and the private sector, underscoring the need for a collaborative effort to set the rules and determine where human responsibility should apply. He noted opportunities for deploying AI to help citizens obtain occupational licenses more easily, with the example of identifying errors or omissions in applications.
- Megan discussed opportunities for using regulatory AI agents to serve as the first point of contact to triage issues and escalate complex problems to specialized lawyers.
- Lane inspired us with a vision for how AI companies could partner with government agencies by pursuing a "crawl-walk-run" approach: starting with improving the accessibility of regulatory knowledge bases, then moving toward cost-benefit and impact analysis of regulations, and ultimately even recommending concrete policies to achieve particular human goals.