AI Agents & the Law NYSE Event Recap

We had the pleasure of hosting an event at the New York Stock Exchange on AI Agents and the Law. After a fireside chat with Lawrence H. Summers (OpenAI Board Member and Harvard Professor), our CEO, John Nay, moderated a panel with the Attorney General of New Jersey, Matthew Platkin, Lane Dilg of OpenAI, Eric Vandevelde of Gibson Dunn, and Dr. Megan Ma of Stanford CodeX.

We explored the complex landscape surrounding the rise of AI agents, focusing on AI liability, AI governance, and the potential deployment of AI to improve government and regulatory functions.

Regulatory Regimes for AI Agents

Liability for AI Agent Caused Harms

  • Shifting the focus to liability, panelists delved into the complex question of where responsibility for harms caused by AI agents should lie along the stack, from foundation model providers up through software providers and, ultimately, the companies deploying agents.
  • Eric touched on the distinction between criminal and civil law in this context.
  • The Attorney General drew a parallel to holding people accountable for harms in other realms, where supply chains can be similarly complicated. He emphasized the need for more regulatory guidance, and cautioned against moving too quickly to carve out broad liability rules (or exemptions).
  • Megan explored the idea of drafting a “Terms of Service for an AI agent,” highlighting the difficulty of using conventional contractual mechanisms to govern novel agentic systems that move far beyond traditional deterministic software.
  • Panelists also discussed John’s recently published Science article, Artificial Intelligence and Interspecific Law, where he argues that, rather than attempting to inhibit the development of powerful AI, wrapping increasingly advanced AI in legal (e.g., corporate) form could mitigate AI legal harms by defining targets for legal action and incentivizing insurance obligations.

Supervisory Systems for AI Agents

  • The discussion then turned to the role of human oversight in governing AI agents. Megan discussed a Supervisory AI Agent Approach to responsible use of generative AI in the legal profession, where AI guardrail agents escalate the highest-profile potential issues to human counsel. Joint research between Stanford and Norm Ai is advancing this idea.
  • Eric acknowledged that, even if AI agents are highly efficient, there are some situations where society will not currently accept fully autonomous deployments.
  • The Attorney General echoed this sentiment, emphasizing that there are certain interactions where a human user's approval should always be required, especially in law enforcement.

Use of Regulatory AI Agents by the Government and Reducing "Legal Sludge"