Exploring the Role of AI in Government Rulemaking: Insights From the Recent Federalist Society Panel

Recently, our Founder and CEO, John Nay, participated in a Federalist Society panel discussion titled “How Does AI Affect Rulemaking?” The event convened leaders in law, technology, and policy to explore the complex dynamics between AI and regulatory frameworks. Here are the key takeaways.

Panel Overview

Moderator: Daniel M. Flores, Senior Counsel, Committee on Oversight and Accountability, U.S. House of Representatives

Speakers:

  • J. Kennerly Davis, Jr.: Former Senior Attorney, Hunton Andrews Kurth LLP
  • John Nay: Founder & CEO, Norm Ai
  • Catherine Sharkey: Segal Family Professor of Regulatory Law and Policy, New York University School of Law

Main Discussion Themes

Computational Law & Generative AI
The panel opened with John introducing the concept of computational law, a method of encoding legal requirements into computer-readable formats. Norm Ai has pioneered the use of Generative AI and Large Language Models, integrated with auditable symbolic systems and decision trees, to automate preliminary regulatory compliance assessments. Our work demonstrates that some regulatory requirements can indeed be translated into code, provided they are well-defined.
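
To make the idea concrete, below is a minimal, illustrative sketch of how a well-defined requirement could be expressed as an auditable decision tree in Python. The rule, the field names, and the function are hypothetical examples for this post, not Norm Ai's actual encoding; in practice, the structured facts the tree consumes might be extracted from source documents by an LLM.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified requirement (illustrative only): a marketing
# communication that mentions performance figures must include a risk
# disclosure.

@dataclass
class Assessment:
    compliant: bool
    trace: list[str] = field(default_factory=list)

def assess_marketing_communication(doc: dict) -> Assessment:
    """Walk an explicit, auditable decision tree over structured facts."""
    trace = []

    mentions_performance = doc.get("mentions_performance", False)
    trace.append(f"mentions_performance = {mentions_performance}")
    if not mentions_performance:
        trace.append("No performance figures -> disclosure not required")
        return Assessment(compliant=True, trace=trace)

    has_disclosure = doc.get("has_risk_disclosure", False)
    trace.append(f"has_risk_disclosure = {has_disclosure}")
    if has_disclosure:
        trace.append("Performance figures with risk disclosure -> compliant")
        return Assessment(compliant=True, trace=trace)

    trace.append("Performance figures without risk disclosure -> non-compliant")
    return Assessment(compliant=False, trace=trace)

# Example usage: structured facts about one document.
result = assess_marketing_communication(
    {"mentions_performance": True, "has_risk_disclosure": False}
)
print(result.compliant)    # False
for step in result.trace:  # every step of the reasoning is recorded
    print(" -", step)
```

Because each branch of the tree records why it fired, the resulting assessment can be reviewed end to end rather than accepted as an opaque model output.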

Administrative Use Cases of AI
Catherine Sharkey highlighted her research with the Administrative Conference of the United States (ACUS), which examines government use of AI. Agencies such as the Department of Health and Human Services (HHS) and the Department of Transportation (DOT) have piloted AI tools for enforcement and adjudication, achieving breakthroughs in data analysis and issue identification. Despite limited precedent, these experiments underscore AI's potential to aid administrative decision-making, albeit with the need for strict oversight.

Transparency and Accountability in AI-Driven Rulemaking
Transparency emerged as a critical concern, especially regarding AI's role in decision-making processes. Ken Davis emphasized the importance of regulatory agencies remaining transparent and bearing the burden of proof when employing AI tools. If AI technologies progress to the point where their outputs are indispensable, agencies must demonstrate that they retain control over decisions to avoid legal challenges for arbitrary and capricious conduct.

John echoed this sentiment, noting the need to treat AI as a general-purpose reasoner while orchestrating every call to an AI system with carefully controlled, auditable inputs and outputs. He explained that AI systems must be used in an incremental, task-specific manner to minimize the risks of bias or unforeseen systemic impacts.
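
As a rough illustration of that orchestration pattern, the sketch below wraps each narrowly scoped model call in an audit record of its exact input and output. The `audited_call` helper, the task name, and the dummy model are hypothetical stand-ins for this post; they assume a generic model callable and do not describe any agency's or Norm Ai's actual implementation.

```python
import json
import time
from typing import Callable

# Each model call is narrow, task-specific, and logged with its exact
# inputs and outputs so a reviewer can reconstruct what the AI was asked
# and what it returned.
AUDIT_LOG: list[dict] = []

def audited_call(task: str, prompt: str, model: Callable[[str], str]) -> str:
    """Run a single, task-scoped model call and append a full audit record."""
    started = time.time()
    output = model(prompt)
    AUDIT_LOG.append({
        "task": task,        # the narrow task this call was scoped to
        "prompt": prompt,    # the exact input sent to the model
        "output": output,    # the exact output received
        "timestamp": started,
    })
    return output

# Dummy model so the sketch is self-contained and runnable.
def dummy_model(prompt: str) -> str:
    return "YES" if "disclosure" in prompt.lower() else "NO"

answer = audited_call(
    task="check_presence_of_risk_disclosure",
    prompt="Does the following text contain a risk disclosure? ...",
    model=dummy_model,
)
print(answer)
print(json.dumps(AUDIT_LOG, indent=2))  # reviewable record of every call
```

Keeping the log outside the model itself is the point: oversight does not depend on interpreting the model's internals, only on reviewing what it was given and what it produced.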

Judicial Review and Legal Boundaries
The discussion also covered the evolving legal landscape. Ken pointed to questions surrounding statutory authority and emphasized the implications of the Chevron deference decision. The adequacy of AI disclosures in regulatory enforcement will likely be tested through litigation, as agencies must prove their use of AI does not undermine their statutory responsibilities.

Future and Security Considerations

The panelists acknowledged that AI's role in national security and enforcement will bring unique challenges. Catherine mentioned that agencies will have to adapt their procurement processes, possibly leaning toward open-source or interoperable solutions. Ken raised concerns about cybersecurity risks and the necessity for robust system security measures. Both agreed that, while AI offers opportunities, it also requires careful planning.

Conclusion

The panel illuminated the intricate interplay between AI and the regulatory framework. As AI continues to advance, the regulatory community must grapple with issues of transparency, accountability, and authority. At Norm Ai, we remain committed to developing solutions that not only simplify compliance but also uphold rigorous standards for legal and ethical governance.

We are grateful to have been a part of this meaningful conversation and look forward to ongoing discussions that shape the future of AI in regulatory processes.

For more insights into our work and approach, explore our website or follow us on LinkedIn.

