The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design, and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations.
In this peer-reviewed report, “Mechanisms to Ensure AI Arms Control Compliance,” Institute for Security Policy and Law (SPL) AI Policy Research Fellow Matthew Mittelsteadt offers a starting point: he defines the goals of “AI verification” and proposes several mechanisms to support arms inspections and continuous verification.
The report is part of a research partnership between SPL and the Center for Security and Emerging Technology investigating the legal, policy, and security impacts of emerging technology.
Mittelsteadt explains that his report defines “AI Verification” as the process of determining whether countries’ AI and AI systems comply with treaty obligations. “AI Verification Mechanisms” are tools that ensure regulatory compliance by discouraging or detecting the illicit use of AI by a system or illicit AI control over a system.
Despite the importance of AI verification, few practical verification mechanisms have been proposed to support most of the regulations under consideration. Without proper verification mechanisms, AI arms control will languish.
To this end, Mittelsteadt’s report seeks to jump-start the regulatory conversation by proposing AI verification mechanisms to support AI arms control.