In 2019, SPL received a two-year grant from the Center for Security and Emerging Technology (CSET) at Georgetown University to conduct research on Artificial Intelligence (AI) and national security, law, ethics, and policy.
This research seeks solutions to the national security risks raised by the emergence of AI while balancing the many legal and ethical concerns raised by its misapplication. SPL faculty, including Director the Hon. James E. Baker, Deputy Director Robert Murrett, Professor Laurie Hobart, and Research Fellow Matthew Mittelsteadt, are collaborating to publish research and white papers that help drive the conversation on AI national security policy.
The Centaur’s Dilemma
The Hon. James E. Baker
Brookings Institution Press
Publication date: December 2020
Paperback ISBN: 9780815737995
The increasing use of artificial intelligence poses challenges and opportunities for nearly all aspects of society, including the military and other elements of the national security establishment. This book addresses how national security law can and should be applied to artificial intelligence, which enables a wide range of decisions and actions not contemplated by current law. Written in plain English, The Centaur’s Dilemma will help guide policymakers, lawyers, and technology experts as they deal with the many legal questions that will arise when using artificial intelligence to plan and carry out the actions required for the nation’s defense.
A DPA for the 21st Century (April 2021)
By the Hon. James E. Baker
The Defense Production Act can be an effective tool to bring US industrial might to bear on broader national security challenges, including those in technology.
If updated and used to its full effect, the DPA could be leveraged to encourage development and governance of artificial intelligence. And debate about the DPA’s use for AI purposes can serve to shape and condition expectations about the role the law’s authorities should or could play, as well as to identify essential legislative gaps.
Observations from a Symposium hosted by the Institute for Security Policy and Law and the Georgetown Center for Security and Emerging Technology (Oct. 29, 2020)
The symposium commenced with a presentation on what AI is and how it works, intended to make the technology behind AI accessible to national security generalists. For readers who did not attend the symposium, we collect at the outset of this report some of the general observations made about the constellation of technologies referred to as AI.
We then present the key points and observations from each of three panels: AI and the Law of Armed Conflict; AI and National Security: Ethics, Bias, and Principles; and AI and National Security Decision-Making. The report concludes with a discussion of the role of lawyers, of policy-law-technology teaming, and of the importance of making purposeful ethical and legal choices, which will not only embed our values in AI applications but also result in more accurate and effective national security tools.
By the Hon. James E. Baker
The law plays a vital role in how artificial intelligence can be developed and used in ethical ways. But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required.
Sound ethical codes and principles concerning AI can help fill legal gaps. In this paper, CSET Distinguished Fellow James E. Baker offers a primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.
AI Verification: Mechanisms to Ensure AI Arms Control Compliance (February 2021)
By Matthew Mittelsteadt, SPL AI Policy Fellow
The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.
The report defines “AI Verification” as the process of determining whether countries’ AI and AI systems comply with treaty obligations. “AI Verification Mechanisms” are tools that ensure regulatory compliance by discouraging or detecting the illicit use of AI by a system or illicit AI control over a system.
Despite the importance of AI verification, few practical verification mechanisms have been proposed to support most of the regulations under consideration. Without proper verification mechanisms, AI arms control will languish. The report seeks to jumpstart the regulatory conversation by proposing mechanisms of AI verification to support AI arms control.
National Security Law and the Coming AI Revolution
On Oct. 28, 2020, Syracuse University Institute for Security Policy and Law and the Center for Security and Emerging Technology at Georgetown University’s Walsh School of Foreign Service presented a one-day virtual symposium on “National Security Law and the Coming AI Revolution,” including panels on:
- AI & the Law of Armed Conflict
- AI & National Security Ethics: Bias, Data, & Principles
- AI & National Security Decision-Making