News & Events

Professor William C. Banks Speaks to Bloomberg Law About Secret DOJ Subpoenas

Trump DOJ Secret Subpoenas Crossed Line

(Bloomberg Law | June 15, 2021) National security law expert William Banks, a professor at Syracuse University College of Law, discusses the controversy over revelations that the Justice Department under former President Donald Trump secretly subpoenaed records from House Democrats, former White House counsel Don McGahn, and members of the media.

Listen to the podcast.

 

Mike Flynn and Military Law: Professor Mark Nevitt Speaks to The Washington Post

Why the Pentagon isn’t heeding calls to prosecute Michael Flynn under military law

(The Washington Post | June 5, 2021) When Michael Flynn, a retired three-star general, appeared to back calls for a coup last week, critics accused him of defying military deference to civilian authority, a tenet that is central to the ethos of the armed forces.

Speaking at a QAnon-themed conference in Texas, Flynn was asked why a coup similar to one that occurred in Myanmar could not happen in the United States. Flynn, President Donald Trump’s first national security adviser, has remained a vocal supporter of the former president and the false assertion he won a second term in office.

“I mean, it should happen here,” Flynn responded to the questioner, a man who identified himself as a Marine. “No reason.”

While Flynn subsequently disavowed any support for a coup on social media, saying his words had been misrepresented by the media, the comments intensified calls from some lawmakers and other critics for the military to prosecute the former officer, who receives a military pension, for sedition …

… According to Mark Nevitt, a former military lawyer who teaches law at Syracuse University, most of the instances in which the military had used the UCMJ to hold retirees accountable have had a “clear military nexus,” for example when an incident occurs on a military base or involves a military victim …

Read the full article.

The Climate Challenge: Professor Mark Nevitt Interviewed by Yale Climate Connections

Revitalized U.S. urgency on climate change and national security

(Yale Climate Connections | May 7, 2021) “An urgent national security threat.” That’s the phrase U.S. Director of National Intelligence Avril Haines used in describing climate change at the White House Climate Summit on Earth Day a few weeks ago.

It’s the kind of language that national security interests have applied previously, but not since the Trump administration took office on January 20, 2017, and soon put the kibosh on such talk. Conversations about climate change and national security continued under the Trump presidency, but not so much in the open, and certainly not with the imprimatur of the Oval Office …

… While climate change and global security for some time have been a topic of policy deliberations, the Global Trends 2040 report brings climate change to the forefront more than any of its predecessors had done.

“It’s a pretty clear-eyed objective report,” [Professor Mark] Nevitt said. “There’s five different themes on the first few pages. And climate change is right there with the global challenges, right there with technology, disruption, disease, financial crisis.”

Sikorsky said the team putting together the report knew climate change would need to be emphasized more than in earlier years. The report, she said, is informed by data and models, and also through conversations with experts and qualitative research.

“The authors travel around the globe, and meet with people and talk to them about their experiences,” Sikorsky said. “And it’s impossible to have those conversations in a lot of the world without climate change being discussed as something that’s shaping people’s everyday lives already.”

Nevitt noted that he is pleased the report digs into areas like attribution science, which is used to understand the role climate change plays in shaping weather events, and also explores the importance of feedback loops. “That’s sort of the cutting edge of climate science that’s being integrated into an intelligence document,” he said. “That shows me that there’s a real active engagement, it’s not passive.”

Nevitt’s only qualm? He is concerned the report may be overly optimistic about how much the international community can agree on a critical point: quickly reducing, and perhaps also eliminating, greenhouse gas emissions in order to prevent exceeding 1.5°C of warming even earlier than the report expects …

Read the full story.

 

“A Specialized Society?” Professor Mark Nevitt Discusses Monitoring Military for Domestic Extremists in The Washington Post

The Pentagon wants to take a harder line on domestic extremism. How far can it go?

(The Washington Post | May 5, 2021) Pentagon officials are considering new restrictions on service members’ interactions with far-right groups, part of the military’s reckoning with extremism, but the measures could trigger legal challenges from critics who say they would violate First Amendment rights.

Under a review launched by Defense Secretary Lloyd Austin, Defense Department officials are reexamining rules governing troops’ affiliations with anti-government and white supremacist movements, ties that currently are permissible in limited circumstances.

Austin, who has pledged zero tolerance for extremism, ordered the review after the events of Jan. 6, when rioters including a few dozen veterans — and a handful of current service members — stormed the U.S. Capitol in an attempt to overturn the presidential election results …

Mark Nevitt, a former Navy lawyer who teaches at the Syracuse University College of Law, pointed to other cases in which courts have characterized the military as a “specialized society separate from civilian society.”

“Federal courts will likely provide a healthy dose of deference to the military if challenged, particularly if the military can link the new definition to the underlying military mission and good order and discipline,” he said …

Read the full article.

A DPA for the 21st Century

Download Report

By the Hon. James E. Baker

Some commentators say the field of artificial intelligence is ungovernable. It covers many fields and capabilities, they note, and involves a breadth of private and academic actors, many working in secrecy to protect intellectual property and profit potentials. But it is an overstatement to call AI ungovernable.

Several existing laws and executive orders give various agencies and elected officials tools to regulate the national security development of AI, as does the Constitution. Policymakers should become familiar with these tools, examine their strengths and shortcomings, and become involved in efforts to modify and improve the AI governance architecture. As with other “ungovernable” areas, like nonproliferation, where there are also myriad actors and challenges, we can design an effective governance architecture if we are purposeful about doing so. This paper considers one of the most important potential tools in this effort, the Defense Production Act (DPA); however, it would be a more effective tool if updated and used to its full effect.

AI development depends on hardware, data, talent, algorithms, and computational capacity.[1] Thus, any law that can (1) help ensure an adequate supply of these assets, in appropriate form, and (2) prioritize the use of these assets to achieve national security policy objectives is an important national security tool. That is not to say the DPA’s full authority should be used at this time. Extraordinary tools, such as the DPA’s allocation authority, might more appropriately be used at a moment of emergency, for example, in time of conflict or should another nation achieve an AI breakout creating a decisive security advantage.

Thus, at this time, the most important function a debate about the use of the DPA for AI purposes can serve is to shape and condition expectations and understandings about the role such authorities should, or could, play, as well as to identify essential legislative gaps so that we do not learn of these gaps (and are not hesitant to use the authority we have) when the authority is needed. However, in a less dramatic manner, the DPA’s other authorities might well be used, or used more fully, at this time to shore up America’s AI supply line, as illustrated by the examples below.

While obscure to the public, the DPA got a burst of national attention in early 2020 when the coronavirus pandemic began overwhelming U.S. hospitals, first in New York City and then elsewhere. In the absence of federal leadership, in March 2020 national security specialists familiar with the DPA urged its full use to mobilize the nation’s capacity to provide medical equipment and personal protective equipment (PPE) to address COVID-19.

In April 2020, as the spreading virus was depleting national supplies of ventilators and PPE for health workers, President Donald Trump generated headlines by invoking the DPA, ostensibly to compel businesses to manufacture such equipment. A second order authorized the Secretary of Health and Human Services and the Director of the Federal Emergency Management Agency to “use any and all authority available” under the DPA to acquire N95 respirator masks from 3M. By mid-July, however, CNN noted that “the Trump administration has made only sparing use of its authorities [under DPA], leaving front-line workers in dire need of supplies like masks, gowns and gloves.”[2]

The Trump Administration did eventually use the DPA during the second half of 2020 to prioritize contracts (eighteen times to channel raw materials to the manufacture of vaccines and therapeutics) and to incentivize the production of medical supplies like testing swabs; however, the DPA was never used to full effect, nor in a strategic and transparent manner.

In contrast, as a candidate for the White House, President Biden promised full use of the DPA to put the United States on a “war time” footing to meet COVID supply chain challenges. Since assuming office, the Biden Administration has used the DPA, and other laws, to address bottlenecks in the supply chain for components needed for vaccine manufacture and to prioritize supply contracts to allow Merck to assist in making Johnson & Johnson vaccines. In addition, the Biden Administration has used Title III financing authorities to incentivize the building of factories and supply lines for COVID tests and rubber plants for medical gloves.

What is significant here is not just that the Biden Administration used the DPA to provide vaccine capacity and plug supply chain gaps, but that it did so after the president-elect and then president conditioned industry for its use in this manner and directed the federal government to lean into the law. It also made “friendly” use of the DPA, identifying needs in consultation and partnership with industry, with a focus on the result rather than the means. These are lessons worth noting in the AI context going forward. With COVID, as with AI, the legal policy question is not whether and how to use the DPA to accomplish a task, but how to use the full range of available law effectively and purposefully to meet the nation’s needs, in a manner consistent with our values. With COVID, it turned out, the DPA was one of several laws that could be used to harness America’s industrial capacity to address the pandemic.

The government’s handling of the pandemic is a topic for another day. The point here is that the mere mention of the DPA’s potential clout reinforced the view, in some people’s eyes at least, that the law is a vehicle to “nationalize” industry, a “commandeering” authority, which empowers the government to take over and run the nation’s defense industries. This fed into an already existing narrative about government regulation and opposition from the Chamber of Commerce.

In fact, as this paper shows, the DPA contains many different authorities, some narrow and others potentially broad in scope. It is important for policymakers to understand that the DPA is not limited to military equipment and actions, and its powers are not solely addressed to, or limited to, “commandeering.” Rather, the law establishes a national mobilization capacity to bring the industrial might of the U.S. to bear on broader national security challenges, including technology challenges and public health challenges. Thus, the DPA is both a potential macro tool and a micro tool. Its application to artificial intelligence can be substantial.


  1. Ben Buchanan, “The AI Triad and What It Means for National Security Strategy” (Center for Security and Emerging Technology, August 2020), https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf
  2. Priscilla Alvarez, Curt Devine, Drew Griffin, and Kristen Holmes, “Trump administration’s delayed use of 1950s law leads to critical supplies shortages,” CNN, July 14, 2020, https://www.cnn.com/2020/07/13/politics/delayed-use-defense-production-act-ppe-shortages/index.html

Emergency Powers and Policing: Professor William C. Banks Talks to Rewire

How Emergency Powers Pave the Way for Police Brutality at Protests

(Rewire | April 21, 2021) When curfew hit at 8 p.m. on April 13 in Brooklyn Center, Minnesota, it felt like someone had flipped a switch.

Reporters on the ground say the protest outside the police department had been peaceful, full of speeches and songs.

But the environment quickly changed as law enforcement began to use more aggressive tactics, firing less-lethal rounds, tear gas and flash grenades at protesters in an attempt to disperse the protest …

… In 1878, the Posse Comitatus Act was passed to prevent the federal military from engaging in law enforcement activity. There was a desire for the military and law enforcement to be separate entities.

“They’re supposed to keep the peace, prevent disturbances, quell disorder, but not enforce the law. That’s for the cops,” said William Banks, professor emeritus at the Syracuse University College of Law.

But states aren’t burdened by that restriction.

“If the governor wishes, depending on how the state law is written, National Guard forces could enforce the curfew or engage in a search or make an arrest of an individual who’s violating the law,” Banks said.

In the past 20 years, the lines have further blurred. That’s because military-grade force doesn’t just come from the military.

Since 1997, federal programs have transferred surplus military equipment to local police departments. Police departments often respond to protests in full tactical military gear, with gas masks, shields and armored vehicles.

For instance, as NPR reported, St. Paul suburb Cottage Grove’s police department alone acquired $1 million in military gear during the Trump administration. The department received 39 bayonets in December 2019.

“That kind of a force, particularly if it’s made distant from the people by virtue of the equipment that they use and the paraphernalia that they wear, and the rules of engagement that follow, they’re no longer being responsive to the people,” Banks said …

Read the full article.

ABA Podcast: 1L Meghan Steenburgh Discusses National Security Concerns with Professor William C. Banks

Critical Issues in National Security Law

(ABA Law Student Podcast | April 20, 2021) In the daily onslaught of news from all corners of the globe, it is sometimes difficult to decipher the implications of current events within our own country.

From the pandemic, to cybersecurity, to international relationships, linking current events and national security interests to law helps us understand our country’s responses to the things we see in the media. ABA Law Student Podcast host 1L Meg Steenburgh talks with Professor William Banks of Syracuse University about the most critical national security issues facing our nation both at home and abroad, including China tensions, nuclear weapons concerns worldwide, the Jan. 6 Capitol riots, and more.

William C. Banks is a Board of Advisors Distinguished Professor and Professor Emeritus at the Syracuse University College of Law, and Professor Emeritus of Public Administration and International Affairs at the Maxwell School.

Listen to the podcast.

Hon. James E. Baker: Ethics and Artificial Intelligence—A Policymaker’s Introduction

Download Report

Policymakers contemplating the burgeoning field of artificial intelligence will find, if they have not already, that existing laws leave huge gaps in deciding how (and whether) AI will be developed and used in ethical ways. The law, of course, plays a vital role. While it does not guarantee wise choices, it can improve the odds of having a process that will lead to such choices. Law can reach across constituencies and compel, where policy encourages and ethics guide. The legislative process can also serve as an effective mechanism to adjudicate competing values as well as validate risks and opportunities.

But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required. Sound ethical codes and principles can help fill legal gaps. To do so, policymakers have three main tools:

  • Ethical Guidelines, Principles, and Professional Codes
  • Academic Institutional Review Boards (IRBs)
  • Principles of Corporate Social Responsibility (CSR)

Below is a primer on the limits and promise of these three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.

This paper addresses seven specific considerations for policymakers:

  1. Where AI is concerned, ethics codes should include indicative actions illustrating compliance with the code’s requirements. Otherwise, individual actors will independently define terms like “public safety,” “appropriate human control,” and “reasonable” subject to their own competing values. This will result in inconsistent and lowest-common-denominator ethics. If the principle is “equality,” for example, an indicative action might require training data for a facial recognition application to include a meaningful cross-section of gender and race-based data.
  2. Most research and development in AI is academic and corporate. Therefore, Institutional Review Boards and Corporate Social Responsibility practices are critical in filling the gaps between law and professional ethics, and in identifying regulatory gaps. Indeed, corporations might consider the use of IRBs as well.
  3. Policymakers should consider the Universal Guidelines for Artificial Intelligence (detailed below) as a legislative checklist. Even if they don’t adopt the guidelines, the list will help them make purposeful choices about what to include or omit in an AI regulatory regime consisting of law, ethics, and CSR.
  4. Academic leaders and government officials should actively consider whether to subject AI research and development to IRB review. They should further consider whether to apply a burden of proof, persuasion, or a precautionary principle to high-risk AI activities, such as those that link AI to kinetic or cyber weapons or warning systems, pose counterintelligence (CI) risks, or remove humans from an active control loop.
  5. Corporations should create a governance process for deciding whether and how to adopt CSR national security policies answering the question: What does it mean to be an American corporation? They should consider adopting a stakeholder model of CSR that is, in essence, a public-private partnership that includes input from consumers and employees as well as shareholders and the C-Suite.
  6. Policymakers, lawyers, and corporate leaders should communicate regularly about the four issues that may define the tone, tenor, and content of government-industry relations: uniformity in response, business with and in China and Russia, encryption, and privacy.
  7. Where government agencies, corporations, and academic entities have adopted AI Principles, as many institutions now have, it is time to move from statements of generic principle to the more difficult task of applying those principles to specific applications.

Professor Robert Murrett Discusses Afghanistan Withdrawal with WAER

SU Professor Weighs In on President Biden’s Plan to Remove Troops from Afghanistan

(WAER | April 16, 2021) A Syracuse University International Affairs professor says there is reason for concern following the United States’ withdrawal from Afghanistan this fall.

President Joe Biden announced yesterday that the United States will fully remove troops from Afghanistan starting September 11th, exactly 20 years after the conflict began. Maxwell School Professor Robert Murrett served in the Navy for over 30 years. He says the US’ biggest focus now will be monitoring the Taliban’s activity in the country.

“The continued territorial gains which are likely by the Taliban forces, the continued viability of the Afghanistan government and challenges with the Taliban make it to them: whether it’s some sort of shared governing model or one that’s not shared at all in the case of significant territorial gains by the Taliban,” said Murrett …

Read the full story.

Symposium Report: National Security Law and the Coming AI Revolution

Download the Symposium Report

By the Hon. James E. Baker, Director, SPL

On October 29, 2020, Georgetown CSET and the Syracuse University Institute for Security Policy and Law sponsored a symposium for national security law practitioners titled “National Security Law and the Coming AI Revolution.” The discussants—lawyers, policymakers, and technologists—addressed the following topics:

  • AI as a constellation of technologies;
  • AI and the Law of Armed Conflict;
  • AI ethics, bias, data, and principles;
  • AI and national security decision-making; and
  • The role of law and lawyers.

Two of the discussants have gone on to senior national security technology posts in the Biden Administration. Former CSET Founding Director Jason Matheny is now Deputy Assistant to the President for Technology and National Security, among other titles. Tarun Chhabra is now Senior Director for Technology and National Security on the NSC staff. Other senior discussants continue their important work on AI at PCLOB, JAIC, the Naval War College, and the Office of Naval Research, and in academia and industry. A list of discussants can be found in the symposium report.

The event drew more than 180 attendees. To make the discussion available to a larger audience, the sponsors summarized many of the observations in the report. The following collective themes emerged from the panels:

  • AI will transform national security practice, including legal practice. National security will be better served with the meaningful, thoughtful, and purposeful application of law and ethics to AI. It is not an either-or choice between security and law and ethics. Whatever we do to further law and ethics helps ensure our competitive advantage by improving accuracy, efficacy, and confidence in the results.
  • Policymakers, commanders, and technologists need to understand law so that they can spot issues and create the time and space to embed law and ethics in AI applications. If the government waits to apply law and ethics at the use or decision point, it may be too late to meaningfully influence outcomes. Therefore, as we consider and apply the concept of human-machine teaming, we should pay equal attention to teaming between lawyers, policymakers, and technologists to make purposeful legal and ethical AI choices.
  • It is time for national security practitioners to move from bromides and principles to the application of those principles to specific AI applications. Negotiations about AI ethics and norms will need to be on a case-by-case, scenario-by-scenario basis to be meaningful.
  • Fundamentally, AI is a computer algorithm designed to “predict optimal future results based on past experience and recognized patterns.” It is the task of policymakers to determine whether that AI or a human has the authority to act on those predictions and make decisions.
  • AI is both nimble and brittle. It has the potential to adapt in super dynamic, unstructured situations; it has the potential to adapt at machine speed and in the presence of overwhelming incoming data; and it does not feel fear or fatigue. However, the AI systems we have today are not yet safe, secure, or reliable enough to process real-time data in rapidly changing environments, then update themselves and learn in real time, and thus be used for targeting or other immediate decisional support. This is especially true because the enemy will be targeting the AI systems.
  • Part of taking responsibility for AI, including mitigating AI bias, means involving stakeholders in all stages of development and, where possible, deployment. However, as we employ more and more autonomous systems, it will become increasingly difficult to dedicate time and resources to refining the decision-making of each of those systems. In other words, with the proliferation of autonomous systems, we may be less likely to engage in the type of meaningful human-machine teaming that ethical deployment would require.
  • Law and ethics must be applied throughout an AI lifecycle. Practitioners should think intentionally about issues such as bias from the beginning of a project. AI often fails when conditions change, and conditions will change in the national security world. Ethical failures can occur at any point in an AI software program. Moreover, large organizations, such as DOD, face the risk that a thousand-to-one or one-in-a-million type problem will occur.
  • Lawyers should distinguish between law, policy, and ethics. Without clarity, government actors may be discouraged from applying higher ethical standards lest those standards later become construed as legally binding as distinct from wise policy choices.
  • National security lawyers working in a classified environment have a heightened responsibility to be exceptionally conscious of bias.

The sponsors encourage readers of this blog to review the report, which offers detail and nuance on the themes identified above. Thank you.