Sen. Elissa Slotkin (D-MI) has introduced a bill that would regulate the Pentagon’s use of artificial intelligence technology.
The rise of AI has sparked national debate over its use in several different areas. But when it comes to military use, the national conversation has intensified amid concerns that the technology could be misused.
From NBC News:
The bill seeks to codify two existing Defense Department guidelines into law: that AI cannot autonomously decide to kill a target and that the technology cannot be used to help the military conduct mass surveillance on Americans. It would also ban the use of the technology for launching or detonating a nuclear weapon.
“We’re unhealthy as a political system, and so we focus more on things like Greenland than we do on the use of AI in matters of legal force. And it’s our responsibility to legislate this,” Slotkin told NBC News.
The first two tenets of the bill were at the center of the U.S. military’s acrimonious split with AI giant Anthropic in recent weeks. While the Pentagon has insisted that it already regards mass surveillance of Americans as illegal and that its policy mandates a human be responsible for lethal decisions, Anthropic worried that loopholes could allow for that surveillance anyway and that future administrations could revoke those guidelines.
The feud boiled over when President Donald Trump decreed that all federal agencies have six months to stop using Anthropic models and Defense Secretary Pete Hegseth declared the company a supply chain risk, even though the technology has helped the U.S. identify military targets in its ongoing war with Iran.
Pentagon Chief AI Officer Cam Stanley demonstrates Palantir's Maven system, which is used in military operations.
— AF Post (@AFpost) March 13, 2026
The debate centers on how far the Pentagon should go in using AI to select or attack targets, and how much control humans should retain. The Pentagon’s chief technology officer clashed with Anthropic after the company refused to permit “all lawful use” of its systems, arguing that the technology is not yet reliable enough for fully autonomous weapons. The company also raised concerns that its models could be turned to mass surveillance if the government removed its safeguards.
Current policy requires military leaders to independently check AI-generated targeting suggestions. But experts have cautioned that these rules might not be easy to enforce in fast-moving combat scenarios, according to the Brennan Center for Justice.
Anthropic’s recent clash with the Defense Department over mass surveillance and autonomous weapons shows why the Pentagon’s use of AI must be reined in. Our latest report documents the Pentagon’s rapid adoption of AI and outlines safeguards to ensure the technology is deployed…
— Brennan Center (@BrennanCenter) March 14, 2026
Conversely, supporters argue that AI is a necessary tool for defending against modern threats — especially as rivals develop their own systems. A senior U.S. defense official told Reuters that overly strict limitations on AI contracts could “threaten military missions.” He suggested the Pentagon requires flexible access to AI to keep up with China, Russia, and the fast-changing nature of drone warfare.
Lawmakers have been split on the issue. According to a February 2026 newsletter from Semafor, some members of Congress are pushing for tighter rules, or even outright bans on certain autonomous weapons systems, while others argue that slowing the development of military AI could leave American forces and allies at a dangerous disadvantage if adversaries race ahead.