"In the long run, Congress should catch up with where this technology is going."

Anthropic CEO Dario Amodei repeated that sentiment nearly a dozen times in a 30-minute CBS News interview last week, hours after the Pentagon canceled a $200 million contract with the company.

The immediate dispute was simple: the Pentagon requires AI contractors – Anthropic, OpenAI, Google, and xAI – to allow their systems to be used for "all lawful purposes". Anthropic drew two red lines: no mass domestic surveillance, and no fully autonomous lethal weapons. Alas, an impasse. By weekend's end, the Pentagon had designated Anthropic a supply chain risk – an unprecedented move – and signed a new deal with OpenAI.

Commentators have unpacked the clash, surfacing important questions: Who decides how powerful AI systems should be governed and deployed? When is government intervention into private industry justified on national security grounds? Should AI be entrusted with lethal decision-making in war?

Important as these questions are, they overlook a more basic one: Why hasn't Congress voted on these limits?

[Graphic: Lethal force – 8 in 10 Americans (81%) say a human being should always make the final decision before any use of lethal force (Dem 81%, Rep 81%). Surveillance – roughly 7 in 10 say AI surveillance without a warrant violates constitutional protections. Source: Morning Consult / ITIF, Feb. 2026 · n=1,976 U.S. adults · itif.org]

After all, Dr. Amodei's red lines are not controversial. Eight in ten Americans say a human should always make the final decision before the use of lethal force – a view held equally by Democrats (81%) and Republicans (81%). Seven in ten say AI surveillance without a warrant violates constitutional protections.

Bills to codify both red lines already exist in Congress. They simply haven't moved. So what's stopping them?

The reason is less philosophical than institutional: both issues are stuck in Senate committees where a small number of members can quietly block legislation.

AI & Surveillance

Daniel Woo was followed home from work by Immigration and Customs Enforcement agents. Emily Belz was startled when agents approached her car and shouted her name and home address. Federal immigration agents showed up at the home of Chongly Thao, a U.S. citizen, and detained him at gunpoint – without a warrant.

It has since been widely reported that ICE used facial recognition software and AI surveillance tools during its surge in Minnesota earlier this year. But here's the rub: None of it was illegal.

The Fourth Amendment requires warrants for government searches. But two loopholes have allowed agencies to sidestep that requirement: buying data from commercial brokers and collecting Americans' communications incidentally through foreign surveillance programs.

None of this is new. What's changed is the scale. The rapid advancement of large language models means ICE and other agencies can now scrape, bundle, and cross-reference data from commercial brokers to build detailed profiles of individuals and civil society groups at a speed and depth that would have been unimaginable even three years ago.

Two bills in the Senate aim to close both loopholes.

The SAFE Act, introduced by Sen. Dick Durbin (D-IL) and Sen. Mike Lee (R-UT), would require a warrant before intelligence agencies can access Americans' communications – while preserving their ability to surveil foreign targets. The surveillance authority it would reform, Section 702 of FISA, was reauthorized for two years in 2024 and is set to expire on April 20.

The Fourth Amendment Is Not For Sale Act (FANSFA) takes aim at the data broker loophole. Originally introduced by Sen. Ron Wyden (D-OR) in 2021, the bill would require federal agencies to obtain a warrant before buying Americans' location data, browsing histories, or other sensitive information from commercial data brokers. It passed the House 219-199 in 2024. One hundred civil society organizations have called for its passage. Its 20 Senate co-sponsors span the ideological spectrum – from Sen. Rand Paul (R-KY) to Sen. Bernie Sanders (I-VT).

Yet it has never received a vote.

Supporters of both bills make a basic argument: the Fourth Amendment should still apply in the digital age. Government agencies should not be able to bypass warrant requirements simply by buying data from private companies or sweeping up Americans' communications incidentally through foreign surveillance programs.

Opponents, mainly law enforcement agencies and the executive branch, counter that restricting access to commercially available data could slow investigations into serious crimes and limit national security flexibility.

How does this actually get done?

The most viable path runs through the Senate Judiciary Committee. A simple majority of the committee's 22 members needs to approve a bill before it heads to the full Senate floor. That means just 12 senators – most of whom have never taken a public position on the bill – stand between it and a floor vote.

To understand why the bill has stalled, I reviewed public statements, voting records, and past surveillance votes for every member of the Senate Judiciary Committee. The results are below:

[Interactive graphic: Senate Judiciary Committee · 119th Congress – Who Supports the Fourth Amendment Is Not For Sale Act? Tracks all 22 committee members (12 Republicans, 10 Democrats) as Support, Persuadable, or Oppose on legislation requiring court approval before the government can buy your data from data brokers. Path to committee majority: 12 of 22 votes needed. Sources: Senate voting records, co-sponsorship data & public statements · Research by Kliger's Corner]

The math is tight, but the political opportunity lies with the persuadable members.

Before reintroducing FANSFA, Ranking Member Dick Durbin should make two targeted changes that reframe the bill – less as restricting surveillance, more as extending the warrant standards Americans already expect for commercially purchased data.

Emergency carve-outs. First, allow warrantless access to commercial data in narrowly defined emergencies – an active kidnapping, an imminent terror threat. This neutralizes the most common law enforcement objection.

National security carve-out. Second, preserve the warrant requirement for domestic criminal investigations while allowing limited national security exceptions, paired with mandatory congressional reporting.

With these adjustments, Senator Durbin should reintroduce FANSFA while the committee simultaneously debates SAFE – keeping both bills in motion to signal that Congress is closing both surveillance loopholes at once. His office should work with the persuadable members directly, starting with Chairman Chuck Grassley.

There is a viable path forward for FANSFA. But it requires tightening the bill's language and prioritizing it alongside the SAFE Act reauthorization in April.

AI & Lethal Weapons

In 2023, Rep. Ted Lieu (D-CA) and Sen. Ed Markey (D-MA) introduced the Block Nuclear Launch by Autonomous Artificial Intelligence Act – legislation that would prevent AI from autonomously authorizing a nuclear strike and require a human to remain in the loop for any potentially lethal decision.

At the time, the proposal felt somewhat precautionary. Generative AI's role in warfare was being discussed mostly as a future risk – something policymakers should guard against before the technology matured.

That future has arrived. AI systems are already being integrated into military planning and battlefield decision-making. The concern that lethal decisions could drift toward automation no longer feels hypothetical.

Yet the legislation is stalled in the Senate.

Like FANSFA and SAFE, this bill is stuck at the committee stage. This time, the gatekeeper is the Senate Armed Services Committee – 27 members, with 14 votes needed to advance.

Restricting AI's role in autonomous lethal weapons has overwhelming bipartisan support among the American public. Inside the Armed Services Committee, it's a different story. I conducted the same analysis of past statements and votes by Armed Services members, and the partisan divide is much sharper: Republicans on the committee are largely eager to accelerate AI military capabilities and resist statutory constraints they see as tying the military's hands, while Democrats are more willing to push for human oversight.

[Interactive graphic: Senate Armed Services Committee · 119th Congress – Who Supports Restricting AI in Lethal Force Decisions? Tracks all 27 committee members (14 Republicans, 13 Democrats & Independents) as Support restrictions, Persuadable, or Oppose restrictions on legislation requiring human oversight before AI-enabled lethal military action. Path to committee majority: 14 of 27 votes needed. Sources: Senate voting records, co-sponsorship data & public statements · Research by Kliger's Corner]

The objection from defense hawks is simple: any statutory constraint on military AI is a constraint on American power. They're not entirely wrong – it is. The question is whether that tradeoff is worth it.

There is another factor at play. The Department of Defense already has a directive – DoD Directive 3000.09 – stating that autonomous weapon systems must be designed to allow "appropriate levels of human judgment" over the use of force.

That directive matters politically. Even though it carries far less weight than a law passed by Congress, its existence gives some senators pause. If the Pentagon already has internal guidelines, why legislate?

For supporters of the bill, the answer is simple: internal policy can change overnight. Statute cannot.

What can be done?

Senator Roger Wicker (R-MS), Chairman of the Armed Services Committee, is not going to support a standalone AI weapons bill. He is a defense hawk who has spent years pushing to accelerate the integration of AI and autonomous systems into the military, framing it as "peace through strength".

But there is a realistic path – and Wicker himself may have pointed to it.

Last week, Wicker was one of a small group of senators who intervened in the standoff between the Pentagon and Anthropic. In a letter co-signed by Jack Reed, Mitch McConnell, and Chris Coons, he expressed concern about the escalation, writing: "contract negotiations are not the ideal context in which to establish policy."

He's right. Now, let's hold his feet to the fire.

The most realistic path is a targeted amendment to the National Defense Authorization Act (NDAA) – the annual defense bill that sets DoD funding and policy and passes every December. Senator Wicker is the key player. Three framing moves would give the amendment a chance:

Narrow the scope to nuclear launch decisions. A provision banning AI from autonomously authorizing a nuclear strike is harder to vote against than broad AI regulation. Frame it as a safeguard against technical malfunction or cyber manipulation – not as tying the military's hands.

Lead with deterrence, not ethics. Security researchers have documented how autonomous systems can trigger false alarms and escalate faster than any human chain of command can respond. Keeping humans in the loop reduces the risk of accidental nuclear war – that argument lands with conservative defense leaders in a way humanitarian appeals don't.

Get the Pentagon on record. Endorsements from senior officials would change the political calculus overnight. When uniformed leaders say a safeguard strengthens deterrence rather than weakens it, skeptical senators listen.

Wicker himself said contract negotiations aren't the right place to make AI policy. The NDAA is. Someone needs to make him own that.

In short, the most realistic short-term path to regulating AI's use in war is not through sweeping regulation. It is a narrowly tailored safeguard inserted into the defense bill Congress already passes every year.

What Comes Next

Both of Dario Amodei's red lines have broad public support. Americans don't want AI-powered mass surveillance without warrants. They don't want machines making life-and-death decisions alone. Congress has drafted bills that reflect both of those instincts. But neither has generated sustained political urgency.

The surveillance fight has more near-term promise. The SAFE Act will be renegotiated in the coming weeks ahead of its April 20 expiration. That creates a natural opening to pair the debate with FANSFA, which has already passed the House and has real bipartisan support.

Conversely, I don't see Armed Services leadership codifying a human-in-the-loop requirement for AI any time soon. Too many members of the committee are war hawks who see their job primarily as ensuring the U.S. military is never constrained relative to its adversaries. The more realistic path is a targeted amendment to the NDAA in December, focused narrowly on nuclear launch decisions. Senator Wicker will be the key player there.

The bottom line is this: Lawmakers should be clear with the public about why these guardrails haven't been codified. If they believe national security flexibility outweighs constitutional clarity, they should say so plainly, and let voters respond.

What they shouldn't do is leave the question to contract negotiations between the Pentagon and a private AI company.

AI governance will not move on autopilot. It will move when the political cost of inaction finally exceeds the comfort of delay. Right now, most members of Congress have calculated that it hasn't. The question is what will change that math.