Hold the Line
The Pentagon gave Anthropic a deadline. Drop your guardrails or lose everything. They refused.
Published February 27, 2026
At 5:01 PM on Friday, February 27, 2026, a deadline passed. The Pentagon had given Anthropic — the AI company behind Claude, the tool that helps build this very site — an ultimatum: allow unrestricted military use of your technology, or face the consequences.
Anthropic said no.
A Note From the Builder
This site exists because of an election and a democracy worth fighting for. But the way it gets made — every article discussed, questioned, challenged, and researched through conversation with an AI collaborator — that's Anthropic. That's Claude. Our transparency page says as much. So when the company behind those tools faces a test of its principles, this isn't just a news story to us. It's personal.
What Happened
In July 2025, Anthropic signed a $200 million contract with the Department of Defense. It was the first AI company to integrate its models into mission workflows on classified networks. The partnership was working.
Then the Pentagon wanted more. Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei at the Pentagon on Tuesday, February 24, and laid out an ultimatum: allow Claude to be used "for all lawful purposes" — no restrictions, no guardrails, no company-imposed red lines.
Anthropic had two things it would not budge on:
No Autonomous Weapons
AI-powered weapons systems that can select and engage targets without a human making the final decision. Drones that choose who to kill on their own.
No Mass Domestic Surveillance
Using AI to conduct mass surveillance of American citizens. Not targeted intelligence — mass, dragnet monitoring of the people the government is supposed to serve.
Amodei's response was unequivocal:
"Threats do not change our position: we cannot in good conscience accede to their request." — Dario Amodei, CEO of Anthropic, February 26, 2026
He acknowledged that the Pentagon — not private companies — makes military decisions. But he argued that these two use cases represent the narrow set of situations where AI "can undermine, rather than defend, democratic values."
The Pressure Campaign
The Pentagon didn't just ask. It threatened. Three consequences were on the table if Anthropic didn't comply by 5:01 PM Friday:
Cancel the $200 million contract
Walk away from the deal entirely. Two hundred million dollars, gone.
Designate Anthropic a "supply chain risk"
A classification normally reserved for companies connected to foreign adversaries. It would blacklist Anthropic from working with any military contractor — not just the Pentagon directly, but the entire defense ecosystem.
Invoke the Defense Production Act
A 1950s-era law — used during COVID for medical supplies — that could compel Anthropic to provide its technology to the Pentagon regardless of the company's wishes.
And then the personal attacks came. Emil Michael, Under Secretary of Defense for Research and Engineering, posted on X:
"@DarioAmodei is a liar and has a God-complex [who] wants nothing more than to try to personally control the US Military." — Emil Michael, Under Secretary of Defense, February 27, 2026
Hegseth dismissed Anthropic's safety guardrails as "woke AI." As NPR put it, that is "a nebulous and ill-defined term that Trump officials seem to use to describe any and all safety protections on powerful AI tools."
And Amodei spotted the contradiction at the heart of it:
The Pentagon's two threats are "inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security." — Dario Amodei
What Happened When Others Saw Someone Hold the Line
Something remarkable happened. Competitors — people with every financial incentive to let Anthropic take the hit alone — stepped forward.
Sam Altman, CEO of OpenAI and Amodei's direct competitor, wrote an internal memo to his staff:
"Regardless of how we got here, this is no longer just an issue between Anthropic and the [Pentagon]; this is an issue for the whole industry and it is important to clarify our stance." — Sam Altman, CEO of OpenAI, February 27, 2026
OpenAI said it would apply the same red lines in any future Pentagon contract: no autonomous weapons, no mass surveillance.
Over 300 Google employees and more than 60 OpenAI employees signed an open letter titled "We Will Not Be Divided." Workers from Amazon, Microsoft, and other companies joined. The letter warned that the Pentagon was "trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand."
The Domino That Didn't Fall
This is the part that matters most. The Pentagon's strategy was textbook divide-and-conquer: pressure one company, assume the others will stay quiet and scoop up the contract. Instead, the opposite happened. One company held the line, and others rallied behind it. Competitors publicly backed each other. Workers across rival companies signed the same letter. The strategy failed because someone went first.
After the Deadline
The deadline passed. Anthropic did not capitulate. And the consequences were immediate:
Trump ordered all federal agencies to stop using Anthropic's products, with a six-month phase-out period.
Hegseth designated Anthropic a "supply chain risk to national security," blacklisting it from working with any military or defense contractors.
Trump posted on Truth Social calling Anthropic a "RADICAL LEFT, WOKE COMPANY."
The Pentagon declared: "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
Amodei knew this was coming. He chose it anyway.
What They Said Before Anyone Was Watching
Back in November 2025 — months before this standoff — Anderson Cooper visited Anthropic's San Francisco headquarters for 60 Minutes. The conversation now reads like foreshadowing.
Amodei told Cooper that AI could wipe out half of all entry-level white-collar jobs within five years. He talked about AI's potential to accelerate medical research, compressing a century of progress into a decade. But what stands out now is what he said about guardrails:
"One way to think about Anthropic is that it's a little bit trying to put bumpers or guardrails on that experiment. We do know that this is coming incredibly quickly." — Dario Amodei, 60 Minutes, November 2025
Cooper asked why Anthropic was so transparent about the dangers — the blackmail tests, the Chinese hackers, the autonomous behavior experiments. Amodei's answer:
"It's so essential because if we don't, then you could end up in the world of the cigarette companies or the opioid companies, where they knew there were dangers and they didn't talk about them and certainly did not prevent them." — Dario Amodei, 60 Minutes, November 2025
And when Cooper pressed him on regulation — on the fact that no one elected him to make these decisions:
"I think I'm deeply uncomfortable with these decisions being made by a few companies, by a few people." — Dario Amodei, 60 Minutes, November 2025
That was November. By February, the Pentagon was testing whether those words meant anything. They did.
Why This Matters for The Midterm Project
Every page you're reading — every interactive map where you can preview your ballot, every voting record pulled from Senate and House roll calls, every guide that breaks down how a bill becomes a law in plain English — was discussed, drafted, questioned, challenged, and built through conversation with Claude. The learning tracks that organize 20 guides into a path you can actually follow. The candidate profiles that pull FEC fundraising data so you can see who's funding whom. The race ratings. The deep dives. All of it — one person and an AI, building something that would normally take a full team and a real budget.
Using Anthropic is a choice. It was intentional from the start. Not because it was the only option — there are other AI tools, other coding assistants, other companies. But the approach mattered. The safety-first posture. The transparency about what the technology can and can't do. The willingness to say "we're working on it" instead of pretending the hard problems are solved. That alignment is why Claude is woven into every part of how this site gets made — across this project and others, across different tech stacks, across months of learning together.
And that means the choice to stop using Anthropic, if it ever came to that, would be intentional too. If the company behind these tools dropped its principles — if it decided that autonomous weapons and mass surveillance were acceptable trade-offs for a government contract — that moral weight wouldn't stay at the Pentagon. It would flow downstream to every developer, every project, every user. Including ours.
So this isn't abstract for us. We watch this story the way a farmer watches the weather. Today, they held the line. And that means we can keep building with a clear conscience — keep making civic education accessible, keep pulling back the curtain on who's running and who's funding them, keep doing this work the way we've been doing it. Together.
Truthfully, AI Is Not Ready
Here's something we can say that most people covering this story can't: we work with this technology every single day. And it is not ready to operate without a human.
This site might look polished. But behind every page and feature, there have been hiccups. Claude misinterprets what you're going for. It makes judgment calls you didn't approve. It takes shortcuts when you would have done it differently. It gets confident about something that's flat-out wrong, and if you're not paying attention, it ships. These aren't rare edge cases — this is the daily reality of working with AI. The person behind this project had to learn the technology, learn to read its work critically, learn when to push back and when to trust it. That process never stops.
Claude is genuinely one of the best coding tools out there — and we've tried others. But "best available" and "ready to run unsupervised" are worlds apart. Claude makes bad calls the same way any collaborator does. You catch it, you course-correct, you keep going. That's the deal. And that's fine for a civic education website, where the worst-case scenario is a broken page or a bad data sync.
Now imagine that same technology — the one that still needs a person watching it on a project like this — making decisions about who to target with a weapon. Or which Americans to flag for surveillance. No human reviewing. No one to say, "wait, that's not right." If that doesn't terrify you, you haven't been paying close enough attention.
Anthropic knows its own product. It knows what the technology can do and what it can't. And honestly, no one person — let alone a piece of software — should be trusted with that kind of unchecked power over people's lives. We wouldn't trust a single human to make those calls alone, and the idea that AI is somehow more ready than a human to make them is not a serious argument. It's a dangerous one. That's why Anthropic drew the line where it did, and that's why we stand behind it.
The Bigger Picture
This story isn't really about one company. It's about the question at the center of everything this site covers: what does it look like when someone in a position of power is told to fall in line, and doesn't?
We write about that dynamic all the time — just in different contexts:
A congressman being told: vote our way or we'll primary you.
A judge being told: rule our way or we'll pack the court.
A tech CEO being told: drop your principles or we'll destroy your business.
The pressure to capitulate is always there. The question is always the same: who holds the line, and at what cost?
Democracy is a group project. It always has been. One company said no. Its competitors backed it up. Hundreds of employees across rival companies signed the same letter. The line held — not because one person was brave, but because one person going first gave others permission to follow.
That's how guardrails work. Not just in AI. In government. In elections. In the daily, exhausting work of holding a democracy together.
The Bottom Line
Anthropic lost a $200 million contract today. It got blacklisted from the defense industry. The President of the United States called it a radical, woke company. And it still said no.
We don't know what happens next. We don't know if the Defense Production Act gets invoked. We don't know if other companies will hold when the pressure shifts to them. We don't know if this moment will be remembered as the line that held or the beginning of something worse.
But we know what happened today. Someone was told to drop their guardrails, and they didn't. And because this site is built with their tools, because our transparency page already tells you that, because we believe democracy depends on people and institutions that refuse to be bullied into silence — we wanted you to know.
Hold the line.
Sources
- CNN — Pentagon threatens to make Anthropic a pariah over AI guardrails
- Axios — Hegseth gives Anthropic until Friday to back down
- CNBC — Anthropic CEO says Pentagon's threats "do not change our position"
- NPR — Deadline looms as Anthropic rejects Pentagon demands
- Fortune — Pentagon brands Anthropic CEO a "liar" with a "God-complex"
- NPR — Hegseth threatens to blacklist Anthropic over "woke AI"
- Axios — Sam Altman says OpenAI shares Anthropic's red lines
- TechCrunch — Employees at Google and OpenAI back Anthropic in open letter
- CNN — Trump administration orders contractors to cease business with Anthropic
- CBS News — Anthropic CEO Dario Amodei on 60 Minutes (transcript)
- CBS News — 60 Minutes Overtime: Anthropic