The week ending February 28, 2026 produced one of the strangest stories in tech history. The Trump administration blacklisted one of America’s most advanced AI companies, called it a national security threat, and ordered every federal agency to stop using its product. And within 48 hours, that same product hit #1 on the App Store.
That’s not a typo. Anthropic, the company behind Claude, was simultaneously banned by the government and embraced by the public in a way that almost no one predicted. The reaction online and in the press tells you everything about how people actually feel about AI, government overreach, and who they trust with their data.
Here’s how it all played out.
What Actually Happened (The Short Version)
Anthropic had two non-negotiable rules baked into its contract with the Pentagon: Claude would not be used for mass domestic surveillance of Americans, and it would not power fully autonomous weapons that fire without human involvement. These weren’t new conditions. They were part of the original deal when Claude became the first frontier AI model cleared for classified military use.
The Pentagon wanted those rules removed. Anthropic refused. Defense Secretary Pete Hegseth designated the company a “supply chain risk to national security” — a label previously reserved for Chinese firms like Huawei. Trump followed with a Truth Social post ordering every federal agency to immediately cease using Anthropic’s technology. CEO Dario Amodei went on CBS News and called the action “retaliatory and punitive,” said it was “unprecedented,” and filed a legal challenge in federal court.
Then the internet got involved.
The Social Media Response: An Underdog Wins the Weekend
The public reaction broke in Anthropic’s favor almost immediately. Claude’s app climbed from #131 on the App Store in late January to #1 by Saturday, February 28, overtaking ChatGPT, Google Gemini, and every other app across all categories. Anthropic reported daily signups broke records every day during the final week of February, with free users up over 60% and paid subscribers more than doubling in 2026.
Two separate boycott movements collided in the same news cycle and amplified each other. The #QuitGPT campaign had already been building since early February, when FEC filings showed OpenAI president Greg Brockman donated $25 million to MAGA Inc. That movement reportedly drew 700,000 pledged cancellations. Then on Friday afternoon, when Sam Altman announced OpenAI had signed a Pentagon deal just hours after Anthropic was blacklisted, #CancelChatGPT erupted on X/Twitter.
The weekend feeds were full of screenshots: ChatGPT cancellation confirmations side by side with new Claude subscription receipts. Katy Perry posted a screenshot of Claude’s pricing page with a red heart and the word “done.” The post spread across platforms and became one of the weekend’s most-shared moments. Actor Mark Ruffalo’s earlier Instagram post supporting the QuitGPT campaign had already reached 40 million views and 2 million likes.
Anthropic helped its own cause. The company launched an “Import Memory” feature that lets users bring their saved ChatGPT preferences directly into Claude, cutting the friction of switching. It was smart timing. The company turned a government attack into a growth engine.
What the Polls Actually Said
This wasn’t just loud voices on social media. Hard data backed up the sentiment.
An ITIF/Morning Consult survey of 1,976 U.S. adults published on February 26 found:
- 67% believe private tech companies have a responsibility to set limits on their products, even when the government disagrees. That number held at 65% among Republicans.
- 79% said a human should always make the final lethal force decision — identical across party lines.
- 70% agreed that AI monitoring of Americans without a warrant violates the Fourth Amendment.
- 50% viewed the government’s move to penalize Anthropic as overreach, versus 35% who called it necessary for national security.
The public, across political lines, sided with the company that said no.
How the AI Industry Reacted
The tech industry split. Elon Musk’s xAI had already signed an “all lawful purposes” deal with the Pentagon, and Musk posted on X calling Anthropic a company that “hates Western civilization.” OpenAI’s Sam Altman told employees at a Friday all-hands that his deal included the same two red lines Anthropic demanded — no surveillance, no autonomous weapons — while simultaneously announcing the contract. Critics called it an attempt to have it both ways.
The most striking response came from the rank and file. Over 430 employees from Google, OpenAI, Amazon, Microsoft, and other companies signed an open letter titled “We Will Not Be Divided,” published at notdivided.org. Roughly 300 Google employees and 65 OpenAI employees participated, writing: “The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused. They’re trying to divide each company with fear that the other will give in.”
Anthropic’s own employees held firm. Alignment researcher Trenton Bricken posted: “Time and time again I’ve seen us stand to our values in ways that are often invisible from the outside. This is a clear instance where it is visible.” No internal dissent, no leaks, no drama. For a company that prides itself on safety culture, the crisis became something of a proof point.
How the Press Covered It
The editorial response was nearly unanimous in one direction: the government overreached.
The Wall Street Journal editorial board — not a progressive publication — ran a piece calling it a “needless display of brute government punishment” and concluded: “The People’s Liberation Army is the winner of the Anthropic ban.” The Washington Post called Anthropic’s stance “a principled stand in the face of threats from the administration to commandeer its cutting-edge artificial intelligence technology.” The New York Times framed it as the most significant confrontation between Silicon Valley and Washington in a generation.
Tech journalist Kara Swisher called Hegseth’s threats “Putin-esque” on CNN. Platformer’s Casey Newton drew a pointed parallel to the Biden administration’s alleged social media jawboning — noting that conservatives who were outraged then were largely silent now.
Former Pentagon AI Center director Gregory Allen of CSIS delivered what may have been the most damning assessment, telling Bloomberg Radio: “The user base within the Department of Defense loves Anthropic, loves Claude, and says that their restrictions on usage have never been triggered.” In other words, the safeguards the Pentagon demanded be removed had never once blocked a real military operation.
Then came the twist that no one could write as fiction. The Wall Street Journal reported that U.S. Central Command used Claude during airstrikes on Iran on Saturday, February 28 — hours after Trump ordered every federal agency to stop using it. Claude was reportedly used for intelligence assessments, target identification, and battle planning simulations. The military’s own commanders proved, inadvertently, that the ban was operationally impossible.
International coverage was extensive. The BBC reported directly from Pentagon sources. European outlets compared it to EU AI Act regulatory approaches. The UK’s Bloomsbury Intelligence and Security Institute warned the standoff “is almost certain to establish lasting precedent for AI company-government relationships globally.”
Frequently Asked Questions
Why did Claude’s app downloads spike after the ban?
Consumers rallied behind Anthropic as the company that refused to let its AI be used for mass surveillance or autonomous weapons. The ban created a “boycott in reverse” effect — people switched to Claude specifically because the government tried to punish the company for holding those positions.
Did OpenAI get the same deal Anthropic refused?
Altman claimed OpenAI’s Pentagon agreement includes the same two red lines Anthropic demanded. Critics argue the structural difference matters: Anthropic wanted contractual protections beyond current law, while OpenAI’s deal relies on existing statutes that don’t yet cover AI-enabled data aggregation from public sources.
What is a “supply chain risk” designation and why does it matter?
It’s a legal label under 10 USC § 3252 that prevents companies doing business with the Pentagon from working with the designated company. It has historically been used against foreign adversaries like Huawei. This is the first time it has been publicly applied to an American company. Anthropic is challenging the designation in federal court.
How does this affect Anthropic’s business long-term?
The $200 million Pentagon contract is roughly 1.4% of Anthropic’s annual revenue, so the direct hit is manageable. The real threat is the supply chain designation, which could force Fortune 500 companies with Pentagon exposure to cut ties. Anthropic’s legal challenge aims to block that outcome.
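A quick back-of-envelope check shows why the direct financial hit is modest. This sketch only assumes the two figures reported above (the $200 million contract and the "roughly 1.4% of revenue" share); the implied annual revenue it derives is not a number stated anywhere in this article.

```python
# Sanity-check the reported figures: if $200M is ~1.4% of annual revenue,
# what annual revenue does that imply?
contract_value = 200_000_000   # Pentagon contract, USD (reported)
revenue_share = 0.014          # "roughly 1.4%" of annual revenue (reported)

implied_annual_revenue = contract_value / revenue_share
print(f"Implied annual revenue: ${implied_annual_revenue / 1e9:.1f}B")
```

At those numbers, losing the contract outright trims less than two cents of every revenue dollar, which is why the supply chain designation, not the contract itself, is the real business risk.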
What happens to the military systems that run on Claude?
The Pentagon gave itself a six-month phase-out period. Defense analysts say even that may be insufficient — Claude is embedded in classified systems through Palantir’s Maven Smart System, and replacing it requires air-gapped security engineering and authority-to-operate approvals that typically take 6 to 18 months minimum.
What This Means for AI Going Forward
This isn’t just a story about one company’s fight with one administration. It’s a preview of a question every business using AI tools will eventually face: who controls the guardrails on the technology you’ve built your operations around?
For marketing, for SEO, for content and automation workflows, the AI tools your agency depends on are created by companies making exactly these kinds of choices. Understanding the values and stability of those companies matters more than it used to.
The public reaction this weekend made one thing clear: people don’t just want AI that’s capable. They want AI from companies they can trust. That’s a shift worth paying attention to, whether you’re building a company, managing a brand, or advising clients on their digital marketing strategy.
Want to understand how AI search is changing what your clients see and what gets cited? Explore our SEO services or request a free SEO audit to see where your content stands.