The AI Safety Company Just Entered the Lobbying Game
Anthropic spent years positioning itself as the responsible AI lab — the company that would slow things down, do things right, put safety before profits. On Friday, April 3, 2026, the company filed paperwork to create a political action committee.
It’s called AnthroPAC. And it signals that Anthropic has decided Washington isn’t something you can fix from the outside.
What AnthroPAC Actually Is
The filing went to the Federal Election Commission on Friday afternoon. Anthropic’s treasurer, Allison Rossi, signed it from the company’s San Francisco headquarters at 548 Market Street. Jared Powell serves as assistant treasurer.
The structure is straightforward — almost boring, if you’re not paying attention:
- Employee-funded: Only Anthropic employees can contribute. No corporate money. No executive mega-donations. Voluntary contributions capped at $5,000 per person, per year.
- Bipartisan board: A board of directors will decide which House and Senate candidates receive funding. Both parties are eligible. The filter is AI policy alignment, not party loyalty.
- Direct contributions: Unlike a Super PAC, AnthroPAC can write checks directly to candidate campaigns. The trade-off is that it can’t accept unlimited donations, and every contribution is disclosed through FEC filings.
Google has one. Microsoft has one. Amazon, Meta — they’ve all run employee-funded PACs for years. Anthropic is copying the playbook its competitors wrote. The difference is the timing.
$300 Million and Counting
AnthroPAC enters a midterm landscape that’s already drowning in AI money. The Washington Post reported in March that AI companies had contributed $185 million to 2026 races. That number has since surpassed $300 million — outpacing the crypto industry’s entire 2024 spending cycle.
No technology sector has moved this kind of political money this early in a midterm cycle.
The spending is split across competing visions for AI regulation. On one side: Leading the Future, a Super PAC that’s raised $125 million from OpenAI co-founder Greg Brockman, Andreessen Horowitz, and Silicon Valley investors pushing lighter regulation. That group spent $900,000 opposing a New York assemblyman who sponsored an AI safety bill. On the other: Public First Action, a Super PAC that received $20 million from Anthropic in February, backing candidates who favor stronger AI regulation.
AnthroPAC is the smaller piece of this puzzle. But it’s the more interesting one — because it puts Anthropic’s employees directly in the game.
The Pentagon Fight Makes This Personal
You can’t understand AnthroPAC without understanding the legal war Anthropic is fighting simultaneously.
In February, Defense Secretary Pete Hegseth labeled Anthropic a “supply chain risk” after President Trump directed all federal agencies to phase out their use of the company’s services over six months. The dispute centered on a $200 million Pentagon contract — Anthropic wanted control over how its AI models were used by the military. The Pentagon wanted to use Claude however it wished.
Anthropic filed two separate lawsuits — one in California, one in Washington, D.C. — challenging the designation. A Biden-appointed federal judge in San Francisco blocked the Pentagon from taking punitive action. The DOJ filed a notice of intent to appeal on Thursday.
The second lawsuit is still pending in the U.S. Court of Appeals for the D.C. Circuit.
AnthroPAC isn’t just about midterm influence. It’s about building the political infrastructure to ensure Anthropic never gets blindsided by Washington again.
What This Means for AI Policy
For the AI industry: Every major lab now has a political operation. OpenAI has Leading the Future. Anthropic has Public First and now AnthroPAC. Google, Microsoft, Amazon, Meta all have established PACs. The era of AI companies staying out of politics is officially over.
For Congress: AI policy is now the most well-funded tech lobbying effort in history. Lawmakers running in 2026 will receive more AI industry money than they’ve ever seen. The question is whether that money produces smart regulation or regulatory capture.
For voters: The two competing playbooks for governing AI — America’s draft blueprint versus Europe’s enforcement machine — are about to be stress-tested by hundreds of millions of dollars in campaign contributions. What Congress decides in 2027 will shape AI development for a decade.
The Safety Company’s Dilemma
There’s an irony here that’s hard to ignore. Anthropic was founded on the principle that AI development should be guided by safety, not profit. The company’s CEO, Dario Amodei, has written extensively about the risks of advanced AI. Claude is designed with guardrails that competitors sometimes skip.
And now Anthropic is playing the same lobbying game as every other tech company. Employee-funded PAC. Bipartisan donations. Candidate alignment on policy.
Is this a betrayal of the mission? Or is this what it looks like when a safety-first company realizes that idealism without political power is just a nice blog post?
The answer probably depends on who you ask. But one thing is clear: Anthropic has decided that shaping AI regulation requires showing up in Washington with more than principles.
The Bottom Line
AnthroPAC is a small committee. Employee-funded. Bipartisan. $5,000 caps. But it represents something larger: the formalization of AI industry political power. The companies building the most transformative technology in human history are now spending hundreds of millions to shape the rules governing it.
The question isn’t whether AI will be regulated. It’s who will write the regulations — and what they’ll cost.
Sources
- TechCrunch — Anthropic Ramps Up Its Political Activities with a New PAC
- Implicator — Anthropic Forms AnthroPAC as AI Midterm Spending Hits $300M
- Washington Examiner — Anthropic Files to Create PAC Amid Pentagon Legal Battle
- The Hill — Anthropic Launches New Corporate PAC
- FEC — AnthroPAC Statement of Organization
- Washington Post — Midterms Set to Be Inundated with AI Money
- Implicator — Anthropic Donates $20M to Public First Action
Related Reading
- Two Playbooks for Governing AI: America’s Draft Blueprint vs. Europe’s Enforcement Machine
- Anthropic’s ‘Mythos’ AI Model Accidentally Leaked — And It Could Be Claude’s Biggest Upgrade Yet
- Britain Bets £500 Million on Sovereign AI: What the Fund Actually Means
- The UK Just Told Microsoft Its AI Strategy Needs Regulatory Supervision
- The SpaceX IPO: Elon Musk Is About to Rewrite Wall Street’s Playbook
