Anthropic Tried to Kill 8,100 GitHub Repos. Then This Happened.

The Leak That Changed Everything

At 1:23 AM on Tuesday, March 31, 2026, a software engineer named Chaofan Shou posted a link on X. Inside it: a zip file containing 512,000 lines of source code for Claude Code — Anthropic’s category-leading AI coding agent.

The code had been accidentally shipped through a standard npm release. A source map file, meant only for internal debugging, was included in the public package. Within minutes, the internet had the blueprints to Anthropic’s most important product.
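The exact packaging slip hasn't been detailed publicly, but the failure mode is well known: build tools emit `.map` files next to bundled JavaScript, and unless a package's `files` allowlist or `.npmignore` excludes them, `npm publish` ships them to the world. A minimal prepublish guard, sketched here as a generic directory scan (the `dist` path is an assumption, not Anthropic's actual layout), can catch this class of leak:

```python
from pathlib import Path

def find_source_maps(package_dir):
    """List .map files that would ship inside a package directory."""
    root = Path(package_dir)
    if not root.is_dir():
        return []
    return sorted(root.rglob("*.map"))

if __name__ == "__main__":
    # "dist" is a placeholder for whatever directory `npm pack` would bundle.
    leaks = find_source_maps("dist")
    if leaks:
        raise SystemExit(f"Refusing to publish: source maps found: {leaks}")
```

Running `npm pack --dry-run` and reviewing the file list before publishing accomplishes the same audit by hand.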

By the time Anthropic’s lawyers woke up, it was already too late.

What the Code Revealed

The leak peeled back the curtain on how Anthropic actually builds AI agents — not the polished marketing version, but the raw engineering underneath. Developers found:

  • Unreleased models: References to Opus 4.7 and Sonnet 4.8, codenamed “Capybara” and “Tengu” — models Anthropic hasn’t announced publicly.
  • KAIROS: A 24/7 autonomous agent that creates daily logs and handles tasks before you ask. Boris Cherny, Claude Code’s creator, later confirmed on X that they’re “always experimenting” and asked: “Should we ship it?”
  • Coding pets: A Tamagotchi-like creature that “sits beside your input box and reacts to your coding.”
  • The “fucks” chart: An internal dashboard tracking developer frustration metrics — so named by Cherny himself.
  • Spinner verbs: A list of loading animations including “scurrying,” “rummaging,” and other oddly charming status indicators.
  • Agent memory architecture: How Claude Code maintains context, manages tool calls, and orchestrates multi-step coding workflows — the engineering secrets competitors have spent years trying to reverse-engineer.
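The leaked harness's internals aren't reproduced here, but the general shape such agent architectures follow is public knowledge: a loop that sends conversation history to a model, executes whatever tool call the model requests, appends the result as new context, and repeats until the model produces a final answer. A minimal sketch of that pattern (the tool registry and message format are illustrative assumptions, not Claude Code's actual schema):

```python
import json

# Hypothetical tool registry -- a real harness exposes shell, file edits, search, etc.
TOOLS = {
    "add": lambda a, b: a + b,
}

def run_agent(model_step, task, max_steps=8):
    """Core agent loop: ask the model, run any tool it requests,
    feed the result back as context, repeat until it answers."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = model_step(history)  # model returns a tool call or a final answer
        if reply.get("tool"):
            result = TOOLS[reply["tool"]](**reply["args"])
            history.append({"role": "tool", "content": json.dumps(result)})
        else:
            return reply["content"]
    raise RuntimeError("agent exceeded step budget")

# A scripted stand-in for a real model, just to exercise the loop:
def scripted_model(history):
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"content": f"The sum is {history[-1]['content']}"}
```

Calling `run_agent(scripted_model, "add 2 and 3")` walks one tool round-trip and returns the final answer. The hard engineering lives in what this sketch omits: context compaction, parallel tool calls, and recovery from failed steps.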

As Business Insider reported, the leak revealed “an unusually clear look at where AI coding agents are going.” Competitors took notes.

The DMCA That Nuked 8,100 Repos

Anthropic’s response was fast — and catastrophically imprecise.

The company filed a DMCA takedown notice with GitHub, asking for the removal of repositories containing the leaked code. But those repositories sat inside the fork network of Anthropic's own public Claude Code repository. When GitHub processed the notice, the takedown cascaded across the entire network.

8,100 repositories were disabled.

Not just copies of the leak. Legitimate forks of Anthropic’s own public code. Developers who had forked Claude Code legally — for study, for contribution, for personal use — woke up to takedown notices. One developer’s fork contained zero leaked code. It got nuked anyway.

The backlash was immediate. Blocked repositories. Disrupted workflows. Angry posts flooding social media aimed at the company that markets itself as the responsible AI lab. The cleanup attempt made everything worse.

Boris Cherny, Anthropic’s head of Claude Code, went on X to acknowledge the mistake: “This was not intentional. Should be better now.” Thariq Shihipar, another Anthropic employee, called it “a communication mistake.”

Anthropic retracted the bulk of the notices, narrowing the takedown to one repository and 96 forks containing the actual leaked code. GitHub restored access to the affected repositories.

But by then, the damage was done — and not in the way Anthropic expected.

4 AM: The Rewrite Begins

Sigrid Jin’s phone was blowing up at 4 AM.

The 25-year-old University of British Columbia student had consumed over 25 billion Claude Code tokens in the past year. He’d been profiled by the Wall Street Journal as one of the tool’s most prolific power users. When the leak happened, Jin didn’t panic. He saw opportunity.

By sunrise, Jin and Seoul-based collaborator Yeachan Heo had completed a clean-room-style Python rewrite of Claude Code's core agent harness architecture. Two humans. Ten AI agents. A MacBook Pro. A few hours.

They called it Claw Code.

The result wasn’t a copy. It was a reimplementation — reading the architectural patterns of the leaked harness and rebuilding them from scratch in Python, without copying Anthropic’s proprietary source. Different language. Different codebase. Same functionality.

Jin pushed it to GitHub. The internet responded.

The Fastest Repo in GitHub History

What happened next defied every precedent in open-source history.

Claw Code hit 50,000 stars in two hours. By the end of the first day, it had 100,000 stars. As of this writing, the numbers are staggering:

  • 110,000+ GitHub stars
  • 100,000+ forks
  • 5,000 Discord members joined in 24 hours
  • 512,000 lines of the original TypeScript source are now in the wild forever

No repository in GitHub’s history has climbed this fast. Not React. Not Vue. Not Linux. The combination of high-profile leak, aggressive takedown, and community outrage created a feedback loop that no marketing budget could replicate.

xAI, Anthropic’s competitor, poured gas on the fire. Jin posted that xAI sent him Grok credits. “Very excited to see what you continue to build!” responded Umesh Khanna from xAI.

“The best part,” Jin told Business Insider, “is that it results in greater democratization of coding tools. Non-technical people are using these agents to build real things. We’re talking about cardiologists making patient care apps and lawyers automating permit approvals. It has turned into a massive sharing party.”

The Copyright Problem That Cuts Both Ways

Here’s where the story gets legally fascinating — and deeply ironic.

Anthropic invoked copyright law to protect its code. But Anthropic also faces active copyright lawsuits from authors, publishers, and Universal Music Group over allegations it trained Claude on copyrighted material without permission. In one such case, Anthropic agreed to a $1.5 billion settlement.

Now the company leans on the same legal framework it’s accused of violating.

And the foundation may be thinner than expected. Anthropic has publicly acknowledged that Claude Code is largely AI-generated: roughly 90%, by the company's own account. Under the DC Circuit's March 2025 ruling in Thaler v. Perlmutter, works generated solely by AI cannot receive copyright protection.

If courts determine that portions of Claude Code lack sufficient human authorship, Anthropic’s copyright claims over those sections collapse. As The Innovation Attorney put it: this is “a genuine copyright gap.”

You ship code written by your own AI. Then you invoke copyright to claw it back. The law may not cooperate.

The Gartner Take: “A Systemic Signal”

Gartner, the enterprise research firm, didn’t mince words. They called Anthropic’s March cluster of incidents “a systemic signal” ahead of a possible $380 billion IPO.

Consider the sequence:

  • Source code leak via packaging error
  • DMCA takedown nukes 8,100 repos — including legitimate ones
  • Head of product goes on social media apologizing
  • Clean-room rewrite becomes fastest-growing repo in GitHub history
  • Competitors actively recruiting the rewrite’s creators
  • Legal framework protecting the code may not hold up in court

That’s not a bad week. That’s a shareholder lawsuit waiting to happen.

The Deeper Story: You Can’t Put the Genie Back

Forget the drama for a moment. The real lesson here is about the nature of software in the age of AI.

Claude Code’s architecture — the agent harness, the memory system, the tool orchestration, the multi-step planning logic — took Anthropic years to build. Millions of dollars in engineering. Decades of combined experience.

And in less than 24 hours, a 25-year-old student rebuilt the core patterns in a different language. Not because he’s a genius (though he might be). Because AI-assisted development has reached the point where understanding an architecture is faster than building it.

Read the leak. Understand the patterns. Reimplement from scratch. Different codebase. Same ideas. Untouchable by DMCA.

This is the new reality. The moat isn’t the code. The moat is the team, the training data, the infrastructure, and the brand. Code itself has become a commodity — and the faster AI can help developers understand and reimplement architectures, the less proprietary value any single codebase holds.

What Comes Next

For Anthropic: The immediate crisis is manageable — the DMCA was retracted, GitHub restored access, and the company has moved on. But the longer-term implications are harder to ignore. Their most important product’s architecture is now public knowledge. A community Rust port is already underway. The Python version has more stars than Anthropic’s own repository. The competitive advantage has shifted from “secret sauce” to “execution speed.”

For developers: Claw Code is open. It works with multiple AI models — not just Claude. You can use GPT, Llama, or any other model with it. The monopoly on AI coding agents is cracking.

For the industry: This is the open-source moment for AI coding tools. Just as Linux challenged proprietary operating systems and Android challenged iOS, Claw Code challenges the assumption that the best AI tools must be closed-source and proprietary.

Anthropic tried to kill 8,100 GitHub repos. The internet responded by creating 100,000 forks of something better.

The Bottom Line

You cannot make this up. A packaging error. A DMCA that backfired. A 4 AM rewrite. A GitHub record shattered. An iron-clad legal theory that might not be iron-clad at all.

Anthropic’s biggest embarrassment just became their most dangerous competitor. And the 512,000 lines of code that leaked through a source map file? They’re in the wild forever. No takedown notice can change that.

The genie is out of the bottle. And it’s rewriting itself in Python, Rust, and whatever comes next.
