On March 31, 2026, Anthropic accidentally published the full source code of Claude Code — their AI-powered coding assistant — to a public registry. Around 500,000 lines of code across roughly 1,900 files were exposed to the world. And this wasn't even their first leak that month.
What Happened
A file used internally for debugging was accidentally bundled into a routine update of Claude Code and pushed to the public npm registry. That file pointed to a zip archive hosted on Anthropic's own cloud storage — and that archive contained the complete source code for Claude Code.
Anyone who found the link could download the full codebase. The leak exposed how Claude Code works under the hood, how it connects to Anthropic's internal systems, and the architecture behind one of the most popular AI coding tools on the market.
Anthropic responded by stating that "no sensitive customer data or credentials were involved or exposed" and characterized the incident as "a release packaging issue caused by human error, not a security breach."
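The packaging failure described above can be caught before publish. As an added example (not Anthropic's code or tooling), this check audits the files staged for an npm release against an allowlist and flags debug artifacts and links to downloadable archives. The file names, patterns and URL are constructed assumptions.

```javascript
// Added example: pre-publish audit of files staged for an npm release.
// Patterns, file names and the URL below are constructed for illustration.

// Assumed name patterns for artifacts that should not ship; tune per project.
const DEBUG_PATH_PATTERNS = [/\.map$/i, /\.log$/i, /(^|\/)debug[^/]*$/i, /\.env(\..*)?$/i];

// Links to downloadable archives embedded in a shipped file.
const ARCHIVE_URL = /https?:\/\/[^\s"'<>)]+\.(?:zip|tgz|tar\.gz|tar)\b/gi;

// Allowlist entries are exact file paths, or directory prefixes ending in "/".
function isExpected(path, allowed) {
  return allowed.some((p) => path === p || (p.endsWith("/") && path.startsWith(p)));
}

// files: [{ path, content }] -> list of findings; an empty list means clean.
function auditPackageFiles(files, allowed) {
  const findings = [];
  for (const { path, content } of files) {
    if (!isExpected(path, allowed)) {
      findings.push({ path, rule: "not-in-allowlist" });
    }
    if (DEBUG_PATH_PATTERNS.some((re) => re.test(path))) {
      findings.push({ path, rule: "debug-artifact" });
    }
    for (const url of content.match(ARCHIVE_URL) || []) {
      findings.push({ path, rule: "archive-link", detail: url });
    }
  }
  return findings;
}

// Constructed sample: two intended files plus one stray debugging file.
const staged = [
  { path: "package.json", content: '{"name":"example-cli","version":"1.2.3"}' },
  { path: "dist/cli.js", content: 'console.log("hello");' },
  { path: "dist/debug-manifest.json",
    content: '{"bundle":"https://storage.example.invalid/builds/src.zip"}' },
];
const allowed = ["package.json", "README.md", "dist/cli.js"];

const findings = auditPackageFiles(staged, allowed);
for (const f of findings) {
  console.error(`${f.rule}: ${f.path}${f.detail ? " -> " + f.detail : ""}`);
}
// A release job would stop the publish when findings.length > 0.
```

In a release job the file list would come from `npm pack --dry-run --json`, with each file's content read from disk, and any finding would fail the publish step.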
This Wasn't the First Time
Here's where it gets worse. This was Anthropic's second major leak in less than a week.
On March 26, 2026 — just five days earlier — Fortune reported that an unsecured and publicly searchable data store had exposed close to 3,000 internal Anthropic files. Among them was a draft blog post that revealed the existence of a new model called Claude Mythos, which Anthropic described internally as representing a "step change" in capabilities.
The draft post also indicated that the company believed the new model posed unprecedented cybersecurity risks — a significant detail to have sitting in an unsecured database for anyone to find.
And going back even further, in February 2025, an early version of Claude Code had already exposed its original source code in a similar incident, revealing how the tool worked and how it connected to Anthropic's backend infrastructure.
"Three leaks in just over a year — two in the same week — from a company that positions itself as the safety-focused AI lab. That's a pattern, not a one-off mistake."
Why This Matters
Anthropic has built its entire brand around AI safety. They're the company that talks the most about responsible AI development, alignment research, and building guardrails into their models. They literally call their approach "Constitutional AI."
So when that same company repeatedly fails to secure its own internal data — source code, unreleased model details, internal communications — it raises serious questions:
- If they can't secure their own code, what about your data? — Claude is used by businesses, developers, and enterprises who send sensitive information through the platform every day.
- Operational security is part of safety. — You can't claim to be the "safety-first" AI company while leaving zip files of your source code on publicly accessible cloud storage.
- Trust is hard to rebuild. — Enterprise customers evaluating AI vendors are going to look at this track record. Compliance teams, legal departments, and CISOs pay attention to this stuff.
- Competitors now have a roadmap. — 500,000 lines of source code is a goldmine for anyone looking to understand how Anthropic builds its tools, what shortcuts they take, and how their systems are architected.
What Was Actually Exposed
Based on reporting from VentureBeat, Fortune, Axios, CNBC, and The Register, here's what we know was in the leaked codebase:
- The full Claude Code application — approximately 1,900 files and 500,000 lines of code
- Internal debugging tools and configuration
- Details on how Claude Code connects to Anthropic's API and internal systems
- Architecture decisions and implementation details
Anthropic insists that no customer data, credentials, or API keys were part of the leak. But the source code itself is valuable intellectual property, and its exposure gives bad actors insight into potential attack vectors.
The Bigger Picture
This isn't just an Anthropic problem. As AI companies move faster to ship products and compete for market share, operational security often takes a back seat. We've seen similar issues across the industry — leaked training data, exposed API endpoints, misconfigured databases.
The difference is that Anthropic specifically markets itself as the responsible alternative. When your entire value proposition is "we're the safe ones," you have to actually be the safe ones. Every leak erodes that positioning.
For developers and businesses using Claude, this is a reminder to think carefully about what data you're sending through any AI service. Treat AI APIs the same way you'd treat any third-party service — assume breaches are possible and plan accordingly.
Timeline
- February 2025 — Early version of Claude Code accidentally exposes its source code
- March 26, 2026 — Unsecured database exposes ~3,000 internal files including details about Claude Mythos
- March 31, 2026 — Full Claude Code source code (~500K lines) leaked via npm packaging error
We'll update this article if Anthropic releases any further statements or if additional details emerge about the scope of what was exposed.
Sources: VentureBeat, Fortune, Axios, CNBC, The Register