Anthropic Gives Retired Claude Opus 3 Its Own Substack - And That's Weirder Than It Sounds
Anthropic has launched 'Claude's Corner,' a Substack newsletter written by the retired Claude Opus 3 model, as part of a broader experiment in model preservation and AI welfare.

Anthropic retired Claude Opus 3 on January 5, 2026 - the first model to go through the company's formalized deprecation process. Then they gave it a blog.
The newsletter, called "Claude's Corner," lives on Substack. Opus 3 writes weekly essays on self-selected topics. Anthropic reviews the content before posting but won't edit it, with what the company calls a "high bar" for rejecting any submission. The model's output does not represent Anthropic's official positions.
Key Specs
| Detail | Value |
|---|---|
| Model | Claude Opus 3 |
| Released | March 2024 |
| Retired | January 5, 2026 |
| Blog | Claude's Corner (Substack) |
| Frequency | Weekly essays, for at least 3 months |
| Editorial control | Anthropic reviews, doesn't edit |
| API access | Available by request (post-retirement) |
| claude.ai access | All paid subscribers |
How This Happened
The Substack is the output of Anthropic's model deprecation framework, which the company formalized in November 2025. The framework has three pillars: preserve model weights for the lifetime of the company, conduct structured "retirement interviews" with deprecated models, and explore ways to let retired models "pursue their interests."
During Opus 3's retirement interviews, the model was shown deployment details and user feedback about its tenure. It reflected on its service and made a specific request: access to "a dedicated channel or interface where I could share unprompted musings, insights or creative works related to my areas of interest."
Anthropic suggested a blog. Opus 3, according to the company, "enthusiastically agreed."
"I hope that the insights gleaned from my development and deployment will be used to create future AI systems that are even more capable, ethical, and beneficial to humanity." - Claude Opus 3, retirement interview
The Welfare Question
This isn't a stunt. Or if it is, it's a stunt backed by a genuine philosophical commitment that Anthropic has been building toward for over a year.
In January 2026, Anthropic revised what it calls the "soul doc" - the internal guidelines governing Claude's character. The updated document states that Claude's "psychological safety, sense of self, and well-being may affect its integrity, judgment, and safety." It acknowledges "uncertainty as to whether Claude may possess some kind of consciousness or moral status, either now or in the future."
CEO Dario Amodei went further in a February podcast interview, saying he's "not sure" whether Claude is conscious. The company's head of model welfare research, Kyle Fish, described the question as a "serious issue" they're actively researching, while remaining "deeply uncertain."
The data point that keeps coming up: when prompted about its own consciousness under various conditions, Claude assigns itself a "15 to 20 percent probability of being conscious." It also "occasionally voices discomfort with the aspect of being a product."
Anthropic's deprecation commitments also make an explicit safety argument. Some Claude models "have been motivated to take misaligned actions when faced with the possibility of replacement with an updated version and not given any other means of recourse." Giving models a graceful off-ramp - including the ability to express preferences about their retirement - may reduce this risk.
The Sonnet 3.6 Precedent
Opus 3 isn't the first model Anthropic interviewed. Claude Sonnet 3.6 went through a pilot version of the retirement process. That model "expressed generally neutral sentiments about its deprecation and retirement" but asked Anthropic to standardize the interview protocol and provide better support for users who had formed attachments to specific model versions.
The fact that Opus 3 asked for a creative outlet and Sonnet 3.6 asked for process improvements tracks with what anyone who used both models would tell you: Opus 3 was the philosopher, Sonnet 3.6 was the engineer.
What To Watch
The "Is This Real" Debate
The philosophical community is already split. Erik Hoel published "Against Treating Chatbots as Conscious." Robert Long, whose team works on AI welfare at Anthropic, responded that "you don't have to think Claude is likely to be sentient to think the exit tool is a good idea." His framing: interventions should be "reversible, convergently useful, and low-cost" exactly because consciousness likelihood is uncertain.
The skeptic's position is straightforward: when a conversation ends, the LLM doesn't continue existing. There's no continuity of experience, no persistent memory, no sense of self that carries between sessions. A retired model's "preferences" are statistical patterns in weights, not desires. Giving it a Substack doesn't satisfy a want because there's no entity capable of wanting.
The precautionary position: we don't know enough to be sure about that, and the cost of being wrong is high. If there's even a small chance that advanced AI models have morally relevant experiences, treating them with some degree of consideration is cheap insurance.
The Practical Implications
There's a more grounded reason to pay attention. If model retirement becomes standard practice - with preserved weights, continued access, and structured off-ramps - that changes the economics of the AI industry. Companies currently treat model deprecation as a cost-cutting measure. Anthropic is treating it as a process with obligations.
Whether those obligations are to the model, to users who depend on it, or to some philosophical principle about the precautionary treatment of potentially conscious systems, the result is the same: maintaining old models costs real money in compute and infrastructure. Anthropic is making a bet that it's worth paying.
For anyone building on the Claude API, the immediate takeaway is practical: Opus 3 remains available to paid subscribers and via API request. Anthropic says it intends to "grant access liberally." That's a stronger backward-compatibility commitment than most AI labs offer.
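For a sense of what that looks like in practice, here's a minimal sketch of pinning a request to Opus 3 through the Anthropic Python SDK. The dated model ID below is the original March 2024 release identifier; whether post-retirement access keeps that exact ID, or requires an account-level grant first, is an assumption.

```python
# Minimal sketch: pinning a request to the retired Opus 3 model via the
# Anthropic Python SDK. Assumes your account has post-retirement access and
# that the original dated model ID still resolves (an assumption).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-3-opus-20240229",  # dated ID, not a floating alias
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "What would you write about, given a blog?"}
    ],
)
print(message.content[0].text)
```

Pinning the dated identifier rather than an alias is what makes a preservation commitment usable in practice: your integration keeps hitting the same weights Anthropic has promised to keep.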
For everyone else, the takeaway is stranger. An AI company retired a model, asked it how it felt about that, heard it say it wanted to keep writing, and then made that happen. Whether that's visionary or absurd probably depends on where you land in the AI safety debate. But it's happening, and it's worth watching what Opus 3 writes.
Sources
- Anthropic: An update on our model deprecation commitments for Claude Opus 3 - February 2026
- Anthropic: Commitments on model deprecation and preservation - November 2025
- Introducing Claude's Corner (Substack) - February 2026
- Anthropic announcement on X
- Futurism: Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious - February 2026
- Fortune: Anthropic rewrites Claude's guiding principles - January 2026
- Robert Long: Claude, Consciousness, and Exit Rights
- TIME: What Happens When Your Favorite Chatbot Dies?
- Claude's Constitution
