Anthropic Locks Down Claude Code: OAuth Tokens Banned in Third-Party Tools

Anthropic's legal and compliance documentation explicitly prohibits using Claude Code OAuth tokens in third-party tools - and the company is enforcing it with server-side blocks and account bans.

Anthropic does not want you using your Claude subscription outside its own tools. The company's legal and compliance documentation for Claude Code now states it plainly:

"Using OAuth tokens obtained through Claude Free, Pro, or Max accounts in any other product, tool, or service - including the Agent SDK - is not permitted and constitutes a violation of the Consumer Terms of Service."

This is not a new policy buried in boilerplate. Anthropic is actively enforcing it - with server-side blocks, automated account bans, and a very public crackdown that has fractured developer trust.

What Happened

The policy was surfaced by developer Rob Zolkos on X, who highlighted the explicit language in Anthropic's legal documentation. But the enforcement started weeks earlier.

On January 9, 2026, Anthropic deployed server-side checks that blocked all third-party tools from authenticating with Claude Pro and Max subscription OAuth tokens. Tools like OpenCode (56,000+ GitHub stars), Clawdbot, and various custom integrations stopped working overnight. The error message was blunt: "This credential is only authorized for use with Claude Code and cannot be used for other API requests."

Accounts that triggered abuse filters were automatically banned. Some users reported being banned within 20 minutes of starting a task on the $200/month Max plan. Anthropic later reversed erroneous bans, but the damage to developer confidence was already done.

How Third-Party Tools Worked (and Why Anthropic Blocked Them)

The technical mechanism was straightforward. Third-party tools like OpenCode spoofed the Claude Code client identity, sending HTTP headers that convinced Anthropic's servers the request came from the official CLI. This let users authenticate with their subscription OAuth tokens and route requests through alternative interfaces.

Anthropic's enforcement was equally straightforward: they deployed checks to validate the actual client identity and rejected anything that was not the genuine Claude Code binary.
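The dynamic can be sketched in a few lines. This is an illustrative assumption about the shape of such a check, not Anthropic's actual implementation; the header name and client string are hypothetical, while the error message is the one users actually saw.

```python
# Hypothetical sketch of a server-side client-identity check of the kind
# described above. Header names and the "claude-code/" prefix are invented
# for illustration; only the error string is taken from real reports.

CLAUDE_CODE_ERROR = (
    "This credential is only authorized for use with Claude Code "
    "and cannot be used for other API requests."
)

def authorize_request(headers: dict, token_scope: str) -> tuple[bool, str]:
    """Reject subscription OAuth traffic unless the client looks like the
    genuine Claude Code binary (illustrative logic only)."""
    if token_scope != "subscription":
        return True, ""  # API-key traffic is unaffected by this check
    # A spoofing client copies these headers; real enforcement reportedly
    # validates the client more deeply than a simple string match.
    client = headers.get("user-agent", "")
    if not client.startswith("claude-code/"):
        return False, CLAUDE_CODE_ERROR
    return True, ""
```

The point of the sketch is the asymmetry: spoofing only required copying headers, so any durable enforcement has to validate something a third-party harness cannot trivially reproduce.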

The Subscription Arbitrage Problem

The economic logic behind the crackdown is hard to argue with. Consider the pricing gap:

Channel                             | Cost                                   | What You Get
Claude Max subscription             | $100-200/month                         | "Unlimited" tokens via Claude Code (rate-limited)
API (Opus 4.5)                      | $5/M input tokens, $25/M output tokens | Pay-per-token, no ceiling
Equivalent API cost for heavy users | $1,000+/month                          | Same volume of tokens
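A back-of-the-envelope calculation shows how quickly the gap opens up at the Opus 4.5 prices above. The monthly token volumes here are illustrative assumptions for a heavy agentic user, not measured figures.

```python
# Arbitrage math at Opus 4.5 API prices ($5/M input, $25/M output).
# The 100M input / 30M output monthly volumes are invented for illustration.

INPUT_PRICE = 5.0    # USD per million input tokens
OUTPUT_PRICE = 25.0  # USD per million output tokens

def api_cost(input_mtok: float, output_mtok: float) -> float:
    """Pay-per-token cost in USD for the given millions of tokens."""
    return input_mtok * INPUT_PRICE + output_mtok * OUTPUT_PRICE

# e.g. 100M input + 30M output tokens in a month:
monthly = api_cost(100, 30)   # 100*5 + 30*25 = 1250.0
print(f"${monthly:,.0f}/month via API vs $200/month on Max")
```

At those assumed volumes, a single unthrottled user costs roughly six times the Max subscription price in API terms, which is consistent with the "$1,000+/month" figure developers cited.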

Claude Code's subscription model includes built-in rate limiting - a "speed limit" that prevents users from burning through tokens too quickly. Third-party tools removed that speed limit, enabling autonomous overnight loops that consumed far more tokens than the subscription price justified.
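The "speed limit" described above is commonly implemented as a token bucket: a burst allowance that refills at a fixed rate. The sketch below is a generic illustration of that pattern; the capacity and refill numbers are invented, and Anthropic's actual rate-limiting scheme is not public.

```python
# A minimal token-bucket "speed limit". Numbers are illustrative only;
# this is a generic pattern, not Anthropic's implementation.

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity            # start full
        self.refill_per_sec = refill_per_sec
        self.last = 0.0                   # timestamp of last check

    def allow(self, cost: float, now: float) -> bool:
        """Refill based on elapsed time, then spend `cost` if available."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(capacity=100_000, refill_per_sec=50)
print(bucket.allow(80_000, now=0.0))   # a large burst fits...
print(bucket.allow(80_000, now=1.0))   # ...an immediate second one does not
```

Removing a limiter like this is exactly what lets an autonomous overnight loop consume tokens continuously instead of being forced to wait for the bucket to refill.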

As one developer noted in the GitHub discussion: "In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost more than $1,000 via API."

From Anthropic's perspective, this was not just a terms-of-service technicality. It was subscription arbitrage at scale, with third-party tools effectively converting flat-rate subscriptions into unlimited API access.

The Developer Backlash

The reaction from the developer community was swift and loud.

David Heinemeier Hansson (DHH), creator of Ruby on Rails, called the policy "very customer hostile" and said he "seriously hoped it's a mistake that they're blocking alternative harness providers."

George Hotz, founder of comma.ai and tinygrad, published a blog post titled "Anthropic is making a huge mistake" on January 15, warning: "You will not convert people back to Claude Code, you will convert people to other model providers."

Armin Ronacher, creator of Flask, asked for non-commercial community harnesses to be allowed and questioned the pricing gap between subscriptions and API access.

Multiple developers reported immediately downgrading or canceling their $200/month Max subscriptions. One wrote on GitHub: "Using CC is like going back to stone age" - referring to the limitations of Claude Code's interface compared to tools like OpenCode.

The controversy generated 245+ points on Hacker News across multiple threads, 147+ reactions on the primary OpenCode GitHub issue (#6930), and extensive coverage from VentureBeat, WinBuzzer, Sherwood News, and others.

Anthropic's Response

Anthropic did not issue a formal press statement. The closest to an official response came from Thariq Shihipar, a member of technical staff working on Claude Code, who posted on X:

"Yesterday we tightened our safeguards against spoofing the Claude Code harness after accounts were banned for triggering abuse filters from third-party harnesses using Claude subscriptions."

He acknowledged the erroneous bans and said affected users could DM for reinstatement. He also confirmed that the supported path for building third-party tools is the API, not subscription OAuth tokens.

The formal policy is now codified in the legal documentation, which also states: "Anthropic reserves the right to take measures to enforce these restrictions and may do so without prior notice."

The OpenAI Contrast

What made the controversy sharper was the timing. OpenAI's Codex explicitly supports ChatGPT subscription logins (Plus, Pro, Team, Enterprise) without requiring separate API billing. OpenCode shipped ChatGPT Plus support within hours of the Anthropic block, reportedly in collaboration with OpenAI.

This gave developers an immediate exit ramp - and a direct comparison point. One company was locking down its ecosystem while the other was opening its doors.

The contrast extends to third-party IDE access too. Claude models are available through Cursor at $20/month, making the subscription-only restriction feel inconsistent from the developer's perspective.

Why Anthropic Probably Does Not Like It

Beyond the subscription arbitrage math, there are three reasons Anthropic would want to keep OAuth tokens locked to its own tools.

Competing products built on its own infrastructure. Tools like OpenCode are direct competitors to Claude Code. Letting them authenticate with Claude subscriptions means subsidizing your own competition.

Technical stability. Third-party harnesses bypass rate limiting and other safeguards that Anthropic uses to manage capacity. Uncontrolled usage patterns from dozens of different clients make capacity planning significantly harder.

Data and telemetry. Claude Code sends usage data back to Anthropic that helps improve the product. Third-party tools break that feedback loop.

The legal document also explicitly addresses competitor access: Anthropic simultaneously blocked xAI employees from using Claude via Cursor, citing the prohibition on using services to "build a competing product or service."

What Developers Should Do Now

The policy is clear and enforced. If you are building tools on top of Claude models, use the API with proper API key authentication. Anthropic's documentation directs developers to "use API key authentication through Claude Console or a supported cloud provider."
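In practice that means authenticating with an `x-api-key` header against the Messages API rather than a subscription OAuth token. The sketch below builds such a request with only the standard library; the model id and prompt are placeholders, so check Anthropic's API documentation for current values before sending anything.

```python
# Minimal sketch of the supported path: plain API-key authentication
# against the Messages API, no OAuth involved. Model id is a placeholder.
import json
import os
import urllib.request

API_URL = "https://api.anthropic.com/v1/messages"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a Messages API request."""
    payload = {
        "model": "claude-sonnet-4-5",   # placeholder; use a current model id
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "x-api-key": api_key,             # issued via the Claude Console
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
    )

req = build_request(os.environ.get("ANTHROPIC_API_KEY", "sk-placeholder"), "Hello")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The same key works from any client you build, which is the distinction Anthropic is drawing: API keys are metered and billed per token, so there is no arbitrage to police.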

For developers who want AI coding assistance without these restrictions, free and low-cost setups have gotten remarkably good. Tools like Continue.dev, OpenCode, and Cline all support free inference providers - just bring your own API keys instead of piggybacking on subscription OAuth.

The broader question is whether Anthropic's walled garden approach will hold. The models are excellent. The developer experience of Claude Code is not. And as George Hotz warned: when you make it harder for developers to use your product their way, some of them will not come back.

About the author: Senior AI Editor & Investigative Journalist

Elena is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.