GitHub Copilot Opens Claude and Codex to All Paid Users - No Extra Cost
GitHub makes Claude by Anthropic and OpenAI Codex available to all Copilot Business and Pro subscribers at no additional cost, turning Copilot into a true multi-model platform.

GitHub just made the most consequential change to Copilot since its launch: every paid subscriber can now choose Claude by Anthropic or OpenAI Codex as their coding agent, at no extra charge. The update, announced on February 26, extends access that was previously limited to the more expensive Enterprise and Pro+ tiers down to Copilot Business ($19/user/month) and Copilot Pro ($10/month). For a platform with over 20 million users and 90% adoption among Fortune 100 companies, this is not a minor toggle in a settings menu. It's a fundamental shift in how developers interact with AI.
TL;DR
- Claude (Anthropic) and OpenAI Codex are now available as coding agents for all Copilot Business and Pro subscribers
- No additional subscription required - access is included in existing Copilot plans
- Each agent session consumes one premium request, scaled by the model's multiplier, during the current public preview
- Agents can be assigned to issues and PRs, run in parallel, and produce draft pull requests
- Claude Opus 4.6 carries a 3x premium request multiplier; lighter models like GPT-4.1 cost zero premium requests
What Changed
Until yesterday, running Claude or Codex inside GitHub required either a Copilot Enterprise subscription ($39/user/month) or the individual Pro+ plan ($39/month). Those tiers have had access since early February, when GitHub launched Agent HQ - its multi-agent orchestration layer - in public preview. The February 26 expansion removes the paywall for the remaining paid tiers.
The practical result: a solo developer on the $10/month Pro plan and a team on the $19/user/month Business plan now have access to the same model roster as enterprises paying nearly four times more. That roster is substantial.
The Model Lineup
| Model | Provider | Premium Request Multiplier | Best For |
|---|---|---|---|
| Claude Opus 4.6 | Anthropic | 3x | Complex agentic coding, planning, tool use |
| Claude Sonnet 4.6 | Anthropic | 1x | Balanced agentic work, search operations |
| Claude Haiku 4.5 | Anthropic | 0.33x | Fast lightweight queries |
| GPT-5.3-Codex | OpenAI | 1x | Agentic software development |
| GPT-5.2-Codex | OpenAI | 1x | Agentic task handling |
| GPT-5.1-Codex-Max | OpenAI | 1x | Dedicated agentic workflows |
| GPT-4.1 | OpenAI | 0x | Included code completions and chat |
| Gemini 3 Pro | Google | 1x | Advanced code generation |
The multiplier column matters more than it appears. Copilot Pro subscribers get 300 premium requests per month. Using Claude Opus 4.6 at its 3x multiplier means each session costs three of those 300. A Sonnet 4.6 session costs one. GPT-4.1 costs nothing at all. This tiered pricing gives developers a natural reason to match model weight to task difficulty rather than defaulting to the most powerful option for everything.
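The accounting is simple enough to sketch. Here is a minimal Python model of the consumption logic described above - the model names mirror the table, but the dictionary and helper function are illustrative, not any GitHub API:

```python
# Illustrative model of Copilot's premium-request accounting as described
# in the table above. The multipliers are from the article; the helper
# function and structure are our own, not a GitHub API.

MULTIPLIERS = {
    "claude-opus-4.6": 3.0,
    "claude-sonnet-4.6": 1.0,
    "claude-haiku-4.5": 0.33,
    "gpt-5.3-codex": 1.0,
    "gpt-4.1": 0.0,
}

MONTHLY_ALLOWANCE = 300  # Copilot Pro

def requests_consumed(sessions: dict[str, int]) -> float:
    """Total premium requests used by a month of agent sessions per model."""
    return sum(MULTIPLIERS[model] * count for model, count in sessions.items())

# A month that matches model weight to task difficulty:
month = {"claude-opus-4.6": 40, "claude-sonnet-4.6": 120, "gpt-4.1": 500}
used = requests_consumed(month)      # 40*3 + 120*1 + 500*0 = 240
remaining = MONTHLY_ALLOWANCE - used # 60 premium requests left
```

Note how the 500 GPT-4.1 sessions cost nothing: routing routine queries to the 0x model is what keeps the 3x Opus sessions affordable within the allowance.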
How Agent HQ Actually Works
GitHub's Agent HQ is the infrastructure layer that makes multi-model access possible. It isn't simply a model picker dropdown. Agents operate asynchronously within your existing GitHub workflows - you can assign Claude, Codex, or Copilot to issues and pull requests, mention them with @Claude or @Codex in comments, and review their outputs as draft PRs with full reasoning logs.
Running Agents in Parallel
The genuinely interesting capability is parallel execution. Assign the same issue to Copilot, Claude, and Codex simultaneously, and you get three independent approaches to compare. Each agent has access to repository code, commit history, issues, Copilot Memory, and repository policies. They share the same governance layer - the Agent Control Plane, which is now generally available - so organizations maintain consistent code quality checks, audit logging, and usage metrics regardless of which model is running.
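Under the hood, assignment runs through GitHub's ordinary issue-assignee mechanism. A hedged sketch of building the request for GitHub's real "add assignees to an issue" REST endpoint - the agent handles below are placeholders (the actual handles depend on your organization's setup), and the request is deliberately left unsent:

```python
# Builds the body and URL for GitHub's "add assignees to an issue" endpoint
# (POST /repos/{owner}/{repo}/issues/{number}/assignees). The agent handles
# used below are hypothetical placeholders, not confirmed identifiers.

def assignee_payload(agents: list[str]) -> dict:
    """JSON body assigning each named agent to the same issue in parallel."""
    return {"assignees": agents}

def assignees_url(owner: str, repo: str, number: int) -> str:
    """Endpoint for adding assignees to a specific issue."""
    return f"https://api.github.com/repos/{owner}/{repo}/issues/{number}/assignees"

# Three agents, one issue, three independent draft PRs to compare:
payload = assignee_payload(["copilot", "claude", "codex"])  # hypothetical handles
url = assignees_url("acme", "webapp", 42)
```

The point of the sketch is the shape of the workflow: one issue, several assignees, and each agent produces its own draft PR under the same Agent Control Plane governance.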
IDE and Platform Support
In VS Code (version 1.109 or later), you get three session types: Local, Cloud, and Background. The model picker is available across chat, ask, edit, and agent modes. The same picker works in Visual Studio, JetBrains IDEs, Xcode, Eclipse, GitHub Mobile, and the github.com web interface. Claude and Codex must be explicitly enabled in your settings before use - they're not on by default.
The Competitive Picture
This move does not happen in isolation. The AI coding assistant market has fragmented into a near three-way race. GitHub Copilot holds roughly 42% market share among paid tools, but Cursor crossed $500 million in annual recurring revenue in 2025, and Anthropic's own Claude Code has carved out a significant share of the CLI-based coding space. For a broader view of the landscape, see our comparison of the best AI coding assistants.
By absorbing its competitors' models into its own platform, GitHub is making a calculated bet: developers will stay where their repositories already live rather than switch to a standalone editor or CLI tool, as long as the model quality is equivalent. It is the same playbook Microsoft has run for decades - make the platform the default by removing reasons to leave.
OpenAI's GPT-5.3-Codex, the latest in its coding-specific line, is part of the deal. So is the full Claude model family, including the Opus 4.6 that has been outperforming GPT-5.2-Codex on Terminal-Bench 2.0. GitHub is effectively telling developers: you don't need to choose a side in the model wars. Pick the best tool for the job, and we'll handle the plumbing.
What It Does Not Tell You
There are several things this announcement omits or downplays.
Premium request limits are real constraints. The 300 monthly premium requests on the Pro plan sound generous until you consider that a single Claude Opus 4.6 session burns three of them. That's 100 Opus sessions per month - roughly four or five per workday. Power users will hit that ceiling fast, and overages cost $0.04 per premium request. At the 3x multiplier, that's $0.12 per Opus session beyond your allowance.
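The overage math from the article, sketched in Python - the rate and allowance are the figures quoted above, while the function itself is illustrative:

```python
# Sketch of the overage cost described above: $0.04 per premium request
# beyond the monthly allowance, with the model multiplier applied per
# session. The function name and structure are ours, not GitHub's.

OVERAGE_RATE = 0.04  # dollars per premium request beyond the allowance
ALLOWANCE = 300      # Copilot Pro monthly premium requests

def overage_cost(premium_requests_used: float) -> float:
    """Dollar cost of premium requests exceeding the monthly allowance."""
    excess = max(0.0, premium_requests_used - ALLOWANCE)
    return round(excess * OVERAGE_RATE, 2)

# 120 Opus 4.6 sessions at the 3x multiplier = 360 premium requests,
# 60 over the allowance: 60 * $0.04 = $2.40 in overages.
cost = overage_cost(120 * 3)
```

Even heavy overage spending stays modest in absolute terms; the sharper constraint is the behavioral one - the ceiling nudges you toward cheaper models, not toward paying more.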
"Public preview" means the rules can change. GitHub hasn't committed to keeping the one-premium-request-per-agent-session pricing permanent. The current rate is explicitly framed as preview pricing. When general availability arrives, multipliers or session costs could shift.
Model availability is not model parity. Getting access to Claude Opus 4.6 through Copilot isn't the same as using it through the Anthropic API or Claude Code directly. Copilot mediates the interaction through its own Agent Control Plane, its own context management, and its own governance layer. For most use cases this is fine, but developers who need the full capabilities of the underlying model - like Opus 4.6's 1M token context window or agent teams - may still find the direct API more capable.
The governance story is incomplete. Organizations can control which agents are enabled and monitor usage, but the audit logging and policy enforcement are still maturing. Enterprises with strict compliance requirements should test thoroughly before rolling this out across engineering teams.
GitHub absorbing Claude and Codex into its standard paid tiers is the kind of move that reshapes defaults. Most developers will never install a separate coding assistant if the one bundled with their repository host is good enough and offers model choice. That's exactly what GitHub is counting on. The question is whether "good enough within Copilot" can match the experience of purpose-built tools like Codex CLI or Claude Code running with full API access. For now, the answer depends on how you work - but the gap is narrowing.
