Copilot Max Arrives as AI Coding Moves to Credit Accounting
GitHub Copilot is introducing AI Credits and a $100 Max plan, turning agentic coding from a flat subscription into metered developer infrastructure.
- What happened: GitHub Copilot moves to AI Credits-based usage accounting on June 1, 2026.
- On May 12, GitHub added flex allotments to Pro and Pro+ and introduced the new $100/month Copilot Max plan.
- Why it matters: Copilot is no longer just an autocomplete subscription. It is becoming developer infrastructure where long agent sessions expose real inference costs.
- Builder impact: Code completions and Next Edit suggestions stay included, while chat, agent work, code review, and premium model use now sit behind credits and budget controls.
Copilot code review is especially important because it will consume both AI Credits and GitHub Actions minutes.
GitHub Copilot's pricing model is leaving the autocomplete era behind. On April 27, GitHub announced that Copilot will move to usage-based billing on June 1. On May 12, it followed with another post that adds flex allotments to individual plans and introduces Copilot Max. On the surface, this looks like a pricing-table update. For developers, the more important shift is that Copilot is moving from a flat-fee coding assistant to a metered AI development platform that teams will need to monitor, budget, and operate.
This did not come out of nowhere. Copilot began as IDE autocomplete and short-form chat. In 2026, it includes cloud agent sessions, code review, CLI agent work, the Copilot app, REST APIs, and team metrics. Users are no longer only accepting a suggested line. They can assign an issue, ask Copilot to read a repository, create a branch, fix a failing test, and respond to review comments. Those jobs are hard to price as one simple "request." The same Copilot surface can represent a short suggestion or a multi-step agent run that spends many minutes reading context and generating code.
GitHub's explanation points directly at that split. Copilot is no longer the same product it was a year ago. It has evolved from an editor assistant into an agentic platform that can run long coding sessions. A pricing model that treats a quick question and an autonomous coding task as the same unit is hard to sustain. So GitHub is replacing premium request units with GitHub AI Credits, calculated from actual token usage across input, output, and cached tokens, with rates tied to each model.
AI Credits are an accounting shift
Starting June 1, premium request units are replaced by GitHub AI Credits. Paid plans receive monthly included credits, and customers can buy more usage when needed. Monthly Pro and Pro+ individual subscribers move automatically to the new model on June 1. Annual Pro and Pro+ subscribers keep the existing premium request-based pricing until their plans expire, although they are still affected by model multiplier changes beginning June 1.
The central change is not the seat price. Pro, Pro+, Business, and Enterprise base subscription prices remain the same. The change is how advanced usage gets measured. GitHub says AI Credits are consumed based on token usage, including input, output, and cached tokens, using model-specific public API rates. If a developer uses stronger models, passes large context, or asks an agent to reason through multiple steps, more credits are consumed. Code completions and Next Edit suggestions remain included capabilities and do not consume AI Credits.
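As a rough illustration of what token-based accounting means in practice, here is a minimal sketch. The per-million-token rates below are hypothetical placeholders, not GitHub's actual model rates; the point is only that input, cached, and output tokens are priced separately, so heavier models and larger context consume more credits.

```python
# Hypothetical per-million-token rates (illustrative only, not GitHub's
# published prices): (input, cached input, output) in dollars.
RATES_PER_MILLION = {
    "fast-model": (0.25, 0.10, 1.00),
    "premium-model": (3.00, 0.75, 15.00),
}

def estimate_credits(model, input_tokens, cached_tokens, output_tokens):
    """Estimate dollar-denominated credit usage for one request."""
    inp, cached, out = RATES_PER_MILLION[model]
    return (
        input_tokens * inp
        + cached_tokens * cached
        + output_tokens * out
    ) / 1_000_000

# A short chat versus a long agent run with a premium model.
short_chat = estimate_credits("fast-model", 2_000, 0, 500)
agent_run = estimate_credits("premium-model", 400_000, 100_000, 30_000)
```

Under these assumed rates, the short chat costs a fraction of a cent while the multi-step agent run costs over a dollar, which is exactly the gap that made a flat per-request unit hard to sustain.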
That distinction changes what Copilot is. Autocomplete remains a subscription productivity feature. Chat, agent work, code review, and premium model access increasingly resemble API consumption inside the developer workflow. The relevant question becomes less "Do we use Copilot?" and more "Which Copilot surfaces do we use, with which models, for how long, and under what budget?"
| Individual plan | Price | Base credits | Flex allotment | Included usage on June 1 |
|---|---|---|---|---|
| Pro | $10/month | $10 | $5 | $15 |
| Pro+ | $39/month | $39 | $31 | $70 |
| Max | $100/month | $100 | $100 | $200 |
The May 12 announcement is best understood through this table. GitHub says it heard questions about whether the included usage in Pro and Pro+ was enough. Its answer is to separate base credits, which map one-to-one to the subscription price, from a flex allotment layered on top. Pro gets $15 of included usage for a $10 monthly price. Pro+ gets $70 for a $39 price. The new Max plan provides $200 of included usage for $100 per month and is aimed at individual high-volume users who run sustained Copilot workflows.
The word "flex" matters. Base credits are fixed to the subscription price, but flex allotments can change. GitHub says flex allotments may move as model pricing, new models, and efficiency improvements change the AI cost structure. In other words, this is not a permanent promise of doubled usage forever. It is a buffer between user expectations and the cost reality of agentic AI.
Agentic coding is breaking flat pricing
The reason is straightforward: coding agents consume far more inference than autocomplete does. Autocomplete usually reads the current file and nearby context, then returns a short suggestion. Agentic coding is different. It explores repositories, reads multiple files, plans, edits code, runs tests, interprets failure logs, and loops back through the task. When handling pull request review comments, it also needs to understand the change intent, reviewer feedback, CI state, and branch context.
That difference affects both costs and product design. The expectation that $10 per month can cover near-unlimited AI coding was plausible in a world centered on completions. Once agents run for long periods, a small number of heavy users can consume a large share of inference cost. GitHub says it absorbed substantial inference cost under the premium request model. Now it is exposing that cost through credits, budgets, pools, and model rates.
Anthropic is moving in a similar direction. It has separated Claude Agent SDK and claude -p usage from ordinary subscription limits by introducing monthly Agent SDK credits. The details differ, but the underlying problem is the same. AI used by a human in a terminal or editor is not economically identical to AI launched by an agent that runs for a long time and burns tokens through scripts and automation.
Copilot Max is GitHub acknowledging that shift in the individual developer market. High-end coding plans have often been explained as better model access or a larger request allowance. Max is a little different. It is a larger cost bucket for sustained, high-volume Copilot work. If a developer treats coding agents as workers that can be kept busy, that developer now needs a larger ledger.
Code review now shows the two-layer cost model
The most revealing feature in this transition is Copilot code review. In an April 27 changelog post, GitHub said Copilot code review will start consuming GitHub Actions minutes on June 1 in addition to AI Credits. When review runs in a private repository, it uses the existing Actions entitlement. Once included minutes are exceeded, standard Actions rates apply.

That sounds like an implementation detail, but it captures the economics of AI coding tools. A code review agent does not only call a model. It runs on GitHub Actions, checks out the repository, reads the diff, and produces review output. The cost splits into two layers. Model inference consumes AI Credits. Execution infrastructure consumes Actions minutes.
For companies, that distinction matters. Enabling Copilot code review across an organization does not only increase AI Credits usage. It can also move Actions usage. Review frequency, repository size, private repository share, branch protection rules, and automated review triggers all become cost variables. A decision that used to be framed as code quality or developer experience now also belongs in the CI/CD budget conversation.
- Copilot chat, agent work, and code review requests
- Token usage by model: input, output, and cached tokens
- Base credits are consumed first, then flex allotment applies
- Extra usage requires additional purchase or stops at the budget cap
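The consumption order can be sketched as a small state machine, under the assumption that there is no cheaper-model fallback: base credits drain first, then the flex allotment, and a request that exceeds what remains is blocked at the budget cap.

```python
def consume(base, flex, cost):
    """Deduct one request's cost from remaining credits.

    Returns (base_left, flex_left, allowed). Base credits are spent
    before flex; a request over the combined remainder is blocked.
    """
    if cost > base + flex:
        return base, flex, False  # over budget: request is blocked
    from_base = min(base, cost)
    from_flex = cost - from_base
    return base - from_base, flex - from_flex, True
```

For example, a $12 request against $10 base and $5 flex empties the base, takes $2 of flex, and succeeds; the next $5 request against the remaining $3 is blocked rather than routed to a cheaper model.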
The disappearance of the old fallback experience is also important. Under the previous model, users who exhausted premium requests could still fall back to a lower-cost model and keep working. Under the new model, available credits and administrator budget controls decide whether work can continue. That improves cost predictability, but it reduces the cushion of "continue on a cheaper model" when a user hits the limit.
Enterprises will watch pools and budgets
Enterprise changes are more operational than individual pricing changes. Business and Enterprise seat prices remain $19 per user per month and $39 per user per month. During June, July, and August, GitHub is offering transition support with $30 of monthly AI Credits for Business and $70 for Enterprise. After that, included usage, additional purchases, and budget controls become the main operating levers.
One enterprise feature GitHub emphasizes is pooled included usage. Instead of leaving unused included usage isolated at each user, a business can share that capacity across the organization. That reflects real usage patterns. One developer may mostly use completions. Another may run cloud agents and code review heavily. If credits are trapped per user, some capacity sits unused while another person hits a wall. A pool reduces that waste.
Pools can also hide important behavior if they are not paired with measurement. A team doing a large migration with Copilot agent might create high usage for a good reason. Another team might burn credits on repeated exploratory chats without a workflow change to show for it. That is why GitHub's budget controls at the enterprise, cost center, and user level matter. Once AI coding tools become infrastructure, FinOps vocabulary inevitably follows.
Recent team-level Copilot usage metrics point the same way. Organizations need to see which teams use completions, chat, CLI, code review, and cloud agent, and in what proportions. Copilot adoption can no longer be measured only by seat count. The better questions are which features change the development workflow, which model and agent tasks generate cost, and which teams produce value against budget.
Developers need a new usage instinct
For individual developers, the most immediate change is model and task awareness. Autocomplete and Next Edit suggestions remain included. Everyday inline suggestions may feel mostly unchanged. But developers who repeatedly run long chats with premium models, hand large issues to agents, or invoke code review many times will start watching credits.
That is not only bad. It may push AI coding tools toward more deliberate use. Small questions can go to auto mode or cheaper models. Complex bugs and large refactors can justify stronger models. Before asking an agent to read an entire repository, a developer can narrow the task. Instead of pasting a giant test log, they can isolate the failing command, error, and relevant file path. Those habits improve both cost and output quality.
The harder problem is cost predictability. Token-based pricing is not intuitive for most developers. "How much did this PR review cost?" "Why did this agent session spend so much?" "How were cached tokens counted?" If those questions cannot be answered at the task level, frustration will continue. GitHub says it is providing preview billing experiences and dashboards, but the real test is whether developers and admins can trust the explanation after a surprising charge.
Community reactions around the announcement focus on exactly that tension. Some users accept the cost reality of agentic workloads. Long coding sessions and top-tier model use cannot be priced like short chats forever. Others feel that a subscription product is turning into a store without clear price tags. The May 12 flex allotments and Max plan soften the transition, but because flex is variable, long-term predictability remains unresolved.
Copilot Max is both defense and signal
Copilot Max is not just another upper tier. It is a defensive move to keep heavy individual users inside Copilot, and it is also a signal about where AI coding economics are heading. A $100 monthly plan with $200 of included usage says GitHub wants to offer a bigger budget to people who use Copilot heavily. The variable $100 flex portion also says GitHub is not treating that subsidy as a fixed forever guarantee.
This puts Copilot in a different comparison set. Users will compare it with Cursor, Claude Code, Codex, and local coding models not only on code quality. They will ask which tool gives predictable pricing for long agent runs, which tool supports team budget controls, which tool offers useful model choice, and how each product charges for code review and execution infrastructure. AI coding is now a UX competition and a billing-model competition at the same time.
GitHub has strong advantages in this contest. Repositories, pull requests, Actions, code review, Copilot, and team metrics all live on the same platform. That gives GitHub more context and more surfaces for governance. It also pulls users deeper into GitHub's cost boundary. When code review consumes Actions minutes and agent tasks consume GitHub AI Credits, organizational AI usage becomes more visible inside GitHub billing.
This is not the end of subscription AI
The wrong reading is simply "Copilot got more expensive." The larger story is that AI products are splitting into two economic categories. One category remains close to subscription software: code completion, Next Edit, lightweight assistant features, and always-on productivity nudges. The other category looks like usage-based infrastructure: repository-scale work, code review, cloud agents, premium models, and multi-step tasks that run for a long time.
That split is likely to appear in other AI products. Meeting summaries and document drafts may remain bundled into subscriptions. Full-day research agents, cross-system business automation, and long-running coding agents are more likely to move toward credits, budgets, usage caps, and API-like rates. Copilot is one of the first developer-market products to make this boundary highly visible.
The practical response for development teams is not to rush into turning Copilot off or on. First, separate the surfaces. Which Copilot features remain included? Which ones consume credits? Which ones also consume Actions minutes? Then decide where code review should run automatically, which repositories should allow agent tasks, which models are appropriate for which work, and how team budget caps should be set.
Visible cost is uncomfortable, but it also creates an operating surface. GitHub Copilot Max and AI Credits show that the next phase of AI coding is not just smarter autocomplete. It is measurable agent execution. Copilot still helps developers, but that help now moves through a credit ledger and budget policy. As coding agents enter real development workflows, developers will need to design prompts, task boundaries, and cost paths together.
Sources
- GitHub Blog: GitHub Copilot is moving to usage-based billing
- GitHub Blog: GitHub Copilot individual plans: Introducing flex allotments in Pro and Pro+, and a new Max plan
- GitHub Blog: Changes to GitHub Copilot Individual plans
- GitHub Changelog: Copilot code review will start consuming GitHub Actions minutes
- GitHub Docs: GitHub Copilot billing