Claude Max $100 vs $200: What You Actually Get (Measured, Not Guessed)
Everyone running Claude Code at scale hits the same wall: rate limits. You see percentages tick up, you get throttled, you wait. Anthropic doesn’t publish the exact numbers behind those percentages. The community guesses. Blog posts recycle each other’s estimates. Nobody actually measures.
I did.
The Problem With Everything You’ve Read
Search for “Claude Code rate limits” and you’ll find variations of the same table:
| Plan | 5-Hour Window |
|---|---|
| Pro ($20) | ~45 messages |
| Max 5x ($100) | ~225 messages |
| Max 20x ($200) | ~900 messages |
These message-count estimates come from community gists, blog roundups, and casual observation. Some sources report token-based estimates (44k / 88k / 220k per 5-hour window) that don’t even agree with the message-based numbers.
But here’s the question nobody has answered: what is the relationship between the 5-hour limit and the weekly limit, and how does it differ between the $100 and $200 plans?
This matters because the weekly limit — introduced in August 2025 — is what actually gates sustained usage. You can manage your 5-hour windows perfectly and still hit the weekly ceiling by Wednesday.
How I Measured
I run Botfarm, an autonomous dispatcher that assigns Linear tickets to Claude Code agents and manages their lifecycle. Part of its job is polling the Anthropic usage API and storing snapshots every ~60 seconds into a local SQLite database.
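A minimal sketch of that snapshot storage, assuming a poller that already has the two utilization percentages in hand. The table and column names here are illustrative, not Botfarm's actual schema:

```python
import sqlite3
import time

SCHEMA = """
CREATE TABLE IF NOT EXISTS usage_snapshots (
    ts              REAL,  -- Unix timestamp of the poll
    utilization_5h  REAL,  -- rolling 5-hour window, percent
    utilization_7d  REAL   -- rolling weekly window, percent
)
"""

def store_snapshot(conn, utilization_5h, utilization_7d):
    """Append one usage snapshot; called roughly every 60 seconds."""
    conn.execute(SCHEMA)
    conn.execute(
        "INSERT INTO usage_snapshots VALUES (?, ?, ?)",
        (time.time(), utilization_5h, utilization_7d),
    )
    conn.commit()
```

Append-only rows with a timestamp are all the later analysis needs: every question in this article reduces to scanning consecutive snapshots in time order.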
Over two consecutive weekly cycles, I happened to be on different plans:
- Feb 22 – Feb 28: Max 5x ($100/month)
- Feb 28 – Mar 1 (ongoing): Max 20x ($200/month)
This gave me 3,664 usage snapshots across both plans under real workloads — autonomous coding agents running implementation, review, and CI-fix loops on production tickets.
Methodology
The usage API reports two rolling percentages: utilization_5h (5-hour window) and utilization_7d (weekly window). To measure total consumption in terms of 5-hour limits, I computed the sum of positive deltas in the 5h utilization. Each time the 5h percentage increases between consecutive snapshots, that’s new tokens being consumed; decreases just reflect old usage rolling out of the window and are ignored. The sum of all such increases represents total consumption expressed as multiples of the 5-hour limit.
I then compared this cumulative 5h consumption against the peak weekly utilization reached during the same period.
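The positive-delta sum is a few lines of code over the stored snapshots. A self-contained sketch (the function name is mine, not Botfarm's):

```python
def cumulative_5h_consumption(utilization_5h):
    """Sum the positive deltas of the rolling 5h utilization series.

    Each increase between consecutive snapshots is new consumption;
    decreases (usage aging out of the 5-hour window) are ignored.
    Returns total consumption as a percentage of one 5h bucket, so
    300.0 means "three full 5-hour limits' worth of tokens".
    """
    total = 0.0
    for prev, cur in zip(utilization_5h, utilization_5h[1:]):
        if cur > prev:
            total += cur - prev
    return total

# Example: climb to 100%, window rolls off to 40%, climb back to 90%.
# New consumption = 60 + 40 + 50 = 150% of one 5h bucket.
print(cumulative_5h_consumption([0, 60, 100, 40, 90]))  # 150.0
```

One caveat baked into this approach: if consumption and roll-off happen within the same 60-second polling interval, they partially cancel, so the sum is a slight underestimate under bursty load.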
The Raw Data
Max 5x ($100/month) — Two Weekly Cycles
| Metric | Cycle 1 (Feb 22–27) | Cycle 2 (Feb 27–28) |
|---|---|---|
| Duration | 5.1 days | 0.9 days |
| Snapshots recorded | 2,109 | 838 |
| Peak 5h utilization | 100% | 100% |
| Peak weekly utilization | 80% | 28% |
| Cumulative 5h consumption | 976% | 315% |
| Times 5h hit 80%+ | 8 | — |
| 1x 5h fill = weekly % | 8.2% | 8.9% |
| Weekly = N x 5h fills | 12.2x | 11.2x |
Max 20x ($200/month) — Current Cycle
| Metric | Feb 28 – Mar 1 |
|---|---|
| Duration | 1.9 days (ongoing) |
| Snapshots recorded | 717 |
| Peak 5h utilization | 47% |
| Current weekly utilization | 22% |
| Cumulative 5h consumption | 146% |
| 1x 5h fill = weekly % | 15.1% |
| Weekly = N x 5h fills | 6.6x |
The Findings
Finding 1: The 5h-to-weekly ratio is consistent within a plan
On the $100 plan, both cycles showed the same ratio: filling the 5-hour bucket once consumes ~8–9% of the weekly budget. The consistency across a 5-day heavy-usage cycle and a 1-day cycle gives us confidence this is a stable property of the plan, not a measurement artifact.
Finding 2: The $200 plan has a very different ratio
On the $200 plan, one 5h fill consumes ~15% of the weekly budget — nearly double the $100 plan’s ratio. This means the weekly limit is proportionally tighter relative to the 5h limit on the higher plan.
Finding 3: The plans don’t scale uniformly
Here’s the key insight. The plan names suggest 20x/5x = 4x more capacity on the $200 plan. But “4x more” only applies to the 5-hour window:
| Limit Type | $200 plan / $100 plan |
|---|---|
| 5-hour limit | ~4x |
| Weekly limit | ~2.1x |
The math:
- On the $100 plan, one 5h fill = 8.2% of weekly → weekly ≈ 12.2 fills
- On the $200 plan, one 5h fill = 15.1% of weekly → weekly ≈ 6.6 fills
- The $200 5h bucket is 4x larger in absolute tokens
- So the $200 weekly budget = (4 × 6.6) / 12.2 = 2.16x the $100 weekly budget
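As a quick sanity check of that arithmetic, using the rounded figures from the tables above:

```python
fills_100 = 12.2  # weekly budget on the $100 plan, in 5h fills
fills_200 = 6.6   # weekly budget on the $200 plan, in 5h fills
bucket_ratio = 4  # $200 5h bucket is ~4x larger in absolute tokens

# Convert both weekly budgets into $100-plan bucket units and compare.
weekly_ratio = bucket_ratio * fills_200 / fills_100
print(round(weekly_ratio, 2))  # 2.16
```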
The $200 plan gives you 4x the burst capacity but only ~2x the sustained weekly budget.
What This Means In Practice
If you’re choosing between the $100 and $200 plans, the decision depends on your usage pattern:
Choose $200 if you need high burst throughput — running multiple agents in parallel, doing intensive sessions, or iterating rapidly on complex tasks. The 4x larger 5-hour window means you can go significantly harder before getting throttled mid-session.
Stick with $100 if your bottleneck is sustained weekly usage. You’re paying 2x the price for only ~2x the weekly capacity. The per-dollar efficiency for weekly limits is roughly equal between plans.
The worst case is a $200-plan user who burns through 5h windows at full speed, back to back. The weekly budget is only 6.6 fills deep — meaning ~33 hours of maxed-out usage (6.6 × 5h) could exhaust your entire week. On the $100 plan, the same behavior gives you ~61 hours (12.2 × 5h), though at a quarter of the token throughput per hour.
| Scenario | $100 plan | $200 plan |
|---|---|---|
| 5h fills before weekly limit | ~12 | ~7 |
| Hours at max throughput before weekly limit | ~61h | ~33h |
| Tokens processed in those hours | 1x | 2.1x |
So the $200 plan gets you through ~2x more total work before the weekly limit hits, but it gets there in roughly half the wall-clock time.
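The scenario table follows directly from the measured ratios; a sketch of the same numbers, using the article's rounded figures:

```python
WINDOW_HOURS = 5

# Weekly depth in 5h fills, from the measurements above.
fills_100, fills_200 = 12.2, 6.6

hours_100 = fills_100 * WINDOW_HOURS  # ~61 h at max throughput
hours_200 = fills_200 * WINDOW_HOURS  # ~33 h at max throughput

# The $200 bucket holds ~4x the tokens, so total work before the
# weekly limit is ~2.1x despite the shorter wall-clock runway.
work_ratio = (hours_200 * 4) / hours_100
```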
Caveats
- These measurements come from Claude Code workloads (autonomous coding agents), not interactive chat. Chat usage may follow different accounting.
- The $200 plan data covers only 1.9 days so far. I’ll update this article when the full weekly cycle completes.
- Model choice matters. Opus consumes ~5x more allocation than Sonnet. My workloads are ~95% Opus. If you primarily use Sonnet or Haiku, the ratios may differ.
- Anthropic can and does adjust limits without notice. These numbers reflect late February 2026. The September 2025 limit changes and subsequent adjustments show this is a moving target.
How I Got This Data
This analysis was a side effect of running Botfarm — an open-source autonomous dispatcher for Claude Code. Botfarm polls the Anthropic usage API as part of its job (it pauses dispatch when limits are close), and stores every snapshot to SQLite for exactly this kind of analysis.
If you’re running Claude Code at scale — whether for autonomous agents, CI pipelines, or team workflows — and want visibility into your actual usage patterns, the usage tracking module might be worth a look.
Published March 1, 2026. Data from 3,664 usage snapshots collected between February 22 and March 1, 2026.