Anthropic Kills Third-Party Agent Access on Claude Subscriptions — Here's What Actually Happened
As of April 4, Claude Pro and Max subscribers can no longer power tools like OpenClaw through their subscriptions. A $200/month plan was covering $3,650 in compute. Anthropic pulled the plug — and it's not just them.
What happened
On the evening of April 3, Boris Cherny — head of Claude Code at Anthropic — posted on X that starting the next day at noon PT, Claude Pro and Max subscriptions would no longer work with third-party agent frameworks. Less than 24 hours' notice. The cutoff hit at noon on April 4, and if you were running OpenClaw or any similar tool on your subscription, it just stopped working.
The biggest casualty: OpenClaw, the open-source AI agent framework built by Austrian developer Peter Steinberger. What started as a weekend hack called Clawdbot in late 2025 had exploded to 250,000+ GitHub stars by March 2026 — surpassing React. An estimated 135,000 active OpenClaw instances were running at the time of the announcement.
All of them broke.
Why Anthropic did it
The math doesn't work. Someone tracked their Claude Code Max usage through network logs for a week and projected it to a full month at API rates: $3,650. Their subscription cost: $200/month.
That's an 18x gap.
Anthropic's internal numbers paint a similar picture. While the average Claude Code user costs about $6/day to serve, heavy agent users were burning through $1,000 to $5,000 worth of compute daily on that same $200 plan. The subscription model assumes humans use it — humans who pause, think, sleep, and context-switch. Autonomous agents do none of that. They fill every available slot and queue the next request the instant one finishes.
There's a technical angle too. Claude Code has aggressive prompt caching built in — it reuses repeated context (project files, conversation history) at reduced rates. Third-party tools bypass all of that. Every request hits the full-cost path. Same output, way more infrastructure consumed.
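To make the caching math concrete, here's a rough cost model for a repeated context. The multipliers are assumptions loosely based on Anthropic's published caching rates (cache writes at roughly 1.25x the base input price, cache reads at roughly 0.1x), and the $3/1M figure is Sonnet's input rate; treat the whole thing as a sketch, not a quote.

```python
def request_cost(context_tokens, fresh_tokens, cached=False,
                 base=3.00, write_mult=1.25, read_mult=0.10):
    """Dollar cost of one request's input tokens at $3 per 1M (assumed Sonnet rate).

    cached=False: the full context is billed at the base input rate.
    cached=True:  the repeated context is billed at the cache-read rate.
    """
    per_tok = base / 1_000_000
    if cached:
        return context_tokens * per_tok * read_mult + fresh_tokens * per_tok
    return (context_tokens + fresh_tokens) * per_tok

# A 50K-token project context reused across 100 requests, 1K fresh tokens each.
uncached = sum(request_cost(50_000, 1_000) for _ in range(100))

# With caching: the first request writes the cache (~1.25x), the other 99 read it (~0.1x).
first = 50_000 / 1e6 * 3.00 * 1.25 + 1_000 / 1e6 * 3.00
cached = first + sum(request_cost(50_000, 1_000, cached=True) for _ in range(99))

print(f"uncached: ${uncached:.2f}  cached: ${cached:.2f}")
```

Same hundred requests, roughly an order of magnitude apart in input cost — which is why bypassing the cache hurts Anthropic even when the output is identical.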
Boris Cherny put it plainly: "Our systems are highly optimized for one kind of workload... to serve as many people as possible." Third-party agents were a different kind of workload entirely.
The timeline tells the real story
This didn't come out of nowhere:
- January 9, 2026: Anthropic briefly blocked subscription OAuth tokens from working outside official apps. Backlash was immediate. They reversed it.
- February 2026: The Terms of Service were quietly updated to formally prohibit third-party harness usage.
- March 26, 2026: Usage calculation changes caused subscribers to burn through limits faster during peak hours.
- April 4, 2026: Full enforcement. Third-party tool usage now draws from "extra usage" bundles, not subscription limits.
Three months of escalation. The January reversal wasn't mercy — it was buying time.
What it costs now
If you're switching to direct API billing, here's what you're looking at:
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|-------|----------------------|------------------------|
| Claude Sonnet 4.6 | $3 | $15 |
| Claude Opus 4.6 | $15 | $75 |
A heavy user running 500K input + 200K output Opus tokens per day lands at roughly $675/month. That's 3.4x the old Max subscription. Some users report cost increases of up to 50x, depending on how hard they were driving their agents.
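That $675 figure is easy to reproduce from the table above — a back-of-the-envelope sketch:

```python
# Heavy-user estimate: 500K input + 200K output Opus tokens per day,
# at the Opus 4.6 API rates quoted in the pricing table.
OPUS_INPUT = 15.0   # $ per 1M input tokens
OPUS_OUTPUT = 75.0  # $ per 1M output tokens

daily = (500_000 * OPUS_INPUT + 200_000 * OPUS_OUTPUT) / 1_000_000
monthly = daily * 30

print(f"${daily:.2f}/day, ${monthly:.0f}/month vs the $200 Max plan")
```

That works out to $22.50/day and $675/month — about 3.4x the old subscription, before any caching or batching discounts.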
Anthropic is offering affected users a one-time credit equal to their monthly plan price (redeemable until April 17), plus a 30% discount on pre-purchased extra usage bundles, and a full refund option if you just want out.
OpenClaw's creator called it a betrayal
Peter Steinberger — who left for OpenAI on February 14, before the ban — called this "a betrayal of open-source developers." He reported getting only a one-week negotiation window with Anthropic before the hammer dropped. The Hacker News thread hit 684 points with 563 comments. People were angry.
I get both sides. Steinberger built something wildly popular that exposed a pricing flaw, and Anthropic's response — less than 24 hours' notice, no grandfathering period — was rough. But also: a $200 flat fee covering thousands of dollars in compute was never going to last. OpenClaw's growth basically accelerated an inevitable correction.
This isn't just Anthropic
The whole industry is tightening up:
Google Gemini now caps free-tier users at 5 prompts per day. Five. Pro subscribers get 100, Ultra gets 500. In December 2025, Google had already slashed free API rate limits by 50–80%, citing abuse.
Cursor shifted from simple request counts to credit-based pricing in mid-2025. Users reported their effective request allowance dropping from ~500 to ~225 under the same $20/month plan.
Replit introduced "effort-based pricing" for its AI coding agent, linking costs directly to task complexity rather than offering flat rates.
The pattern is clear: flat-rate AI pricing doesn't survive agent usage. Sam Altman admitted in January 2025 that OpenAI was losing money on the $200/month ChatGPT Pro tier. These subscriptions were designed for humans chatting, not for autonomous agents maxing out every available token.
What to do if you're affected
If you were running OpenClaw or similar tools on a Claude subscription, here's the practical path forward:
- Get an API key at console.anthropic.com. Direct API billing gives you full control and is generally cheaper than the "extra usage" billing layered on subscriptions.
- Set spend caps immediately. Automated workloads can burn through money fast without guardrails.
- Audit your model selection. Sonnet 4.6 is 5x cheaper than Opus and handles most tasks fine. Don't run Opus out of habit.
- Implement prompt caching if you're building custom tooling. This is the single biggest cost lever.
- Consider batching. The batch API offers discounted processing for non-real-time workloads.
- Build vendor-agnostic abstractions. Tools like LiteLLM let you swap providers without rewriting your stack. Given how fast these policies shift, flexibility matters.
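On that last point, the core idea is a thin routing layer between your tooling and any one provider's SDK. A minimal sketch — all names here are hypothetical stand-ins, and LiteLLM offers a production-grade version of the same pattern with a unified completion call across providers:

```python
# Sketch of a vendor-agnostic model router. The backends below are stubs;
# in real use each `send` would wrap a provider SDK call.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    send: Callable[[str], str]  # prompt -> completion text

# Stub backends standing in for real API clients:
PROVIDERS = {
    "anthropic": Provider("anthropic", lambda p: f"[claude] {p}"),
    "openai":    Provider("openai",    lambda p: f"[gpt] {p}"),
}

def complete(prompt: str, backend: str = "anthropic") -> str:
    """Route a prompt to whichever backend is configured."""
    return PROVIDERS[backend].send(prompt)

print(complete("hello"))                    # routed to the default backend
print(complete("hello", backend="openai"))  # swapped with one argument
```

The point isn't the ten lines of code — it's that when a provider changes its terms overnight, your switch is a config change instead of a rewrite.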
My take
Anthropic's move was inevitable but poorly executed. Less than 24 hours of notice for a change affecting 135,000+ users is bad, full stop. The one-time credit is a gesture, not a solution — one month of $200 credits doesn't soften a jump to $675+/month.
But the underlying economics were broken. Subscription arbitrage at this scale was unsustainable, and pretending otherwise helps nobody. The real lesson here: if you're building anything serious on top of an AI provider's consumer subscription, you're building on sand. These tiers exist for individual users typing prompts. The moment your usage pattern stops looking human, you're on borrowed time.
Budget for API costs. Treat them as infrastructure spend, not a subscription perk. That's where this whole industry is headed.
