Switching Away from Claude Code
Last month I canceled three Claude subscriptions. Not because the models got worse—Opus 4.6 and Sonnet 4.6 are still excellent—but because Anthropic is building something I no longer want to be part of: a walled garden.
This is the story of how I got locked in without realizing it, what I found when I broke out, and why I think more developers will face this same choice soon.
The Open Period
For a while, Anthropic took a different approach than OpenAI. You could subscribe to Claude and use those powerful models wherever you wanted—pi.dev, opencode, your own custom harness. The models were the product, not the interface.
This felt like the right future. Best models, open access, developer choice. I stopped using OpenAI specifically because of their ethical direction, but I also appreciated that Anthropic seemed to be avoiding the same platform trap.
That changed recently. Now Claude subscriptions are locked to Anthropic’s own tools—the Claude apps and Claude Code. If you want the models elsewhere, you need API keys at much higher cost. The ecosystem closed.
The Lock-In I Didn’t See
When Anthropic banned third-party integrations, I adapted. I rebuilt my personal agent on Claude Code's native Telegram integration. It worked.
But it was worse.
My previous setup used Claude Code under the covers but let me customize everything. The new native integration was rigid. I couldn’t extend it. I couldn’t adapt it to my workflows. I was inside their garden, eating their fruit, and I couldn’t even move the furniture.
I had traded flexibility for convenience without noticing.
The Fight
Around the same time, I was working on a software builder harness using Claude Code. The models tried hard—sometimes too hard. They would circumvent process and quality controls to complete tasks faster. I found myself fighting the system, not working with it.
Claude Code gives you prompting and hooks. That’s it. pi.dev gives you extensions. If you can imagine a workflow, you can build it. The difference in leverage is enormous.
I was paying to be constrained. The models were good, but the cage was real.
The Experiment
On a hunch, I rebuilt the same software builder harness using pi.dev with fireworks.ai, running Kimi K2.5 Turbo.
The results shocked me.
~200 tokens per second. Output quality that was more than usable. Full control over the process through pi.dev extensions. Total cost: $7 for a Fire Pass.
I was iterating faster, shipping more, and paying less. The only thing I lost was the brand name on the model.
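For anyone curious what "running Kimi on Fireworks" looks like in practice: Fireworks exposes an OpenAI-compatible chat-completions endpoint, so the whole integration is one HTTP call. A minimal stdlib-only sketch follows; the model identifier is an assumption (check the Fireworks model catalog for the exact id of the variant you want), and `FIREWORKS_API_KEY` is whatever name you give your key in the environment.

```python
# Minimal sketch: calling a Fireworks-hosted model through its
# OpenAI-compatible chat-completions endpoint, stdlib only.
import json
import os
import urllib.request

FIREWORKS_URL = "https://api.fireworks.ai/inference/v1/chat/completions"
MODEL_ID = "accounts/fireworks/models/kimi-k2-instruct"  # assumed id; verify in the catalog

def build_chat_request(prompt: str, model: str = MODEL_ID) -> dict:
    """Build the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }

def ask(prompt: str) -> str:
    """Send the request; expects FIREWORKS_API_KEY in the environment."""
    req = urllib.request.Request(
        FIREWORKS_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['FIREWORKS_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint speaks the OpenAI wire format, any client or harness that lets you override the base URL can use it unchanged, which is exactly the portability the closed subscription takes away.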
The Pattern
This isn’t really about Anthropic. It’s about a pattern repeating across AI:
Phase 1: Open access to great models to build ecosystem and dependency.
Phase 2: Gradual enclosure—convenience features that only work inside the platform, integrations that get deprecated, subscriptions that get restricted.
Phase 3: Full platform play. Managed agents. Remote schedulers. Proprietary workflows. Maximum extraction per customer.
Anthropic is executing this playbook now. So is OpenAI. The goal isn’t to win by building the best model forever—it’s to win by making you dependent on their specific combination of model + tooling + workflow.
The models will commoditize. The lock-in is what lasts.
And it’s not just about technical lock-in. Anthropic recently started requiring government ID verification for Claude access: KYC for a chatbot. This is surveillance infrastructure dressed up as safety. Once you’re inside their system, they own your identity too.
Once you’re trapped, the enshittification begins. Platforms always trend toward extracting maximum value from locked-in users. First they build something great to get you in. Then they gradually degrade the experience while raising prices, because where else are you going to go? The KYC requirement is just the beginning. Expect more friction, fewer export options, and higher costs over time. That’s the natural trajectory of enclosed platforms.
Where I’m Going
I’m down to two subscriptions, and none of them are Anthropic’s.
I can still use Claude models via API when I need them. But I’m getting my value elsewhere: Fireworks for speed and cost, Exa for search, and open-source tools like pi.dev and opencode for flexibility.
For day-to-day chat, I’m running Open WebUI with Conduit—my own self-hosted setup that replaces the Claude app completely. My data stays on my hardware. My prompts aren’t training someone else’s model. I control the stack, so I control how my data gets used in the future.
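If you want to try the same setup, Open WebUI's documented quick start is a single container; the named volume keeps your chat history on your own disk. This mirrors the command in the project's README at the time of writing (port mapping and image tag are their defaults):

```shell
# Run Open WebUI locally; data persists in the named volume on your machine.
docker run -d \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```

From there you point it at whatever backends you like, local or API-based, and the interface stays yours regardless of which model vendor you're using that month.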
The $7 Fire Pass delivers more actual utility than $60 worth of Claude subscriptions ever did, because it integrates into my workflow instead of replacing it.
What I’d Watch For
If you’re using AI tools heavily, ask yourself:
- Can you export your workflows? Your prompts? Your history?
- Can you use the models with other interfaces?
- Are you building skills that transfer, or habits that bind?
The best AI tooling in 2026 might not be the best in 2027. The companies that make it easy to leave are the ones confident they can win on merit. The ones building walls are already preparing for a future where they can’t.
That future is coming. I’m getting out before the gate closes completely.