Hey folks, I'm Alex from the reliability engineering team at Anthropic. We've just posted the retrospective for this incident:
> On March 26–27, 2026, customers experienced elevated error rates when using Claude Opus 4.6 and Claude Sonnet 4.6. The issue was caused by a networking performance degradation within our cloud infrastructure that disrupted communication between components of our serving stack. We resolved the incident by migrating the affected workloads to healthy infrastructure, restoring normal service by 9:30 AM PT on March 27.
Honestly, as one of the people behind https://downforeveryoneorjustme.com, I can say downtime has gotten way better. Compared to 10 years ago, things are so much more redundant and harder to take down.
Well, (a) why would they? (b) "uptime" has shifted from a binary "site up/down" to "degraded performance", which itself indicates improvements to uptime since we're both pickier and more precise.
Yes, I'm asking why they'd lock themselves into a contract around 5 9s of uptime, since the parent poster mentioned that they won't do so. Of course, AWS actually does do this in some cases, and they guarantee 99.99% for most things, so the line feels a bit arbitrary: roughly 5 minutes vs. an hour of allowed downtime per year.
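For reference, those budgets are easy to work out; a quick back-of-the-envelope Python sketch:

    # Yearly downtime budget for each availability level ("nines").
    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for nines in range(2, 6):
        budget = MINUTES_PER_YEAR * 10 ** -nines  # allowed downtime, min/year
        print(f"{1 - 10 ** -nines:.{nines - 2}%} -> {budget:7.2f} min/year")

Five 9s works out to about 5.26 minutes a year; four 9s to about 52.6.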
It's pretty damn good, and it's been on the receiving end of a real exodus of conscientious users; the QuitGPT movement alone hit 1.5 million participants, with Claude skyrocketing to #1 on the App Store virtually overnight. No surprise the servers are getting hammered.
The ironic thing about outages such as this one and GitHub's recent spate of outages is that, if those vendors' sales pitches are to be believed, the vendors could just ask their LLMs to program reliable replacements overnight (okay, maybe a weekend).
I know they tend to get much slower early evenings in the Western US... Not sure if this is everyone on the west coast going home and working on stuff, or the early people in the Asia region coming online.
I also use them per-token (and strongly prefer that due to a lack of lock-in).
However, from a game theory perspective, when there's a subscription, the model makers are incentivized to maximize problem solving in the minimum amount of tokens. With per-token pricing, the incentive is to maximize problem solving while increasing token usage.
I don't think this is quite right, because it's the same model underneath. The problem can manifest more through the tooling on top, but even there it's largely hard to pull off without people catching you.
I do agree that Big AI has misaligned incentives with users, generally speaking. This is why I go per-token with a custom agent stack.
I suspect the game-theoretic aspects come into play more with the quantizing. I have not (anecdotally) experienced this in my API-based, per-token usage. I.e., I'm getting what I pay for.
I saw a funny skit where, if the free Claude instance was down for you, you could just ask Rufus, Amazon's shopping AI assistant, your math/coding question phrased as a question about a product, and it would just answer, lol.
In my region, a certain small bank had an AI assistant that someone neglected to limit, so you could put whatever in there and not even phrase it as a question about a product.
The local grocer that isn't amazing, costs more, and actually isn't really that local, in the sense that none of the products sold are from local businesses/producers?
They seem to be a victim of their own success. Their response times are quite bad, and it's widely believed they are doing something to degrade service quality (quantizing?) in order to stretch resources. They just announced that they're cutting their usage limits down during peak hours as well.
They're at serious risk of losing their lead with this sort of performance.
> it's widely believed they are doing something to degrade service quality (quantizing?) in order to stretch resources
God, I wish this inane bullshit would just fucking die already.
Models are not "degrading". They're not being "secretly quantized". And no one is swapping out your 1.2T frontier behemoth for a cheap 120B toy and hoping you wouldn't notice!
It's just that humans are completely full of shit, and can't be trusted to measure LLM performance objectively!
Every time you use an LLM, you learn its capability profile better. You start using it more aggressively at what it's "good" at, until you find the limits and expose the flaws. You start paying attention to the more subtle issues you overlooked at first. Your honeymoon period wears off and you see that "the model got dumber". It didn't. You got better at pushing it to its limits, exposing the ways in which it was always dumb.
Now, will the likes of Anthropic just "API error: overloaded" you on any day of the week that ends in Y? Will they reduce your usage quotas and hope that you don't notice because they never gave you a number anyway? Oh, definitely. But that "they're making the models WORSE" bullshit lives in people's heads way more than in any reality.
It's possible, though - there was a bug where a model pool instance wasn't updated properly and served a very old model for several months; whoever hit this instance would receive a response from a previous version of the model.
While it's true that people are naturally predisposed to invent the "secret quantizing" conspiracy regardless of whether the actual conspiracy exists or not, I think there's more to the story.
I've seen Sonnet consistently start hallucinating on the exact same inputs for a couple hours, and then just go back to normal like nothing ever happened. It may just be a combination of hardware malfunction + session pinning. But at the end of the day the effects are indistinguishable from "secret quantizing".
Gemini CLI has been broken for the past 2-3 days, with no response from Google. Really embarrassing for a multi-trillion dollar company. At this point Codex is the only reliable CLI app, out of the big three.
Gemini CLI is absolutely terrible, nothing comparable to the browser access. I've started using the 'AI Pro' tier lately and I get 15-minute response times from Gemini 3 'Flash' on a regular basis.
You'll notice I specifically said "victims of their own success". Obviously these problems are induced by the fact that they have so many users. Blowing a lead due to inability to handle the demands of success is still a path to losing the lead.
This is not an outage, Claude just gets lazier on Fridays.
Sometimes Claude wants more lunch breaks, takes a half day and leaves the desk early just like any human would. (since AI boosters like comparing LLMs to humans all the time) /s
If you're concerned about humans anthropomorphizing AI models, you might want to steer well clear of Anthropic, as their entire positioning (starting with the product name and continuing with UX choices and model releases) is built to attract the kind of researchers who are prone to believe in sentient machines.
They are already going in the "Claude is alive" direction, and that line of communication is likely going full throttle in the near future.
I suspect the next big marketing gimmick is this supposed leak about capybara. The leak is probably intentional and meant to influence their expected IPO.
I think the big reveal is going to be that frontier models are no better than the open-source models you could feasibly run on retail hardware; they just have a highly complex harness behind the API, and that's where the magic is.
I had my agent set up a "team" of subagents assigned to different parts of a big new app (UX engineer, test lead, etc.). Apparently the senior SWE had reduced the scope, and my PM came to me trying to argue the side of the SWE that had reduced the scope for time-constraint reasons...
https://status.claude.com/incidents/b9802k1zb5l2
Not one of the usual ones that have service problems :)
Very few cases these days... feels like we're lucky to get 2 9s anymore.
Have you noticed any change in that trend in the past year or two, or is it continuing to get better?
It should be low risk to offer such guarantees then.
You just won't like the price.
Tired of all the people online with anxiety who project their own personal issues by spamming these kinds of doomer posts.
- Stalin probably
time to give your devops guy his job back.
I personally prefer per-token; it makes you more thoughtful about your setup and usage, instead of spray and pray.
You can also access the notable open-weight models through Vertex AI; you only need to change the model ID string.
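A minimal sketch of what I mean, assuming the google-genai Python SDK; the project, region, and model ID below are placeholders, so check the Model Garden listing for the exact ID:

    from google import genai

    # Same client whether you're calling Gemini or an open-weight model
    # served from Model Garden; only the model id string changes.
    client = genai.Client(vertexai=True, project="my-project", location="us-central1")

    response = client.models.generate_content(
        model="meta/llama-3.1-405b-instruct-maas",  # illustrative Model Garden id
        contents="Say hello in one short sentence.",
    )
    print(response.text)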
Any tips?
They are the best.
ChatGPT is Walmart.
Gemini is Kroger.
Claude is... idk, your local grocer that is always amazing and costs more?
GPT-4.5 + CoT would have been the best, but OpenAI got cheap.
https://www.reddit.com/r/GeminiCLI/comments/1s49pag/this_is_...
Only people who don't even look at code anymore need anything more than that.
Nobody goes there anymore, it's too crowded.
It went a little too deep into the role-playing bit.
Anthropic has had more than that.
Yikes.