Anthropic Identifies Three Product-Layer Changes Behind Claude Code Quality Decline, Not a Model Issue

Gate News, April 23 — Anthropic's engineering team confirmed that the Claude Code quality degradation reported by users over the past month stemmed from three independent product-layer changes, not from API or underlying model issues. The three problems were fixed on April 7, April 10, and April 20 respectively, and the current release is v2.1.116.

The first change occurred on March 4, when the team reduced the default reasoning effort level for Claude Code from “high” to “medium” to address occasional extreme latency spikes in Opus 4.6 under high reasoning intensity. After widespread user complaints about reduced performance, the team reverted the change on April 7. The current default is now “xhigh” for Opus 4.7 and “high” for other models.

The second issue was a bug introduced on March 26. The system was designed to clear old reasoning records once after a conversation had been inactive for more than one hour, in order to reduce session recovery costs. However, a flaw in the implementation caused the clearing to execute repeatedly on every subsequent turn rather than once, so the model progressively lost its prior reasoning context. This manifested as increasing forgetfulness, repeated operations, and abnormal tool invocations. The bug also caused cache misses on every request, accelerating users' quota consumption. Two unrelated internal experiments masked the reproduction conditions, extending the debugging process to over a week. After the fix landed on April 10, the team reviewed the problematic code using Opus 4.7 and found that Opus 4.7 could identify the bug while Opus 4.6 could not.
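The failure mode described above is a classic one-shot cleanup that is not idempotent-safe: a "stale" condition is detected but never reset after acting on it. A minimal sketch of how such a bug could arise (all names, structure, and the flag-based design here are illustrative assumptions, not Anthropic's actual code):

```python
STALE_SECONDS = 3600  # intended policy: clear reasoning after 1h of inactivity

class Session:
    """Hypothetical reconstruction of the reported bug pattern."""

    def __init__(self, fixed=False):
        self.reasoning = []       # accumulated reasoning records
        self.needs_clear = False  # set when the session resumes after idling
        self.fixed = fixed

    def resume(self, idle_seconds):
        # Intended behavior: mark old reasoning for a ONE-TIME clear
        # when the session was idle for over an hour.
        if idle_seconds > STALE_SECONDS:
            self.needs_clear = True

    def on_turn(self, new_reasoning):
        if self.needs_clear:
            self.reasoning.clear()        # drop prior reasoning records
            if self.fixed:
                self.needs_clear = False  # fix: clear exactly once
            # Bug: without the reset, the flag stays set, so every later
            # turn clears again and the model keeps losing recent context.
        self.reasoning.append(new_reasoning)

# Simulate a session resumed after two hours of inactivity.
buggy, fixed = Session(), Session(fixed=True)
for s in (buggy, fixed):
    s.resume(idle_seconds=7200)
    for step in ("plan", "edit", "verify"):
        s.on_turn(step)

print(buggy.reasoning)  # ['verify'] -- only the latest turn survives
print(fixed.reasoning)  # ['plan', 'edit', 'verify']
```

The buggy variant retains only the most recent turn's reasoning, matching the reported symptoms of progressive forgetfulness; it would also invalidate any prompt cache keyed on the cleared context, consistent with the cache misses on every request.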

The third change launched on April 16 alongside Opus 4.7. The team added instructions to the system prompt to reduce redundant output. Internal testing over several weeks showed no regression, but after launch the new instructions interacted with other prompts and degraded coding quality. Extended evaluation revealed a 3% performance drop in both Opus 4.6 and 4.7, leading to a rollback on April 20.

These three changes affected different user groups at different times, and their combined effect created widespread and inconsistent quality decline, complicating diagnosis. Anthropic stated it will now require more internal employees to use the same public build version as users, run full model evaluation suites for every system prompt modification, and implement staged rollout periods. As compensation, Anthropic has reset usage quotas for all subscription users.

