At the beginning of 2026, a thought-provoking scene emerged in the software engineering field. The new generation of AI programming tools, represented by Claude Opus 4.6, is pushing developer productivity to unprecedented heights. Internal Microsoft data shows that once engineers were free to choose their own tools, Claude Code quickly became dominant, winning by natural selection as the “path of least resistance.”
Meanwhile, discussions about “professional burnout” are intensifying within the developer community. Former Google and Amazon engineer Steve Yegge describes a phenomenon he calls “nap attacks”: after long stretches of AI-assisted programming, he suddenly falls asleep during the day without warning.
Today, more and more software engineers are openly sharing a common experience: work output has increased dramatically, but fatigue is accumulating at an even faster rate. Technology has significantly shortened task execution times, yet has not reduced human decision-making burdens—in fact, these burdens are increasing.
Yegge points out that earlier claims that “AI offers limited help with real work” became outdated with the deployment of Claude Code running Opus 4.5 and 4.6. This combination has greatly lowered the cost of going from problem definition to runnable code, enabling a skilled engineer to produce several times more output in the same amount of time.
Once productivity exceeds roughly twice its original level, a phenomenon he calls the “Vampire Effect” begins to manifest: technology is no longer merely a tool but starts to shape its user’s work rhythm and mental state in return.
Software engineer Xidante Caret detailed this process on his blog. In his article “AI Fatigue Is Real,” he writes that his code delivery in the last quarter hit a career peak, but his mental exhaustion was also at its maximum.
He describes a fundamental shift in his work pattern. Before using AI, he would focus deeply on a single problem all day, maintaining a coherent train of thought. Now he handles five or six different problem domains simultaneously. With AI’s help, each problem takes about an hour to resolve, but the frequent switching between them creates a new kind of cognitive load. “AI doesn’t get tired between problems,” he writes, “but I do.”
Caret describes his new role as a “quality inspector on an assembly line.” Pull requests flood in continuously, each requiring review, decision, and approval. The process never stops, but decision-making authority never shifts. He remains fixed in the judge’s seat, with cases delivered by AI and responsibility borne by humans.
A recent study provides empirical support for this phenomenon. Researchers tracked 200 employees at an American tech company and found that while AI use initially increased task completion speed significantly, it also triggered a chain reaction: faster delivery raised organizational expectations for delivery cycles, which made employees more dependent on AI and widened the scope of tasks they attempted, further increasing work density and cognitive load.
The researchers describe this mechanism as “workload creep.” It is not driven by directives but is a self-reinforcing cycle of efficiency gains and expectation adjustments.
Sammo Koroshets, who works in digital product design, described a similar situation on social media. He pointed out that countless demos of “generating ten UI options in one minute” circulate online. Repeatedly pushed at practitioners and managers, they create an implicit standard: since the tools can produce options this quickly, people are expected to deliver just as fast.
However, these demos rarely show the downstream costs of filtering, implementation, and cross-functional coordination, all of which remain entirely human responsibilities. Technology compresses production time but does not shorten decision time, and the new bottleneck is human attention and willpower.
Yegge offers a simplified analytical framework. Suppose an engineer, after mastering AI tools, produces ten times the output per unit of time. Who captures the ninefold surplus depends on how the engineer allocates their labor supply.
For example, in Scenario A, the engineer maintains their original working hours, delivering all the additional output to the employer. The employer gains nearly ten times the output at unchanged labor costs. The engineer’s income remains proportionally the same, but their workload and mental strain increase significantly. Yegge calls this “being drained.”
In Scenario B, the engineer drastically reduces working hours, completing the same amount of work in only 10% of the original time. The entire incremental value is captured by the individual, gaining more leisure time. But this state is hard to sustain in a competitive environment. If all team members adopt this strategy, overall organizational output will lag behind competitors, risking long-term survival.
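Yegge’s two scenarios can be sketched numerically. The specific figures below (a 10x multiplier, an 8-hour baseline day, one output unit per pre-AI hour) are illustrative assumptions, not numbers from the article:

```python
# Illustrative sketch of Yegge's two scenarios (all numbers are assumptions).
# Output scales with hours worked times an AI productivity multiplier.

MULTIPLIER = 10       # assumed AI productivity gain (10x, per the thought experiment)
BASELINE_HOURS = 8    # assumed pre-AI workday
BASELINE_OUTPUT = BASELINE_HOURS * 1  # pre-AI output: 1 unit per hour

def output(hours_worked: float) -> float:
    """Daily output delivered with AI assistance."""
    return hours_worked * MULTIPLIER

# Scenario A: same hours, all surplus goes to the employer.
a_output = output(BASELINE_HOURS)
employer_gain_a = a_output - BASELINE_OUTPUT   # the "ninefold" surplus: 9 x 8 units

# Scenario B: engineer works 10% of the hours, delivering the same output as before.
b_hours = BASELINE_HOURS / MULTIPLIER
b_output = output(b_hours)
leisure_gain_b = BASELINE_HOURS - b_hours      # hours reclaimed by the engineer

print(f"Scenario A: {a_output} units/day, employer gains {employer_gain_a} extra units")
print(f"Scenario B: {b_output} units/day, engineer gains {leisure_gain_b:.1f} free hours")
```

With a 10x multiplier, the surplus is ninefold by construction: in Scenario A the employer captures all 72 extra units; in Scenario B the engineer captures all 7.2 freed hours. Real arrangements fall somewhere on the line between these two corner cases.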
Yegge notes that the ideal lies somewhere between these extremes. But in current organizational structures, the power to adjust this dial is asymmetric: organizations tend to push it toward Scenario A, so individuals need to actively push back.
This framework turns the issue of technological efficiency into a distribution problem. AI does not change the fundamental fact that “value is created by labor,” but it alters the magnitude of value created per unit of labor. When this magnitude jumps, the existing distribution balance is inevitably disrupted.
Yegge recalls his experience at Amazon in 2001. His team faced intense delivery pressure with highly uncertain returns. In a discussion, he wrote a formula for his colleagues: $/hour. He explained that the numerator (annual fixed salary) is hard to change in the short term, but the denominator (actual working hours) has considerable flexibility. He advocated shifting focus from “how to earn more” to “how to work fewer hours.”
Twenty-five years later, Yegge believes this formula still applies in the AI era. The difference is that AI greatly amplifies the impact of changes in the denominator on the numerator, but individuals’ control over the denominator has not increased proportionally.
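The $/hour formula can be made concrete in a few lines. The salary and hours below are hypothetical round numbers chosen for illustration, not figures from Yegge’s account:

```python
# Illustrative: effective hourly rate = fixed annual salary / actual hours worked.
# All figures are hypothetical round numbers.

ANNUAL_SALARY = 200_000   # the numerator: hard to change in the short term
WEEKS_PER_YEAR = 50

def effective_rate(hours_per_week: float) -> float:
    """Dollars earned per hour actually worked (the $/hour formula)."""
    return ANNUAL_SALARY / (hours_per_week * WEEKS_PER_YEAR)

# The denominator is where the flexibility is: at a fixed salary, halving
# your hours doubles your effective rate, with or without AI.
for hours in (50, 40, 20):
    print(f"{hours} h/week -> ${effective_rate(hours):,.0f}/hour")
```

The point of the formula is leverage: the numerator moves slowly (annual reviews, promotions), while the denominator can move the day you decide to change it.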
Social media user Joseph Amason responded from another angle. He observed that most successful people in creative fields—writers, designers, researchers—typically manage no more than four effective hours of work a day. The rest of the time goes to recovering, wandering, and absorbing new input. This is not an efficiency problem but a physiological limit of cognitive activity.
If AI further segments “work” and “effective work,” then perhaps what we need to redefine is not how tools are used, but the length of the “workday.”
Yegge admits that he himself is part of the problem. With over forty years of engineering experience, a history of leading large teams, a fast reading speed, and ample time and resources for experimentation, he can spend dozens of continuous hours building a runnable system with Claude Code and releasing it publicly. His work circulates widely, and some managers treat it as “the standard engineers should reach.”
He writes, “Employers are likely to start looking at me—and at us, the outliers—and say: ‘Hey, all my employees can do that.’”
On social media, some early adopters openly share their AI usage intensity: some claim their organization pays thousands of dollars a month for a few accounts; others show themselves running dozens of chat sessions simultaneously. This content attracts attention from the tech community and shapes an implicit reference point at the management level. Yegge calls it an “unrealistic beauty standard.”
He admits he is not representative; his pace is hard for most to replicate, and even he is unsure if he can sustain it long-term. But when he speaks on stage or writes books, the message (at least on the receiving end) is simplified to “this is achievable.”
User Lih Ashaov raised a deeper issue. He believes the way humans interact with AI reflects a long-standing boundary problem in human relationships: many people lack the ability to recognize and express their own limits, and this deficiency carries over into human–machine interaction. Tools neither stop voluntarily nor sense user fatigue. As their capabilities expand, the ability to recognize one’s own limits becomes even scarcer.
Yegge makes a specific proposal: the effective workday in the AI era should be shortened to three or four hours. This is not a rigorously validated number but an experiential inference. His observation is that AI automates many execution tasks but leaves high-level cognitive activities like decision-making, judgment, and problem restructuring to humans. These activities consume far more attention and emotional resources and are difficult to parallelize or compress.
During a visit to a tech park, he saw a working environment he calls “dialed to the right setting”—an open space, plenty of natural light, social and rest areas scattered around, with employees freely switching between work and recovery. He is unsure whether this balance can be maintained after full AI integration.
But he is certain that many current organizational models—simply increasing output density without adjusting work hours—are unsustainable. He no longer sees the problem as “AI is a vampire,” but as “I need to better understand my limits.”
Yegge concludes by saying he is trying to dial down his scale. He has reduced public appearances, declined many meetings, and stopped chasing every new tech trend. He still writes, builds products, and exchanges ideas with peers. But he also closes his laptop in the afternoon and takes walks with his family. He doesn’t know how much he can turn the pointer back, but he is sure the direction is correct.
For the broader workforce, this issue has not yet entered the collective agenda. The dominant narrative still focuses on AI boosting productivity, and discussions about fatigue remain personal and fragmented. But increasing signals suggest these two trends are converging. Technology shortens task paths but not the workday. Tools share execution but not responsibility. Efficiency accelerates delivery but also consumption. As AI keeps telling us “it can go faster,” perhaps the more pressing question is: can we go slower?