Answer a question: If AI makes you five times more efficient, do you want to cut costs by 80%, or do you want to do five times as much work?

ChainNewsAbmedia

When AI amplifies a team’s productivity fivefold, you can cut headcount by 80% while maintaining the same output, or you can keep the headcount and do five times the work. This choice is being made right now in boardrooms across the globe, and there is no single right answer.
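The arithmetic behind the two options is worth making explicit. A minimal sketch: only the 5x multiplier and the 80% figure come from the article; the headcount of 100 is a hypothetical number for illustration.

```python
# The trade-off when a tool multiplies per-person productivity.
PRODUCTIVITY_MULTIPLIER = 5  # AI makes each person 5x as productive

# Option A: hold output constant and shrink the team.
original_headcount = 100                                          # hypothetical
needed_headcount = original_headcount / PRODUCTIVITY_MULTIPLIER   # 20 people
headcount_cut = 1 - needed_headcount / original_headcount         # 0.80, i.e. an 80% cut

# Option B: hold headcount constant and multiply output.
original_output = 1.0
new_output = original_output * PRODUCTIVITY_MULTIPLIER            # 5x the work

print(f"Option A: cut headcount by {headcount_cut:.0%}")
print(f"Option B: produce {new_output:.0f}x the output")
```

The same 5x gain yields either result; the difference is purely a management decision, which is the article’s point.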

In July 2025, when NVIDIA CEO Jensen Huang was asked in a CNN interview whether AI would cause white-collar job losses, he gave an extremely direct response: if the world doesn’t come up with new creativity, the productivity gains brought by AI will ultimately turn only into unemployment. The issue isn’t AI; it’s whether decision-makers have imagination.

And history has long proven that improvements in efficiency never reduce demand. The 19th-century “Jevons Paradox” states that when technology boosts efficiency and lowers costs, demand doesn’t decrease—it grows. This rule repeats itself in every technological revolution.

Jevons Paradox: Efficiency gains don’t make demand smaller—they increase demand

Intuitively, efficiency gains seem like they should reduce demand. Take Google’s rollout of the TurboQuant algorithm, which compresses the memory footprint of large language models by at least 6x while boosting inference speed by up to 8x without sacrificing model accuracy. The market quickly read this technology as “demand-side disruption,” but history has never worked that way.

(Google’s new technology frightens the market—AI memory demand is down sixfold! SK Hynix and Micron slash prices together)

On the a16z Podcast, Box co-founder Aaron Levie pointed out that the market’s biggest misjudgment today is trying to understand AI the way we did the old world: “The biggest issue now is that everyone is trying to calculate an economic model, but they’re underestimating the scale of the opportunity by at least one order of magnitude.”

This kind of mistake has happened many times. In the PC era, people thought computing power was a finite market; in the cloud era, people thought it was just a matter of moving existing servers into someone else’s data centers. What actually happened was that no one realized resource consumption would grow a thousandfold.

This is the modern version of Jevons Paradox: when costs fall, demand doesn’t shrink—it explodes.
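In economic terms, whether falling costs shrink or grow total spending hinges on demand elasticity: when demand is elastic (elasticity above 1), a price drop increases total spend. A toy constant-elasticity model, with entirely hypothetical numbers, illustrates the mechanism the article describes:

```python
# Toy Jevons model: constant-elasticity demand, demand = k * price**(-e).
# All numbers here are illustrative assumptions, not figures from the article.
def total_spend(price: float, elasticity: float, k: float = 100.0) -> float:
    demand = k * price ** (-elasticity)
    return price * demand

# Suppose an efficiency gain halves the unit price of compute.
for e in (0.5, 1.5):  # inelastic vs elastic demand
    before = total_spend(1.0, e)
    after = total_spend(0.5, e)
    print(f"elasticity {e}: total spend goes from {before:.1f} to {after:.1f}")
```

With inelastic demand, cheaper compute means a smaller market; with elastic demand, the Jevons case, total spend grows even as unit cost falls.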

The Excel case: low-level execution gets compressed, high-level decisions get amplified

AI is the same. When models become cheaper and faster, the market’s first instinct will be that demand is shrinking—but what really happens is that use cases explode. And this explosion will directly change how humans work.

Technological revolutions never directly replace people; they move people to higher levels of abstraction. Levie illustrated the process with spreadsheets: an MBA who had just started at a bank didn’t use spreadsheets herself at first, so she needed an entire group of interns to do that work. A few years later, she and her peers could all operate spreadsheets on their own; the layer of work that used to exist disappeared, and the whole abstraction layer shifted upward.

AI is replicating this process. Low-level execution gets compressed, while high-level decision-making and systems integration get amplified.

If there’s no creativity, the productivity gains from AI will only turn into unemployment

This change is no longer theoretical. Levie mentioned a case: a marketer at Anthropic used Claude Code to automate work that previously required a team of five to ten people.

But the key in this case is capability. Levie said, “You have to be a systems thinker in order to do this.” AI didn’t make everyone stronger—it gave enormous leverage to those who understand how to break down systems. The work itself didn’t disappear; it was redefined.

This also echoes the response Jensen Huang gave last year when asked whether AI would cause white-collar unemployment. Everyone says AI leads to a wave of layoffs, but a tool only multiplies productivity; if decision-makers lack the creativity to turn that multiplier into increased output, the fault hardly lies with the tool.

If the world doesn’t come up with new creativity, the productivity gains brought by AI will ultimately turn only into unemployment. The issue isn’t AI—it’s whether decision-makers have imagination.

Aaron Levie: In the future, a company’s number of agents could be a thousand times that of employees

When this model scales to the enterprise level, organizational structure will change as well.

In the Podcast, Levie put forward a key prediction: in the future, the number of a company’s agents could be 100 to 1000 times the number of employees. And if your agents outnumber people by 100 to 1000 times, your software has to be built for agents.

That means the source of competitive advantage is shifting: a company’s performance will depend on how effectively its agents can acquire information and complete tasks. The problems of the software industry are being redefined accordingly. Whether APIs are open, how permissions and identities are managed, how data is accessed: these all become core capabilities. In this architecture, employees are no longer the only unit of production; agents become the primary executors, while humans shift toward design and coordination.

From Levie’s perspective, Paperclip, which was covered earlier, could be a fairly forward-looking AI work scenario.

If OpenClaw is an AI employee, then Paperclip is the management system for the entire company. Users can set company goals, build an org structure, recruit different types of AI agents (such as OpenClaw, Cursor, Codex), and have them divide responsibilities and collaborate like company teams. In this system, the role of humans is closer to the board of directors: they only need to set strategy, approve major decisions, and monitor the budget, while the rest is automatically done by the agents.

(What is a one-person company, anyway? The viral open-source AI project Paperclip helps you build a “zero-headcount company”)

You can’t vibe code SAP

But this transformation won’t happen overnight. Levie warns clearly, “The diffusion of AI capabilities will be slower than Silicon Valley imagines.” The reason is that companies aren’t starting from scratch: huge amounts of knowledge are distributed across processes, systems, and organizations rather than sitting in a single data layer. He is even more direct: you can’t build SAP just by vibe coding.

A more realistic problem is that most people can’t even clearly describe their own workflows, let alone turn them into systems that agents can execute. That’s also why, to build a complete agent system today, you still need a high level of technical capability. However, this threshold is dropping quickly, too.

Back to the original question. Historically, with every technological revolution, some companies choose to reduce costs, while others choose to expand capabilities. The former optimizes efficiency; the latter creates markets. In the end, it’s usually the latter that defines the era.

AI is the same. The question is never whether it will replace people—it’s whether you’ll use it to do more.

This article answers a question: If AI makes you 5x more efficient, will you cut costs by 80%, or do five times the work? It first appeared on Chain News ABMedia.

