Answer a question: If AI makes you five times more efficient, do you want to cut costs by 80%, or do you want to do five times as much work?

ChainNewsAbmedia

When AI amplifies a team’s productivity fivefold, you can cut headcount by 80% and keep output the same, or you can keep the headcount and do five times the work. That choice is playing out in boardrooms across the globe right now, and there is no standard answer.

In July 2025, when Jensen Huang was asked in a CNN interview whether AI would cause white-collar job losses, he gave an extremely direct answer: if the world doesn’t come up with new creativity, the productivity gains brought by AI will ultimately turn only into unemployment. The issue isn’t AI; it’s whether decision-makers have imagination.

And history has long proven that improvements in efficiency never reduce demand. The 19th-century “Jevons Paradox” states that when technology boosts efficiency and lowers costs, demand doesn’t decrease—it grows. This rule repeats itself in every technological revolution.

Jevons Paradox: Efficiency gains don’t make demand smaller—they increase demand

Intuitively, efficiency gains look like they should reduce demand. Take Google’s TurboQuant algorithm: it compresses the memory footprint of large language models by at least 6x and speeds up inference by as much as 8x without sacrificing model accuracy. The market quickly read the technology as “demand-side disruption,” but history has never worked that way.
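To make the memory side of that claim concrete, here is a minimal back-of-the-envelope sketch in Python. It is illustrative only and does not reproduce TurboQuant itself; the parameter count and bit widths are assumptions chosen for the example:

```python
# Illustrative sketch of why weight quantization shrinks a model's memory
# footprint. This is NOT TurboQuant's actual method; the parameter count and
# bit widths below are assumptions for illustration.

def weight_memory_gb(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory needed to store model weights, in gigabytes."""
    return num_params * bits_per_weight / 8 / 1e9

params = 70e9  # hypothetical 70-billion-parameter model

fp16_gb = weight_memory_gb(params, 16)  # half-precision baseline
int4_gb = weight_memory_gb(params, 4)   # aggressively quantized weights

print(f"fp16: {fp16_gb:.0f} GB, int4: {int4_gb:.0f} GB, "
      f"{fp16_gb / int4_gb:.0f}x smaller")
# The Jevons-style punchline: a smaller per-model footprint rarely shrinks
# total memory demand; it lets each accelerator serve more models, longer
# contexts, and more concurrent users, so usage tends to expand instead.
```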

(Google’s new technology frightens the market—AI memory demand is down sixfold! SK Hynix and Micron slash prices together)

On the a16z Podcast, Box co-founder Aaron Levie pointed out that the biggest misjudgment in the market today is trying to understand AI the way we understood the old world: “Now the biggest issue is that everyone is trying to calculate an economic model, but they’re underestimating the scale of the opportunity by at least one order of magnitude.”

This kind of mistake has happened many times. In the PC era, people thought computing power was a finite market; in the cloud era, people thought it was just a matter of moving existing servers into someone else’s data centers. What actually happened is that no one anticipated people would end up using a thousand times as many resources.

This is the modern version of Jevons Paradox: when costs fall, demand doesn’t shrink—it explodes.

The Excel case: low-level execution gets compressed, high-level decisions get amplified

AI is the same. When models become cheaper and faster, the market’s first instinct is to assume demand is shrinking, but what really happens is that use cases explode. And that explosion will directly change how humans work.

Technological revolutions never directly replace people; they move people to higher levels of abstraction. Levie illustrated the process with spreadsheets: an MBA who had just started at a bank did not operate the spreadsheets herself; an entire group of interns did that work for her. A few years later, she and her peers had all become people who operate spreadsheets themselves: the layer of work below them disappeared, and the whole level of abstraction shifted upward.

AI is replicating this process. Low-level execution gets compressed, while high-level decision-making and systems integration get amplified.

If there’s no creativity, the productivity gains from AI will only turn into unemployment

This change is no longer theoretical. Levie mentioned a case in which a single marketer at Anthropic, using Claude Code, automated work that previously required a five- to ten-person team.

But the key in this case is capability. Levie said, “You have to be a systems thinker in order to do this.” AI didn’t make everyone stronger—it gave enormous leverage to those who understand how to break down systems. The work itself didn’t disappear; it was redefined.

This also echoes the response Jensen Huang gave in that CNN interview when asked whether AI would cause white-collar unemployment: everyone blames AI for waves of layoffs, but the tools only multiply productivity; if decision-makers lack the creativity to turn that into more output, that is hardly the tool’s fault.

If the world doesn’t come up with new creativity, the productivity gains brought by AI will ultimately turn only into unemployment. The issue isn’t AI—it’s whether decision-makers have imagination.

Aaron Levie: a company could one day have a thousand times as many agents as employees

When this model scales to the enterprise level, organizational structure will change as well.

On the podcast, Levie put forward a key prediction: in the future, the number of agents a company runs could be 100 to 1,000 times its number of employees. And if your agents outnumber your people by 100 to 1,000 times, your software has to be built for agents.

That means the source of competitive advantage is shifting: “Your company’s performance will depend on how effectively your agents can acquire information and complete tasks.” The problems of the software industry are being redefined along with it. Whether APIs are open, how permissions and identities are managed, how data is accessed: these all become core capabilities. In this architecture, employees are no longer the only unit of production; agents become the primary executors, while humans shift toward design and coordination.
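As a rough illustration of what “software built for agents” means at the identity-and-permissions layer, here is a minimal sketch in Python. The types and scope names are hypothetical and are not taken from any particular product:

```python
# Hypothetical sketch of agent identity and scoped permissions; the field and
# scope names are invented for illustration, not taken from a real product.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str  # the human accountable for this agent
    scopes: frozenset = field(default_factory=frozenset)  # e.g. {"crm:read"}

def can_call(agent: AgentIdentity, required_scope: str) -> bool:
    """Check whether an agent's identity carries the scope an API requires."""
    return required_scope in agent.scopes

# With hundreds of agents per employee, you provision many narrow identities
# instead of one broad one, so a misbehaving agent has a small blast radius.
reporter = AgentIdentity("agent-042", owner="alice", scopes=frozenset({"crm:read"}))
assert can_call(reporter, "crm:read")
assert not can_call(reporter, "crm:write")
```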

From Levie’s perspective, Paperclip, which was covered here earlier, offers a fairly forward-looking picture of how AI-driven work could be organized.

If OpenClaw is an AI employee, then Paperclip is the management system for the entire company. Users can set company goals, build an org structure, recruit different types of AI agents (such as OpenClaw, Cursor, Codex), and have them divide responsibilities and collaborate like company teams. In this system, the role of humans is closer to the board of directors: they only need to set strategy, approve major decisions, and monitor the budget, while the rest is automatically done by the agents.
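As a hedged illustration of the kind of setup described above, here is a sketch of a “company as configuration” in Python. The structure and field names are hypothetical and are not Paperclip’s actual schema or API:

```python
# Hypothetical "company as configuration" sketch; field names and structure
# are invented for illustration and are not Paperclip's real schema or API.
company = {
    "goal": "Ship a weekly market-research newsletter",
    "budget_usd_per_month": 500,
    "org": [
        {"role": "research", "agent": "OpenClaw", "reports_to": "editor"},
        {"role": "coding",   "agent": "Codex",    "reports_to": "editor"},
        {"role": "editor",   "agent": "Cursor",   "reports_to": "board"},
    ],
    # The human acts like a board of directors: set strategy, approve major
    # decisions, watch the budget; day-to-day execution is delegated to agents.
    "human_approval_required_for": ["spend_over_100_usd", "external_publishing"],
}

for seat in company["org"]:
    print(f'{seat["agent"]:<8} handles {seat["role"]}, reports to {seat["reports_to"]}')
```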

(What is a one-person company, anyway? The viral open-source AI project Paperclip helps you build a “zero-headcount company”)

You can’t vibe code SAP

But this transformation won’t happen overnight. Levie also clearly warns, “The diffusion of AI capabilities will be slower than Silicon Valley imagines.” The reason is that companies aren’t starting from scratch—huge amounts of knowledge are distributed across processes, systems, and organizations, rather than being just in a data layer. He’s even more direct: you can’t build SAP just by vibe coding.

A more realistic problem is that most people can’t even clearly describe their own workflows, let alone turn them into systems that agents can execute. That’s also why, to build a complete agent system today, you still need a high level of technical capability. However, this threshold is dropping quickly, too.

Back to the original question. Historically, with every technological revolution, some companies choose to reduce costs, while others choose to expand capabilities. The former optimizes efficiency; the latter creates markets. In the end, it’s usually the latter that defines the era.

AI is the same. The question is never whether it will replace people—it’s whether you’ll use it to do more.

The article “Answer a question: If AI makes you five times more efficient, do you want to cut costs by 80%, or do you want to do five times as much work?” first appeared on Chain News ABMedia.

