Bittensor’s Decentralized Lie Exposed, Covenant AI Announces the Entire Team Has Left


Covenant AI exits Bittensor

Decentralized AI training team Covenant AI announced on April 10 that it is exiting the Bittensor network, specifically calling out the network's key figure, Jacob Steeves (known in the community as Const), and accusing him of betraying its decentralization promises. Covenant AI founder Sam Dare said: "The entire core commitment of Bittensor—that no single entity can control it—is a lie."

Accusation List: Five Suppression Actions by Const Against Covenant AI

In its statement, Covenant AI detailed a series of specific actions that Const took against its team, forming the direct basis for its decision to withdraw:

The main suppression actions Sam Dare accuses Const of:

Pausing token emissions: Const unilaterally paused the token emissions for Covenant AI’s subnet under its control, directly cutting off its source of economic incentives within the Bittensor ecosystem

Revoking community administration permissions: Const stripped Covenant AI of control over its own community channels, effectively taking control of its external communication channels

Decommissioning subnet infrastructure: Const unilaterally decommissioned Covenant AI's subnet infrastructure, rendering its technical deployments on the Bittensor network inoperable

Applying pressure through large-scale token selloffs: During the period of conflict between the two sides, Const applied economic pressure on Covenant AI through large-scale, high-visibility token selloffs

Bypassing the consensus mechanism: None of the above actions went through the network's formal governance consensus process, demonstrating effective individual control over the multisig mechanism

Bittensor's "Decentralization Theater": The truth behind the three-person multisig

Covenant AI's core accusations point to a fundamental gap between Bittensor's claimed governance mechanism and its real-world operation. Bittensor publicly markets a three-person multisig governance framework as an institutional safeguard for decentralization. However, Covenant AI states outright that Const exerts effective control over this multisig mechanism, enabling unilateral changes to be pushed through at any time while bypassing consensus, and that the other multisig participants are merely "a shield to carry legal responsibility."

If this allegation is true, it would mean that while Bittensor’s governance architecture is decentralized at the technical design level, it is still led by a single individual at the operational level—forming a kind of “decentralization theater,” where decentralization exists in documents and whitepapers, but centralization exists in real decision-making.
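The gap between design and operation can be made concrete with a minimal sketch of how an m-of-n multisig threshold is supposed to work. This is purely illustrative and is not Bittensor's actual on-chain implementation; the function and key names are hypothetical.

```python
# Illustrative sketch of a generic m-of-n multisig approval check.
# Not Bittensor's real implementation; all names are hypothetical.

def is_approved(signers: set[str], approvals: set[str], threshold: int) -> bool:
    """A change passes only if at least `threshold` distinct
    authorized signers have approved it."""
    valid = approvals & signers  # discard signatures from non-members
    return len(valid) >= threshold

signers = {"key_a", "key_b", "key_c"}

# On paper, a 3-of-3 scheme requires every key holder to sign:
assert is_approved(signers, {"key_a", "key_b", "key_c"}, threshold=3)
assert not is_approved(signers, {"key_a"}, threshold=3)
```

The point of the allegation is that the threshold is only a safeguard if the keys are held independently: if one person effectively controls all the keys, the check above always passes and provides no real constraint.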

Covenant AI’s confidence: This exit is not a failure—it’s a deliberate choice

The credibility of this withdrawal statement rests on Covenant AI's actual technical achievements. Covenant-72B is, to date, the largest decentralized LLM pretraining project, with 72 billion parameters and participation from more than 70 independent contributors. It received public recognition from NVIDIA's CEO and was cited by Anthropic's co-founder, giving it a reputation in the AI industry that extends beyond the Bittensor ecosystem itself.

Covenant AI announced that the team, research outcomes, and models will all be taken along, and also previewed that new project announcements will be released soon, showing that this exit is more like an intentional ecosystem migration rather than a forced departure.

Frequently Asked Questions

What is the core reason Covenant AI is exiting Bittensor?

According to the public statement by Covenant AI founder Sam Dare, the core reason for the exit is that Bittensor’s core figure Const (Jacob Steeves) took a series of suppression actions against its team, including pausing token emissions, revoking community administration permissions, decommissioning infrastructure, and applying economic pressure through token selloffs. Moreover, all actions were carried out unilaterally by bypassing formal governance consensus.

What problems exist with Bittensor's three-person multisig governance?

Covenant AI points out that in real-world operation, Bittensor's claimed three-person multisig governance framework is under Const's effective control, leaving the other multisig participants without real checks and balances: they merely bear legal responsibility and are unable to prevent Const from unilaterally pushing through changes. Covenant AI describes this outright as "decentralization theater."

What is Covenant AI’s Covenant-72B project?

Covenant-72B is Covenant AI’s decentralized LLM pretraining project completed within the Bittensor ecosystem. It has 72 billion parameters, involves over 70 independent contributors, and is the largest of its kind to date. The project received public recognition from NVIDIA’s CEO and was cited by Anthropic’s co-founder, giving Covenant AI substantial industry influence even at the time of its exit.

Disclaimer: The information on this page may come from third parties and does not represent the views or opinions of Gate. The content displayed on this page is for reference only and does not constitute any financial, investment, or legal advice. Gate does not guarantee the accuracy or completeness of the information and shall not be liable for any losses arising from the use of this information. Virtual asset investments carry high risks and are subject to significant price volatility. You may lose all of your invested principal. Please fully understand the relevant risks and make prudent decisions based on your own financial situation and risk tolerance. For details, please refer to Disclaimer.
