According to Jinshi, Ant Bailing today released Ring-2.6-1T, its flagship trillion-parameter reasoning model designed for complex real-world task execution. The model features an adjustable Reasoning Effort mechanism with two inference-intensity levels, high and xhigh. On PinchBench, a benchmark of real-world task execution, Ring-2.6-1T scored 87.6, outperforming GPT-5.4x High, Gemini-3.1-Pro high, and Claude-Opus-4.7x high. On the advanced reasoning benchmark ARC-agi-V2, it scored 77.78, matching the performance of Gemini-3.1-Pro high and Claude-Opus-4.7x high.
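For readers wondering how an adjustable Reasoning Effort level is typically exposed, here is a minimal sketch of what a request to such a model might look like, assuming an OpenAI-style chat-completions payload with a `reasoning_effort` field. The model name, field name, and endpoint convention are assumptions based on the announcement, not documented Bailing API details.

```python
import json

def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a hypothetical chat-completion payload with a reasoning-effort knob.

    The announcement mentions two intensity levels, 'high' and 'xhigh';
    everything else here (field names, model id) is illustrative only.
    """
    if effort not in ("high", "xhigh"):
        raise ValueError("supported reasoning effort levels: 'high', 'xhigh'")
    return {
        "model": "Ring-2.6-1T",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

# Example: request the heavier 'xhigh' setting for a complex multi-step task.
payload = build_request("Plan a multi-step data migration.", effort="xhigh")
print(json.dumps(payload, indent=2))
```

In practice, a higher effort level trades latency and token cost for more internal deliberation, so the xhigh setting would be reserved for the complex task-execution workloads the benchmarks above measure.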