According to Beating, SpaceXAI (formerly xAI) is launching Grok Build, a desktop coding tool that was unexpectedly exposed today on Grok's web interface. The tool competes directly with Claude Code and OpenAI Codex.
Grok Build will support macOS, Linux, and Windows. Unlike traditional chat-first interfaces, it emphasizes agent workflows for autonomous, multi-step development tasks. Early testers report that the tool is granted broad local permissions, including access to the Git working tree, local file management, and the ability to launch development servers. It also features an integrated browser and a dedicated planning mode for handling complex tasks. By default, the tool will use Grok 4.3 Early Access, its strongest model.