AI computing power is shifting gears: from "training battles" to "inference battles"
NVIDIA's latest moves reveal that the AI industry is undergoing a significant transformation. Over the past two years, the core of computing power competition has been "who can train larger models" — the more GPUs stacked, the better. But now that model capability has reached a certain stage, the real bottleneck is inference efficiency: how fast responses are, how much each call costs, and whether the system can run stably over the long term.
Beyond traditional GPUs, purpose-built inference chips such as Groq's LPU (Language Processing Unit) have entered the conversation, with the main goal of reducing latency and energy consumption. This in itself indicates that GPUs are not the optimal solution for every AI scenario.
More notably, OpenAI's choices are worth watching. Its large-scale procurement of dedicated inference capacity suggests that future AI cost pressures will come mainly from inference rather than training. The key to AI commercialization is not building bigger models, but making them affordable and sustainable to run.
Computing power is shifting from a "single general-purpose platform" to an era of "scenario-specific infrastructure."
Expert opinion:
The next watershed in AI investment will not be "who has the strongest computing power," but "who can reduce the unit inference cost." Efficiency is replacing scale as the new pricing anchor.
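To make "unit inference cost" concrete, here is a minimal back-of-the-envelope sketch. All figures below (hourly hardware cost, token throughput) are illustrative assumptions invented for this example, not published pricing from any vendor; the point is only the arithmetic: cost per token falls with throughput, so an accelerator that is pricier per hour can still win on the unit metric.

```python
# Sketch: unit inference cost as dollars per 1M generated tokens.
# All numbers are hypothetical assumptions for illustration only.

def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Unit inference cost = hardware cost per hour / tokens produced per hour."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Assumed general-purpose GPU: $4/hr, 500 tokens/sec
general = cost_per_million_tokens(4.0, 500)

# Assumed inference-optimized accelerator: costlier per hour ($6/hr),
# but much higher throughput (3000 tokens/sec)
specialized = cost_per_million_tokens(6.0, 3000)

print(f"general-purpose GPU:     ${general:.2f} per 1M tokens")
print(f"inference-optimized:     ${specialized:.2f} per 1M tokens")
```

Under these assumed numbers, the specialized chip delivers tokens at roughly a quarter of the unit cost despite the higher hourly rate — the efficiency-over-scale dynamic the article describes.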