finally a benchmark that actually matters.
forget MMLU and math scores.. PinchBench tests which AI model is best at doing real work.
not answering trivia. actually doing things:
→ looking up info from multiple web sources
→ creating and scheduling meetings
→ organizing files on your computer
→ writing and managing emails
it tests models running as agents through OpenClaw.. meaning the AI has to use tools, chain actions, and complete tasks end to end.
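to make "chain actions end to end" concrete, here's a rough sketch of what an agent loop like that does under the hood. to be clear, this is purely illustrative: the tool names, the pick_action stub, and the loop shape are my assumptions, not OpenClaw's or PinchBench's actual code.

from dataclasses import dataclass


@dataclass
class Step:
    tool: str          # which tool the model chose
    args: dict         # arguments it supplied
    result: str = ""   # what the harness observed after running it


def web_search(query: str) -> str:
    return f"(stub) top results for {query!r}"


def schedule_meeting(title: str, when: str) -> str:
    return f"(stub) created event {title!r} at {when}"


TOOLS = {"web_search": web_search, "schedule_meeting": schedule_meeting}


def pick_action(task: str, history: list[Step]) -> dict:
    # stand-in for the model: a real harness would send the task plus the
    # transcript of prior tool calls and results, then parse the model's reply
    if not history:
        return {"tool": "web_search", "args": {"query": task}}
    if len(history) == 1:
        return {"tool": "schedule_meeting",
                "args": {"title": task, "when": "tomorrow 10:00"}}
    return {"tool": "finish", "args": {}}


def run_agent(task: str, max_steps: int = 8) -> list[Step]:
    history: list[Step] = []
    for _ in range(max_steps):
        action = pick_action(task, history)
        if action["tool"] == "finish":   # model decides the task is done
            break
        step = Step(action["tool"], action["args"])
        step.result = TOOLS[step.tool](**step.args)  # execute the tool call
        history.append(step)             # feed the observation back next turn
    return history


if __name__ == "__main__":
    for s in run_agent("book a sync about Q3 planning"):
        print(s.tool, "->", s.result)

the point is just that scoring well here means repeatedly picking the right tool, reading the result, and deciding the next step until the task is actually finished.. not producing one clever answer.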
the results are interesting:
> Gemini 3 Flash leads at 95.1%
> MiniMax M2.1 close behind at 93.6%
> Kimi K2.5 at 93.4%
> Claude Sonnet at 92.7%
> Gemini 3 Pro at 91.7%
> Claude Haiku at 90.8%
> Claude Opus 4.6 at 90.6%
> GPT-5 Nano at 85.8%
the spread between top and bottom is only about 9 points (95.1% vs 85.8%).. which means most frontier models are getting pretty good at agent tasks.
but the real takeaway? Gemini 3 Flash.. a lightweight model.. is outperforming every heavier model here on practical agent work. speed + tool use > raw intelligence.
this is the kind of benchmark that should decide which model you use daily.. not some academic test nobody relates to.