AI Algorithm Testing Launch: Vico 2026 Global Artificial Intelligence Challenge Enters Critical Stage
On January 12, 2026, the highly anticipated Vico AI Trading Hackathon officially kicked off the first round of algorithm screening. This technical competition, which brings together global experts in quantitative modeling, data science, and algorithm development, has transitioned from the early solution solicitation phase to the logic verification stage in a real data environment.
As the event's popularity continues to rise, the organizing committee announced that, with the support of multiple technical partners, this year's ecosystem incentive pool has been increased to the equivalent of 1.88 million USD. The initiative aims to encourage more developers to deepen their work in AI-driven automated decision-making and risk control, and to establish the event as a benchmark among annual AI technical competitions.
As of January 13, backend data shows that nearly 800 technical specialists have registered. The "Night Owl" technical channel remains open, giving additional algorithm teams worldwide the opportunity to join this round, where AI logic is tested against real market volatility.
From Theoretical Modeling to Practical Verification
According to the rules, this pre-selection round abandons single-pass code review in favor of testing all participating models in a unified, standardized data environment.
The technical committee has set multi-dimensional evaluation criteria for this round, focusing on the algorithm's performance in complex environments:
* Signal Recognition Accuracy: The model's ability to capture trend data.
* Logic Drawdown Management: The algorithm's self-correction and risk hedging during abnormal fluctuations.
* System Environment Adaptability: The model's response efficiency to different frequency data streams.
* Extreme Scenario Response: Millisecond-level reaction speed of automated risk control logic.
From this point on, all shortlisted algorithms will face real-data testing: the focus is not the complexity of the model but its robustness and stability amid real market fluctuations.
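As an illustration of one of the criteria above, "drawdown management" is conventionally quantified as maximum drawdown: the largest peak-to-trough decline of an equity curve, relative to the running peak. The sketch below is a minimal, generic implementation of that standard metric; the function name and example values are illustrative assumptions, not the competition's actual scoring code.

```python
def max_drawdown(equity):
    """Return the maximum peak-to-trough decline as a fraction of the peak.

    `equity` is a sequence of portfolio values over time; a result of 0.25
    means the curve fell at most 25% below its running high.
    """
    peak = float("-inf")
    worst = 0.0
    for value in equity:
        peak = max(peak, value)  # track the running high-water mark
        worst = max(worst, (peak - value) / peak)  # deepest dip so far
    return worst

# Example: the curve rises to 120, falls to 90 (a 25% drop), then recovers.
print(max_drawdown([100, 120, 110, 90, 130]))  # 0.25
```

A monotonically rising curve scores 0.0; in a ranking context, lower values indicate better risk control during abnormal fluctuations.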
1.88 Million USD in Resources: Discovering Top Algorithm Teams
With participation expanding, the $1,880,000 resource package will be allocated to teams that demonstrate excellence in model innovation and system stability. The competition will run multiple rounds of gradient selection to progressively identify high-quality models with long-term research value. The final shortlisted teams will present deeply iterated versions of their AI models in the grand finale, competing for honors in applied artificial intelligence.
Vico CSO Ethan: Technical Feedback Is the Core of System Evolution
As a key promoter of the event, Vico CSO Ethan said at the launch ceremony that participation has far exceeded expectations:
"Nearly 800 developers joining us has shown us the enormous potential of AI empowering financial technology. This is not just a competition but a high-level exchange of algorithm logic."
Ethan emphasized that this "feedback-driven development" model is crucial: feedback from participants during testing, whether on system architecture, matching logic, or data-transmission efficiency, will directly drive platform optimization. This high-intensity stress testing helps validate the security and reliability of the underlying AI infrastructure.
Promoting AI Applications Toward Openness and Science
Since the event's inception, the technical teams involved have been committed to long-term investment in system security and quantitative infrastructure. As algorithmic decision-making grows increasingly important in the digital ecosystem, the organizers hope this open hackathon will bring AI modeling capabilities onto an open, quantifiable, and comparable competitive stage.
The contest of algorithmic intelligence has begun: the 1.88 million USD resource commitment and the participation of nearly 800 technologists are together shaping a global showcase of AI technology.