Kuaishou executives interpret Q4 financial report: confident in achieving over 100% year-over-year growth in KeLing AI revenue this year

Kuaishou Technology released its Q4 and full-year 2025 results. The report shows that in Q4 2025, Kuaishou’s total revenue increased 11.8% year-over-year to 39.6 billion RMB, with adjusted net profit reaching 5.5 billion RMB. Core business revenue, comprising online marketing services and other services, grew 17.1% year-over-year.

See details: Kuaishou’s 2025 revenue of 142.8 billion RMB, adjusted net profit of 20.6 billion RMB

After the earnings release, Kuaishou Technology co-founder, Executive Director, and CEO Cheng Yixiao, CFO Jin Bing, and other executives joined the subsequent earnings call to walk through key points of the report and answer analyst questions.

Below is the main content of the analyst Q&A session during this conference call:

Goldman Sachs analyst Lincoln Kong: First, congratulations on the company’s steady performance in Q4. My question is about KeLing AI. Recently, we’ve observed that updates to large video generation models are accelerating across the industry, including the recent launch of Seedance 2.0. Could management share what these changes might mean for the industry and for KeLing AI’s future? And what are the company’s plans and strategies this year for KeLing AI’s technology, products, and monetization?

Cheng Yixiao: As we previously introduced, large video generation models are complex, with open input and output formats. The choice of technology and product architecture is highly flexible, allowing significant room for innovation. We still believe that video generation technology and products are far from mature, and many participants can work together to accelerate industry progress and better meet user needs.

Recent updates such as Seedance 2.0 and other companies’ large video models have had a positive impact on the industry. They lower the barrier for ordinary users to create content and increase the penetration of AI video generation across more application scenarios, growing the overall market. Seedance 2.0’s support for multimodal input aligns with the KeLing O1 model we launched in December last year, confirming our forward-looking approach of iterating models around multimodal data. KeLing AI’s models and product capabilities remain at the global forefront. On AI video generation model leaderboards, KeLing AI’s benchmark scores are leading, and in character consistency, controllability, physical realism, and stability in complex scenes, KeLing 3.0 performs even better. This further strengthens KeLing AI’s differentiated advantages for professional creators and enterprise clients.

Recently, KeLing AI was deeply involved in the production of many virtual scenes and special effects in the hit drama “Taiping Year” produced by Huace Film & TV. This not only ensured high-quality output of commercial-grade film and TV content but also significantly reduced production costs. This collaboration validated KeLing AI’s commercial value in top-tier film and TV production and confirmed that our focus on the film and TV scene is the right direction.

From the revenue trend perspective, KeLing AI has maintained very good month-over-month growth this year. Its ARR (Annual Recurring Revenue) exceeded $300 million in January. We are very confident that KeLing AI’s revenue will grow by over 100% year-over-year this year.

Regarding model iteration for KeLing: over the past year, we have consistently evolved around the themes of unification, native design, and multimodality. When we released KeLing 2.0, we first proposed the concept of multimodal visual language (MVL), which combines multimodal information to express creative intent, complementing the limitations of pure text interaction. In December 2025, with the release of the KeLing O1 model, we deepened the MVL interaction architecture to accept text, images, videos, and other multimodal instructions, and simultaneously launched the KeLing 2.6 model, which supports multimodal output combining audio and visuals. In February this year, based on the “All-in-One” concept, we launched the KeLing 3.0 series, achieving multimodal input and output within a single model.

Next, we will consider expanding more modalities in our models to further enhance video generation possibilities, such as actions and expressions, and will also focus on solving issues related to complex scene setup and consistency. Meanwhile, at the product level, we will steadily advance agent capabilities to enable autonomous full-process creation, including automatic storyboarding, character and scene consistency control, audio-visual synchronization, lighting and camera movement design, and other functions.

In summary, KeLing AI will continue to uphold the vision of enabling everyone to tell good stories with AI, constantly refine our models and product capabilities, and maintain our global leading position in both technology and commercial monetization.

