ByteDance: Seedance 2.0 is still far from perfect and will continue to explore deep alignment between large models and human feedback

On February 12, ByteDance officially released its video creation model Seedance 2.0. In a post on its official Weibo account, ByteDance acknowledged that Seedance 2.0 is still far from perfect and that its generated results still contain many flaws. Going forward, the company said it will continue to explore deep alignment between large models and human feedback, aiming to provide more efficient, stable, and imaginative audio and video production tools for more creators. The model reportedly supports comprehensive multimodal referencing, allowing combined input of text, images, video, and audio. It also introduces new video editing capabilities, supporting targeted modification of specific segments, characters, actions, or plots, as well as video extension. The model is currently available on platforms such as Yimeng AI and Doubao. (Jiemian News)
