On September 5, the Tencent Hunyuan Game Visual Generation Platform officially released version 2.0, adding capabilities such as game image-to-video generation, custom model training, and one-click character refinement. It also substantially upgraded its 2D game image generation model, with both the image-to-video and text-to-image models reaching industry state-of-the-art (SOTA) levels for game scenarios. The upgrade targets long-standing pain points in game art design and promotion, including dynamic content generation, style customization, and detail optimization, helping game artists work more efficiently.

The Hunyuan Game platform offers a simple interface and a user-friendly experience. Alongside this capability upgrade, the platform announced that it is now open to all users: after logging in at https://hunyuan.tencent.com/game/, users can start using it right away, and the game-industry entry can also be found in the creator community on the Tencent Hunyuan official website.

The newly introduced game AI animation/CG capability is built on Tencent Hunyuan's image-to-video technology and turns static images into animation, including 360-degree turnarounds of game characters. Users upload any game image and enter a motion description, and a high-quality video is generated on the spot, supporting character actions, scene effects, and "rotate anything." Typical uses include game CG previews, character three-view (turnaround) concept art, and skill-effect previews, replacing the traditional frame-by-frame drawing process.
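The Hunyuan platform itself is operated through its web interface, but for readers curious what image-to-video generation looks like programmatically, the sketch below uses the open-source Stable Video Diffusion pipeline from the diffusers library purely as a stand-in. It is not the Hunyuan model and, unlike the platform, it conditions only on the input image rather than a text description; the file names are placeholders.

```python
# Illustrative only: an open-source image-to-video pipeline (Stable Video Diffusion),
# not the Hunyuan Game platform's model. File names are placeholders.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt", torch_dtype=torch.float16
)
pipe.to("cuda")

# A static character or scene image is the only conditioning input here.
image = load_image("character_concept.png").resize((1024, 576))
frames = pipe(image, decode_chunk_size=8).frames[0]

export_to_video(frames, "character_preview.mp4", fps=7)
```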

Custom model training sharply lowers the barrier to fine-tuning image generation models: individual users can fine-tune their own LoRA models from a small set of images, solving the problem of style consistency within a game project and making the feature especially suitable for independent studios building IP-specific art assets. The Hunyuan Game official website provides preset styles, including Ouka, anime, and realistic CG, and also lets users train their own LoRA style or character models on personal datasets. The capability is built on the Hunyuan image generation base model and streamlines the LoRA training workflow: users upload a few dozen images and set trigger words, the system tags the images automatically, and training completes within a few hours. The entire process is visual and requires no coding skills or specialized tools. The capability is currently in closed beta, and users can apply for access.
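For context on why this kind of fine-tuning can work with only a few dozen images and a few hours of training: LoRA keeps the pretrained model frozen and learns only a small low-rank update to selected weight matrices. The minimal PyTorch sketch below illustrates that idea in general terms; it is not the Hunyuan platform's training code, and the layer sizes are illustrative.

```python
# Conceptual illustration of a LoRA adapter, not Hunyuan's implementation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a small trainable low-rank update:
    y = W x + (alpha / r) * B(A(x)), where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                 # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)    # down-projection A
        self.lora_b = nn.Linear(r, base.out_features, bias=False)   # up-projection B
        nn.init.zeros_(self.lora_b.weight)                           # start as a no-op: output == base model
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Only the adapter's parameters are trained, which is why a small dataset
# (e.g. a few dozen tagged images with a trigger word in their captions) can suffice.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 12,288 adapter weights vs 590,592 in the frozen base layer
```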

The one-click character refinement feature is used mainly to enrich the detail of game character concept art or convert its style, and offers a high-consistency mode and a high-creativity mode. High-consistency mode preserves the structure of the original image while refining clothing textures and lighting layers, making it suitable for polishing final character drafts; high-creativity mode converts character concept art into styles such as traditional Chinese, 3D, and anime while also refining the result.
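The article does not describe how the two modes are implemented, but the consistency-versus-creativity trade-off resembles the denoising-strength knob in open-source image-to-image pipelines: low strength keeps the original structure while refining detail, high strength allows heavier restyling. The sketch below shows that knob using diffusers' Stable Diffusion img2img pipeline purely as an analogy; the model ID, prompts, and file names are illustrative and unrelated to the Hunyuan feature.

```python
# Analogy only: the "strength" parameter in an open-source img2img pipeline,
# not the Hunyuan refinement feature itself. Names and paths are placeholders.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

concept = load_image("character_draft.png")

# Low strength ~ "high consistency": structure preserved, textures and lighting refined.
refined = pipe(prompt="detailed armor textures, cinematic lighting",
               image=concept, strength=0.3).images[0]

# High strength ~ "high creativity": the same draft restyled more freely.
restyled = pipe(prompt="the same character in anime style",
                image=concept, strength=0.75).images[0]

refined.save("refined.png")
restyled.save("restyled.png")
```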

Hunyuan Game 2.0 also upgrades the platform's underlying 2D image generation model, reaching SOTA-level text-to-image capability for the game industry. The upgrade markedly improves the model's aesthetics and composition, making it better suited to game art creation, and adds targeted optimizations for game-specific content, with dedicated generation support for skill effects, environmental effects, and interactive game interfaces, as well as tuned results for game scenes, items, and characters.