Tongyi Lab has launched Wan2.7-Video, a new video creation tool aimed at giving creators greater freedom and flexibility. It targets two common pain points in today's AI video field: generated content that falls short of professional quality, and the difficulty of editing a video once it has been generated. To address both, Wan2.7-Video offers a set of powerful features that let users create and edit with ease.

For content generation, Wan2.7-Video uses a more advanced model that supports full-modal input: text, images, video, and audio. Users can precisely control scene structure, storyline, and local details, which means they can edit video content much as they would edit a document, meeting more demanding creative requirements.
The tool's powerful editing capabilities make video modification far simpler. With simple instructions, users can precisely adjust elements in a video, such as removing an unwanted passerby or replacing an object. They can also change the background environment, effortlessly turning a summer scene into autumn or winter, or switch styles with one click to add rich visual effects.
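To make the instruction-driven workflow concrete, here is a minimal sketch of how such an edit request could be assembled as a JSON payload. This is purely illustrative: Wan2.7-Video's actual interface is not documented here, and every field name below is an assumption.

```python
import json

# Hypothetical sketch only: the field names ("source_video", "instruction",
# "region") are illustrative assumptions, not a documented Wan2.7-Video API.
def build_edit_request(video_path, instruction, region=None):
    """Assemble a single natural-language edit as a JSON payload."""
    request = {
        "source_video": video_path,
        "instruction": instruction,  # e.g. "remove the passerby on the left"
    }
    if region is not None:
        request["region"] = region   # optional bounding box for a local edit
    return json.dumps(request)

payload = build_edit_request(
    "clip.mp4",
    "replace the summer foliage with autumn colors",
)
```

The point of the sketch is the shape of the interaction, not the names: one source clip, one plain-language instruction, and an optional spatial constraint for local edits.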
When it comes to modifying a video's plot, Wan2.7-Video lets users change a character's dialogue, actions, and camera angles without reshooting. Users can flexibly adjust the characters' performances and the scene settings as needed, greatly improving the convenience of creation.
Wan2.7-Video can also quickly replicate creative ideas and continue a story. Users can reuse actions, shots, and special effects from existing videos to realize new creative expressions, and with precise time control they can achieve seamless transitions that improve the flow of the story.
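The idea of reusing shots under precise time control can be sketched as a simple timeline builder. Everything here is a hypothetical illustration: the segment format, the "crossfade" transition name, and the overlap parameter are assumptions, not part of any documented Wan2.7-Video interface.

```python
# Hypothetical sketch: chaining reused shots with precise time control.
# All names and defaults below are illustrative assumptions.
def make_timeline(segments, transition="crossfade", overlap=0.5):
    """Place (source, in, out) segments so each pair overlaps by `overlap` seconds."""
    timeline = []
    cursor = 0.0
    for src, start, end in segments:
        timeline.append({
            "source": src,
            "in": start,            # trim point inside the source clip
            "out": end,
            "at": cursor,           # position on the output timeline
            "transition": transition,
        })
        # Next clip starts before this one ends, creating the transition window.
        cursor += (end - start) - overlap
    return timeline

tl = make_timeline([("a.mp4", 0.0, 3.0), ("b.mp4", 1.0, 4.0)])
```

The sketch shows why exact timecodes matter for seamless transitions: each clip's start position is computed from the previous clip's duration minus the overlap, so the crossfade window lands precisely where the two shots meet.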
