Amid the rapid development of AI technology, Lightricks recently released its latest AI video generation model, LTX-2. The model can generate a full narrative video up to 20 seconds long at 4K resolution in a single pass, complete with sound and lip synchronization, marking a significant advance in video creation.
The core innovation of LTX-2 is synchronized audio and video generation. Traditional AI video tools produced silent clips, requiring voiceovers and sound effects to be added manually afterward. LTX-2 instead generates visuals and audio within the same diffusion process, so a character's mouth movements match the speech, an explosion's sound lands with its flash, and footsteps follow the rhythm of walking. This makes the generated videos markedly more realistic and coherent.
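To make the idea of a shared diffusion process concrete, here is a toy sketch. It is not LTX-2's actual architecture (which is not detailed in this article); the `toy_denoiser` function, dimensions, and schedule are all illustrative assumptions. The point it demonstrates is structural: video and audio latents are concatenated and updated by the same model at the same timesteps, so neither modality is generated after the fact.

```python
import numpy as np

# Toy sketch of joint audio-video denoising. NOT LTX-2's real model:
# it only illustrates the principle the article describes. The video
# and audio latents live in one joint tensor and receive the SAME
# denoising update at the SAME timestep, which is what keeps the two
# modalities temporally aligned by construction.

rng = np.random.default_rng(42)
video_dim, audio_dim, steps = 16, 4, 10

def toy_denoiser(z, t):
    # Stand-in for a learned network: nudges the joint latent toward
    # the data manifold (here, trivially, toward zero). Crucially, it
    # sees video and audio together at every step.
    return z * 0.1

# Joint latent laid out as [video | audio], initialized to pure noise.
z0 = rng.standard_normal(video_dim + audio_dim)
z = z0.copy()

for t in np.linspace(1.0, 0.0, steps):
    z = z - toy_denoiser(z, t)  # one shared update for both modalities

# Split the denoised joint latent back into its two modalities.
video_latent, audio_latent = z[:video_dim], z[video_dim:]
```

A pipeline that generated video first and audio second would have no such shared trajectory, which is why post-hoc dubbing tends to drift out of sync.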
In addition, LTX-2 supports output at up to 4K resolution and 50 frames per second, with quality approaching that of film. Its coherence and stability set a new bar for the field, making it the first open-source AI model capable of reliably generating native 4K video. Creators can therefore use the output directly in films, advertisements, or promotional videos, rather than treating it as a rough AI animation sketch.
The model also accepts multiple input types, including text, images, and sketches, letting creators finely control camera angles, object motion, and timing. This freedom helps content creators express their ideas more precisely. A built-in LoRA (Low-Rank Adaptation) fine-tuning mechanism additionally lets users train a personalized style model from a small amount of material, keeping the video's style consistent across scenes.
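The LoRA mechanism mentioned above is what makes personalization cheap: rather than updating a large weight matrix, it trains two small low-rank factors on top of it. The sketch below shows the general LoRA technique in plain NumPy; the dimensions and rank are illustrative and have nothing to do with LTX-2's actual layers.

```python
import numpy as np

# Minimal sketch of Low-Rank Adaptation (LoRA). Instead of fine-tuning a
# large frozen weight W (d_out x d_in), we train two small matrices
# A (r x d_in) and B (d_out x r) with rank r << min(d_out, d_in).
# The adapted layer computes W @ x + B @ (A @ x), so only
# r * (d_in + d_out) parameters are trained. Shapes are illustrative.

rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero init: adapter starts inert

def adapted_forward(x):
    # Base path plus low-rank correction; B @ A is never materialized.
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)

# With B = 0 the adapter contributes nothing, so the base model's
# behavior is preserved exactly at the start of fine-tuning.
assert np.allclose(adapted_forward(x), W @ x)

full_params = d_out * d_in        # parameters a full fine-tune would touch
lora_params = r * (d_in + d_out)  # parameters LoRA actually trains
print(f"LoRA trains {lora_params / full_params:.1%} of the parameters")
```

The small trainable footprint is also why a user can keep several style adapters on disk and swap them per project without duplicating the base model.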
Another significant advantage of LTX-2 is that it runs locally: users don't need a cloud connection and aren't locked into paid platforms, and the model runs on consumer-grade GPUs. Lightricks is expected to open-source the code, model weights, and training process in the fall of 2025, giving creators, developers, and researchers more control and stronger privacy protection.
Lightricks also plans to publish LTX-2's code and performance benchmarks later this year, further advancing AI video generation technology. In the meantime, users can try the model through the official platform, with many anticipating that LTX-2 will become a "game-changer" for AI creation.
With the release of LTX-2, the barrier to video creation will be further lowered, giving more creators the opportunity to realize their creativity and dreams with this advanced technology.