The AI video generation field has seen a notable upgrade. Luma AI, the a16z-backed 3D and AI video company, recently launched a new model called Ray3Modify, which for the first time enables high-fidelity AI modifications to existing live-action video while preserving the essence of the original actor's performance. Whether changing the character's appearance, switching costumes, transforming scenes, or generating smooth transition shots, the actor's movement rhythm, eye lines, and emotional expression are fully retained.

This breakthrough directly addresses a core pain point for creative studios: although traditional AI video tools can generate impressive visuals, they often struggle with fine-grained control, causing real performances to be lost in post-production. By introducing a character reference image and start/end frames, Ray3Modify achieves precise editing in which the performance stays unchanged while its presentation can be freely altered.


Specifically, users only need to provide a live-action video and a reference image of a target character (such as an anime character, historical figure, or brand's virtual representative), and Ray3Modify can seamlessly transform the actor's appearance into the new character while faithfully preserving the original performance, including subtle facial expressions, body language, and emotional tension. In addition, by setting start and end frames, creators can guide the AI to generate controlled transition shots, achieving coherent actions such as walking, turning, and gradual expression changes, ensuring smooth storytelling between scenes.

"Generative video is highly expressive, but often difficult to control," said Amit Jain, co-founder and CEO of Luma AI. "Ray3Modify integrates the real world with AI creativity, giving creators complete control. Now, a team just needs to shoot a performance once with a regular camera and can use AI to place it in any imagined scene: changing clothes, changing locations, or even 're-shooting' without having to rebuild sets or re-cast actors."

The model is now integrated into Luma's Dream Machine platform and available to professional creators. As a strong competitor to companies like Runway and Kling, Luma further solidifies its technical advantage in controllable generative video with this update.

This release also benefits from the company's strong capital backing: in November 2024, Luma completed a $900 million funding round led by Humain, an AI company under Saudi Arabia's sovereign wealth fund, with existing shareholders a16z, Amplify Partners, and Matrix Partners participating. Luma also reportedly plans to collaborate with Humain to build a 2 GW-scale AI computing cluster in Saudi Arabia, providing infrastructure support for future high-load video generation workloads.