OpenAI has recently pushed an important update to the Sora video generation API, introducing five core capability upgrades based on the Sora 2 model. These improvements target long-standing pain points in batch video production (consistency, duration, and multi-format adaptation), significantly improving scalability for developers and content creators.
The most critical improvement is support for character consistency. Previously, when generating videos in bulk via the API, the same main character often exhibited visual drift in facial features, clothing, and props across scenes. Now developers can pre-upload or define a "character profile" (appearance, clothing, accessories, and so on), and the model automatically reuses this reference across subsequent segments, ensuring visual continuity between shots and scenes. This markedly reduces post-production editing costs, especially for advertisements, short dramas, and serialized content that requires a consistent lead character.
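The workflow described above can be sketched as plain request payloads. Note that the `character_profile` field name and its schema below are illustrative assumptions, not the documented API shape; consult the official video-generation guide for the real parameter names.

```python
# Sketch: reusing one character profile across a batch of requests.
# The "character_profile" field and its schema are illustrative
# assumptions, not documented parameter names.

CHARACTER_PROFILE = {
    "name": "lead_actor",  # hypothetical identifier
    "appearance": "mid-30s woman, short black hair",
    "clothing": "red trench coat, white sneakers",
    "props": ["vintage leather satchel"],
}

def build_video_request(prompt: str, profile: dict) -> dict:
    """Attach the same profile to every request in a batch so the
    model can reuse the reference across scenes (hypothetical schema)."""
    return {
        "model": "sora-2",
        "prompt": prompt,
        "character_profile": profile,
    }

scenes = [
    "She steps off the train at dawn, scanning the platform.",
    "She orders coffee at a corner cafe and checks her watch.",
]
requests = [build_video_request(s, CHARACTER_PROFILE) for s in scenes]
```

The key design point is that the profile is defined once and referenced by every scene request, rather than re-described in each prompt, which is what keeps the character visually stable across segments.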

The maximum video duration has been increased from the previous 12 or 16 seconds to 20 seconds, letting creators generate a more complete narrative beat or dynamic shot in one pass and avoiding the quality loss and stylistic discontinuities caused by frequent stitching. The API also adds a video extension feature that continues generation naturally from an existing clip, supporting longer narrative constructions.
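A minimal sketch of how the two features might combine: a 20-second base clip followed by a chained extension request. The parameter names (`seconds`, `source_video_id`) are assumptions for illustration; only the 20-second ceiling and the existence of an extension feature come from the announcement.

```python
# Sketch: a 20-second base clip plus a chained extension request.
# Field names ("seconds", "source_video_id") are assumed, not documented.

def build_clip_request(prompt: str, seconds: int = 20) -> dict:
    """Request a single clip at the new 20-second maximum."""
    assert seconds <= 20, "20 s is the new per-clip ceiling"
    return {"model": "sora-2", "prompt": prompt, "seconds": seconds}

def build_extension_request(source_video_id: str, prompt: str) -> dict:
    """Continue generation from an existing clip (hypothetical shape)."""
    return {
        "model": "sora-2",
        "source_video_id": source_video_id,  # assumed field name
        "prompt": prompt,
    }

base = build_clip_request("Aerial shot pushing in over a foggy harbor.")
ext = build_extension_request(
    "vid_123",  # id returned by the base generation, hypothetical format
    "The camera descends toward a lone fishing boat.",
)
```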
For output formats, a single task can now produce two 1080p renditions at once: 16:9 landscape (suited to YouTube and desktop) and 9:16 portrait (suited to TikTok and other short-video platforms), with no secondary cropping or re-rendering, which greatly simplifies multi-platform distribution.
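The dual-format request could look like the following sketch. Whether the API accepts a list of sizes in one task or a dedicated flag is an assumption here; only the two 1080p resolutions follow from the announcement.

```python
# Sketch: requesting landscape and portrait 1080p renditions in one task.
# The "sizes" list parameter is an assumption for illustration.

SIZES = {
    "landscape": "1920x1080",  # 16:9, for YouTube / desktop
    "portrait": "1080x1920",   # 9:16, for TikTok / short-video feeds
}

def build_dual_format_request(prompt: str) -> dict:
    """One task, two renditions (hypothetical 'sizes' parameter)."""
    return {
        "model": "sora-2",
        "prompt": prompt,
        "sizes": list(SIZES.values()),
    }

req = build_dual_format_request("A product spin on a marble countertop.")
```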
Additionally, the update strengthens asynchronous batch processing via the Batch API, making it a better fit for large-scale offline rendering queues, studio workflows, and automated production pipelines.
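A batch submission is typically prepared as a JSONL file, one record per job. The envelope below (`custom_id` / `method` / `url` / `body`) follows the general Batch API convention; the `/v1/videos` endpoint path and the body fields are assumptions for this sketch.

```python
import json

# Sketch: packaging video jobs as JSONL lines for the Batch API.
# The "/v1/videos" path and body fields are assumed, not documented here.

def build_batch_line(custom_id: str, prompt: str) -> str:
    """One JSONL record per video job in the offline queue."""
    return json.dumps({
        "custom_id": custom_id,  # your own correlation id
        "method": "POST",
        "url": "/v1/videos",     # assumed endpoint path
        "body": {"model": "sora-2", "prompt": prompt, "seconds": 20},
    })

prompts = [
    "Scene 1: sunrise over the city skyline.",
    "Scene 2: rush-hour streets in time-lapse.",
]
jsonl = "\n".join(
    build_batch_line(f"clip-{i}", p) for i, p in enumerate(prompts)
)
```

The resulting file would then be uploaded and referenced when creating the batch; results come back asynchronously keyed by each `custom_id`, which is what makes this shape suit automated pipelines.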
Link: https://developers.openai.com/api/docs/guides/video-generation


