After teasing the feature back in February, OpenAI has finally launched its text-to-video model, Sora, to users in the US (and "most other countries") as part of its '12 Days of OpenAI' campaign, which will see the start-up release or demo a new product or feature every day for the next 12 days.
Early reviewers have praised Sora for its "storyboard" feature, which lets users create a sequence of scenes from a string of prompts, helping maintain consistency throughout the AI-generated film (something many AI video tools struggle with). They also liked its 'remix' feature, which blends scenes together seamlessly, its ability to create realistic landscape shots, and its accurate rendering of text within films (something else most image generators find challenging). Users must be incredibly specific with their prompts, though, as background text, like on signs, for example, still comes out garbled.
Despite the praise, Sora still struggles with object permanence: users have found that objects can suddenly disappear or reappear for no apparent reason. Photorealism and movement also remain challenging for Sora (as they are for most AI video tools), with body parts or objects often warping unexpectedly.
Sora does have plenty of safeguards in place, including visible watermarks and C2PA metadata to indicate videos were made with AI. It also blocks users from creating videos using copyrighted material and prohibits depictions of public figures or people under 18. OpenAI has openly admitted that it's trying to strike the difficult balance between encouraging creative expression and preventing illegal activity, and has asked users for feedback on its moderation protocols.