OpenAI has introduced Sora, a text-to-video model that creates realistic and imaginative scenes from written instructions. Users can generate photorealistic videos up to one minute long.
Sora is praised for its ability to create detailed scenes with multiple characters, specific motions, and realistic backgrounds. It can generate videos from still images, fill in missing frames, and extend existing videos. Although some videos show signs of AI with unrealistic movements, Sora’s overall results are impressive, with demos including scenes like the California gold rush and a perspective from a Tokyo train.
Recent advancements in video technology have been notable, with companies like Runway and Pika demonstrating text-to-video models. Google’s Lumiere is a prominent competitor, providing text-to-video features and the ability to generate videos from still images.
Access to Sora is currently restricted to "red teamers" assessing risks and a select group of visual artists, designers, and filmmakers providing feedback. OpenAI also acknowledges limitations: the model may struggle to accurately simulate complex physics or understand certain cause-and-effect situations.