OpenAI has recently unveiled its highly anticipated text-to-video model, Sora, which generates realistic video content from user prompts. However, concerns have emerged about rendering speed, with rumors suggesting that a one-minute video could take over an hour to render.
In discussions on Reddit, users have shared their experiences and opinions about Sora. Many noted that OpenAI has so far showcased only a curated selection of examples and has yet to let the public submit their own prompts. The longest demo video released is only 17 seconds, which has fueled skepticism about Sora's practical capabilities.
On the rendering-time question, some users pointed to comments from OpenAI CEO Sam Altman about the company's funding needs, speculating that limited compute resources could be behind the long render times. Others countered that Sora's rendering times are modest compared to traditional production. As one user put it, "Producing a 90-minute film typically requires over 90 hours of shooting. Seen against the broader timescale of animation production, Sora's rendering times are quite reasonable."
While rendering times still have room to improve, Sora's ability to generate video from text opens new avenues in video production. As the technology matures, Sora could play a significant role in shaping how video is created.