RunwayML, a pioneer in AI-driven video generation, has officially launched its latest model, Gen-3 Alpha. This model promises to enhance the creative process by enabling users to create hyper-realistic videos from text, images, or video prompts.
First revealed a few weeks ago, Gen-3 Alpha is now available to all registered users on the RunwayML platform. Its advanced capabilities allow for high-fidelity, controllable video outputs suitable for various applications, including advertising—a space where OpenAI's upcoming Sora is also generating interest.
Unlike its predecessors, Gen-1 and Gen-2, however, Gen-3 Alpha is not free. Users will need to subscribe to one of RunwayML's paid plans, which start at $12 per editor per month when billed annually.
What to Expect from Gen-3 Alpha
After rapidly rolling out Gen-1 and Gen-2 last year, RunwayML stepped back to focus on platform improvements while competitors such as Stability AI and OpenAI ramped up their offerings. Last month, the company returned to the spotlight with Gen-3 Alpha, a model trained on videos and images paired with detailed captions. The model can produce video clips with imaginative transitions, precise key-framing of individual elements, and expressive characters conveying a range of actions and emotions.
Initial samples show significant gains in speed, fidelity, consistency, and motion over earlier models. Gen-3 Alpha was developed in collaboration with a cross-disciplinary team of research scientists, engineers, and artists, though RunwayML has not disclosed the specific sources of its training data.
With Gen-3 Alpha now widely accessible, users who upgrade to a paid plan can apply it to a range of creative projects. At launch, RunwayML offers only a text-to-video mode, which turns natural-language prompts into video clips. Future updates are expected to add image-to-video and video-to-video modes, along with advanced tools such as Motion Brush, Advanced Camera Controls, and Director Mode.
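For readers curious how a prompt-driven, submit-and-wait workflow like this typically looks in code, here is a minimal sketch of a text-to-video request against a hypothetical REST endpoint. The base URL, route names, parameters, and response fields are illustrative assumptions for the sake of the example; the article describes only RunwayML's web-based workflow, not a public API.

```python
# Hypothetical sketch of a prompt-driven text-to-video workflow.
# The endpoint, parameter names, and response shape are assumptions,
# not RunwayML's documented interface.
import time
import requests

API_BASE = "https://api.example.com/v1"  # placeholder base URL (assumption)
API_KEY = "YOUR_API_KEY"

def generate_clip(prompt: str, duration_seconds: int = 10) -> str:
    """Submit a natural-language prompt and poll until the video is ready."""
    headers = {"Authorization": f"Bearer {API_KEY}"}

    # Submit the generation job; Gen-3 Alpha clips are capped at 10 seconds.
    job = requests.post(
        f"{API_BASE}/text_to_video",
        headers=headers,
        json={"prompt": prompt, "duration": duration_seconds},
        timeout=30,
    ).json()

    # Poll for completion; generation time grows with the requested duration.
    while True:
        status = requests.get(
            f"{API_BASE}/jobs/{job['id']}", headers=headers, timeout=30
        ).json()
        if status["state"] == "succeeded":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)

url = generate_clip("A slow tracking shot through a neon-lit alley in the rain")
print(url)
```

The asynchronous submit-then-poll pattern shown here is common for generative video services, since clips can take minutes to render; a production client would add retries and backoff.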
Videos generated with Gen-3 Alpha are capped at 10 seconds, and generation time scales with clip duration. While this is an improvement over many AI video models, it falls short of the one-minute clips promised by OpenAI's Sora, which has yet to launch.
As the creative community begins to explore Gen-3 Alpha's capabilities, Emad Mostaque, former CEO of Stability AI, has already tested it against Sora's output.
This launch is just the beginning. RunwayML plans continued development of Gen-3 Alpha, including an eventual free version. The company envisions the model as the first in a series built on a new infrastructure designed for large-scale multimodal training, a step toward general world models capable of simulating a broad range of real-world scenarios and interactions.