New York City-based Runway ML, commonly referred to as Runway, is a pioneer in high-quality generative AI video creation. Following the releases of its Gen-1 model in February 2023 and Gen-2 in June 2023, the company has faced increasing competition from other highly realistic AI video generators, including OpenAI’s upcoming Sora model and Luma AI’s recently launched Dream Machine.
In response, Runway is mounting a significant comeback in the generative AI video landscape with the announcement of Gen-3 Alpha. Described in a company blog post as "the first in a series trained on new infrastructure for large-scale multimodal training," the model is a step toward Runway's stated goal of building General World Models capable of simulating a wide range of real-world situations and interactions. Sample videos showcasing Gen-3 Alpha's capabilities appear throughout this article.
Gen-3 Alpha lets users generate high-quality, realistic video clips up to 10 seconds long, with precise emotional expressions and camera movements. According to a Runway spokesperson, the initial rollout will offer 5- and 10-second clip generations at significantly faster production times: a 5-second clip renders in 45 seconds and a 10-second clip in 90 seconds.
While no specific release date has been announced, demo videos are being showcased on Runway's website and its X account. It was initially unclear whether the model would be accessible through Runway's free tier or require a paid subscription, which starts at $15 per month or $144 annually.
In a recent interview, Anastasis Germanidis, Runway's co-founder and CTO, confirmed that Gen-3 Alpha would be available to paying subscribers in the coming days, with plans for a future rollout to free tier users. A Runway representative added that the model would be accessible to Enterprise users and those in the Creative Partners Program.
Germanidis stated on X that Gen-3 Alpha would enhance existing functionalities like text-to-video and image-to-video while also introducing new capabilities. Since releasing Gen-2, Runway has found that video diffusion models are far from saturating in performance as they scale, and that scaling them yields increasingly powerful representations of the visual world.
Diffusion is the training process by which an AI model learns to reconstruct images or video from pixelated "noise," guided by annotated image/video and text pairs. Runway says Gen-3 Alpha is "trained jointly on videos and images," with training guided by a team of research scientists, engineers, and artists. However, the company has not disclosed the specific datasets used, following a common trend among AI media generators.
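To make the concept concrete, here is a minimal sketch of one text-conditioned diffusion training step in PyTorch. Runway has not published Gen-3 Alpha's architecture or code, so everything below (the toy denoiser, the noise schedule, the tensor shapes) is an illustrative assumption, not Runway's method:

```python
# A minimal sketch of text-conditioned denoising-diffusion training.
# All names and shapes here are hypothetical, for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

T = 1000                                          # diffusion timesteps (assumed)
betas = torch.linspace(1e-4, 0.02, T)             # toy linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class ToyDenoiser(nn.Module):
    """Stand-in for the real denoising network (a large U-Net or transformer)."""
    def __init__(self, channels=3, text_dim=64):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, 3, padding=1)
        self.text_proj = nn.Linear(text_dim, channels)

    def forward(self, noisy, t, text_emb):
        # Condition on the caption embedding as a per-channel bias.
        # (A real network would also embed t; omitted here for brevity.)
        bias = self.text_proj(text_emb)[:, :, None, None]
        return self.net(noisy + bias)

model = ToyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(frames, text_emb):
    """frames: (B, C, H, W) clean frames; text_emb: (B, 64) caption embeddings."""
    b = frames.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(frames)
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    # Forward process: blend clean pixels with Gaussian noise at timestep t.
    noisy = a.sqrt() * frames + (1 - a).sqrt() * noise
    # The model learns to predict the injected noise, i.e. to denoise.
    pred = model(noisy, t, text_emb)
    loss = F.mse_loss(pred, noise)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example: one step on random stand-in data.
loss = training_step(torch.randn(2, 3, 32, 32), torch.randn(2, 64))
```

A production video model typically replaces the toy network with a large spatiotemporal architecture and operates on full video tensors, but the core noising-and-denoising loop is the same.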
Critics have called for AI model creators to compensate the original authors of their training data through licensing deals, and some have pursued copyright infringement lawsuits. AI companies, for their part, contend they can legally train on any publicly available data.
When asked about Gen-3 Alpha's training data, Runway's spokesperson said the company relies on curated internal datasets managed by its in-house research team.
Notably, Runway is collaborating with leading media and entertainment organizations to develop customized versions of Gen-3 that offer greater stylistic control and consistency and target specific artistic and narrative requirements. While details on these collaborations remain undisclosed, filmmakers behind award-winning projects like Everything Everywhere All at Once and The People's Joker have previously used Runway's technology.
Runway has also provided a form for organizations interested in custom versions of Gen-3, although pricing details for custom model training have not been released. It’s evident that Runway is fiercely committed to maintaining its position as a leader in the rapidly evolving generative AI video creation sector.