Adobe Announces Video Generation Feature Coming to Firefly This Year

Adobe is set to unveil its groundbreaking AI model for video generation in just a few months. The company has announced that features powered by the Adobe Firefly Video model will be accessible before the end of 2024 on the Premiere Pro beta app and through a dedicated website.

Three key features—Generative Extend, Text to Video, and Image to Video—are currently in a private beta phase but will soon be available to the public.

Generative Extend allows users to seamlessly extend any video by two seconds. This feature will be integrated into the Premiere Pro beta app later this year. Meanwhile, Firefly’s Text to Video and Image to Video functionalities will generate five-second clips from text prompts or image inputs, debuting on Firefly’s official website soon. (Adobe notes that the maximum duration may increase.)

For example, users can input a prompt such as: “Cinematic closeup and detailed portrait of a reindeer in a snowy forest at sunset, featuring soft, sun-kissed lighting and dreamy bokeh.”

Adobe’s software has been a staple for creatives for decades, but the introduction of these generative AI tools could dramatically transform the industry, for better or worse. Firefly represents Adobe's response to the surge of generative AI models, including OpenAI’s Sora and Runway’s Gen-3 Alpha. These tools have captivated users by generating video clips in mere minutes, a task that would traditionally require hours of manual effort. However, many early iterations of AI tools are deemed too unpredictable for professional use.

Adobe believes that control is where it can distinguish itself. According to Ely Greenfield, Adobe’s CTO of digital media, there is a “huge appetite” for AI tools like Firefly that can complement and enhance existing workflows. For instance, the generative fill feature added to Adobe Photoshop last year has become “one of the most frequently used features we’ve introduced in the past decade.”

While Adobe has not revealed the pricing structure for these AI video capabilities, the company provides Creative Cloud customers with a set number of “generative credits,” where one credit typically yields one video result. Higher-tier plans, of course, offer more credits.

Greenfield demonstrated the new features coming this year. Generative Extend picks up where the original video ends, seamlessly adding two seconds of footage. The feature analyzes the final frames of the original scene and uses Firefly’s Video model to predict the continuation. For audio, Generative Extend recreates background sounds such as traffic or nature, but excludes human voices and music to comply with licensing rules.

In one demonstration, Greenfield highlighted a video clip of an astronaut gazing into space, showcasing the feature's capabilities. While I could identify the moment of extension—just after an unusual lens flare—the continuity of the camera pan and scene elements remained intact. This functionality could be particularly beneficial when a scene concludes prematurely, providing the extra time needed for smooth transitions or fade-outs.

Firefly’s Text to Video and Image to Video features are also user-friendly. These tools let users convert text or image prompts into up to five seconds of generated video. Access to these AI video generators will be available at firefly.adobe.com, likely with usage limits (although Adobe has yet to specify them).

Furthermore, Adobe claims that Firefly’s Text to Video feature excels at rendering on-screen text with accurate spelling, addressing a common shortcoming of AI video models.

In terms of safety measures, Adobe is taking a precautionary approach. Greenfield noted that Firefly’s video models are restricted from generating content that includes nudity, drugs, or alcohol. Additionally, Adobe’s video generation tools will not be trained on public figures, such as politicians or celebrities, an aspect not universally applied by competing models.
