Adobe has announced the development of an AI model designed to generate videos, though specific details regarding its launch timeline and functionalities remain scarce. Positioned as a competitor to OpenAI’s Sora, Google’s Imagen 2, and various emerging startups in the generative AI video landscape, this new model is part of Adobe's expanding Firefly family of generative AI products. Adobe plans to integrate the model into Premiere Pro, its flagship video editing software, later this year.
Like many existing generative AI video tools, Adobe's model can produce footage from a text prompt or reference images. It will power three new features in Premiere Pro, each fairly self-explanatory: object addition, object removal, and generative extend.
- Object Addition: Users can highlight a segment of a video—such as the upper third or lower-left corner—and input a prompt to seamlessly insert objects into that space. In a demonstration, Adobe showcased a still featuring a briefcase filled with diamonds, generated by its model.
- Object Removal: This feature allows for the elimination of unwanted elements from a clip, such as boom mics or distracting coffee cups in the background.
- Generative Extend: This feature adds frames to the start or end of a clip. Adobe didn't disclose how many frames can be added; the feature is designed not to create new scenes but to supply buffer frames for syncing with a soundtrack or holding on a moment for emotional effect (a conceptual sketch follows this list).
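Adobe hasn't explained how generative extend works under the hood, but conceptually it amounts to padding a clip with synthesized frames until it matches a target duration, such as the length of an audio cue. The sketch below is purely illustrative under that assumption; `Clip`, `generate_frame`, and `generative_extend` are made-up names, not any real Adobe or Premiere Pro API.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Clip:
    """Minimal stand-in for a video clip: a list of frames at a fixed frame rate."""
    frames: List[bytes] = field(default_factory=list)
    fps: float = 24.0

    @property
    def duration(self) -> float:
        return len(self.frames) / self.fps


def generate_frame(context: List[bytes]) -> bytes:
    """Placeholder for a generative model that synthesizes one new frame
    conditioned on neighboring frames. Purely hypothetical."""
    return context[-1] if context else b""


def generative_extend(clip: Clip, target_duration: float, at_end: bool = True) -> Clip:
    """Pad a clip with generated buffer frames until it reaches target_duration,
    e.g. to keep the picture in sync with a slightly longer soundtrack."""
    needed = max(0, int(round(target_duration * clip.fps)) - len(clip.frames))
    new_frames = list(clip.frames)
    for _ in range(needed):
        if at_end:
            new_frames.append(generate_frame(new_frames))
        else:
            new_frames.insert(0, generate_frame(new_frames[:1]))
    return Clip(frames=new_frames, fps=clip.fps)


# Example: a 2.0 s clip extended to match a 2.5 s audio cue (12 extra frames at 24 fps).
clip = Clip(frames=[b"frame"] * 48, fps=24.0)
extended = generative_extend(clip, target_duration=2.5)
print(len(extended.frames))  # 60
```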
To assuage concerns around the potential misuse of generative AI, such as deepfakes, Adobe is implementing Content Credentials, a form of metadata that identifies AI-generated content. This feature will indicate the specific AI model used to generate each video, aligning with Adobe's commitment to media authenticity through its Content Authenticity Initiative.
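Adobe hasn't published what the credential attached to a generated clip will actually contain, but conceptually it is provenance metadata bound to the file. The snippet below is a loose, hypothetical sketch of the kind of information such a record could carry; the field names are illustrative only and do not reflect the real Content Credentials (C2PA) schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance metadata for an AI-generated clip.
# Field names are illustrative only, not the actual Content Credentials / C2PA schema.
content_credential = {
    "asset": "scene_042_extended.mp4",            # hypothetical file name
    "generated_by_ai": True,
    "model": "Adobe Firefly Video Model",         # the credential is said to name the model used
    "action": "generative_extend",                # assumption: which editing feature produced the frames
    "issued_at": datetime.now(timezone.utc).isoformat(),
    "issuer": "Content Authenticity Initiative",  # assumption, shown for illustration
}

print(json.dumps(content_credential, indent=2))
```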
When questioned about the training data utilized for this model, Adobe declined to provide specifics. However, a recent report from Bloomberg indicated that the company may be compensating contributors on its stock media platform, Adobe Stock, up to $120 for short video clips submitted to train the model. Payment reportedly ranges from $2.62 to $7.25 per minute of video depending on quality, diverging from Adobe's previous model of offering annual bonuses based on usage volume.
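To put the reported figures in perspective, here is a back-of-the-envelope calculation using the per-minute rates cited above; the clip lengths are made up for illustration.

```python
# Per-minute rates reported by Bloomberg, per the article above.
RATE_LOW = 2.62    # USD per minute of submitted video
RATE_HIGH = 7.25   # USD per minute of submitted video


def payout_range(minutes: float) -> tuple[float, float]:
    """Rough payout band for a submission of the given length, in USD."""
    return round(minutes * RATE_LOW, 2), round(minutes * RATE_HIGH, 2)


# Illustrative clip lengths (not from the report):
for minutes in (0.5, 2, 10):
    low, high = payout_range(minutes)
    print(f"{minutes:>4} min clip: ${low} - ${high}")

# A roughly 16-17 minute submission at the top rate approaches the reported ~$120 figure.
```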
This approach stands in clear contrast to competitors like OpenAI, which have been accused of training on publicly available data, such as YouTube videos, without compensating the content owners. YouTube CEO Neal Mohan has said that using the platform's videos to train AI models would violate its terms of service, underscoring the intellectual property questions hanging over these initiatives.
While the cost for customers to access the new video generation features remains unclear, Adobe plans to use the generative credits system established with its earlier Firefly models. Creative Cloud subscribers will receive a monthly allotment of generative credits, ranging from 25 to 1,000 depending on their plan, with more computationally demanding tasks consuming more credits.
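Adobe hasn't said how many credits a video generation will cost. The sketch below only illustrates how a monthly allotment of the kind described above gets drawn down; the plan names and per-task credit costs are assumptions for illustration, not Adobe's actual pricing.

```python
# Monthly allotments in the 25-1,000 range mentioned above; plan names and
# per-task credit costs are assumptions for illustration only.
MONTHLY_CREDITS = {"free": 25, "single_app": 500, "all_apps": 1000}
ASSUMED_COST = {"image_generation": 1, "object_removal": 5, "video_generation": 20}


def remaining_credits(plan: str, tasks: list[str]) -> int:
    """Subtract assumed per-task costs from a plan's monthly allotment."""
    balance = MONTHLY_CREDITS[plan]
    for task in tasks:
        balance -= ASSUMED_COST[task]
    return balance


# Example: a subscriber running a handful of generations in one month.
print(remaining_credits("single_app", ["video_generation"] * 10 + ["object_removal"] * 4))  # 280
```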
The pressing question remains: will Adobe's AI-driven video capabilities justify their cost? Adobe's existing Firefly image generation models have been criticized as less impressive than rivals such as Midjourney and OpenAI's DALL-E 3. The absence of a release timeline for the video model casts further doubt on whether it will meet expectations, and, notably, Adobe chose not to demonstrate object addition or removal live during its presentation, showing a pre-recorded reel instead.
To mitigate risk, Adobe is also working with third-party vendors to explore integrating additional video generation models into Premiere Pro. That includes a collaboration with OpenAI to bring Sora into Adobe's editing workflow, along with early partnerships with startups such as Pika and Runway, which build AI-driven video creation and editing tools.
It's worth emphasizing that these third-party integrations are still in the early research phase; nothing is publicly available yet. Adobe's announcements signal a cautious but strategic interest in generative video: the company clearly recognizes the potential of this emerging market and knows that sitting it out could cost it revenue in the long run. Without compelling demonstrations, however, Adobe still has significant ground to cover to prove it can compete with more established players.