As OpenAI builds anticipation for its upcoming AI video model, Sora, competitors are racing to elevate their offerings. Recently, Pika Labs launched a lip-sync feature, and now a new AI video startup, Haiper, has emerged from stealth mode with $13.8 million in seed funding from Octopus Ventures.
Founded by former Google DeepMind researchers Yishu Miao (CEO) and Ziyu Wang, London-based Haiper enables users to create high-quality videos from text prompts or animate existing images. The platform incorporates its own visual foundation model and competes with established tools like Runway and Pika Labs, though initial tests suggest it still trails behind OpenAI's Sora.
Haiper plans to use its funding to strengthen its infrastructure and product capabilities, setting the stage for its longer-term goal: artificial general intelligence (AGI) with human-like understanding of the world.
What Does Haiper Offer?
Similar to Runway and Pika Labs, Haiper provides a user-friendly web platform where users can input text prompts to create AI videos. The platform currently generates videos in both SD and HD quality; HD clips are limited to two seconds, while SD videos can run up to four seconds. Only the lower-quality option supports motion control.
In our tests, HD outputs were more consistent, likely due to their shorter duration. Lower-quality videos, in contrast, often appeared blurred, with significant distortion of shape, size, and color at high motion levels. Unlike Runway, Haiper currently offers no option to extend video length, although the company plans to introduce this feature in the near future.
In addition, Haiper allows users to animate existing images and adjust video styles, backgrounds, and elements using text prompts.
Haiper claims its platform and proprietary visual foundation model can cater to various applications, from social media content creation to business uses like studio content generation. However, the company has not disclosed any commercialization plans and continues to provide its technology for free.
Vision for AGI
With the recent funding, Haiper intends to expand its infrastructure and product offerings, ultimately working towards AGI that possesses comprehensive perceptual abilities. This latest investment raises the company's total capital to $19.2 million.
In the coming months, Haiper plans to refine its offerings based on user feedback, releasing a series of extensively trained models to improve video quality and potentially narrow the gap with competitors.
As Haiper develops its models, it aims to deepen their understanding of the physical world, encompassing light, motion, texture, and object interactions, which it says would enable the creation of hyper-realistic content.
“Our end goal is to build an AGI with full perceptual abilities, unlocking vast potential for creativity. Our visual foundation model represents a significant advancement in AI's capacity to understand the dynamics of reality, which can enhance human storytelling,” Miao stated.
With next-gen perceptual capabilities, Haiper anticipates its technology will influence not only content creation but also fields like robotics and transportation. This innovative approach to AI video positions Haiper as a compelling company to watch in the bustling AI landscape.