Viggle Creates Interactive AI Characters for Memes and Idea Visualization

You may not be familiar with Viggle AI, but chances are you've encountered the viral memes it has spawned. This Canadian AI startup has gained attention for its innovative videos that remix rapper Lil Yachty's energetic performances at summer music festivals. For instance, one popular clip replaces Lil Yachty with Joaquin Phoenix’s Joker, while another features Jesus cheering up the crowd. While countless users have created their own versions, it was Viggle that powered this meme phenomenon. According to Viggle's CEO, YouTube videos play a crucial role in fueling their AI models.

Viggle has developed a 3D video foundation model called JST-1, which the company claims has a "genuine understanding of physics." CEO Hang Chu says the primary distinction between Viggle and other AI video models lies in its ability to let users control character motion. Many other AI models produce unrealistic animations that defy physics; Chu asserts that Viggle's technology is different.

“We are essentially creating a new type of graphics engine driven entirely by neural networks,” Chu explained in an interview. “This model diverges significantly from existing video generators focused on pixel data that lack an understanding of structure and physics. Our model is designed to comprehend these elements, resulting in enhanced controllability and generation efficiency.”

To illustrate, making a video of the Joker performing as Lil Yachty requires only uploading the original clip and an image of the Joker. Alternatively, users can provide character images along with text prompts describing how they should be animated, or generate animated characters from scratch using text prompts alone.

However, meme creators represent only a small segment of Viggle's user base; Chu notes that the model has seen widespread adoption as a visualization tool among creatives. While Viggle's videos can be shaky and the characters often appear expressionless, they remain effective for filmmakers, animators, and video game designers looking to visualize their concepts. Currently, Viggle focuses on character creation, but Chu aims to expand capabilities to develop more complex videos in the future.

Viggle offers a free, limited version of its AI model on Discord and its web application. Additionally, they provide a subscription for $9.99, granting users enhanced capabilities, along with a creator program for select users. The CEO reported that Viggle is in discussions with film and video game studios to license their technology, and they've also garnered interest from independent animators and content creators.

On Monday, Viggle announced a $19 million Series A funding round led by Andreessen Horowitz, with participation from Two Small Fish. The funding will enable Viggle to scale operations, accelerate product development, and expand its team. The startup has partnered with Google Cloud and other cloud providers to train and run its AI models, gaining access to GPU and TPU clusters, though such cloud partnerships do not typically come with permission to use YouTube videos as training data.

Training Data

“So far, we’ve been relying on publicly available data,” Chu noted, echoing comments made by OpenAI’s CTO regarding Sora’s training data.

When asked whether their training data encompassed YouTube videos, Chu responded candidly: “Yeah.”

This response could pose complications. In April, YouTube CEO Neal Mohan told Bloomberg that utilizing YouTube videos to train an AI text-to-video generator would be a “clear violation” of the platform's terms of service. His remarks were in reference to concerns that OpenAI may have leveraged YouTube videos to train Sora.

Mohan clarified that while Google holds contracts with specific creators permitting their videos to be used in training datasets for Google DeepMind’s Gemini, indiscriminately harvesting videos from the platform is prohibited without prior permission.

Following our interview with Viggle’s CEO, a spokesperson reached out to clarify that Chu “spoke too soon” regarding whether Viggle uses YouTube data for training. The spokesperson stated, “In truth, Hang/Viggle is unable to share details of their training data.”

After we pointed out that Chu's comments had been made on the record, the spokesperson confirmed that Viggle does indeed train on data from YouTube:

“Viggle leverages a variety of public sources, including YouTube, to generate AI content. Our training data is meticulously curated and refined to ensure compliance with all relevant terms of service. We prioritize maintaining strong relationships with platforms like YouTube and are committed to respecting their guidelines by avoiding large-scale downloads or unauthorized video harvesting.”

We sought comments from YouTube and Google representatives but have not yet received a response.

Viggle joins a number of AI companies that use YouTube as training data, a practice that sits in a legal gray area. Many AI developers, including Nvidia, Apple, and Anthropic, have reportedly used YouTube video transcriptions or clips for training. In Silicon Valley, it's an open secret that many companies are likely doing so; what's uncommon is acknowledging it openly.
