In 2016, the Mannequin Challenge went viral: participants froze in place, holding a pose, while a camera moved among them. The videos, widely shared on YouTube, garnered millions of views. Now a team from Google AI is leveraging that trend to improve AI's ability to estimate depth in videos shot with a moving camera.
While humans perceive depth easily even when objects are moving, AI has struggled with this task, partly because it is hard to generate enough training clips, especially ones that are diverse in the age and gender of the people they show. This is where YouTube came into play. If you took part in the Mannequin Challenge, your video might be one of the roughly 2,000 the researchers used to build their dataset, which they intend to share with the scientific community.
To train their neural network, the team extracted 2D frames from the videos, estimated the camera's pose in each frame, and derived corresponding depth maps. As a result, the AI can now predict the depth of moving objects in video more accurately than before. This advancement could benefit self-driving cars and robots as they learn to navigate unknown environments.
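The depth maps come from classic multi-view geometry: once the camera's pose is known in two frames, a point seen in both can be triangulated and its depth read off. As a rough illustration of that step (not the paper's actual pipeline), here is a minimal two-view triangulation sketch in Python; the camera matrices and the 3D point are toy values invented for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: the point's 2D pixel coordinates in each view.
    """
    # Each observation gives two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean 3D point

# Toy setup: camera 1 at the origin, camera 2 shifted 1 unit along x.
K = np.eye(3)  # identity intrinsics for simplicity
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A point 4 units in front of camera 1, projected into each view.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
depth = X_est[2]  # z-coordinate = depth relative to camera 1 (≈ 4.0)
```

In the noise-free toy case the recovered point matches exactly; with real video, pose estimation and matching errors make this far harder, which is why a learned model trained on many such scenes helps.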
However, using these videos without consent has raised privacy concerns. As Technology Review notes, it is increasingly common for researchers to scrape publicly available data from platforms like Twitter and Flickr. As the demand for larger datasets grows, this practice is likely to persist, which is worth keeping in mind when deciding which viral trends to take part in.