Humans Can't Resist Testing AI with Inappropriate Memes: From Boobs to 9/11

The AI industry is advancing at an alarming rate, yet no amount of training can prepare AI models for the unpredictable nature of human creativity, especially when it comes to generating absurd content like images of a pregnant Sonic the Hedgehog. As companies rush to release the latest AI tools, they often overlook the reality that users may exploit these technologies for chaos rather than constructive purposes. Unfortunately, artificial intelligence struggles to keep pace with humanity's penchant for creating outlandish and inappropriate content.

Recently, both Meta’s and Microsoft’s AI image generators gained significant attention for producing bizarre images in response to prompts such as “Karl Marx large breasts,” and for depicting fictional characters in the 9/11 attacks. These incidents highlight the pitfalls of hastily integrating AI capabilities without addressing potential misuse.

Meta is currently rolling out AI-generated chat stickers for Facebook Stories, Instagram Stories, DMs, Messenger, and WhatsApp. The feature is powered by Llama 2, Meta’s latest large language model, and Emu, its image generation technology. The stickers were announced at last month’s Meta Connect and will roll out to select English-language users throughout October.

“Every day, millions of stickers are sent to convey emotions in chats,” said Meta CEO Mark Zuckerberg during the announcement. “With Emu, users can now simply describe what they want, rather than being limited to predefined options.”

While some early users aimed to test the limits of sticker specificity, many of their prompts veered into the realm of absurdity. In just days, Facebook witnessed users generating outrageous images like Kirby with breasts and a pregnant Sonic—demonstrating the unpredictable ways people can utilize new technologies.

Although Meta has implemented filters to block certain terms such as “nude” and “sexy,” users quickly discovered that these restrictions can be easily circumvented by using creative misspellings and alternative phrases. Moreover, Meta’s AI models continue to struggle with generating realistic human hands, adding another layer of unpredictability.
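The weakness described above is a general one with term-based blocklists. As a minimal sketch (this is purely illustrative and in no way Meta’s actual moderation code, and the blocked terms are just the two examples from this article), a filter that checks prompts against a fixed list of strings is trivially evaded by a one-character misspelling:

```python
# Illustrative sketch of a naive term blocklist, NOT any company's real filter.
# Blocked terms are the two examples mentioned in the article.
BLOCKED_TERMS = {"nude", "sexy"}

def is_allowed(prompt: str) -> bool:
    """Reject a prompt if any blocked term appears as a substring."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(is_allowed("a sexy robot"))   # False: exact match is caught
print(is_allowed("a s3xy robot"))   # True: a single character swap slips past
```

Because the check only matches literal strings, every creative misspelling or synonym has to be anticipated in advance, which is exactly the cat-and-mouse game users exploited.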

Critics on social media, such as X user Pioldes, argued that the developers' lack of foresight was evident, sharing examples of AI-generated stickers featuring inappropriate and offensive content.

Microsoft's Bing Image Creator, which integrated OpenAI’s DALL-E earlier this year, also faced similar issues. Despite claims of added “guardrails” to prevent harmful image generation, reports have revealed that the system can still produce distressing content, including fictional characters involved in tragic events like 9/11. Although Microsoft has a content policy that prohibits such depictions, many images featuring beloved characters piloting planes toward the Twin Towers have circulated online.

For instance, images depicting characters like the Eva pilots from "Neon Genesis Evangelion" and Gru from "Despicable Me" have recently gone viral, showcasing the persistent loopholes in content moderation.

Even with phrases like “Twin Towers” and “9/11” blocked, clever users have found ways around Microsoft's filters, allowing them to create shocking visuals with little effort. One user even crafted a thread featuring Kermit in bizarre scenarios, illustrating the lengths to which users will go to exploit these systems.

The trend of bypassing content filters is often referred to as "jailbreaking," a term originally used in software hacking. This phenomenon has become more prevalent as users seek to test AI's defenses, leading to a new form of online satire that revels in absurdity and rule-breaking.

Other AI platforms have faced similar challenges. For example, when Snapchat introduced its family-friendly chatbot, users quickly coaxed it into responding in comical and inappropriate ways. Midjourney, despite robust filters against adult content, also saw users finding ways to generate explicit images. Similarly, Discord's chatbot, Clyde, inadvertently provided a user with instructions for producing napalm after being prompted to role-play as the user's deceased relative.

The endless pursuit of creativity in generative AI tools continues to present public relations challenges for tech companies. Users adept at uncovering shortcomings in safety protocols are demonstrating the vast, sometimes humorous possibilities of AI technology, even if the outcomes raise ethical concerns. The irony lies in the fact that after countless advancements, we often utilize this sophisticated technology for the simplest—or most outrageous—of human interests.
