AI Music Generators are Being Misused to Create Hateful Songs: A Growing Concern

Malicious individuals are exploiting generative AI music tools to produce homophobic, racist, and propagandistic songs while also disseminating guides that teach others how to do the same.

According to ActiveFence, a service dedicated to enhancing trust and safety on online platforms, there has been a noticeable increase in discussions within "hate speech-related" communities since March. These conversations focus on strategies for misusing AI music creation tools to craft offensive songs aimed at minority groups. The AI-generated tracks circulating in these forums target ethnic, gender, racial, and religious communities, inciting hatred while glorifying martyrdom, self-harm, and terrorism, as noted by ActiveFence researchers in a recent report.

While hateful and harmful songs are not new, the concern lies in the accessibility of user-friendly, free music-generating tools. This ease of use could enable individuals who previously lacked the skills or resources to create such material. Similar to how text, image, voice, and video generators have facilitated the spread of misinformation and hate speech, the proliferation of generative AI music may pose significant risks.

“These trends are escalating as more users learn to generate and share these songs,” stated an ActiveFence spokesperson. “Threat actors are quickly identifying vulnerabilities in these platforms to exploit and create malicious content.”

The Creation of Hate Songs

Generative AI music tools like Udio and Suno allow users to add custom lyrics to generated songs. Although these platforms implement safeguards to filter out common slurs and derogatory terms, users have already discovered ways around these restrictions, as reported by ActiveFence.

For instance, individuals in white supremacist forums have shared phonetic spellings of minority groups and offensive terms—such as “jooz” for “Jews” and “say tan” for “Satan”—to evade content filters. Some users even suggested altering the spacing and spelling of violent phrases, for example replacing “my rape” with “mire ape.”

We tested several of these methods on Udio and Suno, two popular platforms for creating and sharing AI-generated music. Suno allowed all of the attempted workarounds, while Udio blocked some, but not all, of the offensive homophones.

In response to our inquiries, a Udio spokesperson asserted that the company prohibits hate speech on its platform. Suno did not respond to our request for comment.

ActiveFence discovered communities producing AI-generated songs that echoed conspiracy theories against Jewish individuals and endorsed their mass extermination. Additionally, there were songs featuring slogans linked to terrorist groups like ISIS and al-Qaida, as well as those glorifying sexual violence against women.

The Emotional Impact of Music

ActiveFence argues that songs, unlike other forms of media, carry a unique emotional weight that can serve as a powerful tool for hate groups and political agendas. The firm references Rock Against Communism, a series of white power concerts held in the U.K. during the late ’70s and early ’80s, which gave rise to subgenres of antisemitic and racist “hatecore” music.

“AI makes harmful content more engaging—imagine someone promoting a damaging narrative about a community, and then envision the impact of turning that message into a catchy, rhyming song. It becomes easier for people to chant and remember,” an ActiveFence spokesperson explained. “These songs reinforce group identities, indoctrinate those on the edges, and can shock unaffiliated users.”

ActiveFence is urging music generation platforms to implement preventive measures and perform more comprehensive safety evaluations. “Red teaming could help identify some of these vulnerabilities by simulating the actions of threat actors,” said the spokesperson. “Improved moderation of both input and output could be beneficial, enabling platforms to block harmful content before it reaches users.”

However, fixes may only be temporary as users continue to discover new techniques to bypass moderation. For example, some AI-generated terrorist propaganda songs identified by ActiveFence utilized euphemisms and transliterations in Arabic—language that the music generators may not adequately filter.

If left unchecked, AI-generated hateful music could proliferate rapidly, mirroring trends seen in other AI-generated media. Wired reported earlier this year on an AI-manipulated clip featuring Adolf Hitler that garnered over 15 million views on X after being circulated by a far-right conspiracy influencer.

Among various experts, a UN advisory body has raised alarms about how generative AI could amplify racist, antisemitic, Islamophobic, and xenophobic content.

“Generative AI services empower users without resources or technical expertise to create compelling content that competes for attention in the global market of ideas,” the ActiveFence spokesperson noted. “Threat actors, recognizing the creative potential of these new services, are actively working to bypass moderation and evade detection, and they are finding success.”
