Common Sense Media Flags Popular AI Products as Unsafe for Kids: A Must-Read Safety Alert

An independent evaluation of widely used AI tools has revealed potential safety issues for children, with products like Snapchat’s My AI, DALL-E, and Stable Diffusion receiving critical assessments. The reviews come from Common Sense Media, a nonprofit whose media ratings help parents evaluate the apps, games, podcasts, TV shows, movies, and books their children consume. Earlier this year, Common Sense Media announced its intention to introduce ratings for AI products, and these ratings are now live, featuring "nutrition labels" that clarify the safety of AI tools such as chatbots and image generators.

The initiative was sparked by a parent survey that indicated strong demand for reliable resources. A significant 82% of parents expressed a desire for assistance in determining if new AI products, such as ChatGPT, are appropriate for their children. However, only 40% were aware of dependable resources that could guide them in making these evaluations.

As a result, Common Sense Media launched its initial AI product ratings today, which assess a range of AI principles, including trust, child safety, privacy, transparency, accountability, educational value, fairness, social connections, and societal benefits. The organization examined ten popular applications on a 5-point scale, covering learning tools, AI chatbots like Bard and ChatGPT, as well as generative products like Snapchat's My AI and DALL-E. Unsurprisingly, the generative AI category scored the lowest.

"AI is not infallible, and it is not free from biases," stated Tracy Pizzo-Frey, Senior Advisor of AI at Common Sense Media. "Generative AI systems are trained on extensive internet data, which encompasses a wide range of cultural, racial, socioeconomic, historical, and gender biases—insights we identified in our assessments," she explained. "We hope our ratings will motivate developers to create safeguards that prevent misinformation and protect future generations from adverse outcomes."

In her own assessments, reporter Amanda Silberling observed that Snapchat’s My AI generally displayed eccentric and unpredictable behavior rather than outright harmfulness. However, Common Sense Media assigned the chatbot a 2-star rating, highlighting its tendency to perpetuate ageism, sexism, and cultural stereotypes. It also provided inappropriate responses at times and raised privacy concerns by storing personal user data.

Snapchat responded to the unfavorable review, emphasizing that My AI is an optional feature, that the app clearly labels it as a chatbot, and that users are advised of its limitations. “By default, My AI displays a robot emoji. Before anyone can interact with My AI, we show a message clarifying that it’s a chatbot and outline its limitations,” said Snap spokesperson Maggie Cherneff. “My AI also integrates into our Family Center, allowing parents to monitor their teens' interactions. We value feedback as we work towards enhancing our product,” she added.

Other generative AI tools like DALL-E and Stable Diffusion exhibited similar risks, notably the objectification and sexualization of women and girls and the reinforcement of harmful gender stereotypes. These models are increasingly being used to create inappropriate content, with sites like Hugging Face and Civitai making it easy to find and combine image models to produce pornographic material, sometimes involving celebrity likenesses. The issue gained attention earlier this week when 404 Media highlighted Civitai’s functionality, sparking debate on platforms like Hacker News over where accountability lies between community aggregators and AI model developers.

Common Sense’s mid-tier ratings included AI chatbots such as Google’s Bard (which recently became available to teenagers) and ChatGPT, along with Toddle AI. The organization warned that these bots may exhibit bias, particularly toward users with diverse backgrounds and dialects. They may also produce erroneous information, commonly referred to as AI hallucinations, and perpetuate stereotypes. Common Sense cautioned that false information generated by AI could shape users' perspectives, making it increasingly difficult to distinguish truth from fabrication.

OpenAI responded by reiterating its commitment to user safety and privacy. “We care deeply about the safety and privacy of all users, including young individuals, which is why we’ve implemented strong safety measures into ChatGPT. We require parental consent for users aged 13-17, and children under 13 are prohibited from using our services,” stated OpenAI spokesperson Kayla Wood.

The only AI products that garnered positive reviews were Ello’s AI reading tutor, Khanmigo from Khan Academy, and Kyron Learning’s AI tutor—all educational tools designed with children in mind. Although lesser-known, these products adopt responsible AI practices, focusing on fairness, diverse representation, and child-friendly designs, while maintaining transparency in their data privacy policies.

Common Sense Media plans to continue publishing ratings and evaluations of new AI products, aiming to inform parents, families, lawmakers, and regulators. “Consumers need access to clear ratings for AI products that may jeopardize the safety and privacy of all users, especially children and teens,” said James P. Steyer, founder and CEO of Common Sense Media. “Understanding what an AI product is, how it operates, and its ethical risks allows lawmakers, educators, and the public to recognize what responsible AI entails. If the government neglects to regulate AI effectively, tech companies may exploit the resulting environment, ultimately compromising our data privacy and societal well-being,” he added.

Updated, 11/16/23, 4:18 PM ET with OpenAI comment.