OpenAI Creates New Team Dedicated to Researching Child Safety

Under the spotlight from activists and parents alike, OpenAI has launched a dedicated team aimed at preventing the misuse of its AI tools by and against children. The company announced the formation of a Child Safety team through a recent job posting on its careers page. The team collaborates closely with OpenAI's platform policy, legal, and investigations departments, as well as external partners, to manage processes, incidents, and reviews concerning underage users.

Currently, OpenAI is seeking to hire a child safety enforcement specialist. This role will focus on applying the company's policies related to AI-generated content and developing review processes for sensitive content pertaining to children.

As technology vendors grow, they often allocate significant resources to comply with regulations such as the U.S. Children’s Online Privacy Protection Act (COPPA), which controls what children can access online and what data companies can collect about them. Hence, OpenAI’s decision to recruit child safety experts aligns with its expectation of a substantial underage user demographic in the future. Notably, OpenAI’s terms of use mandate parental consent for users aged 13 to 18 and prohibit access for children under 13.

The establishment of this new team follows OpenAI's partnership with Common Sense Media to develop child-friendly AI guidelines and its recent acquisition of its first education customer. This move highlights OpenAI's commitment to adhering to policies related to minors’ usage of AI and mitigating potential negative press surrounding these issues.

Today’s youth are increasingly turning to generative AI tools not just for academic assistance but also for personal challenges. A recent poll by the Center for Democracy and Technology found that 29% of kids have used ChatGPT to address anxiety or mental health concerns, 22% for social issues with friends, and 16% for family conflicts.

However, this trend raises significant concerns. Last summer, educational institutions rapidly enacted bans on ChatGPT due to fears of plagiarism and misinformation. While some have since lifted these bans, skepticism about generative AI’s benefits persists. For example, a survey by the U.K. Safer Internet Centre found that 53% of kids reported witnessing peers using generative AI negatively, such as creating misleading content or harmful imagery.

In September, OpenAI released guidelines for using ChatGPT in educational settings, outlining prompts and an FAQ designed to assist educators in leveraging generative AI as a teaching resource. Within this documentation, OpenAI acknowledged the possibility that ChatGPT “may produce output that isn’t suitable for all audiences or ages” and recommended exercising “caution” when children are exposed to its tools, even if they meet the age criteria.

The call for clear guidelines regarding children's engagement with generative AI is becoming increasingly urgent.

Late last year, the UN Educational, Scientific and Cultural Organization (UNESCO) urged governments to regulate the use of generative AI in education, recommending age limits for users and implementing measures for data protection and privacy. “Generative AI holds incredible potential for human development, but it also poses risks and biases,” stated UNESCO Director-General Audrey Azoulay in a press release. “Public engagement, along with essential safeguards and regulations from governments, is crucial before its integration into education.”
