AI startup Anthropic is updating its policies to allow minors to access its generative AI systems—under certain conditions.
In a recent blog post, Anthropic announced that it will permit teens and preteens to use third-party applications powered by its AI models, provided the developers build in specific safety features and disclose which Anthropic technologies their apps rely on.
In a support article, Anthropic outlines several safety measures that developers should adopt when building AI-powered apps for minors. These include age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for young users. The company also said it may provide “technical measures” designed to tailor AI experiences to minors, such as a “child-safety system prompt” that developers catering to a younger audience must implement.
Developers building on Anthropic’s AI models must comply with applicable child safety and data privacy regulations, including the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under 13. Anthropic says it will “periodically” audit apps for compliance and reserves the right to suspend or terminate the accounts of developers who repeatedly violate these requirements. Developers are also required to “clearly state” their compliance status on public-facing websites or documentation.
“There are specific scenarios where AI tools can deliver significant advantages for younger users, such as test preparation and tutoring support,” Anthropic noted in its blog post. “With this understanding, our new policy allows organizations to incorporate our API into products aimed at minors.”
This policy shift by Anthropic coincides with the rising trend of children and teenagers turning to generative AI tools for assistance with academic challenges and personal matters. Competing generative AI providers, including Google and OpenAI, are also exploring applications for children. Earlier this year, OpenAI established a new team to research child safety and announced a collaboration with Common Sense Media to create child-friendly AI guidelines. Similarly, Google has made its chatbot, now rebranded as Gemini, accessible to teens in English in select regions.
A recent poll conducted by the Center for Democracy and Technology revealed that 29% of children have used generative AI tools, like OpenAI’s ChatGPT, to cope with anxiety or mental health concerns. Additionally, 22% reported using these tools to address friendship issues, while 16% engaged them for family conflicts.
Last summer, many educational institutions hurried to ban generative AI applications—particularly ChatGPT—due to concerns over plagiarism and misinformation. However, some have since lifted these bans. Still, skepticism remains about the benefits of generative AI, with surveys from the U.K. Safer Internet Centre indicating that over half of children (53%) observed their peers using generative AI negatively, such as for creating misleading information or distressing images (including pornographic deepfakes).
The demand for regulatory guidelines on minors’ use of generative AI is growing. Late last year, UNESCO urged governments to regulate the educational application of generative AI, proposing age limits for users and safeguards on data protection and user privacy. “Generative AI can present a significant opportunity for human development, but it also poses risks and biases,” stated Audrey Azoulay, UNESCO’s director-general, in a press release. “It cannot be integrated into education without public involvement and the necessary regulations and safeguards from governments.”