OpenAI Forms Child Safety Team to Protect Underage Users from AI Risks

In response to growing concern from parents and advocacy groups, OpenAI has moved to enhance the safety of its AI technologies for younger users.

The company recently unveiled the formation of a dedicated Child Safety team. The newly established team is a collaborative effort, integrating expertise from OpenAI's platform policy, legal, and investigations divisions alongside partnerships with external organisations. Its primary focus is to oversee and refine processes, manage incidents, and conduct reviews relating to underage users on OpenAI's platforms.

OpenAI's careers portal now lists a position for a child safety enforcement specialist, who will help shape review processes for ‘sensitive’ content.

The necessity for such measures aligns with broader industry practice, with tech companies increasingly dedicating resources to comply with regulations like the U.S. Children's Online Privacy Protection Rule (COPPA). Such rules are designed to safeguard children's online experiences, regulating access to digital content and the collection of personal data.

The move follows OpenAI's recent collaboration with Common Sense Media to develop guidelines for child-friendly AI applications. 

A recent report from the Center for Democracy and Technology found that some 29 per cent of children and teenagers in the US have used generative AI tools like ChatGPT for personal and educational support, including dealing with anxiety, friendship issues, and family conflicts. The trend raises important questions about the risks AI poses to young users, especially given schools' mixed responses to AI tools over plagiarism and misinformation concerns.

The call for regulated AI usage among children is gaining traction. The UK's Children's Commissioner has made pointed observations about the rapid proliferation of AI tools and their uptake by young users.

Highlighting the dual-edged nature of AI, the Commissioner pointed to the surge in AI tool usage among children, with Ofcom data showing significant engagement among 7-17-year-olds in the UK.

Despite the noted benefits, the Commissioner expressed caution over the potential risks and unknown impacts of AI on children's lives, emphasising concerns ranging from cyberbullying and privacy issues to the generation of harmful content and the perpetuation of bias. She says: 

“We are yet to understand the true impact of these tools on children’s lives. However, I consider that AI demonstrates the problem of emerging technologies that are not fully covered by the existing regulatory regime and how children can suffer as a result.  

I have been a strong proponent of the robust protections for children in the Online Safety Act, but it has taken us many years to get here and many, many children who have grown up in an online environment that was and is not safe or designed for them. I am very pleased that the Act is in law and that I have a statutory role to ensure that children’s voices are heard, but AI is not covered by the Act and I am concerned that we are once again lagging behind an issue. 

More work is needed to fully understand how children can safely interact with these new technologies, and what strong safeguards should look like.”
