UK Considers AI Solutions to Safeguard Younger Internet Users as Their Online Presence Grows

Artificial Intelligence and Online Child Safety: Ofcom's New Initiative

Governments are increasingly scrutinizing artificial intelligence (AI) due to its potential misuse for fraud, disinformation, and other harmful online activities. In a new development, the U.K. regulator Ofcom is shifting the focus to explore AI's role in combating malicious online content that targets children.

As the official authority responsible for enforcing the Online Safety Act in the U.K., Ofcom plans to initiate a consultation aimed at examining both the current and future applications of AI and automated tools in proactively identifying and removing illegal content online. The primary objective is to enhance child protection against harmful material and to better detect child sexual abuse content that has previously been challenging to uncover.

This initiative aligns with Ofcom's recent research highlighting a significant increase in online connectivity among younger users: approximately 84% of children aged 3-4 are now accessing the internet, and nearly 25% of 5-7-year-olds already possess their own smartphones.

The upcoming tools from Ofcom will be part of a larger framework of proposals designed to improve online safety for children. The public consultation on these comprehensive proposals will commence shortly, while the specific AI consultation is set to follow later this year.

Mark Bunting, a director in Ofcom’s Online Safety Group, says the investigation into AI will initially focus on its effectiveness as a screening tool. “Some services do utilize these tools to protect children from harmful content,” he stated in an interview. “However, there is limited information regarding their accuracy and effectiveness. We aim to establish measures that ensure companies assess these tools thoroughly, balancing risks to free expression and privacy.”

One expected outcome is Ofcom's recommendations for platforms regarding the evaluation of their safety measures. This could lead to the adoption of more advanced technologies, with potential penalties for platforms that fail to enhance their content moderation capabilities or protect younger users effectively.

“As with much online safety regulation, responsibility lies with companies to implement appropriate measures and utilize the correct tools to safeguard their users,” Bunting noted.

This initiative is likely to attract both criticism and support. AI researchers are working on improving detection methods for various online threats, including deepfakes, while some skeptics point out that AI detection systems are not infallible, questioning the necessity of this consultation.

According to Ofcom’s research, the demographic of children engaging with online services skews younger than ever, prompting the regulator to examine the behavior of these youngest age groups more closely.

Mobile technology is becoming ever more prevalent among children. In surveys involving between 2,000 and 3,400 parents and children, nearly 25% of 5-7-year-olds were found to own smartphones, while 76% have access to tablets.

The same age group is also consuming significantly more media on these devices. For instance, 65% have made voice and video calls (up from 59% last year), and half (compared with 39% the previous year) reported watching streamed content.

Despite age restrictions on mainstream social media platforms, compliance appears to be lacking in the U.K. Notably, 38% of 5-7-year-olds are reported to be using social media, with Meta's WhatsApp the most popular choice at 37%. ByteDance's TikTok is used by 30% of these young users, while Instagram follows at 22%. Discord remains considerably less favored at just 4%.

Approximately one-third (32%) of children in this age range go online independently, and 30% of parents say they are comfortable with their underage children having social media accounts. YouTube Kids remains the leading platform for younger audiences, reaching 48% of this demographic.

Gaming continues to be immensely popular, with 41% of 5-7-year-olds participating, including 15% engaging in first-person shooter games.

While 76% of parents indicated that they discuss online safety with their young children, Ofcom raises concerns about the gap between what children encounter online and what they communicate to their parents. Research conducted with older children aged 8-17 revealed that 32% reported encountering concerning content, whereas only 20% of parents were aware of any incidents.

Despite potential inconsistencies in reporting, Ofcom's findings highlight a disconnect between older children's experiences with potentially harmful online content and their willingness to share these experiences with their parents. Additionally, the issue of deepfakes complicates matters further, as 25% of older children aged 16-17 express uncertainty about distinguishing fake content from real.

This initiative marks a significant step towards better protecting children in the digital space and enhancing overall online safety.
