LONDON: Britain’s media and privacy regulators have issued a strong warning to major social media platforms, urging them to take immediate steps to prevent children from accessing services that are not designed for them.
On Thursday, the UK’s communications regulator Ofcom and the Information Commissioner’s Office (ICO) said several global platforms—including Meta’s Facebook and Instagram, TikTok, Snapchat, Roblox and YouTube—are failing to properly enforce their own minimum age policies.
The regulators expressed concern that many young users continue to gain access to social media despite existing restrictions. The warning comes as the British government considers stricter measures to limit children’s use of social media.
According to Reuters, one proposal under discussion includes banning users under the age of 16 from joining such platforms, similar to a policy recently introduced in Australia.
Regulators also highlighted the risks posed by algorithm-driven content feeds, which Ofcom said can expose young users to harmful, inappropriate or highly addictive material, deepening concerns about the impact of social media on children’s wellbeing.
Ofcom chief executive Melanie Dawes said major tech platforms must prioritise the safety of children in the design of their services. She stressed that regulators expect companies to make rapid improvements or face enforcement action.
As part of the latest phase of the UK’s Online Safety Act, Ofcom has instructed the platforms to demonstrate by April 30 how they will strengthen their safeguards.
These measures include introducing stronger age-verification systems, limiting contact between minors and unknown users, improving the safety of recommendation feeds and ensuring that new digital products are not tested on children.
Meanwhile, the ICO has issued a separate open letter urging the companies to adopt modern age-assurance technologies capable of preventing children under 13 from accessing platforms not meant for them.
ICO chief executive Paul Arnold said advanced tools for verifying users’ ages are widely available, leaving companies with little justification for failing to implement them.
Technology firms have responded cautiously to the regulators’ demands. A Meta spokesperson stated that the company already uses artificial intelligence tools to estimate users’ ages and places teenage users into accounts with built-in safety protections.
The company also suggested that age verification should be handled at the app store level to avoid families repeatedly sharing personal data across multiple platforms.
YouTube said it was surprised by Ofcom’s approach and argued that regulators should focus on high-risk platforms that are not complying with existing laws rather than applying broader restrictions.
Other companies named in the warning—including TikTok, Snapchat and Roblox—had not publicly responded at the time of the announcement.
Under the UK’s regulatory framework, Ofcom has the authority to impose fines of up to 10 percent of a company’s global revenue for violations of the Online Safety Act.
The ICO can also penalise companies up to 4 percent of their worldwide annual turnover for breaches of data protection rules.
Regulators have already shown a willingness to take action. Last month, the ICO fined Reddit nearly £14.5 million for failing to implement effective age checks and for unlawfully processing children’s personal data.
The latest warnings signal that UK authorities are prepared to take tougher action against technology companies that fail to protect younger users online.