British regulators have urged major social media platforms to strengthen protections for children online after lawmakers rejected a proposal to impose a blanket ban on social media use for under-16s.
The call was made by the UK’s communications regulator, Ofcom, and the data protection watchdog, the Information Commissioner’s Office (ICO), which said they had written to several major platforms asking them to improve safeguards for young users.
The regulators addressed the letter to companies including YouTube, TikTok, Facebook, Instagram and Snapchat, calling on them to tackle a range of child safety concerns.
Their demands include stronger age-verification systems, preventing adults from contacting minors, safer content for teenagers and ensuring that experimental technologies such as artificial intelligence are not tested on children.
The move follows a decision by lawmakers in the United Kingdom to reject a proposal to include a social media ban for under-16s in new child welfare legislation currently under debate.
Instead, the government has launched a consultation seeking views from parents and young people on whether restricting children’s access to social media platforms would be effective.
The debate reflects growing concern across Europe about the impact of social media on children and teenagers.
Several governments are considering tighter restrictions after Australia became the first country to introduce a sweeping ban on social media use for under-16s in December. Countries including Spain, France and Denmark are also weighing similar measures.
In the letter, the regulators asked platforms to report on the steps they are taking to keep children off services they are too young to use. Companies have until April 30 to respond.
Ofcom chief executive Melanie Dawes said technology companies were still failing to prioritise the safety of young users.
“Tech firms are failing to put children’s safety at the heart of their products and are falling short on promises to keep children safe online,” she said.
“Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose on services they can’t realistically avoid.”
The ICO also published an open letter urging platforms to adopt more reliable methods of verifying users’ ages.
Its chief executive Paul Arnold said many platforms currently rely on self-declared ages when users sign up — a system regulators say is easily bypassed.
“This puts under-13s at risk by allowing their information to be collected and used unlawfully without the protections they are entitled to,” Arnold said.
Regulators suggested several technologies that could improve age checks, including facial age-estimation systems, digital identification and one-time photo verification.
Technology companies say they are already implementing some of these measures.
A spokesperson for Meta said the company uses artificial intelligence to detect users’ ages based on their activity, as well as facial age-estimation technology.
Meta has also introduced specialised “teen accounts” with built-in safety protections on platforms such as Instagram and Facebook.
The company added that verifying users’ ages at the app store level could be a more effective approach, noting that teenagers typically use dozens of apps each week.
Meanwhile, TikTok said it has introduced new technologies across Europe since January to detect and remove accounts belonging to users under the minimum age of 13.
The platform said it uses a combination of facial age estimation, credit-card verification and government-approved identification to confirm users’ ages.
The renewed regulatory pressure comes as courts and regulators increasingly scrutinise social media companies over the safety of younger users.
A major lawsuit in the United States involving Meta and Alphabet, the parent company of YouTube, is examining claims that the design of platforms such as Instagram and YouTube contributes to addiction among young users.
The case, which began earlier this year, could set an important precedent regarding the responsibility of social media companies to protect children online.