British regulators have urged major social media platforms to strengthen protections for children online after lawmakers rejected a proposal to impose a blanket ban on social media use for under-16s.
The call came from the UK’s communications regulator, Ofcom, and the data protection watchdog, the Information Commissioner’s Office (ICO), which said they had written to several major platforms asking them to improve safeguards for young users.
The regulators addressed the letter to platforms including YouTube, TikTok, Facebook, Instagram and Snapchat, calling on them to tackle a range of child safety concerns.

Their demands include stronger age-verification systems, measures to prevent adults from contacting minors, safer content for teenagers and assurances that experimental technologies such as artificial intelligence are not tested on children.
The move follows a decision by lawmakers in the United Kingdom to reject a proposal to include a social media ban for under-16s in new child welfare legislation currently under debate.
Instead, the government has launched a consultation seeking views from parents and young people on whether restricting children’s access to social media platforms would be effective.
The debate reflects growing concern across Europe about the impact of social media on children and teenagers.
Several governments are considering tighter restrictions after Australia became the first country to introduce a sweeping ban on social media use for under-16s in December. Countries including Spain, France and Denmark are also weighing similar measures.

In its letter, Ofcom asked platforms to report on the steps they are taking to keep children off services they are too young to use. Companies have been given until April 30 to respond.
Ofcom chief executive Melanie Dawes said technology companies were still failing to prioritise the safety of young users.
“Tech firms are failing to put children’s safety at the heart of their products and are falling short on promises to keep children safe online,” she said.
“Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose on services they can’t realistically avoid.”
The ICO also published an open letter urging platforms to adopt more reliable methods of verifying users’ ages.
Its chief executive, Paul Arnold, said many platforms currently rely on self-declared ages when users sign up, a system regulators say is easily bypassed.
“This puts under-13s at risk by allowing their information to be collected and used unlawfully without the protections they are entitled to,” Arnold said.
Regulators suggested several technologies that could improve age checks, including facial age-estimation systems, digital identification and one-time photo verification.

Technology companies say they are already implementing some of these measures.
A spokesperson for Meta said the company uses artificial intelligence to detect users’ ages based on their activity, as well as facial age-estimation technology.
Meta has also introduced specialised “teen accounts” with built-in safety protections on platforms such as Instagram and Facebook.
The company added that verifying users’ ages at the app store level could be a more effective approach, noting that teenagers typically use dozens of apps each week.
Meanwhile, TikTok said it has introduced new technologies across Europe since January to detect and remove accounts belonging to users under the minimum age of 13.
The platform said it uses a combination of facial age estimation, credit-card verification and government-approved identification to confirm users’ ages.
The renewed regulatory pressure comes as courts and regulators increasingly scrutinise social media companies over the safety of younger users.
A major lawsuit in the United States involving Meta and Alphabet — the parent company of YouTube — is examining claims that the design of platforms such as Instagram and YouTube contributes to addiction among young users.
The case, which began earlier this year, could set an important precedent regarding the responsibility of social media companies to protect children online.
The regulators’ call comes amid growing concern about the impact of online platforms on young users and mounting pressure on technology firms to improve digital safety.
Rising concerns over online harm
In recent years, policymakers, parents and child-protection groups in the United Kingdom have raised alarm about the risks children face on social media platforms. These include exposure to harmful or inappropriate content, cyberbullying, online grooming and addictive platform designs that can negatively affect mental health.
High-profile cases involving the deaths of young people linked to harmful online content intensified public scrutiny of social media companies. Campaigners have argued that many platforms’ algorithms recommend dangerous or age-inappropriate material to minors, prompting calls for stronger oversight.
Introduction of stronger digital regulations
In response, the UK government introduced the Online Safety Act, a landmark law designed to make technology companies more responsible for the content and risks on their platforms. The legislation requires companies to identify and mitigate potential harms to users, particularly children.
The law gives the UK’s communications regulator, Ofcom, broad powers to enforce compliance. Companies that fail to protect users—especially minors—could face significant financial penalties or restrictions on their services.
Responsibilities for social media platforms
Under the new regulatory framework, major social media companies such as Meta, TikTok, Snap and YouTube are expected to introduce stronger safeguards for younger users.
These measures may include:
- Implementing robust age-verification systems
- Limiting the spread of harmful or age-inappropriate content
- Strengthening parental control tools
- Adjusting algorithms that recommend content to minors
- Improving reporting systems for abuse or harmful posts
The goal is to ensure that children are not exposed to material related to self-harm, violence, pornography or other harmful themes.
Global push for stronger child protections
The UK’s regulatory efforts reflect a broader global trend toward tighter oversight of technology companies. Governments in the European Union and the United States are also debating stricter rules for online platforms, particularly concerning children’s safety.
For example, the Digital Services Act in the European Union similarly requires large technology platforms to assess and mitigate risks related to illegal or harmful online content.