The United Kingdom government is tightening its online safety legislation to ensure artificial intelligence chatbot providers are subject to stricter rules aimed at protecting children.
The move is designed to close what officials described as a loophole in the country's online safety framework. AI chatbot services will now be required to take stronger action against illegal and harmful material or face significant penalties, including fines or potential blocking within the UK.
Prime Minister Keir Starmer announced the changes following criticism of X over sexually explicit content generated by its chatbot, Grok. The government signalled that AI-driven services must meet the same standards as other online platforms in preventing the spread of illegal material.

Under the revised approach, chatbot developers operating in the UK will be expected to implement robust safeguards to prevent the creation and distribution of harmful content, particularly material that could endanger children. Companies that fail to comply risk regulatory enforcement measures.
The development reflects growing global concern about the risks associated with generative AI tools, especially as they become more widely used by young people. UK authorities have positioned child protection and digital safety as central priorities in their technology regulation agenda.
The update to the legislation underscores the government’s intention to ensure that advances in AI innovation are matched by accountability and strong content moderation standards.
