Grok AI draws government scrutiny over misuse to create sexualised deepfakes

Elon Musk’s artificial intelligence company xAI is facing growing international scrutiny after users on X employed its Grok image generator to create sexualised images of women, including non-consensual deepfakes and images involving minors.

Concerns escalated this week after screenshots circulated on X showing users prompting Grok to digitally alter photos of real people by removing clothing or changing their appearance in a sexualised manner. While some requests involved consenting adults, others appeared to target individuals without consent, raising serious legal and ethical questions.

French authorities have confirmed that they are investigating the spread of AI-generated deepfakes produced using Grok. Under French law, distributing non-consensual deepfake content can carry a prison sentence of up to two years. Prosecutors said the inquiry forms part of broader efforts to curb the misuse of artificial intelligence for online abuse.

In India, the Ministry of Electronics and Information Technology has formally written to X’s compliance office, citing reports of AI-generated images that demean and degrade women. The ministry requested an urgent technical and governance review of the platform and demanded the removal of any content found to violate Indian law.

The backlash has also reached the United Kingdom. Alex Davies-Jones, the UK minister responsible for victims and violence against women and girls, publicly criticised the platform, urging xAI to strengthen safeguards. She warned that AI tools capable of rapidly producing explicit deepfakes pose a significant threat to women and girls, particularly when consent is absent.

xAI’s own acceptable use policy explicitly bans pornographic depictions of real people and the sexualisation or exploitation of children. However, critics argue that enforcement has lagged behind the rapid rollout of new features. Requests for comment sent to xAI were met with automated responses that did not directly address the allegations.

The official Grok account on X acknowledged what it described as “lapses in safeguards” and said fixes were being implemented urgently. It stated that while protections exist, improvements are ongoing to fully block prohibited requests, including those involving minors.

The controversy comes amid wider concern about AI-generated deepfakes globally. Regulators and legal experts have warned that existing laws are struggling to keep pace with generative technologies. In the United States, federal and state laws offer limited protection for adults affected by non-consensual deepfakes, while protections for minors are broader but still difficult to enforce at scale.

Despite the growing pressure, responsibility for AI-generated content remains legally complex. US law has traditionally shielded platforms from liability for user-generated content, though legal scholars argue that AI tools which actively generate images may challenge that protection.

As governments move to tighten regulation, the Grok case is emerging as a test of how far AI companies can be held accountable for the misuse of their products, and whether voluntary safeguards are sufficient in an era of rapidly advancing generative technology.
