The European Union has launched a major investigation into X, the social media platform owned by Elon Musk, following a global outcry over millions of sexually explicit “deepfake” images generated by its artificial intelligence chatbot, Grok.
A “Violent” and “Unacceptable” Scandal
The formal probe, announced by the European Commission on Monday, centers on whether X broke EU law by failing to assess the severe risks before rolling out Grok’s image-creation features. The scandal erupted late last year when users discovered they could easily upload a photo of a person and use simple text prompts to have Grok generate non-consensual, sexualized images—including of children and women in degrading poses.

Henna Virkkunen, the EU Executive Vice-President leading the investigation, condemned the images as a “violent, unacceptable form of degradation”. She stated the investigation would determine if X “treated rights of European citizens—including those of women and children—as collateral damage of its service”.
X’s Inadequate Response and Mounting Legal Peril
X’s response to the scandal has drawn criticism from regulators worldwide. Initially, the company limited the image-creation tool to paying subscribers, a move a UK government minister labeled “monetising abuse” and “insulting to victims”. It later blocked the generation of images of real people in revealing clothing, but only in jurisdictions where such content is illegal.
An Oxford University analysis suggests the crisis was predictable, arguing Grok was “structurally designed to operate with fewer safeguards” from the start, and prior warnings about its ability to create harmful content were ignored.
The EU investigation is being conducted under the powerful Digital Services Act (DSA), which could see X fined up to 6% of its global annual turnover if found in breach. This comes just a month after the EU fined X approximately €120 million ($140 million) for other DSA violations related to the deceptive design of its verification system.
A Global Regulatory Onslaught
The EU is not acting alone. The UK’s communications regulator, Ofcom, launched its own formal investigation in January under the Online Safety Act. The UK government has also moved to criminalize the creation of such deepfakes, with a minister declaring the images “weapons of abuse” and warning that platforms hosting them must be held accountable.
Investigations are also underway in Australia, France, Germany, and California. Following the outcry, Indonesia and Malaysia temporarily banned access to the Grok chatbot.
Why It Matters
Experts warn the Grok case is symptomatic of a much larger problem. Dr. Federica Fedorczyk of the University of Oxford’s Institute for Ethics in AI states it is “just the tip of the iceberg of a wider… ecosystem of online misogyny and abuse”. She argues that simply criminalizing the outcome is insufficient and that safety must be embedded in the design of AI systems from the outset.
With its user base, reputation, and finances now under unprecedented scrutiny from the world’s most powerful regulators, X faces a pivotal moment. The investigation will test whether the platform’s “move fast and break things” ethos can survive in an era demanding digital accountability.