
Deepfakes, AI and the new age of sexual harassment: The Grok controversy explained

The Grok AI controversy has reignited global concern over AI-generated deepfakes that violate the privacy and dignity of women and minors

Grok and xAI logos | Reuters

Protecting our 'digital dignity' is fast becoming a major human rights issue. As generative AI advances, a troubling new reality has emerged: it is now far too easy for people to use the faces of women and children without their consent.

What is the controversy all about?

A major row has erupted around Grok, the artificial intelligence chatbot developed by Elon Musk’s xAI, over its alleged role in generating sexualised, non-consensual deepfake images, a threat that falls most heavily on women and minors.

Users began prompting Grok to 'undress' people in photos or place them in sexually suggestive poses, often targeting female celebrities, influencers, and even private citizens. The episode has raised urgent questions about AI safety, privacy, and regulatory oversight worldwide.

The 2024 Taylor Swift deepfake crisis served as a global wake-up call. Sexually explicit synthetic images of the singer, reportedly generated by bypassing filters on tools like Microsoft Designer, spread uncontrollably on X, with one post crossing 47 million views in under a day. The viral violation forced X to temporarily block searches for Swift's name and prompted the White House to call for urgent legislative action.

The Grok controversy isn't confined to one country. Governments in France, Malaysia, Italy, Indonesia, and other jurisdictions have publicly criticised Grok for facilitating sexually explicit and non-consensual AI content. 

Malaysia and Indonesia became the first countries to restrict access to Grok, citing failures to implement adequate safeguards and protect citizens from harmful deepfakes.

The EU ordered Elon Musk’s X to preserve all internal documents and data related to Grok’s development and its 'spicy' mode until the end of 2026, preventing the company from deleting internal emails or logs that might show whether it knew the AI was being misused.

The Indian government has moved from issuing advisories to enforcing strict punitive measures, specifically targeting the misuse of tools like Grok. The Ministry of Electronics and Information Technology (MeitY) issued a letter to X over its AI chatbot, demanding the removal of unlawful content within 24 hours and the submission of a detailed Action Taken Report (ATR) within 72 hours.

Further, the Ministry emphasised that these digital offences attract serious penal consequences under the Bharatiya Nyaya Sanhita (BNS) and the POCSO Act, framing the issue not as a mere technical glitch but as a significant threat to digital safety and public order.

MeitY also asserted that X has failed to meet its statutory due-diligence obligations under the IT Act, 2000, and the IT Rules, 2021, explicitly warning that such lapses could cost the platform its exemption from liability under Section 79, exposing both the platform and its officers to criminal prosecution.

How has the public reacted?

Indian MPs and civil society groups have raised alarms about women’s privacy and online dignity being compromised. Rajya Sabha MP Priyanka Chaturvedi has formally raised the issue with the government, describing the pattern as a “gross misuse of AI function” and a direct breach of women’s right to privacy.

AI's relationship with privacy has always been a rocky one. The technology did not simply appear; it was built on vast troves of data collected without consent. Most AI companies maintain a list of 'negative prompts', blacklisted terms like 'nude' or 'suggestive', along with the names of specific minors, that their models are supposed to refuse.
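To illustrate how simple, and how brittle, such keyword filters can be, here is a minimal, hypothetical Python sketch of a blocklist check. The terms, names, and function below are illustrative assumptions for this article, not Grok's or any vendor's actual moderation pipeline.

```python
# Minimal, hypothetical sketch of a keyword "blocklist" filter.
# BLOCKED_TERMS and PROTECTED_NAMES are illustrative placeholders,
# not any vendor's actual moderation list.

BLOCKED_TERMS = {"nude", "undress", "suggestive"}
PROTECTED_NAMES = {"jane doe"}  # e.g. names of specific individuals

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains a blacklisted term or name."""
    text = prompt.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return False
    if any(name in text for name in PROTECTED_NAMES):
        return False
    return True

print(is_prompt_allowed("paint a mountain landscape"))        # True
print(is_prompt_allowed("undress the person in this photo"))  # False
```

A naive substring check like this is trivially bypassed by misspellings, spacing tricks, or rephrasing, which is one reason serious moderation systems layer trained classifiers on top of keyword lists rather than relying on lists alone.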

Grok was launched with a 'free speech' philosophy, meaning many of these filters were either weak or non-existent, allowing the AI to follow harmful instructions.

Following the backlash in January 2026, X implemented a technical barrier. Rather than fixing the AI's 'brain' so it could no longer produce nudity, the company limited access to paid subscribers. By requiring a credit card and a verified account, X created a 'digital paper trail' that can tie abusive requests to identifiable users.

In a significant shift for the artificial intelligence industry, OpenAI announced in late 2025 that it would relax its long-standing ban on mature content and allow erotica for verified adult users. The policy change, which CEO Sam Altman framed as a move to treat adult users like adults, is widely viewed as a defensive response to competitive pressure from Grok, which gained rapid traction by positioning itself as an unfiltered, 'spicy' alternative that already permitted suggestive role-play and sexualised imagery.

This 'digital strip search' is a modern form of sexual harassment that disproportionately targets women, aiming to humiliate them, silence them, and reduce them to objects of misogynistic fantasy. The impact is particularly devastating for minors: sexualised images of children, even if entirely synthetic, are legally classified as Child Sexual Abuse Material (CSAM) in many jurisdictions, and victims suffer lasting psychological trauma and secondary victimisation through cyberbullying.

Intrusion of privacy in the AI era is no longer just about hacked passwords. AI systems thrive on vast amounts of data, and privacy is often compromised through a combination of technical processes and ethical lapses. Some AI models inadvertently memorise specific snippets of their training data. Generative AI can create hyper-realistic images, videos, or audio clones that convincingly impersonate individuals. Tools like Grok use deep learning to map a target's unique facial and bodily features, then erase the original clothing in a photo and draw on their training data to substitute nude or suggestive imagery.

Tech policy expert Jaspreet Bindra, founder of AI&Beyond, said the Grok controversy has pushed the debate around 'safe harbour' protections to a turning point. Safe harbour rules were originally designed to shield platforms acting in good faith, but they are increasingly being used to avoid accountability; at the same time, removing them entirely could stifle innovation and weaken free expression. Bindra advocated a middle path: a “conditional safe harbour” under which platforms retain immunity only if they demonstrate absolute transparency and proactive moderation of AI-amplified content. This approach, he argued, balances innovation with accountability, ensuring that digital platforms take responsibility for preventing harm without undermining free speech.

What to do if you are targeted?

Anyone targeted by AI-generated deepfake, pornographic, or paedophilic content is advised to act methodically to protect their privacy and build a legal case.

The first step is to preserve all evidence, such as screenshots, URLs, and messages related to the content.

The next step is to use the reporting tools of the platform concerned so that it can take the offending content down under its own policies and legal obligations.

Simultaneously, individuals can file a formal complaint through the National Cyber Crime Reporting Portal (cybercrime.gov.in), which accepts reports of such digital crimes.

The government has also issued advisories under the Information Technology Rules, 2021, requiring platforms to remove illegal content promptly and to inform users of their terms and conditions.

Public awareness campaigns by CERT-In and MeitY also encourage users to adopt digital safety practices, such as enabling two-factor authentication, using strong passwords, and limiting the sharing of personal photos and videos on public platforms, to reduce the risk of misuse.

In this high-stakes race between technological exploitation and legal protection, the ultimate goal remains clear: ensuring that the “rocket for the future” is built with the consent and safety of everyone still standing on the ground.