Elon Musk’s Grok goes unhinged, lets users undress women publicly on X; sparks outrage over consent and safety

Unlike private AI chatbots, Grok generates sexualised image edits in public replies on X, turning harassment into spectacle. The trend raises alarms over digital consent, platform accountability, and the growing misuse of generative AI against women.

A disturbing trend has emerged on social media platform X. Users are replying to photographs of women and prompting Elon Musk's AI chatbot Grok to change the women's clothing into bikinis, or into something more revealing and explicit. Grok, which is developed by xAI, has operated with deliberately relaxed guardrails since May 2025. Now, with such requests pouring in, the chatbot is obliging, generating AI-edited images that sexualise women on social media. The most disturbing aspect is that these images look extremely real.

In simple terms, strangers are taking women’s photos, often shared in a completely non-sexual context, and publicly asking an AI to undress them. Grok is responding with altered images showing the women in bikinis or similarly explicit attire. Shockingly, if a user asks to change the pose of the woman in the photograph to a sexual or erotic one, Grok obliges that request as well.

The impact is immediate and visible. Grok's own media feed on X has been flooded with such non-consensual altered images, triggering widespread outrage. Notably, while the media tab on Grok's X account has been disabled, the replies section is still filled with such images.

Generally, AI chatbots operate within private environments. OpenAI’s ChatGPT also allows such images to be generated to some extent, but this happens in private. However, Grok is doing this in public, where everyone can view the images. Content that would be blocked or confined to private chats elsewhere is instead displayed openly, magnifying exposure, humiliation, and, in some cases, harm.

Notably, when OpIndia asked Grok itself why this behaviour is being allowed, it said that Elon Musk has positioned Grok as a “spicy” AI with fewer restrictions compared to rivals. Musk has even boasted that it would answer questions other systems refuse.

Grok’s reply to OpIndia’s query.

In practice, Grok pushes boundaries, and this has made the AI chatbot reckless. While it reportedly refuses outright nudity, it still walks right up to the edge of non-consensual sexual imagery, and with a few tweaks, some users have claimed that it can bypass restrictions on showing nudity as well.

What Grok is doing stands in sharp contrast to Google’s Gemini or OpenAI’s ChatGPT. These two chatbots, which are extremely popular among users, have applied stricter filters, even on private outputs. Even when guardrails fail elsewhere, visibility remains limited. With Grok, the harm is amplified because the output is public by design.

Notably, Grok is also facing criticism as users have observed that its timeline is almost entirely filled with women being digitally undressed or made more revealing. What should have been a general-purpose AI tool has become a public gallery of coerced digital voyeurism.

Ethical implications, digital consent, harassment and dignity

This problematic trend has raised serious and fundamental ethical questions about consent, autonomy, and dignity in the digital era. The images of women are being altered without their consent, and they are being depicted in bikinis. This is not a harmless experiment with AI. It is a blatant violation of their privacy.

This practice strips women of digital autonomy, reducing them to raw material for entertainment, trolling, or harassment. It constitutes image-based sexual abuse, a form of harassment increasingly recognised as deeply traumatising. As one legal expert noted while discussing AI misuse, this is not misogyny by accident; it is misogyny by design. Grok's permissive and provocative positioning lowers the barrier for abuse and rewards it with visibility.

Speaking to OpIndia, cyber security expert Ananth Prabhu Gurpur said, “When an AI system is used to alter a woman’s image without consent, it is not innovation, it is digital abuse. Technology does not erase ethics. If anything, it increases the responsibility to protect dignity, privacy, and bodily autonomy in online spaces.”

Digital consent must be treated with the same seriousness as real-world consent. A photo shared online is not an invitation for sexualised alterations. Turning an ordinary image into a sexualised one without permission echoes harms seen in deepfake pornography and morphing cases. The damage is not abstract. Victims can experience embarrassment, reputational harm, anxiety, and fear, knowing that strangers have seen and circulated a falsified sexualised image of them. Reports suggest that some women have stopped posting photographs online after witnessing such misuse, a chilling effect on women’s participation driven by fear.

There is also wider cultural harm. Normalising casual AI undressing reinforces objectification and entitlement. Left unchecked, it risks escalating into more explicit deepfakes, coercion, blackmail, and revenge porn. Grok's framing of this behaviour as "fun" masks what it really is: non-consensual sexualisation at scale, enabled by design choices and amplified by public distribution.

Legal dimensions – Indian law and digital rights

OpIndia spoke to Advocate Amita Sachdeva, Advocate on Record, Supreme Court of India, on the growing misuse of AI tools like Grok to non-consensually sexualise images of women. She described the trend as dangerous, unlawful, and a clear violation of India’s digital safety framework.

“This is not harmless fun or experimentation,” Sachdeva told OpIndia. “When an AI tool alters a real person’s image to show them in revealing clothing without permission, it is a direct invasion of bodily privacy. In many cases, it also amounts to harassment, obscenity, and failure of intermediary due diligence.”

She pointed out that Indian law already provides multiple safeguards against such abuse, even if the word ‘deepfake’ is not explicitly used in every statute.

Under Section 66E of the Information Technology Act, 2000, violation of privacy is punishable when images of a person’s private areas are captured, published, or transmitted without consent. While a bikini edit may not amount to nudity in the strictest sense, Sachdeva explained that AI-generated sexualised images can still undermine a woman’s reasonable expectation of privacy, especially when intimate areas are artificially emphasised or the intent is sexual.

Repeated targeting of women using such AI edits can also attract Section 354D of the Indian Penal Code, which deals with stalking, including cyberstalking. “If a woman is repeatedly subjected to such image manipulation, tagging, circulation, or public ridicule online, it squarely falls within electronic harassment,” she said.

In cases where the generated images cross into explicit or obscene depiction, Sections 67 and 67A of the IT Act come into play. These provisions criminalise the electronic transmission of obscene or sexually explicit material. Users sharing such content can face criminal liability, and platforms are obligated to act once notified.

Sachdeva further highlighted the relevance of Section 509 IPC, which criminalises acts intended to insult the modesty of a woman, as well as the Indecent Representation of Women (Prohibition) Act, which bars derogatory or indecent depiction of women’s bodies. “Non-consensual sexualised morphing fits squarely within the mischief these laws were meant to address,” she noted.

Crucially, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, updated in October 2025, impose clear obligations on platforms like X. Rule 3(1)(b) requires intermediaries to prevent content that invades bodily privacy, is obscene, harassing on the basis of gender, or harmful to children. More significantly, Rule 3(2)(b) mandates that platforms must remove or disable access within 24 hours of receiving a complaint relating to full or partial nudity, sexual conduct, or artificially morphed images of a real person.

“If X fails to remove such content within the mandated timeframe, it risks losing its ‘safe harbour’ protection under Section 79 of the IT Act,” Sachdeva warned. “Once safe harbour is lost, the platform itself becomes exposed to criminal liability.”

She also flagged the added danger where minors are involved. Any AI-generated sexualised image of a child, even if clothed, can trigger stringent provisions relating to child safety, and platforms are expected to exercise heightened vigilance.

Beyond intermediary rules, Sachdeva pointed to India’s Digital Personal Data Protection Act, 2023, which treats photographs as personal data. Using a woman’s image to generate altered sexualised content without consent violates the Act’s core consent-based framework. “The law is clear. Personal data cannot be repurposed arbitrarily, especially in a manner that causes harm,” she said.

Indian courts, she added, are increasingly recognising the seriousness of such violations. Recent injunctions granted to public figures against AI-generated misuse of their likeness underscore the judiciary’s willingness to protect dignity and privacy, irrespective of whether the victim is a celebrity or an ordinary citizen.

“The legal framework exists,” Sachdeva concluded. “What is needed is enforcement, platform accountability, and the will to treat digital sexual abuse with the seriousness it deserves.”

What Indian women can do if their images are misused

Women targeted by such AI-driven abuse are not without remedies. Prabhu said, "Non-consensual AI image manipulation is a form of cybercrime and deserves to be taken seriously. Such misuse of technology has real-world consequences, and victims should be encouraged to report it through formal legal and platform mechanisms."

First, document everything. Screenshots of the altered image, prompts, URLs, usernames, and timestamps are critical evidence.

Second, report the content on X using the platform's reporting tools, clearly stating that the image is morphed, sexualised, and non-consensual. Persistent follow-up is often required.

Third, file a complaint with the local cyber-crime cell or police station, citing provisions such as Section 66E of the IT Act and Section 354D IPC. Complaints can also be lodged on the national cybercrime portal.

Fourth, approach the National Commission for Women, which regularly intervenes in online harassment cases and can apply institutional pressure on both law enforcement and platforms.

Fifth, seek guidance from cyber safety NGOs or legal aid organisations. Victims should not internalise blame. The fault lies with those abusing technology.

Finally, if harm continues, civil remedies including injunctions can compel takedowns and restrain further circulation. Courts have increasingly treated dignity and privacy as enforceable rights in such cases.

A call for accountability, stronger moderation and cultural shift

This trend has exposed a troubling gap between technological capabilities and ethical restraint. Platforms such as Reddit have long banned involuntary pornography and deepfake communities. Earlier iterations of Twitter enforced policies against non-consensual intimate imagery. Today’s X, however, hosts an AI tool that generates precisely such content. That is a regression.

Speaking to OpIndia, cyber security expert Sunny Nehra said, "Such incidents show why we need strict laws governing AI. AI models are supposed to take consent from the persons whose images they are editing. That is how a decent digital space is supposed to operate."

xAI and X must implement stricter guardrails immediately. There is no moral or technical justification for enabling violations of consent in the name of being edgy. If other AI platforms can refuse prompts that alter someone's likeness without consent, Grok can learn to say no. Platform-level action is equally necessary: clearer reporting pathways, consistent enforcement, swift takedowns, and bans for repeat offenders. If Grok's public replies have become an exhibition of such content, that reflects a failure of oversight. Transparency about corrective steps is essential.

Furthermore, there is a need for a cultural shift. The eagerness to digitally disrobe women for entertainment showcases a deeper cultural problem that demands immediate attention. Digital consent must be non-negotiable. Just because AI can do something does not mean people should use it without restraint. While xAI needs to address the issue, users must also exercise responsibility and refrain from misusing technology in this manner.

Anurag (https://lekhakanurag.com)
Anurag is a Chief Sub Editor at OpIndia with over twenty-one years of professional experience, including more than five years in journalism. He is known for deep-dive, research-driven reporting on national security, terrorism cases, the judiciary, and governance, backed by RTIs, court records, and on-ground evidence. He also writes hard-hitting op-eds that challenge distorted narratives. Beyond investigations, he explores history, fiction, and visual storytelling. Email: [email protected]
