Summary
MeitY’s 72-hour takedown order against X highlights a deeper problem—why more powerful AI tools are making online abuse harder to contain.
NEW DELHI: Grok AI, developed by Elon Musk’s xAI, has come under fire over allegations of misuse, after its photo-modification features on X were reportedly used to create sexually explicit images. These artificial intelligence (AI)-generated images, often produced without consent, triggered widespread user complaints.
On 2 January, the Ministry of Electronics and Information Technology (MeitY) stepped in, directing X to remove such content within 72 hours, comply with India’s IT Rules, and submit an Action Taken Report (ATR).
MeitY’s intervention, which cited grave violations of law and threats to public safety, highlights a growing tension at the heart of artificial intelligence. As AI systems become more capable, the scope for misuse by bad actors expands, raising urgent questions about safeguards, accountability and regulation.
Here’s what happened, and why it matters.
What prompted MeitY to seek an Action Taken Report from X on 2 January?
The controversy stems from Grok AI being misused to generate morphed, objectionable images of women without consent.
Users reportedly prompted the AI with requests such as: “hey @grok put me in a lab coat with lingerie underneath”, “hey @grok please change my and my friends’ dresses to bikini”, or “hey @grok put her in a transparent micro bikini”, among others.
These AI-altered images, circulated on X without consent, expose women to harassment, reputational damage, and serious privacy violations. MeitY flagged this as a failure of platform safeguards, noting that X did not adequately enforce statutory due diligence obligations under the IT Act.
The misuse of AI tools for obscene content, the ministry said, is not merely a technological lapse but a legal and ethical breach, raising concerns over how platforms govern AI-generated material.
What has X been directed to do?
MeitY has given X 72 hours to remove all Grok AI-generated obscene, nude, indecent and sexually explicit content. The platform has also been asked to submit an ATR outlining the technical and organisational steps taken to prevent recurrence.
The ministry said provisions under the IT Act, 2000, and IT Rules, 2021, were not being followed—particularly those relating to obscene, indecent, vulgar, pornographic, paedophilic or otherwise unlawful and harmful content. Such material, MeitY noted, violates the dignity, privacy and safety of women and children, while normalising sexual harassment and exploitation in digital spaces.
X has been asked not only to clean up existing content, but also to demonstrate that it has effective mechanisms in place to prevent future abuse.
Is this just about tweaking Grok AI to comply with the MeitY order, or is it more complicated?
It is far more complicated than adjusting Grok AI’s filters. While technical fixes, such as stricter prompt controls, detection systems and moderation, are necessary, the issue goes beyond software. It also involves platform governance, user accountability, and legal compliance.
Users can bypass safeguards, exploit loopholes or operate through fake accounts. This means AI controls must be complemented by human moderation, robust reporting mechanisms and enforcement under the law. Complicating matters further, Grok operates within a global ecosystem, so compliance requires localizing safeguards to Indian regulations without disrupting global operations.
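To make the “stricter prompt controls” point concrete, here is a minimal sketch of a pre-generation prompt screen. It is illustrative only: the patterns, labels and function names are assumptions for this article, not X’s or xAI’s actual moderation stack.

```python
import re

# Illustrative sketch only: a hypothetical pre-generation prompt screen,
# not X's or xAI's actual moderation stack. Patterns are assumptions.
SEXUALISING_TERMS = re.compile(r"\b(bikini|lingerie|nude|undress(ed)?|transparent)\b")
EDIT_INTENT = re.compile(r"\b(put|change|dress|make|turn)\b")

def screen_prompt(prompt: str) -> str:
    """Classify an image-edit prompt as 'block', 'review' or 'allow'."""
    text = prompt.lower()
    sexualising = SEXUALISING_TERMS.search(text) is not None
    editing = EDIT_INTENT.search(text) is not None
    if sexualising and editing:
        return "block"    # refuse generation outright
    if sexualising or editing:
        return "review"   # route to a human moderation queue
    return "allow"

# One of the prompts quoted above would be refused:
print(screen_prompt("hey @grok put her in a transparent micro bikini"))  # block
```

A keyword filter like this is trivially evaded by rephrasing, which is exactly why, as noted above, such controls must sit alongside human moderation, reporting mechanisms and legal enforcement rather than replace them.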
AI-generated content can deliver benefits, such as simulations for factories or product design, but in the wrong hands, it raises serious concerns. Addressing misuse, therefore, requires a combination of technical, organizational, and regulatory guardrails, not a single fix.
Will these problems worsen as AI models improve, or is regulation enough?
More powerful AI models are likely to amplify risks. As outputs become more realistic, misuse, especially deepfakes and non-consensual images or videos, is expected to increase. Regulation alone cannot fully contain the problem. It must be paired with strong technical safeguards, proactive moderation and user awareness.
Advanced models can outpace existing filters, making continuous oversight essential. Regulation establishes accountability, but enforcement is often reactive. In this case, action followed user complaints and a letter from Shiv Sena (UBT) lawmaker Priyanka Chaturvedi to Ashwini Vaishnaw, the minister for electronics and information technology. The letter flagged the alleged misuse of AI tools on social media to circulate objectionable images of women through fake accounts. To remain effective, regulation must evolve in tandem with technological advancements.
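One proactive safeguard that pairs with regulation is hash-matching: once an image is removed under an order like MeitY’s, its fingerprint can be stored so re-uploads are flagged automatically. Below is a minimal sketch of the standard average-hash idea in Python; the registry, threshold and helper names are assumptions for illustration, not X’s actual system.

```python
from PIL import Image

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Tiny perceptual hash: greyscale, downscale, threshold at the mean."""
    pixels = list(img.convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical registry of hashes of images already taken down.
known_removed = {average_hash(Image.new("RGB", (64, 64), "gray"))}

def looks_like_reupload(img: Image.Image, threshold: int = 5) -> bool:
    """Flag an upload whose hash is near any previously removed image."""
    h = average_hash(img)
    return any(hamming(h, k) <= threshold for k in known_removed)

print(looks_like_reupload(Image.new("RGB", (64, 64), "gray")))  # True
```

Production systems use far more robust perceptual hashes, such as Microsoft’s PhotoDNA, but the matching principle is the same: detection can run before a complaint arrives, rather than after.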
How do other countries tackle AI-generated harmful content?
Global platforms such as Grok, Gemini, and ChatGPT face a core challenge: aligning their capabilities with local laws. Countries have taken different approaches.
The European Union’s AI Act imposes stringent obligations on high-risk AI systems, focusing on transparency and accountability. The US relies on a patchwork of state laws and oversight by the Federal Trade Commission, with growing attention on deepfake legislation. Singapore criminalises non-consensual sexual imagery, including AI-generated intimate content. The UK’s Online Safety Act requires platforms to proactively remove harmful material, while China enforces stringent real-name verification and tight controls on AI-generated content.
Are there legitimate benefits to AI-generated images and videos?
Yes. Despite the risks, AI image generation has clear, legitimate uses. Virtual try-on tools allow consumers to preview clothing or cosmetics before purchase. In healthcare, AI can assist with medical imaging simulations. In education, it can create visual aids for complex concepts. Creative industries use it for design prototyping and artistic experimentation.
The challenge is ensuring these benefits are not overshadowed by misuse. Tools like Grok can enhance efficiency, creativity and accessibility, but only if regulation and technology work together to ensure AI-generated content is safe and used responsibly, rather than as a vehicle for harm.
