Mint Explainer | India’s AI rules and the elusive quest for online safety



Shouvik Das 2 min read 13 Feb 2026, 06:00 am IST

India’s first dedicated legislation on the digital media ethics code said all content significantly modified or generated by AI tools must be watermarked. (Reuters)

Summary

Prima facie, the AI rules should help internet platforms better monitor deepfake content that modifies a person’s identity without consent. How well they are implemented, however, remains to be seen.

India notified its artificial intelligence (AI) rules this week, cutting the deadline for taking down reported sexual content to two hours. Other flagged content must be removed within three hours. Can these moves make social media safer? Mint explores.

What do India’s AI rules say?

India’s first dedicated AI legislation, notified under the digital media ethics code, says all content significantly modified or generated by AI tools, such as OpenAI’s ChatGPT or Google’s Gemini, must be watermarked, though it does not specify the watermark’s form or size. Basic image edits and the addition of filters won’t need such watermarks. Further, complaints related to deepfakes will have to be dealt with within three hours, with sexual content to be taken down within two hours. Companies have until 20 February to start complying with these rules, failing which they risk losing their safe harbour protections.

Is this a novel approach?

This is not the first time a country has sought to regulate AI content or deepfakes. In the European Union (EU), the EU AI Act and the Digital Services Act require social media intermediaries to proactively monitor their platforms and take down content based on user reports and court appeals. Since May last year, the US has had a ‘Take It Down’ Act that requires social media companies to actively monitor and crack down on deepfake content. China has a similar law, as does Australia through its Criminal Code Amendment (Deepfake Sexual Material) Act, 2024.

What have other countries done about it?

Most require social media firms to proactively monitor their platforms and act on user reports of deepfakes and other AI-altered material within a set time. While the EU does not mandate a deadline, the US gives firms 24 hours to take down deepfake content. India is the first notable economy to mandate a two-hour takedown window.

Will these rules ease complaint redressal?

Prima facie, the AI rules should help internet platforms better monitor deepfake content that modifies a person’s identity without consent. How well they are implemented, however, remains to be seen. While the two-hour window for taking down sexual content was lauded by Nasscom, experts noted that AI can generate such content in minutes, leaving perpetrators enough time to propagate it before a takedown. The grievance reporting and redressal process also involves several legal steps, and the interface could prove difficult for less tech-savvy users.

How will this impact artists & advertising?

During the consultation process, artists said AI watermarks on content could undermine their creative liberty and the content’s efficacy. Advertisers voiced similar concerns, wondering whether users will trust content labelled as AI-generated. Some parties have sought nuance in the norms to differentiate ‘AI-generated’ content from ‘deepfakes’. Creators will have to wait and see whether the reach of their AI-related content is stifled. Some, however, see a long-term win for copyright holders and for the prevention of intellectual property theft.

