Centre amends IT Rules, mandates AI labels, cuts deepfake takedown time to 2 hours



The government on Tuesday notified the amended Information Technology (IT) Rules, 2026, introducing a stricter compliance regime for social media companies to combat the rise of deepfakes and other sensitive content on digital platforms such as X, Facebook, Instagram and Telegram.

The new rules impose drastically tighter deadlines for user safety. Platforms must remove non-consensual intimate imagery and deepfake content within two hours of receiving a complaint, a sharp reduction from the previous 24-hour window.

Additionally, platforms are now required to take down other unlawful content within three hours of a government or court order, down from the earlier 36-hour limit.

Separately, the government has introduced labelling of AI-generated content, mandating that intermediaries ensure any synthetically generated information—audio, visual, or video that appears authentic—is prominently labelled to distinguish it from reality. Social media platforms must also deploy technical measures to verify the accuracy of user declarations for such synthetically generated content.

Notably, the government has dropped the requirement that would have mandated large, fixed-size watermarks on AI-generated content.

In the draft released in October last year, the ministry of electronics and IT (MeitY) proposed that for visual content, these labels were required to cover at least 10% of the surface display area, while for audio content, an audible marker was proposed during the first 10% of its duration.

The draft amendment witnessed a pushback from big tech companies, with industry bodies such as the Internet and Mobile Association of India (IAMAI) calling the rules too rigid and technically difficult to implement across formats and devices. IAMAI had also said the labelling requirement was overly prescriptive and could disrupt user experience, especially for audio and video content.

According to the new rules, beyond visible labels, the government has mandated that, where technically feasible, intermediaries must embed permanent metadata or unique identifiers into the content. This is intended to act as a digital fingerprint to track the computer resource used to create or modify the information.

The rules also introduce a preventative mandate. Intermediaries offering AI tools must deploy technical measures to prevent users from generating or sharing specific harmful content, including child sexual abuse material (CSAM), content related to explosives, or deepfakes designed to defraud or deceive users about a person’s identity.

In a move to improve accountability towards users, the time given to grievance officers to resolve general user complaints has been cut by more than half, from the previous 15-day window to seven days.

Platforms are now required to be more proactive in communicating their policies. Under the new rules, intermediaries must inform users of their rules, privacy policies, and the consequences of non-compliance—such as account termination or police reporting—at least once every three months, up from the earlier requirement of once a year.
