
Centre notifies new IT rules to regulate AI-generated content

Govt amends IT Rules to regulate AI content. Platforms must label synthetic media, embed traceable metadata, and follow faster takedown norms from Feb 20.


11 Feb 2026 8:39 PM IST



The central government on February 10 notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, bringing AI-generated content under formal regulatory oversight. Issued as G.S.R. 120(E) and signed by MeitY Joint Secretary Ajit Kumar, the changes come into force on February 20.

The amendments introduce a legal definition for AI-created material, impose new compliance duties on digital platforms, and tighten timelines for content removal.

What Qualifies as ‘Synthetically Generated Information’

The notification defines Synthetically Generated Information (SGI) as any audio, visual or audio-visual content that is artificially created, modified or altered using computer resources and appears real enough to be mistaken for authentic depictions of people or events.

This includes:

Deepfake videos

AI-generated voice clones

Face-swapped images

AI-altered videos portraying real individuals

Even fictional scenarios involving real people may fall under SGI if they appear realistic.

However, the rules exclude routine edits such as colour correction, compression, transcription, translation and accessibility adjustments, as well as conceptual or illustrative material used in documents, research or training content, provided these do not distort the original meaning.

Mandatory Labelling and Traceability

Intermediaries that enable the creation or spread of SGI must:

Display clear, prominent and unambiguous labels on AI-generated content

Embed persistent metadata and unique identifiers where technically feasible

Prevent removal or tampering of these identifiers once applied

This is intended to stop unlabelled synthetic content from being reshared without disclosure.
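The notification does not prescribe a technical format for these labels or identifiers. Purely as an illustration, the sketch below embeds a disclosure string and a tamper-evident identifier into a PNG's metadata using Python's standard library and Pillow; the field names ("SGI-Disclosure", "SGI-ID", "SGI-Tag") and the HMAC scheme are assumptions, not anything mandated by the rules.

```python
# Illustrative only: embed an SGI disclosure and a tamper-evident
# identifier in a PNG's metadata. Field names and the HMAC scheme are
# assumptions; the notification does not prescribe a format.
import hashlib
import hmac
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

PLATFORM_KEY = b"replace-with-platform-secret"  # hypothetical signing key

def embed_sgi_metadata(src_path: str, dst_path: str) -> str:
    """Write a disclosure label and a keyed identifier into a PNG file."""
    identifier = uuid.uuid4().hex
    # A keyed digest lets the platform detect later alteration of the ID.
    tag = hmac.new(PLATFORM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    meta = PngInfo()
    meta.add_text("SGI-Disclosure", "This content is synthetically generated.")
    meta.add_text("SGI-ID", identifier)
    meta.add_text("SGI-Tag", tag)

    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)
    return identifier
```

Plain metadata chunks like these can be stripped when a file is re-encoded, which is presumably why the rules pair embedding with the verification duties described below; robust provenance in practice would more likely build on an industry standard such as C2PA.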

Stricter Rules for Major Platforms

Significant social media intermediaries such as Instagram, YouTube and Facebook face additional requirements under new Rule 4(1A):

Users must declare if uploaded content is AI-generated

Platforms must deploy automated tools to verify declarations

Confirmed SGI must carry a visible disclosure

Failure to comply could be treated as a lapse in due diligence, potentially affecting safe harbour protections.

A draft-stage requirement that labels occupy 10% of the screen space has been dropped, though labelling itself remains compulsory.
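The notification does not name any specific verification tool. As a rough illustration of the flow Rule 4(1A) describes, the sketch below combines the uploader's declaration with a hypothetical automated check and decides whether a visible disclosure must be attached; `detect_synthetic` and the 0.9 threshold are invented placeholders.

```python
# Illustrative sketch of a Rule 4(1A)-style flow: combine the uploader's
# declaration with an automated check, then decide whether a visible
# disclosure is required. `detect_synthetic` and the threshold are
# hypothetical placeholders, not part of the rules.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_ai: bool  # the declaration Rule 4(1A) requires

def detect_synthetic(content_id: str) -> float:
    """Return an estimated probability that the content is synthetic."""
    # Placeholder: a real platform would run its own detection model here.
    return 0.0

def process_upload(upload: Upload, threshold: float = 0.9) -> dict:
    score = detect_synthetic(upload.content_id)
    confirmed_sgi = upload.user_declared_ai or score >= threshold
    return {
        "content_id": upload.content_id,
        "label_required": confirmed_sgi,  # attach a visible SGI disclosure
        # A flagged upload with no declaration may indicate a false declaration.
        "declaration_mismatch": score >= threshold and not upload.user_declared_ai,
    }

if __name__ == "__main__":
    print(process_upload(Upload("vid-123", user_declared_ai=True)))
```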

Faster Takedowns and Monitoring

The amendments significantly shorten response timelines:

Certain government orders: 3 hours

Other compliance windows shrink from 15 days to 7 days, and from 24 hours to 12 hours
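For a concrete sense of the shortened windows, a toy calculation (the order timestamp is invented):

```python
# Worked example of the new compliance windows; datetimes are illustrative.
from datetime import datetime, timedelta

received = datetime(2026, 2, 21, 10, 0)  # hypothetical order timestamp
deadlines = {
    "specified government orders (3 hours)": received + timedelta(hours=3),
    "reduced 12-hour window": received + timedelta(hours=12),
    "reduced 7-day window": received + timedelta(days=7),
}
for kind, due in deadlines.items():
    print(f"{kind}: act by {due:%d %b %Y %H:%M}")
```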

Platforms must also use automated systems to block unlawful SGI, including:

Child sexual abuse material

Obscene or pornographic content

False electronic records

Content linked to explosives or weapons

Deceptive deepfakes involving real individuals

User Warnings and Legal Framework Updates

Platforms must warn users at least once every three months about penalties for misuse of AI content, via terms of service, privacy policies, or in-app notices in English or any Eighth Schedule language.

Legal references have been updated to the Bharatiya Nyaya Sanhita (BNS), 2023, replacing the Indian Penal Code.

What This Means for Users

Users will increasingly see AI disclosure labels on posts, reels, videos and audio. Uploaders may also need to confirm whether content was AI-created or altered. Misrepresentation could lead to account penalties and potential legal consequences depending on the content involved.

The draft rules were published in October 2025 for consultation, and platforms now have until February 20 to comply.




