Centre prescribes labels for all photorealistic AI content
What has the Centre done?
The Union government has notified amendments to rules framed under the Information Technology Act, 2000.
These amendments mandate compulsory labelling of photorealistic AI-generated content online.
The new rules will come into force on February 20, 2026.
Which rules are amended?
Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021
Amended through the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026.
What is meant by “photorealistic AI-generated content”?
The rules define synthetically generated content as:
Audio, visual or audio-visual material
Created or altered using computer algorithms
That appears real, authentic or indistinguishable from real persons or events
Examples:
Deepfake videos
AI-generated realistic images
Voice clones mimicking real individuals
The definition is narrower than the draft version released in October 2025, focusing mainly on content that can mislead viewers into believing it is real.
What are the new obligations on social media platforms?
1. Mandatory labelling
AI-generated content must be prominently labelled.
Platforms must:
Ask users to disclose if content is AI-generated, or
Proactively label such content themselves
If content is a non-consensual deepfake, platforms must remove it, not merely label it.
2. Much shorter takedown timelines
Type of content | Time limit for takedown
Content declared illegal by a court or the government | 3 hours
Sensitive content (non-consensual nudity, deepfakes) | 2 hours
The earlier timeline for such takedowns was 24–36 hours.
This is a major tightening of platform responsibilities.
3. Due diligence and safe harbour
Platforms enjoy safe harbour, meaning they are not automatically liable for user content.
However, if an intermediary knowingly permits, promotes, or fails to act against illegal AI-generated content:
It will be considered a failure of due diligence
This can lead to loss of safe harbour protection
What is “safe harbour” and why is it important?
Safe harbour is a legal protection under Section 79 of the Information Technology Act, 2000.
It ensures platforms like social media sites are not treated as publishers of user content.
Loss of safe harbour means:
Platforms can be held legally liable
Strong incentive for faster compliance and monitoring
Proactive labelling: what does it mean?
Even if users do not disclose AI generation:
Platforms must label content if they become aware it is synthetic
The draft rule earlier proposed:
A mandatory label covering at least 10% of the visual display area
Final rules:
Give platforms flexibility, but still require clear and prominent disclosure
Changes regarding takedown authorities
October 2025 amendment:
Each State could designate only one officer for takedown orders
2026 amendment:
States can now appoint multiple officers
Reason:
Administrative efficiency in large, populous States
Why is this significant?
1. Tackling deepfakes
Addresses growing misuse of AI for:
Political misinformation
Non-consensual sexual content
Reputation damage
2. Platform accountability
Shifts burden from users to intermediaries
Encourages rapid response to harmful content
3. Free speech vs regulation
Raises concerns about:
Over-removal of content
Chilling effect on speech
Likely to be tested in courts
Prelims practice MCQs
Q. With reference to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, consider the following statements:
1. Photorealistic AI-generated content must be prominently labelled on online platforms.
2. The rules apply only to visual content and exclude audio material.
3. The amendments come into force from February 20, 2026.
Which of the statements given above are correct?
A. 1 and 2 only
B. 1 and 3 only
C. 2 and 3 only
D. 1, 2 and 3
Correct answer: B
Explanation:
Statement 1 is correct as the amended rules mandate prominent labelling of photorealistic AI content.
Statement 2 is incorrect because the definition covers audio, visual and audio-visual content.
Statement 3 is correct as the rules take effect from February 20, 2026.
Q. Under the amended IT Rules, what is the maximum time allowed for social media intermediaries to take down content declared illegal by a court or an appropriate government?
A. 24 hours
B. 12 hours
C. 3 hours
D. 2 hours
Correct answer: C
Explanation:
Content declared illegal by a court or appropriate government must be taken down within three hours, a sharp reduction from earlier timelines of 24–36 hours.
Q. Which of the following types of content must be taken down within two hours under the new rules?
1. Non-consensual nudity
2. Deepfake content
3. Copyright-infringing content
4. Defamatory content pending court order
Select the correct answer using the code below:
A. 1 and 2 only
B. 1, 2 and 3 only
C. 2 and 4 only
D. 1, 2, 3 and 4
Correct answer: A
Explanation:
The two-hour takedown requirement applies specifically to sensitive content, including non-consensual nudity and deepfakes. Copyright infringement and defamation claims follow separate legal processes and timelines.