Platforms must remove illegal AI content within three hours
Mandatory labels and metadata required for all synthetic media
Rules tighten grievance timelines and accountability for social platforms
The government on Tuesday tightened rules for social media platforms such as YouTube and X, mandating the takedown of unlawful content within three hours, and requiring clear labelling of all AI-generated and synthetic content.
The new rules - which come in response to the growing misuse of artificial intelligence to create and circulate obscene, deceptive and fake content on social media platforms - require the embedding of permanent metadata or identifiers in AI-generated content, ban content considered illegal under the law, and shorten user grievance redressal timelines.
The response timeline has been reduced further, to two hours, for flagged material that exposes a person's private areas or depicts full or partial nudity or sexual acts.
In the run-up to the rules, authorities had flagged a rise in AI-generated deepfakes, non-consensual intimate imagery and misleading videos that impersonate individuals or fabricate real-world events, often spreading rapidly online.
The amended IT rules aim to curb such abuse by requiring faster takedowns, mandatory labelling of AI-generated content and stronger accountability from platforms to prevent the promotion and amplification of unlawful synthetic material. They place the onus on both social media platforms and providers of AI tools.
The Ministry of Electronics and Information Technology (MeitY) issued a gazette notification amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The new rules will come into force on February 20, 2026.
Interestingly, February 20 is also the concluding day of the India AI Impact Summit, a mega congregation New Delhi will host as the nation prepares to take centre stage in the global AI conversation.
The tweaked rules explicitly bring AI content within the IT rules framework, defining AI-generated and synthetic content as information that, by means of audio, visual or audio-visual material, "is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event." Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition.
The new rules require social media platforms to take down any illegal content flagged by the government or courts within three hours, instead of the previous 36-hour deadline.
User grievance redressal timelines have also been shortened.
The rules require mandatory labelling of AI content. Platforms enabling creation or sharing of synthetic content must ensure such content is clearly and prominently labelled and embedded with permanent metadata or identifiers, where technically feasible, as per the amended rules.
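The rules do not prescribe a specific technical mechanism for these labels. As a minimal illustrative sketch - assuming the Pillow imaging library and a hypothetical key-naming scheme, neither of which the rules mandate - a platform could attach a machine-readable provenance marker to a PNG's text metadata:

```python
# Illustrative sketch only: the rules do not prescribe this format.
# Embeds a simple provenance marker in a PNG's text metadata using Pillow.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_image(src_path: str, dst_path: str, generator_id: str) -> None:
    """Attach an 'AI-generated' marker and a generator identifier to a PNG."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("SyntheticContent", "true")   # hypothetical key
    meta.add_text("GeneratorID", generator_id)  # hypothetical key
    img.save(dst_path, pnginfo=meta)

# Usage (hypothetical file names):
# label_synthetic_image("frame.png", "frame_labelled.png", "model-xyz-v1")
```

Text chunks like these can be stripped by simple re-encoding, which is presumably why the rules speak of "permanent" metadata; robust deployments would more likely rely on cryptographically signed provenance manifests or imperceptible watermarks.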
Banning illegal AI content outright, the rules say platforms must deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, non-consensual, or related to false documents, child abuse material, explosives, or impersonation.
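The rules leave the choice of automated tools to the platforms. One common building block, sketched below purely for illustration with a hypothetical blocklist, is matching uploads against hashes of files already identified as unlawful:

```python
# Illustrative sketch only: exact-match screening against known-bad content.
import hashlib

# Hypothetical blocklist of SHA-256 digests of previously flagged files.
BLOCKED_HASHES: set[str] = set()

def is_blocked(file_bytes: bytes) -> bool:
    """Return True if the upload exactly matches a known-bad file."""
    return hashlib.sha256(file_bytes).hexdigest() in BLOCKED_HASHES
```

Exact hashing only catches unmodified copies; production systems typically pair it with perceptual hashing and machine-learning classifiers to catch altered variants.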
Intermediaries cannot allow the removal or suppression of AI labels or metadata once applied, the rules state.
The rules also demand stricter user disclosures: intermediaries must warn users at least once every three months about penalties for violating platform rules and laws, including for misuse of AI-generated content.
Significant social media intermediaries must require users to declare whether content is AI-generated and verify such declarations before publishing.
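How a platform verifies such declarations is left open. A minimal sketch, assuming a hypothetical synthetic-content detector rather than any mechanism named in the rules, might gate publication on whether the user's declaration agrees with the platform's own check:

```python
# Illustrative sketch only: a publish gate comparing a user's declaration
# with a hypothetical detector's verdict.
from dataclasses import dataclass

@dataclass
class Upload:
    content: bytes
    user_declared_ai: bool

def looks_synthetic(content: bytes) -> bool:
    """Hypothetical detector; a real platform would call a trained classifier."""
    return False  # placeholder for an actual model call

def may_publish(upload: Upload) -> bool:
    """Block publication when detection contradicts the user's declaration."""
    if looks_synthetic(upload.content) and not upload.user_declared_ai:
        # Declaration contradicts detection: hold for review or force a label.
        return False
    return True
```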
AI-related violations involving serious crimes must be reported to the authorities, including under child protection and criminal laws.
The government said the changes aim to curb the misuse of AI, prevent deepfake harms, and strengthen accountability of digital platforms.
One provision from the earlier draft - requiring markers and identifiers to cover a minimum of 10 per cent of the visual display, or the initial 10 per cent of an audio clip's duration - has been dropped from the final version.
The latest amendments, geared to curb user harm from deepfakes and misinformation, impose obligations on two key sets of players in the digital ecosystem: social media platforms, and providers of AI tools such as ChatGPT, Grok and Gemini.
The IT Ministry had earlier highlighted that deepfake audio, videos and synthetic media going viral on social platforms demonstrate the potential of generative AI to create "convincing falsehoods", where such content can be "weaponised" to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
In fact, the issue of deepfakes and AI user harm came into sharp focus following the recent controversy surrounding Grok, the chatbot from Elon Musk's xAI, allowing users to generate obscene content. Users flagged the AI chatbot's alleged misuse to 'digitally undress' images of women and minors, raising serious concerns over privacy violations and platform accountability.
The days and weeks that followed saw pressure mounting on Grok from governments worldwide, including India, as regulators intensified scrutiny of the generative AI engine over content moderation, data safety and non-consensual sexually-explicit images.
On January 2, the IT Ministry had pulled up X and directed it to immediately remove all vulgar, obscene and unlawful content generated by Grok or face action under the law.
The platform subsequently said it had implemented technological measures to prevent Grok from generating images of real people in revealing clothing in jurisdictions where this is illegal.
Maintaining safe harbour protection requires platforms to adhere to the prescribed due diligence, which now includes AI labelling and compliance with the stricter takedown timelines.
Failing to abide by the rules, or not pulling down unlawful content despite it being brought to their notice, could mean the loss of safe harbour immunity for platforms.
Sajai Singh, Partner, JSA Advocates and Solicitors, said the amendments allow regulators and the government to monitor and control synthetically-generated information, including deepfakes. "Interestingly, the amendments narrow the scope of what is to be flagged, compared to the earlier draft released by Meity, with a focus on misleading content rather than everything that has been artificially or algorithmically created, generated, modified or altered," Singh said.
On the other hand, takedown time has been reduced from 36 hours to three hours, Singh noted, while adding, "I think intermediaries will be happy with the reasonable efforts expectation rather than the earlier proposed visible labelling".