Outlook Money
Effective February 20, 2026, the Ministry of Electronics and Information Technology has notified new IT Rules to regulate AI-generated synthetic information, mandating transparency, swift takedowns, and greater platform accountability to promote digital safety.
All platforms must display clear and prominent labels, or audio disclosures, on AI-generated synthetic content so that viewers can instantly distinguish real media from AI-generated media.
Platforms will have to embed non-removable metadata and unique digital identifiers in files, making the source and origin of AI content traceable across platforms.
Users must self-declare any AI-generated uploads, and large platforms must deploy automated verification tools to ensure these uploads are properly declared.
Platforms must remove illegal AI-synthetic content within three hours of receiving a government order, preventing viral spread and ensuring immediate compliance with national safety standards.
Non-consensual deepfakes must be removed within two hours of a complaint, to safeguard the dignity of victims and prevent the rapid spread of harm.
Firms must use automated systems to proactively detect and block harmful content, such as child abuse material and fake identity documents, before it is shared online.
Platforms will lose their legal immunity and become subject to prosecution if they knowingly allow unlabelled deepfakes or fail to comply with mandatory takedown and content-labelling requirements.
Platforms are required to share the identity of deepfake creators directly with victims or their representatives, supporting legal action and the redress of personal grievances.
Curated by Priyanka Debnath