India Fast Facts

New Delhi, 17 February 2026
By Tannaz Ahmed and Tushar Gandhi

India’s Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 [1] on 10 February 2026, bringing synthetically generated information (SGI) within India’s statutory due diligence framework for intermediaries. Effective from 20 February 2026, the notification amends the previous rules to strengthen regulatory oversight of digital intermediaries and online content platforms and to enhance accountability and user safety in the digital ecosystem. The framework aims to address risks associated with deepfakes, misinformation, data security vulnerabilities, fraud, and the rapid virality of unlawful content.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 introduce a structured regulatory framework built on the following core provisions:

  1. Formal definition and regulation of synthetically generated information (SGI): The rules establish a legal definition for audio, visual, or audio-visual AI-generated or algorithmically manipulated content and bring such content under the regulatory ambit for the first time.
    • Exclusions: Purely textual content is not covered by the SGI definition, although it may still fall under other unlawful-content provisions depending on the context and the law violated [2]. Routine or good-faith edits [3], accessibility enhancements, and ordinary formatting are explicitly excluded as long as they do not materially distort meaning.
  2. Mandatory labeling and metadata requirements: Digital platforms must prominently label SGI as “AI-generated” or “synthetic” and embed metadata or unique identifiers indicating its origin; platforms are prohibited from removing or suppressing such labels once applied (a minimal sketch of such embedding follows this list).
  3. Accelerated content takedown timelines: The timeline for intermediaries to remove unlawful content after receiving a court order or government notice has been sharply reduced (e.g., from 24–36 hours to three hours), with a shorter two-hour window for highly sensitive content such as non-consensual intimate imagery and deepfakes.
  4. Enhanced due-diligence obligations: Significant social media intermediaries must verify SGI disclosures by users and ensure compliance with the labeling, metadata, and removal requirements to maintain legal safe harbour protections under the IT Act.
  5. Expanded accountability and enforcement: Failure to comply with the amended rules can result in loss of intermediary safe harbour protections, increasing platforms’ legal exposure for user-generated content.
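
To make the labelling obligation in item 2 concrete, the following is a minimal sketch of embedding a label and a unique identifier into PNG metadata using Pillow. The field names (sgi_label, sgi_origin_id) and the hash-based identifier are illustrative assumptions; the rules prescribe the obligation, not a format.

```python
# Minimal sketch: embedding an SGI label and a unique identifier into PNG
# metadata. Field names ("sgi_label", "sgi_origin_id") are illustrative
# assumptions, not identifiers prescribed by the 2026 rules.
import hashlib
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_sgi_png(in_path: str, out_path: str, platform_id: str) -> str:
    img = Image.open(in_path)
    # Derive a unique identifier from the pixel content plus the platform,
    # so the origin can be traced as long as the metadata itself survives.
    digest = hashlib.sha256(img.tobytes() + platform_id.encode()).hexdigest()
    meta = PngInfo()
    meta.add_text("sgi_label", "AI-generated")  # the prominent label
    meta.add_text("sgi_origin_id",
                  json.dumps({"platform": platform_id, "id": digest}))
    img.save(out_path, pnginfo=meta)
    return digest
```

A real deployment would need a signed, tamper-resistant provenance scheme rather than plain text chunks, for the stripping reasons discussed later in this piece.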

Definitional Clarity vs. Detection Complexity

The amendments define “synthetically generated information” as artificially created or altered audio, visual, or audio-visual content that appears real and is likely to deceive viewers into believing it depicts a real person or event. The emphasis is on deceptive realism rather than the mere use of AI.

Translating this definition into automated detection systems is likely to be inherently complex, as determining whether content is “likely to deceive” requires contextual assessment. This gap between statutory language and algorithmic enforcement is the first implementation challenge: legal clarity does not automatically translate into technical measurability.

The Two-Tier Model and Verification

The framework distinguishes between:

  • Unlawful SGI, which platforms must not allow (e.g., child sexual abuse material, non-consensual intimate imagery, forged documents, impersonation, arms-related content, deceptive political deepfakes).
  • Permitted SGI, which may be hosted if clearly labelled and embedded with provenance mechanisms [4], where technically feasible.

This two-tier model replaces the earlier draft proposals [5], specifically draft Rule 3(3) on labelling and metadata requirements and draft Rule 4(1A) on user declarations and verification. Those drafts focused on prominently identifying SGI but did not differentiate between unlawful and permissible synthetic content, effectively allowing all synthetic content to remain online if it met the labelling criteria.

For Significant Social Media Intermediaries (SSMIs), obligations extend further. They must:

  • Obtain user declarations on whether uploaded content is SGI.
  • Deploy technical measures to verify such declarations.
  • Ensure labelling prior to display or publication.

This creates layered compliance responsibilities. The implementation challenge lies in verifying declarations across high-volume uploads: self-declaration mechanisms are susceptible to misuse, while automated verification tools may generate false positives or false negatives, as the sketch below illustrates.
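
As an illustration of the reconciliation problem, here is a minimal sketch assuming a classifier that returns a confidence score. The thresholds and outcome names are invented for illustration and are not drawn from the rules.

```python
# A minimal sketch of reconciling a user's SGI declaration with an automated
# check, assuming a detector that returns a 0.0-1.0 confidence score.
# Outcomes and thresholds are illustrative, not taken from the rules.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_sgi: bool
    detector_score: float  # assumed classifier output, 0.0-1.0

def verify_declaration(u: Upload) -> str:
    if u.user_declared_sgi:
        return "label_and_publish"      # declaration accepted; label applied
    if u.detector_score >= 0.85:
        return "hold_for_human_review"  # likely undeclared SGI
    if u.detector_score >= 0.5:
        return "sample_for_audit"       # ambiguous band: spot-check to manage
                                        # false positives and false negatives
    return "publish"
```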

The rules also require “reasonable and appropriate technical measures, including automated tools or other suitable mechanisms, to prevent users from creating, modifying, sharing, or disseminating synthetically generated information that violates any law in force.” However, they do not set uniform technical standards for detection or watermarking. The Parliamentary Standing Committee on Home Affairs, in its 254th Report on Cyber Crime [6], recommended uniform technical standards for media provenance and expansion of indigenous detection tools, including C-DAC’s Deepfake Detection Tool.

The effectiveness of provenance mechanisms depends on interoperability and resilience, as metadata can be stripped through screenshots, re-encoding, cross-platform sharing, or compression. Without uniform and tamper-resistant standards, labels risk being platform-bound rather than ecosystem-wide. Implementation therefore hinges on technical standardisation and cross-platform coordination, challenges that extend beyond the text of the rules.
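
A short demonstration of this fragility, assuming an image labelled by the earlier sketch: re-encoding the PNG as a JPEG, which screenshots and cross-platform shares do routinely, silently discards the text-chunk metadata and the label with it.

```python
# Demonstrates the stripping problem described above: a simple re-encode
# drops PNG text-chunk metadata, taking the SGI label with it.
from PIL import Image

img = Image.open("labelled.png")      # assumes a file written by the earlier sketch
print(img.text.get("sgi_label"))      # -> "AI-generated"

img.convert("RGB").save("reshared.jpg", quality=85)  # a typical re-encode
rejpg = Image.open("reshared.jpg")
print(getattr(rejpg, "text", {}))     # -> {} ; the provenance is gone
```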

Proactive Moderation and Safe Harbour

Under Section 79 of the Information Technology Act, 2000, intermediaries enjoy “safe harbour” protection: immunity from liability for third-party content, provided they exercise due diligence and do not knowingly host unlawful material. The 2026 amendments (Rule 2(1B)) clarify that intermediaries who remove or disable access to content, including synthetically generated information, in compliance with the rules, including via automated tools and in good faith, will not be treated as breaching the conditions for safe harbour under Section 79(2) of the Act.

This clarification encourages preventive governance. However, it also intensifies implementation pressure.

The expectation that platforms “not allow any user to create, generate, modify, alter, publish, transmit, share, or disseminate” unlawful SGI signals a shift from reactive notice-and-takedown toward preventive design. Operationally, this requires:

  • Scalable automated detection systems.
  • Real-time moderation pipelines.
  • Trained human review teams.
  • Internal escalation protocols capable of responding within hours.

The preservation of safe harbour depends not merely on policy adoption but on demonstrable compliance. This raises documentation and audit burdens.
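
One way to picture the documentation burden is an append-only decision log whose entries are hash-chained, so a platform can later demonstrate that its moderation record is complete and untampered. The schema below is an assumption for illustration, not a format the rules prescribe.

```python
# A minimal sketch of the audit trail implied by "demonstrable compliance":
# each moderation decision is appended to a write-once log with a hash
# chain, so the sequence can later be shown to be untampered. The schema
# is an illustrative assumption, not prescribed by the rules.
import hashlib
import json
from datetime import datetime, timezone

class DecisionLog:
    def __init__(self, path: str):
        self.path = path
        self.prev_hash = "0" * 64  # genesis value for an empty log

    def record(self, content_id: str, action: str, basis: str) -> str:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "content_id": content_id,
            "action": action,   # e.g. "removed", "labelled", "escalated"
            "basis": basis,     # e.g. "automated_tool", "court_order"
            "prev": self.prev_hash,
        }
        line = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(line.encode()).hexdigest()
        with open(self.path, "a") as f:
            f.write(line + "\n")
        return self.prev_hash
```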

Compressed Timelines and Response Capacity

The amendments significantly tighten compliance timelines (a sketch converting these windows into concrete deadlines follows the list):

  • Removal within three hours upon court or authorised government notice.
  • Grievance disposal within seven days.
  • Action within 36 hours, and within two hours in certain sensitive categories.
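
As flagged above, here is a minimal sketch converting these windows into concrete deadlines. The category keys are illustrative names for the notice types in the list.

```python
# Turns the notified compliance windows into concrete deadlines. The
# mapping mirrors the list above; the category names are illustrative.
from datetime import datetime, timedelta, timezone

WINDOWS = {
    "court_or_government_notice": timedelta(hours=3),
    "grievance": timedelta(days=7),
    "actual_knowledge": timedelta(hours=36),
    "sensitive_category": timedelta(hours=2),
}

def deadline(notice_type: str, received_at: datetime) -> datetime:
    return received_at + WINDOWS[notice_type]

# Example: a notice received at 23:30 IST must be actioned by 02:30 IST
# the same night, which is why 24/7 response teams matter.
ist = timezone(timedelta(hours=5, minutes=30))
print(deadline("court_or_government_notice",
               datetime(2026, 3, 1, 23, 30, tzinfo=ist)))
```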

The rules also mandate that intermediaries issue user advisories at least once every three months, warning users about SGI misuse, illegal content, and penalties.

Implementation requires continuous monitoring, 24/7 response teams, and close coordination between legal and technical divisions. Global platforms operating across time zones may find uniform compliance with India-specific timelines particularly challenging.

Smaller intermediaries face substantial cost and staffing pressures. Deploying automated detection tools, maintaining grievance redressal officers, preserving logs, and meeting compressed timelines imposes burdens that larger platforms may absorb more easily.

Meeting these timelines is not just procedural; it may require robust internal escalation protocols, real-time moderation pipelines, and trained human review teams working alongside automated detection systems. Together, these factors make operational execution the key determinant of regulatory effectiveness.

Overall Assessment

India’s deepfake amendments mark a decisive move toward preventive governance of synthetic media.

Deepfake regulation is no longer about defining deception; it is about engineering traceability, synchronising enforcement, scaling detection tools, and maintaining procedural safeguards under compressed timelines.

As synthetic media becomes increasingly sophisticated, regulatory effectiveness will depend less on intent and more on operational execution. The law is now in place. Its durability will depend on whether institutions, platforms, and enforcement agencies can translate obligation into practice.

[1] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

[2] Frequently Asked Questions on the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

[3] Routine or good-faith actions such as editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression.

[4] Includes a unique identifier to identify the computer resource of the intermediary used to create, generate, modify, or alter such information.

[5] Proposed Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in relation to synthetically generated information

[6] Parliamentary Standing Committee on Home Affairs. (2025, August 25). “Cyber Crime – Ramifications, Protection and Prevention”. Report No. 254