Policy Brief: India’s New Deepfake Rules – A Shift from Reactive Takedowns to Proactive Governance

New Delhi, 17 February 2026
By Tannaz Ahmed and Tushar Gandhi

India’s Ministry of Electronics and Information Technology (MeitY) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 [1] on 10 February 2026, bringing synthetically generated information (SGI) within India’s statutory due diligence framework for intermediaries. Effective from 20 February 2026, the notification amends the earlier rules to strengthen regulatory oversight of digital intermediaries and online content platforms and to enhance accountability and user safety in the digital ecosystem. The framework aims to address risks associated with deepfakes, misinformation, data security vulnerabilities, fraud, and the rapid virality of unlawful content.

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 introduce a structured regulatory framework with the following core provisions:

  1. Formal definition and regulation of synthetically generated information (SGI): The rules establish a legal definition for audio, visual, or audio-visual AI-generated or algorithmically manipulated content and bring such content under the regulatory ambit for the first time.
    • Exclusions: Pure text content alone is not covered by SGI, although it may still fall under other unlawful-content provisions depending on the context and the law violated [2]. Routine or good-faith edits [3], accessibility enhancements, and ordinary formatting are explicitly excluded as long as they do not materially distort meaning.
  2. Mandatory labeling and metadata requirements: Digital platforms must prominently label SGI as “AI-generated” or “synthetic” and embed metadata/unique identifiers to indicate origin; platforms are prohibited from removing or suppressing such labels once applied.
  3. Accelerated content takedown timelines: The timeline for intermediaries to remove unlawful content after receiving a court order or government notice has been sharply reduced from 24–36 hours to three hours, with an even shorter two-hour window for highly sensitive content such as non-consensual intimate imagery and deepfakes.
  4. Enhanced due-diligence obligations: Significant social media intermediaries must verify SGI disclosures by users and ensure compliance with the labeling, metadata, and removal requirements to maintain legal safe harbour protections under the IT Act.
  5. Expanded accountability and enforcement: Failure to comply with the amended rules can result in loss of intermediary safe harbour protections, increasing platforms’ legal exposure for user-generated content.
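The labeling and metadata obligation above can be sketched as a small pre-publication step. This is a minimal illustration, not an implementation of the rules: all field names (`provenance_id`, `content_sha256`, and so on) are hypothetical, and the rules themselves do not prescribe a schema.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def attach_sgi_label(content_bytes: bytes, platform_id: str) -> dict:
    """Attach an 'AI-generated' label and a unique provenance identifier
    to a piece of SGI before display. Field names are illustrative only;
    the 2026 rules require a prominent label and embedded metadata/unique
    identifiers but do not mandate any particular format."""
    return {
        "label": "AI-generated",               # prominent, user-facing label
        "provenance_id": str(uuid.uuid4()),    # unique identifier for this item
        "platform": platform_id,               # resource used to create/modify the SGI
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = attach_sgi_label(b"<video bytes>", "example-platform")
print(json.dumps(record, indent=2))
```

Binding the label to a content hash, as sketched here, lets a platform detect when a labelled item has been altered after labelling, which matters because the rules prohibit removing or suppressing labels once applied.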

Definitional Clarity vs. Detection Complexity

The amendments define “synthetically generated information” as artificially created or altered audio, visual, or audio-visual content that appears real and is likely to deceive viewers into believing it depicts a real person or event. The emphasis is on deceptive realism rather than the mere use of AI.

Translating this definition into automated detection systems is inherently complex, as determining whether content is “likely to deceive” requires contextual assessment. This gap between statutory language and algorithmic enforcement represents the first implementation challenge: legal clarity does not automatically translate into technical measurability.

The Two-Tier Model and Verification

The framework distinguishes between:

  • Unlawful SGI, which platforms must not allow (e.g., child sexual abuse material, non-consensual intimate imagery, forged documents, impersonation, arms-related content, deceptive political deepfakes).
  • Permitted SGI, which may be hosted if clearly labelled and embedded with provenance mechanisms [4], where technically feasible.

This two-tier model replaces the earlier draft proposals [5], specifically draft Rule 3(3) on labelling and metadata requirements and draft Rule 4(1A) on user declarations and verification. Those drafts focused on prominently identifying SGI but did not explicitly differentiate between unlawful and permissible synthetic content, effectively allowing any synthetic content to remain online provided it met the labelling criteria.

For Significant Social Media Intermediaries (SSMIs), obligations extend further. They must:

  • Obtain user declarations on whether uploaded content is SGI.
  • Deploy technical measures to verify such declarations.
  • Ensure labelling prior to display or publication.

This creates substantial compliance responsibilities. The central implementation challenge lies in verifying declarations across high-volume uploads: self-declaration mechanisms are susceptible to misuse, while automated verification tools may generate false positives or false negatives.
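One way to reconcile user declarations with automated checks is a tiered decision rule, as sketched below. The thresholds, category names, and the existence of a detector score are all assumptions for illustration; real systems would tune thresholds against measured false-positive and false-negative rates.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    user_declared_sgi: bool
    detector_score: float  # 0.0-1.0 from a hypothetical SGI classifier

def verification_outcome(upload: Upload,
                         flag_threshold: float = 0.85,
                         clear_threshold: float = 0.15) -> str:
    """Reconcile a user's self-declaration with an automated detector.
    Outcome names and thresholds are illustrative, not statutory."""
    if upload.user_declared_sgi:
        return "label_as_sgi"              # declaration accepted at face value
    if upload.detector_score >= flag_threshold:
        return "escalate_to_human_review"  # likely misuse of self-declaration
    if upload.detector_score <= clear_threshold:
        return "publish_unlabelled"
    return "queue_for_sampled_review"      # uncertain band: neither auto-pass nor auto-flag

print(verification_outcome(Upload(user_declared_sgi=False, detector_score=0.92)))
```

The uncertain middle band is the design point where the false-positive/false-negative trade-off mentioned above actually surfaces: widening it increases human-review load, narrowing it increases the error rate of automated decisions.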

The rules also require “reasonable and appropriate technical measures, including automated tools or other suitable mechanisms, to prevent users from creating, modifying, sharing, or disseminating synthetically generated information that violates any law in force.” However, they do not set uniform technical standards for detection or watermarking. The Parliamentary Standing Committee on Home Affairs, in its 254th Report on Cyber Crime [6], recommended uniform technical standards for media provenance and expansion of indigenous detection tools, including C-DAC’s Deepfake Detection Tool.

The effectiveness of provenance mechanisms depends on interoperability and resilience, as metadata can be stripped through screenshots, re-encoding, cross-platform sharing, or compression. Without uniform and tamper-resistant standards, labels risk being platform-bound rather than ecosystem-wide. Implementation therefore hinges on technical standardisation and cross-platform coordination, challenges that extend beyond the text of the rules.
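A toy model makes the fragility concrete: any operation that copies only rendered pixels, such as a screenshot or a lossy re-encode, silently discards a label stored in container metadata. The dict-based “file” below is purely illustrative.

```python
def publish(pixels: bytes, metadata: dict) -> dict:
    """Model a media file as rendered pixel data plus container metadata."""
    return {"pixels": pixels, "metadata": metadata}

def screenshot(file: dict) -> dict:
    """A screenshot (or re-encode) copies only the rendered pixels;
    container-level metadata, including any SGI label stored there,
    does not survive the copy."""
    return {"pixels": file["pixels"], "metadata": {}}

original = publish(b"\x00\x01\x02",
                   {"label": "AI-generated", "provenance_id": "abc-123"})
reshared = screenshot(original)

print("label" in reshared["metadata"])  # False: the label did not survive
```

This is why tamper-resistant approaches such as pixel-level watermarking or signed, ecosystem-wide provenance standards are discussed as complements to container metadata: a label that travels only in the container is platform-bound by construction.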

Proactive Moderation and Safe Harbour

Under Section 79 of the Information Technology Act, 2000, intermediaries enjoy “safe harbour” protection, granting them immunity from liability for third-party content, provided they exercise due diligence and do not knowingly host unlawful material. The 2026 amendments (Rule 2(1B)) clarify that intermediaries who remove or disable access to content, including synthetically generated information, in compliance with the rules, including via automated tools and in good faith, will not be considered in violation of safe harbour provisions under Section 79(2) of the Act.

This clarification encourages preventive governance. However, it also intensifies implementation pressure.

The expectation that platforms “not allow any user to create, generate, modify, alter, publish, transmit, share, or disseminate” unlawful SGI signals a shift from reactive notice-and-takedown toward preventive design. Operationally, this requires:

  • Scalable automated detection systems.
  • Real-time moderation pipelines.
  • Trained human review teams.
  • Internal escalation protocols capable of responding within hours.

The preservation of safe harbour depends not merely on policy adoption but on demonstrable compliance, which raises documentation and audit burdens.

Compressed Timelines and Response Capacity

The amendments significantly tighten compliance timelines:

  • Removal within three hours upon court or authorised government notice.
  • Grievance disposal within seven days.
  • Action within 36 hours, and within two hours in certain sensitive categories.
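The timelines above can be expressed as simple deadline arithmetic, which is roughly what an intermediary's compliance tooling must compute per notice. The category names below are illustrative groupings of the brief's figures, not statutory terms.

```python
from datetime import datetime, timedelta, timezone

# Compliance windows, in hours, as summarised in this brief.
# Category names are illustrative, not terms from the rules.
WINDOWS_HOURS = {
    "court_or_government_notice": 3,
    "sensitive_content": 2,        # e.g. non-consensual intimate imagery
    "general_action": 36,
    "grievance_disposal": 7 * 24,  # seven days
}

def compliance_deadline(received_at: datetime, category: str) -> datetime:
    """Return the latest time by which action must be completed
    for a notice received at `received_at`."""
    return received_at + timedelta(hours=WINDOWS_HOURS[category])

notice = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(compliance_deadline(notice, "court_or_government_notice"))  # 12:00 UTC same day
```

Working in UTC, as here, sidesteps one operational hazard the brief notes for global platforms: a three-hour window leaves no room for time-zone conversion errors between legal and engineering teams.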

The rules also mandate that intermediaries issue user advisories at least once every three months, warning users about SGI misuse, illegal content, and penalties.

Implementation requires continuous monitoring, 24/7 response teams, and close coordination between legal and technical divisions. Global platforms operating across time zones may find uniform compliance with India-specific timelines particularly challenging.

Smaller intermediaries face substantial cost and staffing pressures. Deploying automated detection tools, maintaining grievance redressal officers, preserving logs, and meeting compressed timelines imposes burdens that larger platforms may absorb more easily.

Meeting these timelines is not just procedural; it may require robust internal escalation protocols, real-time moderation pipelines, and trained human review teams working alongside automated detection systems. Together, these factors make operational execution the key determinant of regulatory effectiveness.

Overall Assessment

India’s deepfake amendments mark a decisive move toward preventive governance of synthetic media.

Deepfake regulation is no longer about defining deception; it is about engineering traceability, synchronising enforcement, scaling detection tools, and maintaining procedural safeguards under compressed timelines.

As synthetic media becomes increasingly sophisticated, regulatory effectiveness will depend less on intent and more on operational execution. The law is now in place. Its durability will depend on whether institutions, platforms, and enforcement agencies can translate obligation into practice.

[1] Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

[2] FREQUENTLY ASKED QUESTIONS on The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

[3] Routine or good faith actions such as editing, formatting, enhancement, technical correction, colour adjustment, noise reduction, transcription, or compression.

[4] Includes a unique identifier, to identify the computer resource of the intermediary used to create, generate, modify or alter such information.

[5] Proposed Amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 in relation to synthetically generated information

[6] Parliamentary Standing Committee on Home Affairs. (2025, August 25). “Cyber Crime – Ramifications, Protection and Prevention”. Report no. 254

India AI Impact Summit 2026: Scale, Strategy and the Shaping of Global AI Governance

New Delhi, 14 February 2026
By Tannaz Ahmed and Tushar Gandhi

The upcoming India AI Impact Summit 2026 is emerging as one of the most important global technology gatherings of the year. Positioned at the intersection of geopolitics, innovation and governance, the summit reflects India’s growing ambition to shape the next phase of global artificial intelligence policy and deployment.

Beyond its scale, the summit’s significance lies in its attempt to bridge advanced AI economies with emerging markets, placing India at the center of an evolving, multi-polar AI ecosystem.

Scale and Global Participation

The summit is expected to draw participation from more than 100 countries, alongside over 100 global CEOs, policymakers, researchers and investors. Registrations have reportedly crossed 35,000, underscoring the high level of global interest.

Prime Minister Narendra Modi is set to inaugurate the summit, signalling strong political ownership. Confirmed or widely expected heads of state and senior leaders include:

  • President Emmanuel Macron (France)
  • President Luiz Inácio Lula da Silva (Brazil)
  • UN Secretary-General António Guterres

The United States delegation will be led by Michael Kratsios, Assistant to the U.S. President and Director of the White House Office of Science and Technology Policy (OSTP). China is expected to participate through a senior government and AI research delegation, adding a notable dimension to regional and global technology diplomacy.

Additional high-level representation is anticipated from countries across Europe, Asia, Africa and the Global South, reinforcing the summit’s positioning as a broad multilateral platform.

Corporate Leadership and Technology Representation

The summit is also expected to convene the most influential leaders in global AI and digital infrastructure, including:

  • Sam Altman (OpenAI)
  • Jensen Huang (NVIDIA)
  • Sundar Pichai (Google & Alphabet)
  • Alexandr Wang (Scale AI)
  • Demis Hassabis (Google DeepMind)
  • Dario Amodei (Anthropic)
  • Julie Sweet (Accenture)
  • Cristiano Amon (Qualcomm)
  • Nikesh Arora (Palo Alto Networks)
  • Matthew Prince (Cloudflare)

Strong domestic industry participation is expected from leaders such as Mukesh Ambani, Nandan Nilekani, and Sunil Bharti Mittal, reflecting India’s private sector engagement across telecom, digital infrastructure and enterprise technology.

The presence of leading AI model developers, chip manufacturers, cybersecurity firms, and cloud providers signals that discussions will extend beyond policy frameworks to infrastructure, compute access, and commercial deployment.

Expo and Ecosystem Participation

The India AI Impact Expo component is expected to feature over 400 exhibitors, spanning AI infrastructure providers, enterprise solution companies, research institutions and startups.

This positions the summit not only as a diplomatic and policy dialogue platform, but also as a commercial showcase for:

  • Foundational and domain-specific AI models
  • Data center and compute infrastructure
  • AI applications in health, agriculture, finance and governance
  • Startup innovation from India and emerging markets

Emerging Policy Themes and Collaboration Tracks

Beyond keynote speeches and CEO roundtables, the summit’s working sessions are expected to focus on structural questions shaping the global AI ecosystem:

1.  AI Infrastructure Cooperation

Discussions are anticipated around compute collaboration, semiconductor supply chains, and data center investments, particularly in the context of reducing supply-chain concentration risks.

2. Responsible and Safe AI Standards

Countries are expected to explore common minimum standards for safety, transparency, and accountability in AI deployment, with particular emphasis on inclusive and human-centric approaches.

3. Workforce and Skilling Partnerships

Joint initiatives on AI education, digital skilling and research exchanges are likely to form part of bilateral and multilateral discussions.

4. Global South Cooperation

A prominent thread is expected to center on inclusive AI frameworks tailored to emerging economies, positioning India as a convenor for Global South engagement in AI governance.

Several bilateral meetings on the sidelines are expected to result in Memoranda of Understanding (MoUs) or framework agreements in areas such as AI infrastructure, education, research collaboration and innovation partnerships.

Strategic Positioning: Beyond a Conference

The summit reflects a broader strategic calculation: AI governance is increasingly shaped through coalitions rather than singular blocs. By convening Western economies, emerging markets, and major AI corporations under one platform, India is positioning itself as a bridge actor in a fragmented global technology landscape.

The presence of both U.S. and Chinese delegations, alongside European and Global South leaders, underscores the summit’s geopolitical dimension. It is positioned not merely as a technology forum, but as a venue where standards, partnerships and long-term strategies take shape.

Overall Assessment

The India AI Impact Summit 2026 signals India’s intention to move from being a large AI market to becoming a central node in global AI governance and deployment.

Its success, however, will be measured less by attendance figures and more by tangible outcomes:

  • Durable partnerships on AI infrastructure
  • Alignment on responsible AI standards
  • Concrete skilling and research collaborations
  • A clearer roadmap for inclusive AI growth

If effectively executed, the summit could mark a shift toward a more distributed and coalition-based AI order, one in which India plays a defining role in shaping both the rules and the applications of the next technological era.