Released in draft form in October 2025 and notified in February 2026, the IT Amendment Rules on Synthetically Generated Information (SGI) have generated significant public debate. The concern that they seek to address—namely the growing misuse of deepfakes, manipulated media, and AI-enabled fraud—is both real and urgent. The approach they adopt reflects an emerging institutional consensus that deceptive AI-generated and manipulated content should be governed through transparency and accountability measures. Earlier interventions in the same vein include MeitY’s 2024 advisory, the Election Commission’s disclosure directions for synthetically generated campaign material, the Madras High Court’s directions in X v. Union of India (2025) and the SOP MeitY subsequently issued in response, and the pre-existing due diligence framework under the IT Rules, 2021.
The difficulty lies in the regulatory design. Even in their notified form, the Amendment Rules move beyond the scope and logic of existing transparency-based interventions.
Overbreadth, subjectivity, and label fatigue
The Amendment Rules risk pushing intermediaries into a form of proactive monitoring that sits uneasily with India’s existing legal framework. The IT Rules, 2021 already placed a limited obligation on Significant Social Media Intermediaries (SSMIs) to make reasonable efforts to proactively identify certain narrow categories of unlawful content. The notified 2026 amendments expand that obligation in important respects by bringing SGI within a structured due diligence regime that requires verification of user declarations, prominent labelling, and embedded provenance markers.
While the Rules narrow the scope to specific media types and introduce explicit carve-outs for routine or good-faith editing, accessibility enhancements, and educational or research materials, they continue to extend obligations beyond strictly unlawful content into broad classes of synthetic media, irrespective of demonstrated harm.
Although the notified definition of SGI is more tailored than the draft, concerns regarding overbreadth and interpretive uncertainty remain. The operative trigger is not demonstrable harm or intent to deceive, but whether content ‘appears to be real, authentic, or true’ and is ‘likely to be perceived as indistinguishable from a natural person or real-world event.’ These perception-based standards inject subjectivity into compliance determinations and require intermediaries to anticipate how viewers might interpret content. The boundary between realistic simulation and unlawful deception is often context-dependent, and this ambiguity may generate compliance uncertainty in practice. The Rules also condition safe-harbour protection under Section 79 of the IT Act on adherence to these due diligence requirements, heightening the regulatory stakes of classification errors.
Imposing enhanced verification and labelling obligations may harm rather than serve the interests of users online. The requirement to deploy technical systems capable of identifying and marking qualifying SGI at scale, coupled with persistent metadata and non-removable provenance markers, could incentivise platforms to err on the side of caution and over-label. False positives, wherein lawful or contextually legitimate content is classified and labelled as SGI, may distort user engagement and undermine trust. At scale, the cumulative effect may be a form of "label fatigue", in which frequent disclosures dilute their own communicative value.
Regulators should also recognise that interoperable provenance standards and techno-legal frameworks developed through multi-stakeholder collaboration may provide more flexible and resilient approaches to managing synthetic media risk than compliance mandates alone.
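To make the idea concrete, here is a minimal sketch, in Python, of how a provenance manifest can bind generation metadata to content through a cryptographic hash, so that the disclosure travels with the file and any subsequent edit is detectable. This is a simplified illustration only: the field names and the toy "specification" string are invented for this example, and real interoperable standards such as C2PA add signed assertions, certificate chains, and tamper-evident embedding.

```python
import hashlib
import json

def build_provenance_manifest(content: bytes, generator: str, declared_synthetic: bool) -> dict:
    """Bind generation metadata to content via a cryptographic hash.

    Simplified illustration only; real provenance standards (e.g. C2PA)
    use signed assertions and certificate chains, not bare dictionaries.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,               # hypothetical field name
        "declared_synthetic": declared_synthetic,
        "spec": "illustrative-manifest/0.1",  # not a real specification
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Detect post-hoc edits: any change to the content breaks the hash binding."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

if __name__ == "__main__":
    original = b"frame data of a generated video"
    manifest = build_provenance_manifest(original, "example-model", True)
    print(json.dumps(manifest, indent=2))
    print(verify_manifest(original, manifest))           # True: content unmodified
    print(verify_manifest(original + b"!", manifest))    # False: edit detected
```

The design point is that a hash-bound manifest is categorically different from a removable on-screen label: it survives redistribution and supports third-party verification, which is what makes interoperability across platforms feasible.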
A shift away from the notice-and-takedown framework
The Supreme Court’s decision in Shreya Singhal v. Union of India is often understood as reaffirming a notice-and-takedown model for intermediary due diligence, where obligations are, in the main, triggered by court or government directions. The notified Rules, however, move in a materially different direction. While they retain a formal notice-based structure, they compress compliance timelines to an extent that fundamentally alters the practical architecture of enforcement.
Under the amended Rule 3(1)(d), intermediaries are now required to remove or disable access to unlawful content within three hours of receiving a valid court or government order. Grievance redressal timelines have also been significantly tightened, with shorter windows in certain cases—including as little as two hours for specified categories of content. These timelines represent a dramatic departure from the earlier 36-hour framework under the 2021 Rules.
Such compression is not merely procedural but is likely to reshape the compliance environment. A three-hour takedown window, particularly at scale, leaves little room for contextual evaluation, internal escalation, or careful legal assessment. Even where notices are validly issued, the risk of error increases as platforms are compelled to act almost immediately. In high-volume systems handling millions of posts daily, this may incentivise defensive "over-removal" rather than calibrated moderation.
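A rough, back-of-the-envelope calculation illustrates the pressure. Every figure below is an assumption chosen for illustration, not a number drawn from the Rules or from any platform’s disclosures; the point is only that a three-hour ceiling converts legal review into a staffing and queueing problem.

```python
# Back-of-the-envelope estimate; every number below is a hypothetical
# assumption, not a figure from the Rules or from any platform.
daily_posts = 5_000_000      # assumed daily upload volume
notice_rate = 0.0005         # assumed fraction of posts attracting valid notices
minutes_per_review = 15      # assumed time for context and legal assessment
window_hours = 3             # takedown window under amended Rule 3(1)(d)

notices_per_day = daily_posts * notice_rate               # 2,500 notices/day
# Each notice must be cleared within 3 hours of arrival, so staffing must
# track the arrival rate within each rolling window, not the daily average.
notices_per_window = notices_per_day * window_hours / 24  # ~312 per window
reviewers_needed = notices_per_window * minutes_per_review / (window_hours * 60)

print(f"Notices per day: {notices_per_day:,.0f}")
print(f"Reviewers needed on shift (uniform arrivals): {reviewers_needed:.0f}")
```

Even under these modest assumptions, a platform needs dozens of trained reviewers on shift around the clock merely to keep pace with average arrivals; notice spikes, time zones, and genuinely hard cases make careful assessment within the window harder still.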
Moreover, when combined with proactive technical deployment duties relating to synthetically generated information, the regime begins to resemble a hybrid model: part notice-and-takedown, part continuous monitoring architecture. The cumulative effect is a shift away from a reactive, knowledge-based safe-harbour model toward one that increasingly expects near-real-time detection and response. Where timelines approach immediacy, due-process safeguards—including clarity of notice, internal review mechanisms, and user appeal rights—become even more important. Without such guardrails, accelerated compliance risks undermining both speech protection and legal certainty.
Asymmetric incentives
The notified Rules move beyond the traditional ‘actual knowledge’ model that the law has otherwise sought to preserve. Whereas the IT Rules, 2021 link liability consequences to actual knowledge of unlawful content, established through court or government directions, the amended framework exposes intermediaries to consequences where their deployed technical measures fail to prevent or appropriately label synthetically generated information.
Detection tools, while improving, remain imperfect, vary across content types, and can be defeated through adversarial techniques. At the scale at which major platforms operate, false positives and false negatives are not edge cases; they are inevitable. Faced with that uncertainty, intermediaries may respond by suppressing borderline or ambiguous content pre-emptively, often without the procedural safeguards that typically accompany restrictions on speech, such as notice, a chance to be heard, appeal, or meaningful review.
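Simple base-rate arithmetic shows why. All the figures below are hypothetical, chosen only to convey orders of magnitude rather than to describe any actual detector:

```python
# Illustrative base-rate arithmetic; all figures are assumptions chosen
# to show orders of magnitude, not measured detector performance.
daily_items = 10_000_000   # assumed items scanned per day
prevalence = 0.02          # assumed share that is genuinely synthetic
sensitivity = 0.95         # assumed true-positive rate of the detector
specificity = 0.99         # assumed true-negative rate of the detector

synthetic = daily_items * prevalence
authentic = daily_items - synthetic

false_negatives = synthetic * (1 - sensitivity)   # synthetic content missed
false_positives = authentic * (1 - specificity)   # authentic content mislabelled

print(f"Missed synthetic items per day:      {false_negatives:,.0f}")  # 10,000
print(f"Mislabelled authentic items per day: {false_positives:,.0f}")  # 98,000
```

Even with a detector that correctly clears 99 per cent of authentic content, mislabelled authentic items outnumber missed synthetic ones nearly ten to one at this prevalence, and it is precisely this asymmetry, combined with liability exposure, that pushes platforms toward pre-emptive suppression.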
It is true that the notified Rules clarify that the removal or disabling of access to content in compliance with their provisions does not violate Section 79(2) of the IT Act. However, the interaction between proactive detection duties and the statutory safe-harbour architecture continues to raise interpretive questions, particularly where liability exposure may hinge on the adequacy of technical systems rather than on actual knowledge of specific unlawful content.
Although the amended framework provides some assurance regarding compliance-based removals, it simultaneously hardens certain obligations—replacing discretionary formulations with the mandatory deployment of technical measures. This recalibration may narrow the space for voluntary, good-faith moderation beyond the strict confines of the Rules and could generate uncertainty regarding the outer limits of safe-harbour protection in practice.
Where risk turns on intent, context, and downstream use, accountability should be allocated accordingly. A more future-proof approach would therefore adopt a risk-sensitive, value-chain-based accountability model rooted in shared responsibility. Such a framework should be developed through structured consultation with technical experts, civil society, fact-checkers, platforms, and public authorities, and complemented by sustained investments in digital literacy.
(The author is associate professor at West Bengal National University of Juridical Sciences, Kolkata.)
(The views expressed in this article are those of the author and do not purport to reflect the opinions or views of THE WEEK.)