Introduction
In late March 2026, the Ministry of Electronics and Information Technology (MeitY) introduced draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021). These changes followed closely on the heels of earlier updates, including the regulation of synthetic content and a significant reduction in content takedown timelines for intermediaries. The term “intermediary” covers social media platforms, messaging services, video-sharing websites, search services, internet service providers, and cloud service providers, as well as other services that host or transmit user-generated content.
At first glance, the amendments appear administrative. They are presented as clarifications, procedural adjustments, and efforts to modernise and streamline existing rules. Yet, when read closely, they reshape how authority flows through the digital ecosystem. The changes influence how platforms make decisions, how users express themselves, and how information circulates. To understand what is at stake, it helps to see these amendments not as isolated legal tweaks but as part of a broader shift in how communication is governed.
Expansion of Oversight
Earlier, Part III of the IT Rules 2021 applied to publishers of “news and current affairs content”, and obligations were imposed on intermediaries in the context of blocking and emergency measures. The amended Rule 8(1) now brings user-generated “news and current affairs content” within the regulatory framework. In this context, “user” is broad enough to include influencers as well as ordinary internet users. There is no qualitative or quantitative threshold that limits who falls within this category. As a result, the scope remains expansive, potentially covering anyone posting content that is even loosely connected to “news and current affairs”.
This expansion unfolds in a context where the constitutional validity of Part III of the IT Rules remains under judicial consideration. The Code of Ethics and the three-tier grievance redressal mechanism were stayed by the Bombay High Court over concerns about free speech and statutory overreach, a position reinforced by the Madras High Court in T.M. Krishna v. Union of India (2021). The amendment effectively circumvents these stay orders, which remain in force, undermining judicial authority and rendering the change constitutionally suspect.
The term “news and current affairs content” is itself undefined. In contemporary media ecosystems, public discourse extends far beyond institutional journalism. It includes commentary, satire, policy analysis, cultural critique, and even everyday documentation of public life. The absence of clear boundaries allows a wide range of expression to be drawn into regulatory consideration.
Moreover, a large proportion of Indians now relies on online platforms as their primary source of news and public information. The Reuters Institute’s 2024 Digital News Report found that 71% of Indians rely on online media for news, with 54% consuming news through YouTube, 48% through WhatsApp, and 35% through Facebook. The 2025 Digital News Report reinforces this trend, documenting a rising preference for influencer-led, video-driven, personalised news formats, particularly among younger audiences. Bringing this decentralised and participatory ecosystem within a formal oversight structure substantially expands the reach of regulation.
This move must be situated within the conditions of India’s traditional media landscape. Ownership of television and print media has become concentrated in the hands of corporate groups with proximity to political power. Mukesh Ambani’s Reliance group controls more than 70 outlets reaching hundreds of millions of Indians. Gautam Adani's acquisition of NDTV through a hostile takeover ended the channel’s editorial independence and foreclosed one of the last traditional spaces for critical journalism in mainstream media. The term “Godi media” has emerged in public discourse to describe sections of the press that function in alignment with the government led by Narendra Modi.
This alignment is supported by incentives. Advertising remains the primary source of revenue for many media organisations, with government expenditure forming a significant share. The allocation of this revenue operates as a mechanism of influence, shaping editorial priorities across both large networks and smaller outlets. Alongside financial pressure, journalists also navigate an environment where reporting can attract legal and extra-legal consequences. The combined effect is a narrowing of independent space within mainstream media.
In this context, users and influencers play a critical role. On YouTube, for instance, voices such as Dhruv Rathee command over 31 million subscribers. Ravish Kumar, a former NDTV anchor turned independent commentator, reaches over 14.5 million. Faye D’Souza’s news reporting reaches over 463,000 subscribers, and Akash Banerjee’s satirical channel The Deshbhakt has over 6.5 million subscribers. These creators have emerged as important alternative sources of commentary and analysis. They broaden access to information, enable direct engagement with audiences, and sustain a degree of pluralism that has receded elsewhere.
The government’s awareness of the significance of these voices is reflected in patterns of content moderation. Between March 2024 and July 2025, Union and state governments reportedly ordered X (formerly Twitter) to remove approximately 3,700 posts or accounts, including action against two Reuters-affiliated handles. In January–June 2025, content removals from Instagram and Facebook in response to government orders increased by nearly 300% compared to the same period in 2023.
A similar pattern has emerged this year. Between 11 and 24 March 2026 alone, there were 69 instances of account-level restrictions on X. The accounts targeted were largely those critical of the government's treatment of minorities, India’s response to the US-Israeli conflict, and the liquefied petroleum gas (LPG) crisis, with many expressing dissent through satire and political parody.
The targets were not chosen at random. The Congress party reported that nine of its satirical posts were taken down. Takedown notices were issued on both X and Instagram for The Wire’s satirical music videos featuring the Prime Minister. Its founding editor, Siddharth Varadarajan, then reposted related content.
Cartoonist Satish Acharya disclosed that X had withheld two of his cartoons depicting India’s relations with Iran and the United States.
Similarly, lawyer Prashant Bhushan received a notice for a post that included a graphic referencing global arrests and resignations linked to the Jeffrey Epstein case and alleged connections involving Union Minister Hardeep Singh Puri.
Attempts to track the takedown orders are ongoing, and a listing can be found here.
What emerges from these cases is a directed pattern, calibrated towards political satire, dissenting commentary, and content that challenges official narratives. It is this space—occupied by independent creators, commentators, and ordinary users—that the amended Rule 8(1) now draws within regulatory oversight.
It is important to clarify why bringing this space within such an explicit regulatory regime is problematic. User content could always be taken down by the government under Section 69A of the Information Technology (IT) Act. However, by bringing such content within Part III, the framework now extends beyond takedown to include the power to direct users to issue apologies, add labels, or modify their content. This reflects a shift from removal to the active shaping of speech, both before and after publication.
The extension of the framework to user-generated news and current affairs content arrives at a moment when that space is already under sustained pressure—and when the voices most likely to be affected are those least represented within mainstream institutional media. To regulate this space without definition, without threshold, and without independent oversight is to leave the most consequential discretion exactly where it already sits: with the executive.
Transformation of the Inter-Departmental Committee
The regulation of user-generated news and current affairs content becomes even more concerning when considered alongside the oversight structure and the amendments made to it. Two issues are worth highlighting.
First, while Rule 9 formally imposes Code of Ethics obligations on publishers, Rule 14(5) authorises the Inter-Departmental Committee (IDC) to examine user-generated content and recommend actions to the Ministry of Information and Broadcasting. This creates an indirect pathway through which users may be nudged or directly compelled to comply with the Code of Ethics.
This interpretation is reinforced by the fact that the IDC is not independent, as its membership is dominated by government representatives. It can therefore indirectly compel compliance with the Code of Ethics, which incorporates the Programme Code under the Cable Television Networks (Regulation) Act, 1995. This legislation was designed to retain a degree of state control over broadcast content in the post-liberalisation era and has long been criticised for its vague, patriarchal, and paternalistic standards. It relies on terms such as morality, taste, and social harmony, alongside phrases like “suggestive themes” or content that may “corrupt viewers”.
Under the IT Rules 2021, the Ministry of Information and Broadcasting’s powers for securing compliance with the Code of Ethics extend beyond blocking and include warnings, censure, demands for apology, and modification of content. Thus, an individual user or an influencer may be required to issue an apology, add a label, or even modify their content. It should be noted that any direction to modify content is formally conditioned on the ministry being satisfied that there is a need to prevent incitement to the commission of a cognisable offence relating to public order, or on the grounds enumerated under Section 69A of the IT Act.
However, in practice, this does not operate as a meaningful constraint, as public order has often been invoked expansively, and non-reasoned or weakly reasoned orders in the context of content takedown under the IT Act are the norm.
Second, under the earlier framework, the IDC was limited to examining complaints concerning breaches of the Code of Ethics. Its role was reactive, initiated by identifiable grievances. The revised framework departs from this design. The committee is now authorised to consider “matters” referred to it by the ministry, without any requirement that such references stem from a complaint, relate to a defined violation, or involve an affected party that has been heard.
The Ministry of Information and Broadcasting may act on its own initiative and place any content-related issue before the IDC. This allows it to set the agenda for a body that is meant to provide independent and impartial oversight, effectively recasting the IDC from a forum for redress into a body of continuous oversight, one that permits ongoing scrutiny of content without the discipline of an initiating grievance or the presence of a contesting voice.
The ministry can use this power to focus on compliance with the Programme Code, or with other standards it considers appropriate, which may be even more indeterminate. The latter would allow the ministry to require publishers to modify content it finds objectionable without even maintaining the pretence of meeting the threshold of preventing incitement to a cognisable offence relating to public order, or of satisfying the requirements under Section 69A of the Act.
In sum, the IDC has moved from a body that responds to defined grievances towards one that conducts ongoing surveillance of content at the executive’s direction, unconstrained by procedural discipline. Paired with the potential extension of the Code of Ethics to user-generated content, this creates a structure in which ordinary users and independent creators can find themselves subject to regulatory scrutiny and pressure through a process that is initiated, shaped, and concluded by the state itself. This is likely to produce a chilling effect on speech, as individuals may begin to self-censor in anticipation of scrutiny or sanction.
Architecture of Censorship
The amendments introduced in February 2026 to the IT Rules 2021 shorten takedown timelines. The MeitY’s published guidance indicates that intermediaries are expected to act within three hours upon receiving actual knowledge through a court order or government intimation, within 36 hours for certain expedited grievances, and within two hours for specified complaints involving nudity, sexual content, morphed content, and impersonation. The government has also reiterated that intermediaries that fail to observe due diligence risk losing immunity under Section 79.
There are also indications that these timelines may be reduced to one hour. Such compressed timelines leave little scope for platforms to assess legality or context, pushing them towards immediate removal as a strategy to minimise risk. The outcome is predictable: lawful but sensitive content is taken down out of caution, driven by the threat of liability.
At the same time, the Sahyog regime introduces a centralised mechanism through which the government can issue takedown requests across platforms. This bypasses the procedural safeguards under Section 69A of the IT Act, which require reasoned orders and review. By enabling executive action without these constraints, Sahyog risks creating a parallel system of content control outside the statutory framework.
The regime is currently under challenge before a division bench of the Karnataka High Court. Further, the Union government is considering a proposal to decentralise content-blocking powers, granting direct takedown authority to multiple ministries. At present, MeitY authorises blocking orders, while a separate notice-and-takedown channel already operates alongside the Home Ministry-led Sahyog portal, with nodal officers across state police departments.
All of these developments must be understood in their broader context. India ranked third globally in content restriction requests in 2023 and second in internet shutdowns worldwide in 2025. Data on government content restrictions is not easily available; these figures are therefore likely to understate the true scale of restriction.
This accelerated and centralised model of enforcement is reinforced by a broader shift towards executive-driven control under the newly inserted Rule 3(4) of the IT Rules, which gives MeitY the power to issue binding clarifications, advisories, orders, directions, standard operating procedures (SOPs), codes of practice, and guidelines to intermediaries. This departs from the scheme of the parent statute.
The rule-making power under the IT Act contemplates issuing formal rules subject to parliamentary oversight. Rule 3(4), by contrast, bypasses that framework and allows binding obligations to be created through executive instruments without meaningful safeguards or independent oversight. It enables governance by executive decree rather than the rule of law, fostering a culture of authority over justification.
Combined with strict timelines and liability risk, this makes compliance an immediate operational requirement rather than a considered response. Platforms begin to anticipate executive expectations and build them into their decision-making. The architecture of censorship thus extends beyond individual takedowns to shape the very conditions under which those decisions are made.
In addition, a parliamentary committee has proposed expanding the powers of the Fact Check Unit (FCU) of the government-run Press Information Bureau (PIB), enabling it to coordinate directly with intermediaries to secure the removal of online content. This is particularly concerning given that the constitutional validity of any such FCU remains seriously suspect. This follows from the decision of the Bombay High Court, which held that the FCU under the IT Rules 2021 was unconstitutional.
Although that ruling did not formally extend to the PIB’s unit, the continuation of substantially similar functions signals an attempt to bypass the effect of the judgment. These arrangements would place the executive in the position of both determining what constitutes unlawful content and enforcing its removal, entrenching a structural conflict with serious consequences for online speech.
Finally, it has been reported that the government has sought to bring Community Notes, a user-driven fact-checking feature on X, within the regulatory ambit of the Ministry of Information and Broadcasting when they relate to news and current affairs. This would enable the state to target content that challenges official claims.
Interestingly, earlier this year, several posts by Bharatiya Janata Party (BJP) leaders on X, including one by Prime Minister Modi, drew Community Notes that the government flagged to the platform. Such an approach goes beyond the scope of the parent statute and reflects an effort to reshape platform architecture through regulatory pressure.
If platforms are exposed to liability for user-led fact-checking mechanisms, they are likely to remove or sharply restrict these features, weakening one of the few tools that allows collective scrutiny of official narratives. This is problematic because it concentrates the power to define truth in the hands of the executive, removing space for independent verification and dissent. It risks suppressing legitimate disagreement and enabling selective control over public discourse.
Conclusions
These developments, read together, indicate a broader structural shift rather than a set of isolated regulatory updates. The amendments reshape not only specific obligations, but also the underlying conditions in which digital communication operates, altering how authority is exercised and how compliance is produced.
As Apar Gupta has pointed out, the IT Rules have been revised almost every year since 2021. The government’s FCU was set aside by the Bombay High Court within a year of its notification. Yet regulatory expansion has continued through new pathways. The current Draft Amendment was released with a 15-day public consultation window, extended only after major backlash. The implementation timeline for the February 2026 amendments was 10 days.
Courts cannot keep pace: by the time one provision is litigated to a stay, two more have been notified in its place. The rhythm of regulation begins to outpace the rhythm of scrutiny. As I have suggested elsewhere, this reflects a deeper continuity. Communication technologies continue to function as instruments through which control and discipline are exercised. The medium has changed, but the underlying logic remains recognisable.
The MeitY frames these amendments as necessary to secure an “Open, Safe, Trusted, and Accountable Internet”. It is here that the parallel with Nineteen Eighty-Four becomes instructive. George Orwell’s formulation “War is peace. Freedom is slavery. Ignorance is strength” was not merely rhetorical. It captured a system in which language is used to stabilise power by shaping how reality is understood. A similar dynamic is visible in the vocabulary of these reforms.
Safety, trust, and accountability are not neutral descriptors. Their meaning is defined within the same institutional structures that issue directives, determine compliance, and impose consequences. Protection can merge into control. Accountability can operate as a discipline. The distinction between governance and justification begins to narrow.
The amendments do more than regulate content. They recalibrate the conditions under which communication occurs. Platforms begin to internalise regulatory expectations as operational norms. Users learn to navigate an environment where the boundaries of permissible expression are defined in advance through design, timelines, and institutional pathways. One helpful way to understand this shift is through the lens of structural authoritarianism, where control emerges not only through direct intervention but also through the architecture within which decisions are made.
The central issue, then, extends beyond legality. It concerns the kind of public sphere that is being constituted. An open internet is not measured only by access, but by the space it creates for independent participation, disagreement, and contestation. These amendments move that space towards a model where communication is increasingly organised through systems of guidance, coordination, and pre-emptive response.
This essay has been written by the author in his personal capacity.
Rudraksh Lakra is a research fellow with the Applied Law and Technology Research team at the Vidhi Centre for Legal Policy, an independent, non-profit legal think-tank based in New Delhi.