India has imposed a sweeping new regulatory regime on the world’s largest tech platforms, cutting the legal deadline to remove unlawful online content from 36 hours to just three while also requiring platforms to label every piece of AI-generated material. Experts warn the combination amounts to “perhaps the most extreme takedown regime in any democracy” and all but guarantees a future of automated, error-prone censorship.
The amended Information Technology rules, effective February 20, apply to major platforms including Meta, YouTube, and X. They mandate the removal of notified unlawful content within three hours, introduce India’s first legal definition of AI-generated material, and require permanent, non-removable labelling of all synthetic media. The government offered no explanation for the dramatic compression of the takedown window.

Digital rights groups reacted with alarm. The Internet Freedom Foundation warned the new timeline forces platforms to become “rapid fire censors,” stating: “These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal.”
This concern is echoed by industry analysts and technologists. Anushka Jain of the Digital Futures Lab noted companies already struggle with the 36-hour deadline due to required human oversight. “If it gets completely automated, there is a high risk that it will lead to censoring of content,” she told the BBC.
Technology analyst Prasanto K Roy described compliance as “nearly impossible” without extensive automation and minimal human scrutiny, leaving platforms virtually no capacity to assess whether a government takedown request is legally appropriate before acting on it.
A “Deepfake” Definition Arrives—Along With Unproven Technology
The amendments mark the first time Indian law has formally defined AI-generated content: audio or video created or altered to appear real, specifically targeting deepfakes. Platforms hosting user-generated AI material must now clearly label it and, where possible, embed permanent, tamper-proof markers to trace its origin. These labels cannot be removed once applied.
While Roy acknowledged the labelling intention is positive, he cautioned that reliable, tamper-proof technologies are still in development, raising serious questions about how platforms can comply with a mandate that depends on immature infrastructure.
A System Already Blocking 28,000+ Links—Now Accelerated
Critics view the three-hour rule not as an isolated efficiency measure but as the logical extension of a broader, accelerating crackdown. Transparency reports show Indian authorities ordered the blocking of more than 28,000 URLs in 2024 alone under existing IT rules, which permit the removal of content deemed threatening to national security or public order—categories experts say grant the government exceptionally broad discretion.
The compressed timeline transforms this already expansive power into a weapon of near-instantaneous content suppression, with platforms given no meaningful window to challenge, review, or resist government demands.
Why It Matters
Meta declined to comment. Google and X have not responded. The Ministry of Electronics and Information Technology has not answered BBC inquiries about the changes or the mounting expert criticism.
The fundamental question now confronting India—a nation of over one billion internet users and the self-described world’s largest democracy—is whether this new regime represents legitimate regulation of harmful content or the construction of a censorship apparatus of unprecedented speed and automation.
The three-hour deadline leaves no time for due process, no room for judicial review, and no space for platforms to defend user speech. It does, however, guarantee one thing: India’s internet will now move faster than ever before. But it will move in whichever direction the government decides—instantly, automatically, and without question.