NEW DELHI — India’s government has introduced new rules that make it mandatory for social media companies to take down unlawful content within three hours of being notified, tightening an existing 36-hour deadline.

The changes amend India’s 2021 IT rules, which have already been a flashpoint between the New Delhi government and global technology companies.

The amended guidelines will take effect from 20 February and apply to major platforms including Meta, YouTube and X. They will also apply to AI-generated content.

The government directive did not give any reason for the change in the takedown timeline.

The move reinforces India’s position as one of the world’s most aggressive regulators of online content, requiring platforms to balance compliance in a market of 1 billion internet users against mounting concerns over government censorship.

“It’s practically impossible for social media firms to remove content in three hours,” said Akash Karmakar, a partner at Indian law firm Panag & Babu who specialises in technology law. “This assumes no application of mind or real-world ability to resist compliance.”

There is mounting global pressure on social media companies to police content more aggressively, with governments from Brussels to Brasilia demanding faster takedowns and greater accountability.

India’s IT rules empower the government to order the removal of content deemed illegal under any of its laws, including those related to national security and public order.

The country has issued thousands of takedown orders in recent years, according to platform transparency reports. Meta alone restricted more than 28,000 pieces of content in India in the first six months of 2025 following government requests, it disclosed.

But critics worry the move is part of a broader tightening of oversight of online content and could lead to censorship in the world’s largest democracy.

For the first time, the law defines AI-generated material, including audio and video that has been created or altered to look real, such as deepfakes. Ordinary editing, accessibility features and genuine educational or design work are excluded.

The rules mandate that platforms that allow users to create or share such material must clearly label it. Where possible, they must also add permanent markers to help trace where it came from.

Companies will not be allowed to remove these labels once they are added. They must also use automated tools to detect and prevent illegal AI content, including deceptive or non-consensual material, false documents, child sexual abuse material, explosives-related content and impersonation.

Digital rights groups and technology experts have raised concerns about the feasibility and implications of the new rules.

The Internet Freedom Foundation said the compressed timeline would transform platforms into “rapid fire censors”.

“These impossibly short timelines eliminate any meaningful human review, forcing platforms toward automated over-removal,” the group said in a statement. — Agencies
