
Regulating social media and OTT services: Comparing rules from around the world

Other countries have also used AI to censor, or asked platforms to self-censor

With its new rules regulating social media and media streaming platforms, India enters a phase of increased control over both.

Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, intermediaries would have to remove content within 36 hours of receiving a government or legal order. Intermediaries would also need to be able to “trace” the “first originator” of content and, in cases where the origin is foreign, to identify the first in-India user to have shared it. They would also be required to use AI tools to identify “objectionable content” such as depictions of “sexual violence”.

India is not alone in seeking to impose greater government regulation on social media platforms, nor is it the first to prompt such platforms to turn to AI to censor content before the government has to get involved.

Australia, too, has planned legislation mandating a 48-hour window within which social media companies must take down harassing, abusive or revenge-porn posts if ordered to do so by the eSafety Commissioner's office. Failure to comply could invite fines of up to $555,000 for websites and social media companies and up to $111,000 for individuals.

Since 1995, South Korea’s Information & Communication Ethics Office has had the authority to order information providers to delete or restrict material that “encroaches on public morals”, causes a “loss of national sovereignty” or is “information that may harm youths’ character, emotions and sense of value”.

The US, however, has charted a different route: Section 230 of the Communications Decency Act, 1996 provides online services, including intermediaries, with immunity from liability for transmitting third-party content. This provision, dubbed the “26 words that created the Internet” by Jeff Kosseff, a cybersecurity law professor at the US Naval Academy, came under fire from former President Donald Trump, who sought its removal after accusing social media platforms of bias against conservatives. Section 230 has made US social media companies largely self-regulating.

That does not mean the US government has not tried to increase its ability to order content taken offline: laws like the Child Online Protection Act attempted to make it illegal for websites to host, for commercial purposes, material deemed harmful to minors, but were struck down as unconstitutional. In general, laws placing liability on the platforms themselves have ended up mired in legal challenges revolving around the First Amendment and Section 230. Users of such platforms, however, remain liable for uploading illegal content.

The EU, however, holds platforms liable for copyright infringements by their users.

When it comes to streaming platforms, some countries, like the UK (and Australia in the case of Netflix), opt to let the platforms self-regulate, while others directly determine what content regulations they must adhere to. India’s approach is a mixed one: a three-tier grievance redressal mechanism, with two tiers of self-regulation and a third tier of oversight under the Ministry of Information and Broadcasting (MIB).

Tackling terrorism using AI

Approaches also vary in terms of how social media platforms handle terrorism-related content. As early as 2017, Facebook was using AI to scrub its platform of such content.

In a 2019 report, the Centre for Internet and Society India argued that a “one-size-fits-all” turnaround time for illegal content may not be productive, and that the nature and impact of the content in question should be considered in determining the appropriate deadline.

For example, a 2018 European Commission working document found that terrorist content is most harmful within the first hours of its appearance: a third of links to ISIS propaganda were disseminated within an hour of release, and three-quarters of these were shared within four hours. In France, social media and other websites have just one hour to remove content related to terrorism or child sexual abuse.

In Germany, companies must remove manifestly illegal content within 24 hours of a complaint, under the Network Enforcement Act (NetzDG).

On encryption

On India’s proposed originator rules, the closest comparison is with countries that have sought to restrict encryption (in the absence of which it is easier for law enforcement to determine the originator of a message). In Russia, a licence is required for distributing, maintaining, providing, developing or manufacturing encryption facilities, and the Federal Security Service can compel the provision of any information necessary to decrypt encrypted messaging.

US laws don’t mandate decryption, but they do require all telecommunications carriers to be capable of intercepting communications and delivering them to the government.

In France, private entities or individuals who provide cryptology services must decrypt data encrypted by their services within 72 hours.

This website offers a map of countries with key disclosure laws or requirements to decrypt information.

India’s intentions

The new rules were announced jointly by the Union Minister for Information and Broadcasting and the Union Minister for Electronics and Information Technology; the Ministry of Electronics and IT has dubbed them “progressive, liberal and contemporaneous”. Home Minister Amit Shah applauded Prime Minister Narendra Modi and IT Minister Ravi Shankar Prasad, saying the rules would “empower social media users by institutionalising redressal mechanism and ensuring resolution of their grievances.”

Outside of government, reactions have been mixed: Social media giant Facebook has said it welcomes “regulations that set guidelines for addressing today’s toughest challenges on the Internet” while Twitter and Netflix have yet to issue a statement. Organisations like the Internet Freedom Foundation have criticised the rules for “excessive vagueness” and warned of collateral damage to “citizen free speech and privacy”.