The US Supreme Court recently ruled in favour of Google, Meta, Twitter and other social media companies, holding that they will not be held responsible for posts carrying content supportive of terrorist groups. Market and social media experts are airing differing views on the issue.
The court declined to narrow the scope of 'Section 230', as victims of terrorist attacks had demanded in their plea against the social media companies. The victims sought amendments to the law, which gives social media platforms immunity from liabilities arising from third-party content on their sites. The immunity stays in force even if these platforms recommend offensive content to users.
Experts with whom THE WEEK spoke are of the view that in the US, there is a long-standing legal principle that social platforms are not responsible for the content posted on their sites. This principle is based on the First Amendment, which protects freedom of speech. Social platforms are considered to be platforms 'for speech', not 'publishers of speech'. This means they are not responsible for the content posted on their sites, as long as they do not actively promote or encourage illegal activity. So what implications does such a ruling have on the global scale as well as in India?
“Social media companies need not celebrate this yet. It is ethical behaviour and prudent for any media to ensure that they regulate the content keeping in mind the prevalent social norms around relationships, safety, security and beliefs. However, it’s pertinent to mention that the verdict of the court brings in a balance in the public dialogue on this issue, and draws everyone’s attention to the fact that we cannot be unreasonable in our expectations from the social media platforms around regulating the posts and verifying the authenticity of the users,” pointed out Aditya Narayan Mishra, director and CEO of CIEL HR.
However, a few experts also believe that the US Supreme Court ruling is a massive win for Silicon Valley, arguing that social media intermediaries and content-hosting companies like YouTube do not directly boost terrorist content on their platforms unless viewers are already watching similar content.
“Technology definitely should play a role in preventing the spread of hateful and harmful content, and companies working at the intersection of tech and media ought to build functionalities to better screen and flag such content. But punishing the platforms themselves for the behaviour of recommendation algorithms isn’t the right way to go about it. In fact, the ruling directly makes mention of the fact that recommendations on these platforms play a passive role at best. Laws that deal with the technical aspect of content moderation are the need of the hour, rather than any blanket regulations,” observed Krupesh Bhat, Founder of SignDesk.
He added that the ruling will have huge implications for all media platforms that rely on smart recommendations and could even apply to large language models like ChatGPT which also make passive use of AI algorithms. Bhat said that the companies will now have more freedom to innovate and fine-tune their algorithms and encourage the open discussion of critical topics without falling back on harsh content moderation tools. However, the tech giants who run these platforms should take a hint from the slew of lawsuits and begin doing more to regulate themselves.
“The ruling could also impact the Indian scenario. The digital media intermediaries rules introduced a couple of years ago hold platforms responsible for disseminating improper content and have caused a lot of controversies. Perhaps, with this ruling, lawmakers and executive bodies can take another look at these regulations and take them in a direction that prioritizes freedom and innovation in technology,” remarked Bhat.
Experts point out that US courts have repeatedly held that social media platforms are not responsible for the content posted on their sites, as long as they do not actively promote or encourage illegal activity, and that this principle has been upheld in a number of cases.
“In 2019, the Ninth Circuit Court of Appeals ruled that Facebook could not be held liable for the spread of terrorist propaganda on its platform. The court found that Facebook was not a publisher of the content and that it had taken reasonable steps to remove illegal content from its site. The same principle has been applied to other social platforms such as Twitter and YouTube. While social platforms cannot be held legally responsible for the spread of terrorism, they have all taken steps to remove illegal content from their sites and have worked with law enforcement agencies to investigate and prosecute terrorists who do use their platforms,” said Girish Linganna, an aerospace and defence expert and the MD of ADD Engineering Limited.
Social media experts also point out that platforms are not the only means terrorists use to spread propaganda and recruit new members; they also rely on traditional media such as newspapers and television, as well as in-person meetings. However, social platforms have become an increasingly important tool for terrorists, and these platforms are taking steps to address the problem and help prosecute terrorists who use their services.
“Social media platforms are using a variety of technologies to detect and remove illegal content, such as hate speech, child sexual abuse material and terrorist propaganda. These technologies include Artificial Intelligence (AI), Machine Learning (ML) and human review. At the same time, they are creating policies and procedures to address illegal content. These policies and procedures outline how the platforms will identify, remove and report illegal content. They are also working with law enforcement agencies to investigate and prosecute terrorists who use their platforms. This includes providing these agencies with information about terrorists and their activities,” added Linganna.
Undoubtedly, the internet and social media have become powerful tools in the hands of non-state actors and terrorist organizations. The use of the internet by terrorist organizations such as ISIS to recruit youth across the world is now a reality. The recent theft of data of lakhs of ATM cards in India is an example of the misuse of the internet by non-state actors. Social media is also being used by like-minded individuals as a tool for radicalization. In these circumstances, effective strategies should be adopted to curb the threat posed by the internet and social media.
Linganna points out that India recently appointed its first Chief Information Security Officer (CISO), which is likely to help the country build the vision and measures to fight crime in cyberspace and better manage cyber security. The creation of a National Cyber Security Agency (NCSA) is also expected to improve India’s resilience and defence systems.
“Monitoring of content on the internet by intelligence agencies, such as the Intelligence Bureau and RAW, can prevent attempts to radicalize youths. The National Cyber Security Policy, 2013, was formulated to protect cyberspace information infrastructure and reduce weaknesses. A 24×7 mechanism at the national and sectoral levels is planned to counter cyber threats through the National Critical Information Infrastructure Protection Centre (NCIIPC). Similarly, a Computer Emergency Response Team (CERT-In) has been designated to act as the nodal agency for coordination of crisis management efforts,” explained Linganna.
Branding expert Harish Bijoor, founder of Harish Bijoor Consults Inc, is of the view that the US Supreme Court's ruling is a win for technology and social media companies, as it reaffirms their image and, in practical terms, their status as platforms that host the good, the bad and the ugly equally. “These US Supreme Court judgments reduce liability for sure. I do believe this must enthuse companies such as Google and Twitter to work on algorithms that detect and restrict content that is harmful to society from the perspective of terror,” said Bijoor.