Cyber Security and Emerging Threats

Forums or Publishers? Social Media Platforms under Stricter Content Policies

Over the last decade, critics of social media providers have complained about how these companies handle the content published on their platforms, especially regarding hateful online speech, the spread of misinformation, and even the promotion of terrorism. In 2017, The Economist characterized these growing concerns as a “global techlash”. While these big tech companies began implementing content policies a few years ago, recent developments seem to be cementing this trend. Twitter made the first major move last May, introducing fact-checking and warning labels encouraging users to “get the facts.” The company took these measures in response to U.S. President Donald Trump’s “potentially misleading” messages about voting by mail. Trump’s tweets were not the only ones labelled by the company. Several politicians and heads of state saw their tweets labelled or removed, most notably Osmar Terra, a Brazilian politician, and Jair Bolsonaro and Nicolás Maduro, the presidents of Brazil and Venezuela respectively, for inaccurate or misleading comments about COVID-19 and supposed cures for the virus.

However, it was the reinvigoration of the Black Lives Matter movement following George Floyd’s murder that hastened the measures taken by social media companies to remove what they deemed hateful speech and racial discrimination from their platforms. On May 29th, after a controversial tweet from Trump’s account implying that protesters and looters in Minneapolis could be shot, Twitter decided for the first time to restrict access to one of Trump’s messages. While the tweet was not removed, as it might be of public interest, the company warned users that the message violated “Twitter Rules about glorifying violence” and could not be replied to or retweeted. Twitter’s measures were followed by other companies such as Twitch, the live-streaming platform owned by Amazon, which imposed a two-week ban on Trump’s channel for “hateful conduct”. Reddit also tightened its content policy to ban hate speech and harassment from its forums more aggressively. The platform recently banned the subreddit r/The_Donald – a forum dedicated to discussions about the U.S. President and notorious for its promotion of racism, anti-Semitism and conspiracy theories – along with roughly 2,000 other communities. As for YouTube, the company banned several prominent white supremacist channels, most notably those of Stefan Molyneux, a Canadian white nationalist activist, and David Duke, former leader of the Ku Klux Klan. The platform also tightened its policy on anti-Semitic and revisionist content, banning creators who denied the Holocaust, an act criminalized in 17 countries.

In contrast, Facebook has been much more reluctant to take part in this movement. Trump’s message about the protests in Minneapolis, also posted on Facebook, was not removed by the company, nor labelled as it had been on Twitter. The company’s founder and chief executive, Mark Zuckerberg, justified this decision on Fox News, saying he was uncomfortable with Facebook’s being “the arbiter of truth of everything that people say online.” This position was not well received and gave rise to the Stop Hate for Profit campaign, which aimed to persuade companies to stop advertising on Facebook. The campaign was joined by major companies such as Coca-Cola, Verizon and Unilever, as well as the top five Canadian banks: RBC, TD, CIBC, BMO and Scotiabank. In response to the boycott, on June 26th, Facebook introduced labels to provide more context to political content posted on its platform and said it would expand its rules against hateful speech.

The controversy around Facebook’s stance falls within the broader debate over whether social media platforms have a responsibility to restrict hateful speech and disinformation or, on the contrary, should not filter the content generated by their users in the name of free speech. This question raises the issue of their nature and, therefore, their legal status: should these platforms still be considered intermediaries that merely distribute the content generated by their users, or publishers that filter this content?

This issue was addressed in Trump’s executive order of May 28th, which aimed to remove the legal shield, provided by Section 230 of the Communications Decency Act of 1996, that protects tech companies from liability for the content published on their platforms. According to this section, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Issued in response to Twitter’s decision to add a fact-check label to some of the president’s tweets, the executive order was meant to “prevent online censorship,” stipulating that online platforms should be liable like “any traditional editor and publisher.” “If Twitter wants to editorialize & comment on users’ posts, it should be divested of its special status under federal law (Section 230) & forced to play by same rules as all other publishers,” commented Josh Hawley, the Republican Senator from Missouri, on Twitter.

Fundamentally, content moderation on social media resembles publishing, as it relies on “media-like editorial processes” of “content review, assessment, approval or denial of publication and control over creators’ potential to generate advertising revenue.” However, these big tech firms remain very different from traditional editors and publishers. They differ, first, in the “user-led nature of content generation.” Unlike traditional media companies, social media platforms do not commission content from authors or journalists; it is generated by the users themselves, which also raises the issue of the sheer scale of information these platforms facilitate. Given the volume of content created by users, these editorial processes cannot be carried out by individuals alone and must rely on algorithms. Moreover, social media is used globally, making content regulation across time zones and cultural contexts even more difficult, since these companies have to deal with national specificities in media regulation and legislation. For example, the legal protection provided to online platforms by Section 230 of the Communications Decency Act is almost exclusive to the United States. In contrast to this “broad immunity,” social media platforms face “conditional liability” in Europe and South America, and “strict liability” in China and the Middle East.

With stricter content policies, social media platforms seem to be gradually moving away from being simple online forums without quite joining the ranks of traditional media and publishers. Caught between these two categories, these companies now have to reassess their status on the Internet. This shift is especially problematic for existing media legislation, which may need to adapt as these evolving platforms face unprecedented challenges in content moderation.


Cover image: President Trump signs an executive order on preventing online censorship, official White House photo by Shealah Craighead (2020) via Flickr. Public Domain.


Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Chloé Ketels
Chloé Ketels is an intern at the NATO Association of Canada and serves as the program editor for the Emerging Security Program. She is currently completing her Master's degree in International Studies at the University of Montreal, with a focus on Cultures, Conflicts and Peace Studies. Originally from France, she also holds a double-major Bachelor's degree in Political Science and History, earned in 2018 from the Université Paris 1 Panthéon-Sorbonne. In January 2020, she became a contributor to Horizons Stratégiques, a young, independent think-tank, for which she writes on international security and foreign affairs issues. She has a keen interest in international relations, history, and conflict prevention and resolution. She can be contacted by e-mail at ketels.chloe@gmail.com.