Challenges Faced by Social Media in Curbing Misinformation

Abstract

The spread of misinformation, especially during the COVID-19 pandemic, has posed several challenges for social media companies. The structure of their algorithms and the exploitation of human cognitive biases have exacerbated the impact of misinformation. Despite the efforts of social media companies, misinformation has continued to grow at an enormous scale, further endangering lives during the pandemic.

 By Sharon Jose


Introduction

Health misinformation has a long history, and with the onset of COVID-19, social media has covered this pandemic more extensively than any previous health crisis. Social media has been an important tool for information generation, dissemination, and consumption during the pandemic [1]. However, it has also been a tool for spreading misinformation. In fact, Avaaz finds that “global health misinformation spreading networks on Facebook reached an estimated 3.8 billion views in the last year spanning at least five countries — the United States, the UK, France, Germany, and Italy” [2].

Within cyberspace, social media has served as a medium for self-expression that is mostly free from governance. This space has given a voice to marginalised sections of society and helped increase collective empathy. Yet challenges with privacy, surveillance, and control have also enabled social media to serve as a platform for cyber-crime and spurred the polarisation of society [3]. Technological factors, such as ranking algorithms, and cognitive factors, such as the biases that shape how human brains process information, can make social media a breeding ground for misinformation. The interaction between these two aspects complicates the regulation of misinformation.

 

The Challenge of Algorithms

The basic functioning and structure of social media play a major role in spreading misinformation. Social media companies have not been transparent about their algorithms, so it is difficult to gauge the extent to which their amplification systems have had an impact [4]. However, they have revealed that their platforms enable wide engagement with misinformation, as their algorithms reward content that draws the most attention. Highly viewed content is prioritised as ‘relevant’ and its circulation is further amplified [5]. Such content often includes misinformation, which targets existing grievances and is frequently inflammatory enough to draw attention. This would not be inherently problematic, but because the algorithms prioritise engaging content regardless of accuracy, the process can threaten lives. During the COVID-19 pandemic, for example, false claims about the effectiveness of vaccines, quarantining, and alternative cures received the largest number of views, leading people to disobey lockdown rules and depend on phoney cures, some of them harmful. Due to the attention-grabbing nature of conspiracy theories, “misinformation websites received almost 4 times as many views as sites by certified health organizations like the World Health Organization (WHO) and Centers for Disease Control (CDC) in 2020” [6]. The algorithms also direct content to those who show interest in it, concentrating misinformation posts in echo chambers. These algorithms, which underpin the basic structure of social media platforms, threaten credible information.
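The dynamic described above can be illustrated with a minimal sketch. This is not any platform's real code; the `Post` fields, the weights, and the scoring function are illustrative assumptions. The point it demonstrates is structural: a ranker that scores only attention signals never consults accuracy, so inflammatory false content can outrank credible content.

```python
# Illustrative sketch of engagement-based ranking (hypothetical, not any
# platform's actual algorithm). Accuracy is known to the example but is
# deliberately invisible to the ranker.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    accurate: bool  # ground truth for the example; the ranker never reads it

def engagement_score(p: Post) -> float:
    # Shares and comments are weighted above likes because they propagate
    # content further; the weights (1, 3, 5) are arbitrary assumptions.
    return p.likes + 3 * p.comments + 5 * p.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Ordering depends only on engagement; 'accurate' plays no role.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("WHO vaccine guidance", likes=120, shares=10, comments=15, accurate=True),
    Post("Miracle cure claim", likes=300, shares=80, comments=90, accurate=False),
])
# The inflammatory, inaccurate post ranks first despite being false.
```

Because the scoring function optimises for attention alone, any fix (warning labels, downranking) has to be bolted on after the fact, which is exactly the gap the next section describes.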

Social media platforms have taken several measures to control misinformation, especially since the 2016 US elections exposed how social media can be used to manipulate people. Major platforms such as Facebook, Instagram, and Twitter have introduced tools to flag content as false and have made their data on advertising revenue more transparent [7]. Despite these policy changes, however, misinformation continues to proliferate. Facebook, for example, has claimed that flagging content reduces its viewership by 80%. Unfortunately, by the time false content is reviewed by fact-checkers, it has usually already been seen by millions of people [8]. Worse, Avaaz reports that only 16% of identified health misinformation on Facebook receives a warning label [9].
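Combining the two figures above gives a rough sense of why labelling falls short. This is back-of-the-envelope arithmetic only: it assumes the 80% reduction applies uniformly to labelled posts and ignores views accrued before review, so the true effect is likely even smaller.

```python
# Rough arithmetic combining the two figures cited in the text:
# labelled posts see ~80% fewer views, but only ~16% of identified
# health misinformation is labelled at all.
label_coverage = 0.16   # share of misinformation that gets a warning label
view_reduction = 0.80   # claimed drop in views once a post is labelled

# Expected fraction of all misinformation views prevented by labelling.
overall_reduction = label_coverage * view_reduction
print(f"{overall_reduction:.1%}")  # prints 12.8%
```

Under these assumptions, labelling would suppress only about one in eight misinformation views, which is consistent with the essay's observation that the measures have not stemmed the overall flow.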

 

The Challenge of Cognitive Biases

While the amplification systems of these social media companies play a major role in the spread of misinformation, its impact and influence on such a large population have more to do with the ways in which human minds can be manipulated. The impact of misinformation depends on how the brain processes information, and one factor that shapes this processing is confirmation bias. The algorithms aid misinformation spreaders in exploiting the confirmation bias of users: people are easily swayed by the emotional connotations of articles, and advertising tools are built to tailor posts to those inclined to believe them [10]. People are more likely to trust or accept information that upholds their existing beliefs or prejudices and to disregard information that challenges those beliefs. For instance, following remarks by a Chinese foreign ministry spokesperson during the pandemic, several social media posts spread misinformation about the US army’s supposed role in introducing the virus to China. People who are hostile to the US are more likely to believe such content [11].

 

Apart from confirmation bias, misinformation also feeds on public uncertainty. For instance, some spirituality-focused Facebook groups posted content encouraging people to discover the truth for themselves and watch out for ‘evil forces’ [12]. People are inclined to believe misinformation spreaders who urge them to question everything they know and to judge for themselves. Moreover, misinformation posts by such wellness Facebook groups, coupled with appeals to freedom of thought and people’s attachment to spiritual ideas, instil distrust in credible institutions. Similarly, when the political parties people favour spread misinformation, the resulting sense of having been misled erodes overall public trust. This mistrust persists: according to the Edelman Trust Barometer, “57% of people believe government leaders, business chiefs and journalists are spreading falsehoods or exaggerations” [13]. Even so, the same report finds that traditional media is trusted 18 points more than social media, despite widespread distrust of journalists, governments, and other institutions. Traditional institutions have therefore not entirely lost their power to inform people. However, when journalists and government officials themselves spread misinformation and contradict their peers, even those who place greater trust in traditional media can be misinformed, since traditional media can falter. Research reported by Reuters on the sources of pandemic misinformation found that politicians, celebrities, and other public figures made up only 20% of misinformation spreaders on social media, yet their posts accounted for 69% of social media engagement [14]. Misinformation spread by influencers or government officials therefore has a wide reach and can damage the credibility of institutions.
People are also more likely to trust information shared by their personal contacts without fact-checking it, so many remain ill-informed, as social media often trades on emotional and personal affiliations. In these ways, confirmation bias and growing uncertainty make human brains susceptible to believing misinformation. Even when people encounter accurate information after viewing misinformation, confirmation bias and a lack of trust in institutions lead them to disregard it. It is therefore difficult to end these trends through the intervention of social media companies alone.

 

Conclusion

Health misinformation can be disastrous, and the global spread of the COVID-19 pandemic has widened the circle of people affected by it. Although social media platforms have attempted to control misinformation, they have fallen short of making a significant positive impact. In breaking down the complex elements that create today’s misinformed society, two main factors stand out: the nature of algorithmic systems and the nature of cognitive biases. The amplification systems of social media algorithms complement cognitive biases in aggravating the impact of misinformation. As seen during the pandemic, misinformation exploits confirmation bias and is inflammatory in nature. Information processing is also influenced by levels of uncertainty, leading people to believe misinformation in their search for answers. The amplification systems of algorithms then help that misinformation reach a wider audience. Fact-checkers and AI have been racing against the enormous reach of health misinformation, but accurate information fails to reach audiences before misinformation does, owing to the nature of algorithms and of human information processing. These technological and psychological factors, which extend the reach and impact of misinformation, have made its regulation very challenging. Future research could build on these elements to develop solutions to the challenges of regulating misinformation. Concrete steps are needed to improve trust in credible institutions and ensure a better-informed society.

References

[1] Tsao, S., Chen, H., & Tisseverasinghe, T. (2020) “What social media told us in the time of COVID-19: a scoping review” The Lancet, pp. 186-189.

[2] Avaaz (2020) “How Facebook can Flatten the Curve of the Coronavirus Infodemic” Avaaz, p. 2.

[3] Loader, B. (1997) “The Governance of Cyberspace: Politics, technology and global restructuring” Routledge, pp. 1-35.

[4] Lomas, N. (2020) “Facebook’s latest ‘transparency’ tool doesn’t offer much — so we went digging” TechCrunch, 25 February.

[5] Meserole, C. (2018) “How misinformation spreads on social media—And what to do about it” Brookings, 9 May.

[6] Avaaz (2020) “How Facebook can Flatten the Curve of the Coronavirus Infodemic” Avaaz, p. 2.

[7] Conklin, A. (2020) “How Facebook, Twitter policies have changed since 2016 in the name of voter integrity” Fox Business, 28 October.

[8] Silverman, C. (2017) “Facebook Says Its Fact Checking Program Helps Reduce The Spread Of A Fake Story By 80%” BuzzFeed News, 11 October.

[9] Avaaz (2020) “How Facebook can Flatten the Curve of the Coronavirus Infodemic” Avaaz, p. 2.

[10] Ciampaglia, G., & Menczer, F. (n.d.) “Biases Make People Vulnerable to Misinformation Spread by Social Media” Scientific American.

[11] Britt, R. (2020) “Why People Won’t Change Their Minds on Covid-19” Medium, 1 June.

[12] Silverton, L. (2020) “The Rise of Conspirituality in the Age of COVID” Buro247, 27 December.

[13] John, M. (2021) “Public Trust Crumbles under COVID-19, Fake News - Survey” Reuters, 14 January.

[14] John, M. (2021) “Public Trust Crumbles amid COVID, Fake News - Survey” Reuters, 13 January.