Challenges Faced by Social Media in Curbing Misinformation

The spread of misinformation, especially during the pandemic, has posed several challenges for social media companies. The structure of their algorithms and the exploitation of human biases have exacerbated the impact of misinformation. Despite the efforts of social media companies, misinformation has continued to grow at an enormous scale, further endangering lives during the pandemic.

The Influence of Big Data in the Intelligence Cycle

Big Data brings innovative technological progress to the intelligence cycle: it strengthens the collection stage, introduces the correlational method of analysis, and facilitates the dissemination of data to final consumers. However, Big Data also presents challenges and risks, as human consciousness and expert participation remain essential to ensure the intelligence cycle’s effectiveness.

by Alejandra Bringas Colmenarejo

The inclusion of Big Data (BD) in the intelligence cycle has entailed a great advance, since it introduced objective and quantitative methods into a discipline highly characterised by its subjectivity. In this sense, BD attempts to reduce intelligence uncertainty through the collection of a huge volume of data and the identification of hidden correlations unobservable in smaller samples. However, while BD is a beneficial technological advance for the intelligence cycle, it also leads to deep controversy, given that policymakers may be tempted to replace expert knowledge and intelligence analysis with raw BD assets and correlations [1].

BD “represents the Information assets characterized by such a High Volume, Velocity and Variety to require specific Technology and Analytical Methods for its transformation into value” [2]. Consequently, BD is defined by the extremely large quantity of information collected in real time and in continuous flows. Such information includes structured and unstructured data: traditional processed numeric and text databases, as well as unprocessed formats such as images, audio, video, tweets and emails [3]. Furthermore, BD also encompasses the technologies necessary to collect, manipulate, compare and analyse this bulk data and to transform it into a reasoned intelligence assessment [4].

The inclusion of BD in the intelligence cycle presents several challenges, since it moves beyond information, knowledge, causality and context to centre attention on correlations [5]. Once its veracity and validity have been determined, the data collected from different sources is analysed to predict, determine or even prevent future scenarios, actions and behaviours [6]. Consequently, BD intelligence analysis is “the process of examining and interrogating Big Data assets to derive insights of value for decision making in a quasi-immediate response” [7]. However, this progress entails risks and challenges, since the increasing dependence on gathering technologies, as well as the enormous quantity of data collected, could result in overconfidence in technology and a neglect of human capabilities.

Regarding intelligence collection, BD improves the inductive approach that attempts to recognize long-term trends, patterns and anomalies [8]. Different algorithms and informatics tools enable the automation of the collection, storage, management and transmission of data. This automation decreases the dependence on manual processes and facilitates continuous flows of data [9], which strengthens the analysts’ capabilities to discover intelligence gaps or unusual behaviours. However, to avoid a paralysis of the intelligence process, it is essential that the algorithms used are effective in selecting valid and useful data from the vast raw data collected [10].
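
The automated triage step described here can be illustrated with a minimal sketch: a statistical filter that lets routine observations pass and surfaces only anomalies to an analyst. This is a toy illustration with invented data and thresholds, not a description of any agency’s actual tooling.

```python
from statistics import mean, stdev

def filter_stream(records, threshold=3.0, warmup=10):
    """Surface only observations that deviate sharply from the running baseline."""
    baseline, flagged = [], []
    for value in records:
        if len(baseline) >= warmup:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                flagged.append(value)  # unusual: pass to a human analyst
        baseline.append(value)
    return flagged

# 100 routine readings plus one clear anomaly
stream = [10.0 + 0.1 * (i % 5) for i in range(100)] + [50.0]
print(filter_stream(stream))  # only the anomaly survives the filter
```

The point of the sketch is proportionality: almost all of the raw flow is discarded automatically, and human attention is reserved for the handful of observations the algorithm cannot explain.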

BD also allows intelligence analysts to generate and refute hypotheses. BD analysis appears quite inductive, since it refers to past events and historical patterns to answer the question of ‘what is happening’. However, the value of BD lies in correlation and the identification of hidden events and circumstances, so that realities which may not be evident or observable become available to the intelligence analyst. Consequently, filtering valid information from the massive quantity of data allows analysts to support their speculations with facts or to reject a previously confirmed hypothesis [11]. The quick, real-time collection of data, as well as its long-term storage, provides analysts with the evidence needed to develop informed and predictive intelligence hypotheses. Despite this, the BD correlation process can also identify patterns and realities that, extrapolated from their specific context, are useless or merely coincidental. Intelligence agents should therefore use BD correlations carefully, as without appropriate expert analysis they could point to irrelevant events or unconnected behaviours [12].
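
The risk of coincidental correlations is easy to demonstrate. The sketch below is purely illustrative: it searches fifty pairs of synthetic, completely unrelated random walks and still finds a pair that appears strongly correlated, which is exactly the kind of pattern an analyst must treat with caution.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(steps=200):
    """A series with no signal at all: cumulative sum of Gaussian noise."""
    pos, walk = 0.0, []
    for _ in range(steps):
        pos += random.gauss(0, 1)
        walk.append(pos)
    return walk

random.seed(0)
# 50 pairs of series with no causal connection whatsoever
best = max(abs(pearson(random_walk(), random_walk())) for _ in range(50))
print(f"strongest 'pattern' among unrelated series: r = {best:.2f}")
```

Given enough unrelated series, a striking correlation will always turn up by chance alone; only contextual expertise distinguishes it from a genuine lead.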

Despite the massive volume of data gathered by intelligence actors, some information remains unknown and excluded from the correlation process because of its secrecy or restricted access. In this context, non-state data collectors, such as social media platforms, marketing agencies or private companies, collect and store information that intelligence actors can buy to fill the information gap. Nevertheless, the veracity and accuracy of this information remain dependent on the initial collectors [13]. As a result, data provided by private actors could involuntarily impair the effectiveness of the intelligence process, or could be maliciously corrupted, manipulated and counterfeited to deliberately influence the final intelligence assessment [14].

In this manner, BD remains dependent on human capabilities, because it still lacks the creativity, consciousness and judgement needed to contextualise new correlations within a broader analytical framework [15]. The limitations of BD must be fully understood in order to avoid misinterpretations and misunderstandings of reality. BD needs expert analysts who are able to identify mere coincidences and to account for the unpredictable behaviour of human beings.

Concerning the relationship between intelligence analysts and consumers, BD could play different roles. It could help disseminate relevant intelligence assessments to the appropriate consumers, facilitating well-informed analysis and decision-making. Despite this progress in the dissemination stage, intelligence consumers may be sceptical about the veracity and validity of BD’s correlations. Consequently, they could ask for in-depth explanations of the patterns, or even become reluctant to authorise action or enact policies supported by BD analysis [16]. Alternatively, consumers may be tempted to use raw data, without the necessary subsequent analysis, to support their own interests and purposes, contrary to the effectiveness of the intelligence cycle [17].

The challenges introduced by Big Data in the intelligence cycle are part of the existential debate between humans and technology, and a logical consequence of the sheer speed of technological advances. Nevertheless, an even greater intelligence revolution could result from the next technological step: the autonomy of artificial intelligence (AI). AI would collect BD in real time, develop the consequent intelligence analysis and finally disseminate a reasoned assessment. Future BD analysis and AI may be able to reduce uncertainty and solve intelligence puzzles. However, the challenges and risks associated with this kind of technology are also undeniable, since the human element in the intelligence cycle would be reduced to that of mere intelligence consumer. At present, BD does not possess human consciousness; however, full autonomy could become a reality in the near future [18].

Sources:

[1] Van Puyvelde, Damien, Stephen Coulthart, and M. Shahriar Hossain. “Beyond the buzzword: big data and national security decision-making.” International Affairs, 2017: 1397-1416.

[2] De Mauro, Andrea, Michele Grimaldi, and Marco Greco. (2014) “What is Big Data? A Consensual Definition and a Review of Key Research Topics.” 4th International Conference on Integrated Information. AIP Proceedings, pp. 1-11.

[3] Normandeau, K. (2013, September 12). Beyond Volume, Variety and Velocity is the Issue of Big Data Veracity. Available at https://insidebigdata.com/2013/09/12/beyond-volume-variety-velocity-issue-big-data-veracity/

[4] Boyd, D., & Crawford, K. (2012). Critical Questions for Big Data. Information, Communication and Society, pp. 662-678.

[5] Landon-Murray, M. (2016). Big Data and Intelligence: Applications, Human Capital, and Education. Journal of Strategic Security, 9(2), p.92-121.

[6] Lyon, D. (2014, July-December). Surveillance, Snowden, and Big Data: Capacities, consequences, critique. Big Data & Society, p.1-13. doi: 10.1177/2053951714541861

[7] Couch, N., & Robins, B. (2013). Big Data for Defence and Security. Royal United Services Institute for Defence and Security Studies, p.6.

[8] Lim, K. (2015). Big Data and Strategic Studies. Intelligence and National Security, p.619-635.

[9] Symon, P. B., & Tarapore, A. (2015). Defense Intelligence Analysis in the Age of Big Data. Joint Force Quarterly 79, p. 4-12

[10] Couch & Robins, p.9

[11] Lim, p. 636

[12] Landon-Murray, p.94

[13] Zwitter, A. (2015) Big Data and International Relations. Ethics & International Affairs, 29, no 4, pp. 377-389.

[14] Symon & Tarapore, p. 9.

[15] Dyndal, G. L., Berntsen, T. A., & Redse-Johansen, S. (2017, 28 July). Autonomous military drones: no longer science fiction. Available at NATO Review Magazine: https://www.nato.int/docu/review/2017/also-in-2017/autonomous-military-drones-no-longer-science-fiction/en/index.htm

[16] Landon-Murray, p.101.

[17] Jani, K. (2016). The Promise and Prejudice of Big Data in Intelligence Community. Sam Nunn School of International Affairs, p.14.

[18] Dyndal, Berntsen & Redse-Johansen.


Online Political Microtargeting in the United States

Online political microtargeting is personalised advertising aimed at the voters who are on the fence in a campaign and are thus most susceptible to personalised political advertisements. In the US, microtargeting allows political campaigns to target swing states, which fluctuate between supporting Democrats and Republicans and carry considerable weight in the outcome of an election.

By Agniete Pocyte

‘Political elites do not employ new communication channels with the aim of citizen empowerment, greater democratic deliberation, or any other normative goals’ [1]. The goal of investing in new media communication tools is to win elections.

Online political microtargeting is personalised advertising which targets voters based on the predictions of an algorithmic model built from publicly available and private data [2]. Facebook is the most popular advertising platform: nearly three-quarters of American adults use Facebook, and 44% of the adult population cite it as one of their news sources [3]. Although Facebook is not the only social media site that functions as a news source, it is by far the largest [4].

Despite the focus on President Trump’s 2016 campaign, George W. Bush made use of similar, albeit less sophisticated, microtargeting. In 2004, Bush’s presidential campaign bought data on 5.7 million Michigan consumers from Acxiom, one of the world’s largest data brokers, and merged it with its own polling information to categorise Michigan voters into 34 ‘microtargeting segments’ [5]. With this information, the campaign created advertisements and scripted messages targeted at these narrow categories of voters through telephone and direct-mail messages. Mitt Romney’s 2012 US presidential campaign used micro-categories to target undecided voters with advertisements that emphasised different aspects of his campaign. Zac Moffet, the digital director of Romney’s 2012 campaign, stated: ‘two people in the same house could get different messages. Not only will the message change, the type of content will change’ [6].
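
The segmentation step these campaigns relied on, grouping voters with similar data profiles so that each segment can receive a tailored message, can be sketched with plain k-means clustering. The voter features and the number of segments below are invented placeholders for illustration, not the actual Acxiom fields or the 34 segments used in 2004.

```python
import random

def kmeans(points, k, iters=20):
    """Plain k-means: partition profiles into k segments by similarity."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each voter to the nearest segment centre
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        # recompute each centre as the mean of its members
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

random.seed(1)
# Hypothetical voter profiles: (consumer-data score, polling lean), both in [0, 1]
voters = [(random.random(), random.random()) for _ in range(300)]
centroids, segments = kmeans(voters, k=4)
print([len(s) for s in segments])  # every voter lands in exactly one segment
```

Once voters are partitioned this way, a campaign can draft a different script per segment, which is the mechanism behind ‘two people in the same house’ receiving different messages.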

A microtargeting strategy will rarely target more than a small portion of the voting population, because most of the population is either set on voting for a particular candidate or extremely unlikely to vote. By targeting the voters who are on the fence in a campaign, and who are thus most susceptible to personalised political advertisements, microtargeting becomes a cost-effective strategy. Most importantly in the US, microtargeting allows political campaigns to target swing states, which fluctuate between supporting Democrats and Republicans and carry considerable weight in the outcome of an election. The number of contested swing states has dwindled over recent decades [7]: in 1976, 20 states were won by a margin of less than 5%; this number dropped to 11 states in 2004 and to just seven (Florida, Ohio, Virginia, New Hampshire, Wisconsin, Iowa, and Colorado) in 2008. The fact that US presidential elections are fought over ‘relatively small margins in a handful of states sets up conditions for continued importance of fine-grained tactical efforts’ to persuade a select group of voters [8]. That being said, ‘political elites do not employ new communication channels with the aim of citizen empowerment, greater democratic deliberation, or any other normative goals’ [9]. The goal of investing in new media communication tools is to win elections.

Although political microtargeting purports to engage with voters in a more relevant fashion, the threats it poses to individual privacy, the electorate, and democracy outweigh the benefits. American voters do not have adequate control of their data and cannot dictate who uses it. Many organisations, including political campaigns, are under no obligation to protect users’ information privacy and political privacy. Moreover, microtargeting practices suppress certain voter populations and exacerbate the effects of the ‘filter bubble’ by channeling voters into informational silos. Because the messages in political ads are highly personalised, thousands of variations of the same ad exist to maximise voter receptiveness, and political campaigns do not publish a database of these variations, which makes it difficult for journalists and the general public to investigate the honesty of a particular campaign. Third parties, including social media companies, data brokers, and data-analytics firms, are unregulated and possess a questionable amount of political power if the effects of microtargeting are as extreme as campaign managers purport. Regulation is difficult to implement due to alleged conflicts with freedoms of speech and expression and the lack of empirical evidence surrounding the effects of microtargeting. Technology has outgrown regulation, and it is vital that not only policymakers but also voters keep the possible threats of microtargeting in mind.

N.B. the ‘filter bubble’ is ‘the intellectual isolation that can occur when websites make use of algorithms to selectively assume the information a user would want to see, and then give information to the user according to this assumption’ [10].

Sources:

[1] Bimber, B. (2014). Digital media in the Obama campaigns of 2008 and 2012: Adaptation to the personalized political communication environment. Journal of Information Technology & Politics, 11(2), p.146.

[2] Gorton, W. A. (2016). Manipulating Citizens: How Political Campaigns’ Use of Behavioral Social Science Harms Democracy. New Political Science, 38(1), 61-80.

[3] Gottfried, J., & Shearer, E. (2016). News Use Across Social Media Platforms 2016. Pew Research Center’s Journalism Project. Retrieved 2 May 2018, from http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/
[4] Ibid.

[5] Gorton, W. A. (2016). Manipulating Citizens: How Political Campaigns’ Use of Behavioral Social Science Harms Democracy. New Political Science, 38(1), 61-80

[6] Ibid.

[7] Bimber, B. (2014). Digital media in the Obama campaigns of 2008 and 2012: Adaptation to the personalized political communication environment. Journal of Information Technology & Politics, 11(2), p.146.

[8] Ibid, p. 144

[9] Ibid, p146

[10] Techopedia. (2018). What is a Filter Bubble? – Definition from Techopedia. [online]. Available at: https://www.techopedia.com/definition/28556/filter-bubble [Accessed 30 Aug. 2018]

Author’s further reading:

[1] Borgesius, F. J., Moller, J., Kruikemeier, S., Fathaigh, R. Ó., Irion, K., Dobber, T., … & de Vreese, C. (2018). Online Political Microtargeting: Promises and Threats for Democracy. Utrecht L. Rev., 14, 82.

[2] Ienca, M. (2017). Do We Have a Right to Mental Privacy and Cognitive Liberty?. Scientific American Blog Network. Retrieved 2 May 2018, from https://blogs.scientificamerican.com/observations/do-we-have-a-right-to-mental-privacy-and-cognitive-liberty/

[3] Tenove, C., Buffie, J., McKay, S., & Moscrop, D. (2018). How Foreign Actors Use Digital Techniques to Undermine Democracy. Centre for the Study of Democratic Institutions, UBC.