Authority and misinformation in the process of COVID-19 sensemaking

Team Members

Emillie de Keulenaar, Ivan Kisjes and Carlo de Gaetano

based on previous work by Emillie de Keulenaar, Ivan Kisjes, Rory Smith, Carina Albrecht, Eleonora Cappuccio and visualisations by Guillermo Appolinário

Contents

1. Introduction

Phenomena like COVID-19 have been characterized by their uncertainty (Yong, 2020). As new information on the epidemiological nature of the disease and its impact on public safety evolves, so do claims about which objective facts constitute it. Heads of state, health organizations and the public have frequently been divided on such claims: whether asymptomatic people can contaminate others, whether one should wear a mask, or whether children can be contagious (Iati et al., 2020; O'Leary, 2020). This inconsistency can reportedly erode trust between the public, governments and (health) organizations (Starbird, 2020), possibly leading the public to rely on increasingly divergent understandings of the pandemic (Bordia and Difonzo, 2004; Bostrom et al., 2015; Starbird et al., 2016).

In the midst of this uncertainty, social media and reference platforms have been tasked with ensuring that their users maintain consensus (Skopeliti and John, 2020). Since the early months of 2020, Google Search, YouTube, Facebook, Twitter and Reddit have set up centralized access points to information related to COVID-19, all provided by local and international "authoritative sources" (Skopeliti and John, 2020). Such efforts respond to requests to ramp up moderation of falsehoods and other "problematic information" at a time when maintaining public consensus is vital for citizens' safety. Though some stakeholders continue to demand more radical platform redesigns (Dwoskin, 2020), more modest measures include: temporarily disabling the personalisation of Newsfeeds; flagging contents (tweets, posts, videos) that disseminate contested claims (Lyons, 2020); demoting "borderline" or suspicious contents like conspiracy theories while raising "authoritative contents" to the top of search and recommendation results (The YouTube Team, 2019); or deleting materials that pose a danger to public health, such as anti-vaccination claims or alternative medication (YouTube, 2020).

While content moderation has never been a novelty for platforms (Gillespie (2018) goes so far as to define platforms by and for their moderation), many see the above-mentioned measures as tantamount to censorship, or bias at the very least (Jiang, Robertson and Wilson, 2019; Lee, 2020). Arguably, moderation requires that platforms actively intervene in ongoing deliberations around what constitutes reality, by sorting, ranking and deleting information that steps outside the boundaries of common sense (Rieder, 2017). And with contents produced by an already polarized user base, moderation can become "essentially contested" (de Laat, 2012). To what extent, then, does moderating misinformation help re-establish public consensus? Which substantiations of objectivity and factuality do platforms support over time?

This paper traces Twitter's moderation of disputed COVID-19 misinformation from March to June 2020. Using a sample of 3 million Tweets that mention #covid or #coronavirus, I combine close and "distant" reading techniques (namely, natural language processing) to assess how information about COVID-19 transmission, prevention and treatments is disputed between American and international authoritative sources (U.S. President Donald Trump, the Centers for Disease Control and Prevention, the National Institutes of Health and the World Health Organisation, respectively) and their Twitter audiences. I then assess how Twitter intervenes in such disputes through its moderation of COVID-19 misinformation, namely the labeling, suspension and deletion of Tweets that mention non-authoritative claims. I first chart a brief web history of Twitter's "COVID-19 Misleading Information Policy" (Twitter, 2021) with the Internet Archive's Wayback Machine, and then scrape moderation metadata off non-authoritative Tweets using Selenium.

3. Research Questions

  1. How did Twitter's content moderation affect ongoing debates amongst authoritative sources and users on what constitutes COVID-19 treatments, protection and transmission?
  2. How did the COVID-19 crisis affect Twitter’s content moderation policies?

4. Literature review

Social and knowledge platforms like Facebook, Twitter, YouTube and Wikipedia are perceived as modelling themselves after "nominally open societies" (Gillespie, 2018), distinguishing themselves as democratic, "participatory" alternatives to mass media (Langlois, 2013). Following myriad controversies, including harassment campaigns (Jeong, 2019), fake news-mediated disseminations of conspiracist narratives (Venturini et al., 2018), and inter-ethnic violence up to and including genocide (Mozur, 2018), these platforms have however taken a more proactive role in moderating their user base. This has manifested in the implementation of top-down anti-misinformation and hate speech measures (Gordon, 2017), including flagging and eventually suspending local authorities (Brazilian and U.S. Presidents Jair Bolsonaro and Donald Trump), demoting problematic contents in recommendation and search results (Constine, 2019), banning users linked to hate speech and conspiracy theories, or redirecting them to educational material in an effort to "deradicalize" them (The Redirect Method, 2016).

So far, however, research on content moderation and misinformation has focused primarily on content deletion, or "deplatforming". Observing that figures like Breitbart columnist Milo Yiannopoulos or conspiracy theorist Alex Jones have mellowed their language and seen their audiences thin after deplatforming (Rogers, 2020), such studies have given reason to believe that deplatforming is an effective strategy for policing hate speech (Chandrasekharan et al., 2020). This also applies to misinformation: journalistic reports and academic research have pointed to the dramatic loss of public attention for deplatformed contents, including Plandemic, a documentary claiming that COVID-19 was a planned hoax (Frenkel, Decker and Alba, 2020). Such studies are supported by a growing number of technical and legal analyses that argue for a sensible redesign of speech moderation in private companies: for example, "democratising" such techniques with collaborative or participative moderation (De Gregorio, 2020), delegating them to civil society (Elkin-Koren and Perel, 2020), or better detecting context to support decisions to sanction, quarantine, or delete user-generated contents (Wilson and Land, 2020).

In this sense, the majority of studies have focused on the moderation of extreme contents. This has tended to position scholarly debates within platform critique, an oftentimes policy-driven assessment of what (more) platforms could do to stifle, or what they inadvertently do to encourage, the production of misinformation. In comparison, relatively few studies have considered the effects or politics of moderating openly disputed or unknown matters, such as the epidemiological nature of a virus like COVID-19. More generally, some studies locate content moderation within historical debates over inherently normative disputes: determining what can and cannot be said amid ongoing battles of ideas as to what constitutes facts, truth, and offences to religion, race, gender and other subjects. For this reason, many have noticed an important shift in platforms' posture, from nominally neutral "intermediaries" of public speech to proactive arbiters of the normative conditions for expression (Gillespie, 2018). Platforms' interventions in public debate have solidified the perception that content moderation is "essentially contested" (de Laat, 2012, p. 125), pushing users to create expanding alternative or "alt-tech" infrastructures with looser speech affordances.

But while alt-tech infrastructures host decidedly extreme contents (child pornography, hate speech, violence), disputed information, such as the treatments, prevention and forms of transmission of COVID-19, poses a more complex challenge to content moderation. Unlike extreme contents, claims about COVID-19 have been disputed by authoritative and non-authoritative users alike (Iati et al., 2020; O'Leary, 2020). The U.S. public alone has seen then-U.S. President Donald Trump frequently contradict the CDC and the NIH, two equally authoritative institutions. Platforms' prioritisation of the World Health Organisation as an authoritative source has further exposed such divergences at an international level. Content moderation then implies both detecting misinformation as "non-authoritative" claims and qualifying the authority of authoritative sources. The latter implication became especially visible in January 2021, when Twitter took the unprecedented step of labelling, suspending and eventually banning then-U.S. President Donald Trump for repeated violations of its policies against the glorification of violence and for electoral integrity.

In this context, this study assesses how Twitter's content moderation policies and practices have framed disputed claims amongst authoritative and non-authoritative users. This implies studying COVID-19 misinformation as the product of poor consensus between authoritative sources and their social media audiences. Drawing partly from studies on collective sensemaking (Dailey & Starbird, 2015; Krafft et al., 2017) and rumors (Caplow, 1946; Shibutani, 1966), I use close reading and natural language processing to compare claims on COVID transmission, prevention and treatments by U.S. "authoritative sources" (defined as the head of state and principal health organisations) and their "audiences" (defined as the users who engage with or refer to the former on Twitter). I propose that, aside from corrective measures to ban misinformation, authorities and (social) media platforms could invest in affordances that facilitate consensus around disputed matters (Implication 2).

5. Methodology and initial datasets

The method of this study is two-fold. Based on a collection of millions of Tweets, I first parse, analyse and visualise diverging claims on COVID-19 transmission, prevention and treatments between U.S. authoritative sources and their respective audiences. I then look at how Twitter moderated disputed claims by first consulting content moderation policies designed for COVID-19 misinformation, and then obtaining moderation metadata from Tweets containing disputed contents.

Definitions

The U.S. has at least two channels responsible for communicating authoritative information on COVID-19: its head of state and its health departments or disease prevention agencies (Annex: Figure 1). Because Twitter prioritises the World Health Organisation as an authoritative source, I also captured data from that organisation's international and American offices. I refer to heads of state and public health organisations collectively as "authoritative sources", and to the W.H.O., health ministries, departments and disease prevention agencies as "public health organisations". By "audiences", I refer to users who respond to these actors in Twitter replies or who mention these actors' website domains. A minimal sketch of this split follows.
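Operationally, these definitions can be encoded as a simple labelling function. The sketch below is illustrative only: the handle and domain lists are assumptions standing in for the study's full source lists, and the field names mirror standard Twitter metadata.

```python
# Minimal sketch of the "authoritative source" vs "audience" split.
# Handle and domain lists are illustrative, not the study's full lists.
AUTHORITATIVE_HANDLES = {"realdonaldtrump", "whitehouse", "cdcgov", "nih", "who"}
AUTHORITATIVE_DOMAINS = ("whitehouse.gov", "cdc.gov", "nih.gov", "who.int")

def classify_tweet(screen_name: str, reply_to: str, text: str) -> str:
    """Label a tweet as 'authoritative', 'audience', or 'other'."""
    if screen_name.lower() in AUTHORITATIVE_HANDLES:
        return "authoritative"
    # Audiences: replies to authoritative accounts, or mentions of their domains.
    if reply_to and reply_to.lower() in AUTHORITATIVE_HANDLES:
        return "audience"
    if any(domain in text.lower() for domain in AUTHORITATIVE_DOMAINS):
        return "audience"
    return "other"
```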

By “claims” about the coronavirus, I mean information that can be confirmed as true or refuted as false by governments and health organisations. I focused on:
  1. how the virus is transmitted;
  2. available treatments;
  3. preventive methods.

Data collection

For data collection on Twitter, I used Rieder and Borra's Twitter Capture and Analysis Tool (Borra & Rieder, 2014), which collects tweets based on a chosen set of queries. The queries "covid", "coronavirus" and "WuhanVirus" captured a total of 61,498,037 tweets from January 26 to July 7, 2020. Of those, I extracted 910 tweets from government and public health organisations and 496,166 replies and mentions of official domains. In addition to Tweets, I also collected claims on COVID-19 transmission, prevention and treatment published by the CDC, the NIH and Donald Trump on their official websites (cdc.gov, nih.gov, whitehouse.gov). Information on Twitter's COVID-19 misinformation moderation policies came primarily from two sources: Twitter's blog on COVID-19 and its "COVID-19 Misleading Information Policy". From these, I was able to note what information they target and how they moderate it (suspension, labeling, deletion, etc.). I then obtained moderation metadata from Tweets that mentioned disputed claims using Selenium, a web interface automation tool, as sketched below.
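Moderation metadata is not exposed through the standard API, which is why it is scraped off the web interface. A minimal Selenium sketch along those lines might look as follows; the marker strings are assumptions about how Twitter rendered labels and takedown notices in 2020, not documented identifiers.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Strings Twitter showed in place of, or attached to, moderated Tweets.
# These exact phrasings are assumptions based on the 2020 interface.
MODERATION_MARKERS = [
    "Get the facts about COVID-19",           # label / prompt
    "This Tweet violated the Twitter Rules",  # labelled but kept up
    "This Tweet is unavailable",              # deleted
    "Account suspended",                      # suspension
]

def get_moderation_status(driver, tweet_url):
    """Load a Tweet permalink and return the first moderation marker found."""
    driver.get(tweet_url)
    body_text = driver.find_element(By.TAG_NAME, "body").text
    for marker in MODERATION_MARKERS:
        if marker in body_text:
            return marker
    return None  # no visible moderation metadata

driver = webdriver.Firefox()
status = get_moderation_status(driver, "https://twitter.com/user/status/123")
driver.quit()
```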

Parsing claims inductively and deductively

To map divergences in government, public health organisation and "audience" statements about COVID-19, I sought to capture and compare the widest possible range of claims about the transmission, prevention and treatment of the virus. I captured both true and false statements with a deductive and an inductive approach. The deductive approach consisted in consulting secondary sources on COVID-19 misinformation, such as Wikipedia (Annex: Figure 3). The inductive approach consisted in the manual and semi-automatic capture of claims: reading Tweets and (authoritative or official) websites that contained the words "transmission", "prevention" or "protection", and "treatment" or "cure". I also generated word embeddings and bigrams for these queries to find other relevant terms, as sketched below. I obtained a total of 48 words for transmission, 83 for treatments (2,739 when including medications extracted from drugbank.ca) and 79 for prevention (Annex: Figure 4).
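The semi-automatic expansion step can be reproduced with off-the-shelf tools. Below is a minimal sketch using gensim; `load_tokenised_corpus()` is a hypothetical helper, and the parameters are illustrative defaults rather than the ones used to build the final dictionaries.

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases

# Assumed input: tokenised Tweets and website sentences,
# e.g. [["masks", "prevent", "community", "transmission"], ...]
sentences = load_tokenised_corpus()  # hypothetical helper

# Detect frequent bigrams ("community transmission", "colloidal silver", ...)
# and merge them into single tokens before training embeddings.
bigrams = Phrases(sentences, min_count=10, threshold=10.0)
merged = [bigrams[s] for s in sentences]

# Terms used in contexts similar to the seed queries become dictionary candidates.
model = Word2Vec(merged, vector_size=100, window=5, min_count=5)
for seed in ["transmission", "prevention", "protection", "treatment", "cure"]:
    if seed in model.wv:
        print(seed, [w for w, _ in model.wv.most_similar(seed, topn=20)])
```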

Coding and filtering claims in Tweets and official websites

I split texts into sentences and assigned each sentence to a topic as follows:
  1. Transmission: sentences mentioning "infect", "transmi", "transfer", "contag", "contamin", "catch", or "spread";
  2. Prevention: sentences mentioning "prevent" or "protect";
  3. Treatment: sentences mentioning "treatment", "cure" or "vaccine".
For more complex queries, such as whether the virus is airborne or whether one should wear masks, I manually coded every sentence that mentioned both "wear" and "mask" (for the masks query) or "airborne" together with either "aerosol" or "droplet" (for the airborne query). For sentences mentioning COVID-19 transmission, coding meant annotating claims that (1) the virus is or is not airborne and, more specifically, that (2) it spreads through droplets or aerosols. For those mentioning protection, it meant annotating claims about (1) whether the general public should or should not wear masks ("should wear" and "should not wear", respectively) and (2) who should be wearing masks (caregivers, essential workers, travelers…). In many cases, claims went far beyond simple binaries and, if frequent, required a category of their own. A minimal sketch of these rules follows.
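In code, this amounts to substring matching on word stems, plus co-occurrence rules for the more complex queries. A minimal sketch, assuming sentences have already been split (e.g. with nltk's `sent_tokenize`):

```python
TOPIC_STEMS = {
    "transmission": ["infect", "transmi", "transfer", "contag",
                     "contamin", "catch", "spread"],
    "prevention":   ["prevent", "protect"],
    "treatment":    ["treatment", "cure", "vaccine"],
}

def detect_topics(sentence):
    """Return every topic whose stems appear in the sentence."""
    s = sentence.lower()
    return [topic for topic, stems in TOPIC_STEMS.items()
            if any(stem in s for stem in stems)]

def flag_for_manual_coding(sentence):
    """Co-occurrence rules for the masks and airborne queries."""
    s = sentence.lower()
    flags = []
    if "wear" in s and "mask" in s:
        flags.append("masks")
    if "airborne" in s and ("aerosol" in s or "droplet" in s):
        flags.append("airborne")
    return flags
```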

I then manually coded the information retrieved from government and health authorities' official webpages, noting whether their instructions or claims about transmission, treatments and the use of masks were consistent with one another. I used the Internet Archive to track changes to these webpages from January 2020 to July 2020, coding each page with any information about transmission, treatments or masks by date of change. For transmission, I coded whether sources agreed that transmission is possible through the air (airborne or aerosol), contact, droplets, fluids or animals. For treatments, I coded whether they recommended chloroquine, hydroxychloroquine or ibuprofen. For masks, I coded whether they recommended wearing a mask or face covering in public, wearing a mask if one has symptoms, or wearing a mask around sick people.
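Tracking these page changes can be automated with the Internet Archive's CDX API, which lists archived captures of a URL. A minimal sketch; the CDC page URL is an illustrative example, and `collapse=digest` keeps only captures whose content actually changed:

```python
import requests

def wayback_snapshots(url, start="20200101", end="20200731"):
    """List Wayback Machine captures of `url` whose content changed."""
    cdx = "http://web.archive.org/cdx/search/cdx"
    params = {"url": url, "from": start, "to": end,
              "output": "json", "collapse": "digest"}
    rows = requests.get(cdx, params=params).json()
    if not rows:
        return []
    header, entries = rows[0], rows[1:]
    ts = header.index("timestamp")
    return [f"http://web.archive.org/web/{e[ts]}/{url}" for e in entries]

# Illustrative example: the CDC page on how COVID-19 spreads.
for snap in wayback_snapshots(
        "cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/how-covid-spreads.html"):
    print(snap)
```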

Coding and filtering claims in social media textual data: limitations

Twitter audience responses contain a large number of retweets of claims made by authoritative sources. Because of this, I also included Tweets that do not necessarily reply to or mention authoritative sources but are geolocated in the U.S. Geolocation is included in T-CAT's Tweet metadata.
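As a sketch of this filtering step, assuming the T-CAT export has been loaded into pandas (the file and column names are assumptions; T-CAT exports vary by version):

```python
import pandas as pd

# Hypothetical T-CAT export; column names vary across T-CAT versions.
tweets = pd.read_csv("tcat_covid_export.csv", dtype=str)

domains = r"whitehouse\.gov|cdc\.gov|nih\.gov|who\.int"
is_audience = (tweets["in_reply_to_status_id"].notna()
               | tweets["text"].str.contains(domains, na=False))
is_us = tweets["location"].str.contains("United States", case=False, na=False)

sample = tweets[is_audience | is_us]
```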

6. Findings

1. Authoritative sources and their audiences contradict each other most on undetermined facts, such as COVID-19 treatments

The most controversial topics are those audiences have the least information on: cures and treatments for COVID-19 (Figure 1). While authoritative sources mention "no treatment" and "vaccines" not being available, proposing to rely on "infection prevention" instead, audiences mention myriad remedies, including ethanol, remdesivir, zinc and vitamin C.

Figure 1. Treatments mentioned by authoritative sources (Tweets and website data) and their Twitter audiences (Tweet replies and mentions of website domains by authoritative sources (e.g. "whitehouse.gov"))

The use of ethanol, honey, lemon, cannabis, cocaine, colloidal silver and lopinavir is occasionally debunked by authoritative sources (Figure 1): ethanol, for example, in early February and April, both before and after audiences mention it. I found that the White House expressed doubts about the efficacy of honey and lemon after these ingredients gained traction amongst audiences. The same cannot be said about remdesivir, chloroquine, hydroxychloroquine, dexamethasone, prednisolone and tamiflu, for which authoritative sources mention ongoing research and testing. With the exception of prednisolone, all of these ingredients generate continuous audience engagement.

Authoritative sources often stress the uncertain nature of research on COVID-19 (Bostrom et al., 2015, p. 633). Authoritative claims do not just "debunk" false knowledge; they also express uncertainty about transmission, treatments and prevention. "Chloroquine" and "hydroxychloroquine", for example, are first presented as possible treatments for COVID infections by audiences; only later do authoritative sources follow (Figure 1). Conversely, authoritative sources question the "airborne" nature of COVID transmission, specifying that the virus can be transmitted by coughed "droplets" (Figure 4). Audiences do not always distinguish the technical term "airborne" from anything that can spread through the air, conflating "droplet transmission" with "airborne transmission".

Authoritative sources and their audiences also refer to different modes of COVID-19 transmission. Both mention close contact, coughing, sneezing and touch, while audiences additionally refer to alternatives like "mosquito", "petrol", "radiation" and "chicken". Others, like "5G", are debunked by authoritative sources only after the topic gains significant traction among audiences in early April.

Figure 2. Modes of transmission mentioned by authoritative sources (Tweets and websites) and their Twitter audiences (Tweet replies and mentions of website domains by authoritative sources (e.g. "whitehouse.gov"))

2. Audiences are divided around contradicting claims by authoritative sources

Audiences and official channels contradict each other on topics for which there is less scientific consensus, such as airborne transmission (Figure 3). Notable, here, is the number of contradictions among authoritative sources, which partly explains the public's confusion about airborne transmission (Achenbach & Johnson, 2020; Lewis, 2020; Mandavilli, 2020). While the World Health Organization expresses uncertainty throughout February, the White House confirms this form of transmission in late March, causing a cascade of similar statements among users. The World Health Organisation then states that airborne transmission can indeed occur, but "within one meter". One tweet by the World Health Organisation's Western Pacific regional office does, however, overturn this claim, and is overwhelmingly retweeted by audiences in late March (Figure 3).

Figure 3. Claims on airborne transmission by authoritative sources (web domains) and their Twitter audiences (Tweet replies and mentions of website domains by authoritative sources (e.g. "whitehouse.gov"))

In this context, audiences express a relatively constant level of uncertainty throughout, as well as conspiratorial suspicions in early March. Though this is especially true of February and March, audiences appear to express a relatively constant number of claims aligned with the majority of authoritative sources. This may suggest that greater consensus between authoritative sources fosters consensus with their publics.

Still, authoritative sources and their audiences contradict each other more frequently on whether COVID-19 is transmitted through droplets or through smaller aerosol particles (Figure 4). While virtually all sources agree that the virus is transmitted by coughed droplets, some specify that aerosols may remain in the air for longer periods. Here, too, we see contradictions among authoritative sources. The World Health Organization's expression of doubt regarding the latter claim is quickly undermined by the White House, which contradicts it in late March. As the White House positions itself in this debate, a cascade of similar claims appears on cdc.gov. Audiences, in this context, express an equally distributed amount of agreement with each claim, seemingly partitioned into groups that rely either on the word of the World Health Organisation or on that of the White House.

Figure 4. Claims on droplet or aerosol transmission by authoritative sources (web domains) and their Twitter audiences (Tweet replies and mentions of website domains by authoritative sources (e.g. "whitehouse.gov"))

The debate on whether the virus is droplet- or aerosol-borne shows how the very concept of airborne transmission evolved in the course of discussions between authoritative sources and audiences. Early public doubts as to whether the virus was airborne pushed authorities to define and measure airborne transmission in increasingly concrete terms (Figure 3). While the World Health Organisation had earlier stated that airborne transmission is an exchange of infected droplets, more recent findings on aerosol transmission substantiate earlier public conceptions of airborne transmission as a somewhat ubiquitous form of "air infection".

In comparison to the more varied recommendations of authoritative sources, audiences overwhelmingly mention masks, hand washing and hygiene, and avoiding close contact as protective measures. A few false methods mentioned by audiences are picked up and debunked by official channels: baths, ginger and gargling certain products.

Figure 4. Prevention strategies mentioned by authoritative sources (Tweets and websites) and their Twitter audiences (Tweet replies and mentions of website domains by authoritative sources (e.g. "whitehouse.gov"))

For masks, we find varying agreement between authoritative sources and their audiences across three notable periods: late February to early March, early April, and mid-May to mid-June. In late February, authoritative sources overwhelmingly advise against wearing masks in public, so as to reserve them for caregivers, essential workers, risk groups and the sick (Figure 5). But with airborne transmission still uncertain (Figure 3), audiences continuously claim that masks should be worn in public for protection (Figure 5).

Figure 5. Frequency of claims advocating for and against the use of masks in authoritative sources and their audiences. Sources in picture.

In early April, the White House and the CDC contradict the World Health Organisation and begin advising citizens to wear masks, "scarfs and other materials" in public (Figure 5). Their statements appear to gain overwhelming traction among users, first in a notable spike in early April, and later through continuous alignment with this policy throughout May, June and July (Figure 5).

Zooming into authoritative and audience claims on the efficacy of hydroxychloroquine (Figure 6), we see that audiences (below, "users") appear to polarise around diverging authoritative statements. While some echo Donald Trump's claims that the ingredient is effective (including in combination with azithromycin), others follow the World Health Organisation. The CDC and the NIH, meanwhile, rule the matter "uncertain".

Figure 6. Authoritative and audience claims on whether hydroxychloroquine is or is not effective against COVID-19 infections.

3. Twitter has adapted its content moderation policies to capture the disputed nature of COVID-19 information

In the face of such disputes, Twitter's "COVID-19 misleading information" policy underwent frequent changes throughout 2020. Twitter's initial definition of COVID-19 misinformation is technical: on February 4, 2020, it refers to "media shared in a deceptive manner" and "synthetic or manipulated media" about COVID-19. Examples of the former include "a deliberate intent to deceive people about the nature or origin of the content"; of the latter, "content that has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing", "any visual or auditory information that has been added or removed", and "fabricated or simulated media depicting a real person". Both types of information are subjected to incremental moderation, whereby contents are first labelled, then demoted, and removed altogether after infringing misinformation policies more than once. Twitter's policy against manipulated media is sealed with a "zero tolerance approach to platform manipulation", announced on March 4, 2020.

It is only on March 4, as the virus begins to spread outside of China, that the platform broadens its conception of COVID-19 misinformation. It now treats local and international "authoritative sources" as independent arbiters of the objective quality of information on the virus. It begins to prioritise posts and other contents from the World Health Organisation and local authorities on its homepage and in user timelines, nudging users to follow local guidelines. Content moderation policies now target "content that goes directly against guidance from authoritative sources of global and local public health information", "denial of global or local health authority recommendations to decrease someone's likelihood of exposure to COVID-19", "alleged cures for COVID-19 that are not immediately harmful but are known to be ineffective", "harmful treatment or protection measures that are known to be ineffective", and "denial of established scientific facts about transmission during the incubation period or transmission guidance from global and local health authorities." All of the above is first labelled and then removed.

The fact that authoritative sources occasionally disagree with each other poses a new challenge to these policies. For this reason, content moderation guidelines adopt a two-fold strategy: they simultaneously restrict the range of unsanctioned claims users can make about COVID-19 transmission, prevention and treatments, and highlight the disputed nature of such claims. From January 29, 2020, Twitter launches a series of labelling and other referencing techniques to nudge users to consult authoritative sources on all aspects of the pandemic. Like its counterparts (Google, Facebook), it launches a prompt ("#KnowTheFacts") whenever users search for or encounter information about the virus on the platform. By May 11, it introduces new labelling and warning techniques intended to "provide additional context and information on some Tweets containing disputed or misleading information related to COVID-19." The idea is to adapt moderation to the contingent and disputed nature of information about the disease, be it international discrepancies in public health policies or diverging claims made by authoritative sources about the virus.

Later, on December 16, 2020, Twitter goes as far as to specify the type of rhetoric that infringes upon its COVID-19 misinformation policy. It targets Tweets that "advance a claim of fact, expressed in definite terms", and later "Tweets that are an assertion of fact (not an opinion), expressed definitively, and intended to influence others' behavior". Misleading statements on "vaccines" consist in spreading "preventative measures that are not approved by health authorities, or that are approved by health authorities but not safe to administer from home"; "the sale or facilitation of medicines or drugs that require a prescription or physician consultation"; or information on "adverse impacts or effects of receiving vaccinations, where these claims have been widely debunked". It targets conspiratorial language, labeling Tweets "which suggest that COVID-19 vaccinations are part of a deliberate or intentional attempt to cause harm or control populations". It reinforces consent to local authoritative guidelines by targeting Tweets that dispute "local or national advisories or mandates pertaining to curfews, lockdowns, travel restrictions, quarantine protocols, inoculations [...]", and even targets Tweets about "research findings (such as misrepresentations of or unsubstantiated conclusions about statistical data) used to advance a specific narrative that diminishes the significance of the disease". Once again, all of the above is first labelled and then removed.

Figure 7. Twitter's "COVID-19 misleading information policy". Source in image.

In practice, this means labeling almost every Tweet that mentions a COVID-19 treatment ingredient disputed by authoritative sources (Figure 8). Though some are deleted, most are simply labeled and redirected to a centralised reference page with local COVID-19 guidelines and information. This also applies to claims disputed amongst authoritative sources themselves, such as whether hydroxychloroquine is a safe drug.

Figure 8. Audience Tweets that mention a list of treatments for COVID-19. In green are numbers of unmoderated Tweets; in red, moderated Tweets.

It also means supporting authoritative sources in their continuous debunking of user claims (Figure 9). Authoritative sources, the World Health Organisation in particular, repeatedly deny claims made on social media. The problem there is that disagreements amongst authoritative sources create a crisis of authority on the platform: Twitter can no longer redirect users to one specific source.

Figure 9. Tweets or website statements on a list of false treatments for COVID-19. In purple are debunking statements; in blue, claims that authoritative sources judge as disputed.

In the absence of consensus among authorities, Twitter begins to highlight the disputed nature of authoritative claims. This applies particularly to U.S. President Donald Trump's personal account. While audience tweets are more severely moderated (suspended, deleted), Trump's Tweets initially obtain the generic "#KnowTheFacts" prompt the platform introduced on January 29 (Figure 10). It is only in early October 2020 that a Tweet alleging that "sometimes over 100,000" people "die from the Flu" is labelled for violating "the Twitter Rules about spreading misleading and potentially harmful information related to COVID-19." The same happens to a later Tweet claiming immunity from COVID-19. Both do, however, stay up, in accordance with Twitter's "World leaders" and "Public-interest exceptions" policies (Twitter, 2021).

Figure 10. Moderated audience and Trump Tweets mentioning words related to COVID-19 treatments, transmission and prevention. Every dot is a Tweet.

7. Discussion and conclusions

The effects of COVID-19 on Twitter's content moderation philosophy

Since 2016, Twitter and its counterparts have undergone what appears to be a Copernican shift in their content moderation philosophies. While we are often reminded that moderation is what platforms do, and have always done (Gillespie, 2018), 2020 marks a profound change of attitude: from private "tech companies" to custodians of a public good. Moderating user-generated contents on COVID-19 has surpassed the dilemmas of moderating political (and thereby subjective) contents, as the former pose a clear, concrete threat to users' health and to the state of local health facilities.

This appears to have led Twitter to change its conception of contents from "user-generated" (and thereby entirely up to a user's discretion) to a "public good". In a sense, this has also forced Twitter and its platform counterparts to delimit what "misinformation" or other problematic information is, be it in a technical, authoritative or even rhetorical sense (Finding 3). We have seen that user-generated contents were heavily scrutinised by both authoritative sources and Twitter itself, with nearly every Tweet labelled and redirected to centralised, local references and guidelines on the virus. We have also seen that the platform continued to roll out ever more specific definitions of what constitutes COVID-19 "misinformation", a term it rarely, if ever, mentions in its guidelines.

Determining the objective value of statements on COVID-19 treatments, prevention and transmission vehicles is not a responsibility the platform assumes. Its preference for deferring that decision to authoritative sources follows a tradition already set by its counterparts around 2018, in the wake of the post-electoral "fake news" scandal (Marres, 2018). Google, in particular, has been prioritising authoritative contents as "reputed sources" in journalism, generating mixed reactions from users suspicious of "political bias" in favour of left-wing American political culture (The Economist, 2019). This study has also shown mixed results. The absence of consensus among authoritative sources makes the U.S. crisis of authority on COVID-19 all the more evident, with the CDC, the NIH and the White House frequently contradicting one another. The difference, here, is that in the absence of authority, Twitter steps in as an authority itself.

Consensus and misinformation in the process of COVID-19 sensemaking: conceptual implications

A number of misinformation policies and studies have focused on detecting and correcting misinformation, by e.g. investing in media literacy and pinpointing factors that can “increase the chances of citizens to be exposed to correct(ive) information” (Scheufele & Krause, 2019, p. 7664). Strategies include removing false content and demoting false or “borderline” information in favour of authoritative sources (Scheufele & Krause, 2019, p. 7664).

A possible drawback of these strategies is the decontextualisation of misinformed claims from the premises and info-spheres that substantiate them. These spheres frequently lie outside of misinformation-policed social media platforms (Tuters et al., 2018), and their users are often unaware of the information needed to understand claims and directives from authoritative sources (Kou et al., 2017). In other instances, misinformed claims can stem from more innocuous misunderstandings (Hagen et al., 2019), or from attempts at making sense of situations still unexplained by authorities (Krafft et al., 2017, p. 2976; Starbird et al., 2016). The inconsistency of official information is characteristic of the formation of rumours and other "improvised" sensemaking (Shibutani, 1966), which in itself constitutes an attempt to create consensus or a "common understanding" where there is none (Bordia & Difonzo, 2004).

In this sense, misinformation could be tackled as a byproduct of poor consensus between authorities and their audiences, who must in crises "converge" around a common understanding of facts and the epistemic frameworks used to validate them (Scheufele & Krause, 2019, p. 7663; Starbird, 2012, p. 1). By pinpointing information that authoritative sources and their audiences mutually ignore (audiences mention "5G" and "food" as transmission vectors, while authoritative sources focus on "cough" and "touch") and comparing diverging claims related to these terms (see Method), I indeed find that authoritative sources and their audiences diverge most on uncertain or missing information. Conspiratorial narratives emerged when authoritative sources could not confirm information on pressing issues, such as whether the virus was "airborne" (Finding 1). Authoritative sources frequently argued amongst themselves, causing uncertainty among audiences (Finding 1, Figure 3). I also find speculation that the virus may be transmitted in more ways than known or stated (e.g., through eating chicken or via 5G "radiation"), and suspicion regarding the efficacy of protective measures like masks. These findings echo Brennen et al.'s report that COVID-19 misinformation is primarily about treatment, transmission and public governance (Brennen et al., 2020, pp. 6–8).
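One way to operationalise this mutual ignorance is to compare term frequencies across the two corpora and keep the terms frequent in one but nearly absent from the other. A minimal sketch; `audience_tokens` and `authority_tokens` are assumed inputs (lists of tokenised texts), and the thresholds are illustrative.

```python
from collections import Counter

def one_sided_terms(corpus_a, corpus_b, min_count=20, ratio=10.0):
    """Terms frequent in corpus_a but (nearly) absent from corpus_b."""
    freq_a = Counter(w for doc in corpus_a for w in doc)
    freq_b = Counter(w for doc in corpus_b for w in doc)
    return sorted(w for w, n in freq_a.items()
                  if n >= min_count and n / (freq_b[w] + 1) >= ratio)

# audience_tokens / authority_tokens are assumed: lists of tokenised texts.
# e.g. "5g" and "chicken" surface on the audience side,
# "droplets" and "hygiene" on the authoritative side.
audience_only = one_sided_terms(audience_tokens, authority_tokens)
authority_only = one_sided_terms(authority_tokens, audience_tokens)
```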

Official communication and platform content moderation techniques should consider fostering public consensus to prevent the production of COVID-19 misinformation

If they approach misinformation as the product of a loss of public consensus, authoritative sources and (social) media platforms could expand their strategies beyond misinformation correctives by investing in consensus-building affordances. One technique used by platforms has been to nudge users towards "authoritative content" on homepages and in COVID-related search results and recommendations (Skopeliti & John, 2020; The YouTube Team, 2018). These platforms may however be critiqued for adopting an "information deficit model" (Kahan, 2014), assuming that transferring more information from experts to non-experts will fill any gaps in public consensus. Though I have found some evidence that these measures direct users towards authoritative sources (Finding 2), this strategy cannot guarantee agreement from users who actively distrust those sources.

Consensus-oriented moderation could constitute a suite of techniques designed to facilitate dialogue between information providers (authorities) and recipients (publics) in moments of crisis. This could imply designing a meta COVID policy aggregator that allows users to situate individual claims within a broader network of data, research and other information used by authorities to substantiate their policies. The ethos of this strategy is to offer users the information necessary to understand policies that may otherwise appear unjustified, contradictory or altogether uncertain. Indeed, I have found that public uncertainty about COVID-19 appears when authoritative sources contradict each other or frequently change their claims (Finding 2). Aggregating international, regional or local COVID-19 policies can also give users an overview of the state of consensus between authorities on ongoing issues. Remediating contradictions and uncertainty would in this sense involve consolidating the information users need to understand the rationale of current COVID policies in the context of ongoing, and inevitable, sensemaking.
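As a sketch of what such an aggregator's underlying record could look like, each claim might be stored with the authority making it, the evidence cited and its current status, so that contradictions between authorities become a simple query rather than a surprise. All field names here are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PolicyClaim:
    """One authoritative claim, situated in its evidential context."""
    topic: str          # "transmission", "treatment", "prevention"
    claim: str          # e.g. "aerosol transmission possible indoors"
    source: str         # e.g. "WHO", "CDC", "White House"
    jurisdiction: str   # "international", "US", ...
    stated_on: date
    status: str         # "asserted", "uncertain", "retracted"
    evidence: list[str] = field(default_factory=list)  # URLs to studies/data

def contradictions(claims: list[PolicyClaim]):
    """Pairs of same-topic claims whose status diverges across sources."""
    return [(a, b) for i, a in enumerate(claims) for b in claims[i + 1:]
            if a.topic == b.topic and a.source != b.source and a.status != b.status]
```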

8. Appendix

Sources of false and true COVID-19 information

Dictionaries

9. References

Achenbach, J., & Johnson, C. Y. (2020, April 30). Studies leave question of ‘airborne’ coronavirus transmission unanswered. Washington Post. https://www.washingtonpost.com/health/2020/04/29/studies-leave-question-airborne-coronavirus-transmission-unanswered/

Bordia, P. and Difonzo, N. (2004) ‘Problem Solving in Social Interactions on the Internet: Rumor As Social Cognition’, Social Psychology Quarterly, 67(1), pp. 33–49. doi: 10.1177/019027250406700105.

Bostrom, A. et al. (2015) ‘Methods for Communicating the Complexity and Uncertainty of Oil Spill Response Actions and Tradeoffs’, Human and Ecological Risk Assessment: An International Journal, 21(3), pp. 631–645. doi: 10.1080/10807039.2014.947867.

Borra, E., & Rieder, B. (2014). Programmed method: Developing a toolset for capturing and analyzing tweets. Aslib Journal of Information Management, 66(3), 262–278. https://doi.org/10.1108/AJIM-09-2013-0094

Brennen, J. S., Simon, F. M., Howard, P. N., & Nielsen, R. K. (2020). Types, Sources, and Claims of COVID-19 Misinformation (pp. 1–13) [Factsheet]. University of Oxford.

Caplow, T. (1946). Rumors in War. Social Forces, 25(3), 298–302. https://heinonline.org/HOL/P?h=hein.journals/josf25&i=314


Chandrasekharan, E. et al. (2020) ‘Quarantined! Examining the Effects of a Community-Wide Moderation Intervention on Reddit’, arXiv:2009.11483 [cs]. Available at: http://arxiv.org/abs/2009.11483 (Accessed: 11 November 2020).

Dailey, D., & Starbird, K. (2015). “It’s Raining Dispersants”: Collective Sensemaking of Complex Information in Crisis Contexts. Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing, 155–158. https://doi.org/10.1145/2685553.2698995

De Gregorio, G. (2020) ‘Democratising online content moderation: A constitutional framework’, Computer Law & Security Review, 36, p. 105374. doi: 10.1016/j.clsr.2019.105374.

Dwoskin, E. (2020) ‘Trump’s attacks on election outcome prolong tech’s emergency measures’, Washington Post, 12 November. Available at: https://www.washingtonpost.com/technology/2020/11/12/facebook-ad-ban-lame-duck/ (Accessed: 12 November 2020).

Elkin-Koren, N. and Perel, M. (2020) ‘Guarding the Guardians: Content Moderation by Online Intermediaries and the Rule of Law’, in Elkin-Koren, N. and Perel, M., Oxford Handbook of Online Intermediary Liability. Edited by G. Frosio. Oxford University Press, pp. 668–678. doi: 10.1093/oxfordhb/9780198837138.013.34.

Frenkel, S., Decker, B. and Alba, D. (2020) ‘How the “Plandemic” Movie and Its Falsehoods Spread Widely Online’, The New York Times, 20 May. Available at: https://www.nytimes.com/2020/05/20/technology/plandemic-movie-youtube-facebook-coronavirus.html (Accessed: 21 November 2020).

The Economist (2019) ‘Google rewards reputable reporting, not left-wing politics’, The Economist, 8 June. Available at: https://www.economist.com/graphic-detail/2019/06/08/google-rewards-reputable-reporting-not-left-wing-politics (Accessed: 26 January 2021).

Hagen, S., Zeeuw, D. de, Peeters, S., Jokubauskaitė, E., & Briones, Á. (2019, February 21). Understanding Normiefication: A Cross-Platform Analysis of the QAnon Conspiracy Theory. Digital Methods Initiative. https://wiki.digitalmethods.net/Dmi/WinterSchool2019Normiefication

Huang, Y. L., Starbird, K., Orand, M., Stanek, S. A., & Pedersen, H. T. (2015). Connected Through Crisis: Emotional Proximity and the Spread of Misinformation Online. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, 969–980. https://doi.org/10.1145/2675133.2675202

Iati, M. et al. (2020) ‘Trump says it’s safe to reopen states, while Birx finds protesters with no masks or distancing “devastatingly worrisome”’, Washington Post, 4 May. Available at: https://www.washingtonpost.com/world/2020/05/03/coronavirus-latest-news/ (Accessed: 5 May 2020).

Jiang, S., Robertson, R. E. and Wilson, C. (2019) ‘Bias Misperceived:The Role of Partisanship and Misinformation in YouTube Comment Moderation’, Proceedings of the International AAAI Conference on Web and Social Media, 13, pp. 278–289. Available at: https://ojs.aaai.org/index.php/ICWSM/article/view/3229 (Accessed: 11 November 2020).

Kahan, D. M. (2014). Climate-Science Communication and the Measurement Problem (SSRN Scholarly Paper ID 2459057). Social Science Research Network. https://papers.ssrn.com/abstract=2459057

Kou, Y., Gui, X., Chen, Y., & Pine, K. (2017). Conspiracy talk on social media: Collective sensemaking during a public health crisis. Proceedings of the ACM on Human-Computer Interaction, 1(CSCW), 61. https://doi.org/10.1145/3134696

Krafft, P., Zhou, K., Edwards, I., Starbird, K., & Spiro, E. S. (2017). Centralized, Parallel, and Distributed Information Processing during Collective Sensemaking. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 2976–2987. https://doi.org/10.1145/3025453.3026012

de Laat, P. B. (2012) ‘Coercion or empowerment? Moderation of content in Wikipedia as “essentially contested” bureaucratic rules’, Ethics and Information Technology, 14(2), pp. 123–135. doi: 10.1007/s10676-012-9289-7.

Lee, E. (2020) Moderating Content Moderation: A Framework for Nonpartisanship in Online Governance. SSRN Scholarly Paper ID 3705466. Rochester, NY: Social Science Research Network. doi: 10.2139/ssrn.3705466.

Lewis, D. (2020). Is the coronavirus airborne? Experts can’t agree. Nature, 580(7802), 175. https://doi.org/10.1038/d41586-020-00974-w

Lyons, K. (2020) Twitter flags, limits sharing on Trump tweet about being ‘immune’ to coronavirus, The Verge. Available at: https://www.theverge.com/2020/10/11/21511682/twitter-disables-sharing-trump-tweet-coronavirus-misinformation (Accessed: 21 November 2020).

Marres, N. (2018) ‘Why We Can’t Have Our Facts Back’, Engaging Science, Technology, and Society, 4, p. 423. doi: 10.17351/ests2018.188.

Mandavilli, A. (2020, July 4). 239 Experts With One Big Claim: The Coronavirus Is Airborne. The New York Times. https://www.nytimes.com/2020/07/04/health/239-experts-with-one-big-claim-the-coronavirus-is-airborne.html

O’Leary, N. (2020) ‘How Dutch false sense of security helped coronavirus spread’, The Irish Times, 10 March. Available at: https://www.irishtimes.com/news/world/europe/how-dutch-false-sense-of-security-helped-coronavirus-spread-1.4199027 (Accessed: 5 May 2020).

Rieder, B. (2017) ‘Scrutinizing an algorithmic technique: the Bayes classifier as interested reading of reality’, Information, Communication & Society, 20(1), pp. 100–117. doi: 10.1080/1369118X.2016.1181195.

Rogers, R. (2020) ‘Deplatforming: Following extreme Internet celebrities to Telegram and alternative social media’, European Journal of Communication, p. 0267323120922066. doi: 10.1177/0267323120922066.

Scheufele, D. A., & Krause, N. M. (2019). Science audiences, misinformation, and fake news. Proceedings of the National Academy of Sciences, 116(16), 7662–7669. https://doi.org/10.1073/pnas.1805871115

Shibutani, T. (1966). Improvised News: A Sociological Study of Rumor. Ardent Media.

Skopeliti, C. and John, B. (2020) ‘Coronavirus: How are the social media platforms responding to the “infodemic”?’, First Draft, 19 March. Available at: https://firstdraftnews.org/latest/how-social-media-platforms-are-responding-to-the-coronavirus-infodemic/.

Starbird, K. (2012). Crowdwork, crisis and convergence: How the connected crowd organizes information during mass disruption events [PhD].

Starbird, K. et al. (2016) ‘Could This Be True? I Think So! Expressed Uncertainty in Online Rumoring’, in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. San Jose, California, USA: Association for Computing Machinery (CHI ’16), pp. 360–371. doi: 10.1145/2858036.2858551.

Starbird, K. (2020) ‘How to cope with an infodemic’, Brookings, 27 April. Available at: https://www.brookings.edu/techstream/how-to-cope-with-an-infodemic/ (Accessed: 5 May 2020).

Tuters, M., Jokubauskaitė, E., & Bach, D. (2018). Post-Truth Protest: How 4chan Cooked Up the Pizzagate Bullshit. M/C Journal, 21(3). http://journal.media-culture.org.au/index.php/mcjournal/article/view/1422

Wilson, R. A. and Land, M. K. (2020) Hate Speech on Social Media: Towards a Context-Specific Content Moderation Policy. SSRN Scholarly Paper ID 3690616. Rochester, NY: Social Science Research Network. Available at: https://papers.ssrn.com/abstract=3690616 (Accessed: 11 November 2020).

Yong, E. (2020) ‘Why the Coronavirus is so Confusing’, The Atlantic, 29 April. Available at: https://www.theatlantic.com/health/archive/2020/04/pandemic-confusing-uncertainty/610819/.