Policy for Pandemics 03: Dealing with the digital "Infodemic"
The problem of digital misinformation has long exposed critical structural flaws in our communications infrastructure. Things are now changing very rapidly, in ways we may not be able to control.
This newsletter is edited by Andrew Potter (AP), associate professor at the Max Bell School of Public Policy at McGill University. This briefing is written by Taylor Owen (TO), who is the Beaverbrook Chair in Media, Ethics and Communications and an associate professor at the Max Bell School.
Like all of us, I have been absorbing a huge amount of information about the pandemic. Below are some thoughts based on my particular public policy lens: the role of information reliability in a democracy; mis/disinformation as a structural problem; and the broader and changing role tech plays in our society.
First, as always, there is a lot of really shitty content on the internet, and some of it is quite dangerous. The WHO has called this an “Infodemic,” which it defines as “an overabundance of information—some accurate and some not—that makes it hard for people to find trustworthy sources and reliable guidance when they need it.” Heidi Larson, head of the Vaccine Confidence Project, wrote in a 2018 Nature article: “The deluge of conflicting information, misinformation and manipulated information on social media should be recognized as a global public-health threat.” The reason is that when you are trying to inform billions of people about why and how they should drastically change their behaviour, the quality of the information in circulation is critical.
And a lot of what is circulating is really bad. This includes:

- Medical misinformation, such as bad science, false cures, and fake cases;
- Ideological content from communities who distrust science and proven measures like vaccines;
- Profiteers, including traffic seekers, people selling “cures” or other health and wellness products, and phishing scams;
- Conspiracy theories, such as claims that the coronavirus is a bioweapon, was planned by Bill Gates, or was created in a Chinese or American lab;
- And harmful speech, ranging from racist attacks to Neo-Nazis discussing ways to *spread* the coronavirus to cause chaos.

This is all occurring on social platforms, in private groups, and on messaging apps. The public health problem is not any one bad piece of content, but the effect of the sum of the parts. We are being flooded with content, both good and bad, and that creates an epistemological problem: How do we come to know what we know?
Second, misinformation flows from the top, which presents unique challenges. The Chinese government worsened the pandemic by censoring scientists, and leading figures (such as Elon Musk) have used their platforms to spread damaging misinformation. Perhaps most extraordinarily, the President of the United States can’t be relied on to provide accurate information to the public, and is using his daily press briefings to spread dangerous misinformation. This has led prominent journalism scholars and media columnists such as Jay Rosen and Margaret Sullivan to call on the media to stop live-broadcasting Trump’s briefings. It is worth pausing on just how extraordinary this moment is.
Third, platforms are taking more responsibility for content, which further proves we can and should be demanding more responsibility from them. Content moderation is always about a values trade-off: the freedom to speak versus the right to be protected from the harms that speech can cause. This trade-off is changing in real time. Just a month ago, platforms were still (to varying degrees) prioritizing the value of frictionless speech over its potential harms. But now Facebook, YouTube, and Twitter have all taken far more aggressive stances on content moderation for this issue than they have for “normal” political content. The Overton window is widening, and it will likely not narrow again.
Fourth, the problem of content moderation reveals bigger design and governance problems. Misinformation has never been a problem of individual bad actors; it has always been a structural problem. It is about the design of our digital infrastructure. And the structural problem remains. It includes the scale of platform activity (a billion posts a day on Facebook), the role of AI in determining who and what is seen and heard, and a financial model that creates a free market for our attention, prioritizing virality and engagement over reliable information. If these are the real drivers of the Infodemic, should these companies be changing how they function? Should they be adjusting their algorithms to prioritize reliable information? Should they be radically limiting microtargeting? And perhaps more importantly, do we want platforms making these broad decisions about speech themselves? If not, how can democratic governments step in? Part of the reason we regulated communications infrastructure in the past was that we deemed it essential, particularly during emergencies. These companies are clearly an essential part of our lives, and likely need to be treated as such. What does this wider responsibility look like?
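To make the algorithmic question concrete: here is a minimal sketch, in Python, of what “adjusting the algorithm to prioritize reliable information” could mean in practice. Everything in it is invented for illustration (the post fields, the reliability score, the 0.7 weight); real ranking systems are vastly more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: float          # predicted clicks/shares, normalized to 0-1
    source_reliability: float  # hypothetical trust score for the source, 0-1

def rank_score(post: Post, reliability_weight: float = 0.7) -> float:
    """Blend predicted engagement with source reliability.

    A purely engagement-driven feed is reliability_weight = 0;
    shifting weight toward reliability is one concrete version of
    prioritizing reliable information over virality.
    """
    return ((1 - reliability_weight) * post.engagement
            + reliability_weight * post.source_reliability)

feed = [
    Post("Miracle cure! Share now!", engagement=0.9, source_reliability=0.1),
    Post("WHO guidance on handwashing", engagement=0.4, source_reliability=0.95),
]
# With the weight tilted toward reliability, the WHO post ranks first
# even though the "miracle cure" post is predicted to get more clicks.
for post in sorted(feed, key=rank_score, reverse=True):
    print(round(rank_score(post), 2), post.text)
```

The policy point is that this single weight is a value judgment, not a neutral technical parameter, and today it is set privately by the platforms.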
Fifth, policies imposed during crises can be sticky, particularly when they involve new technical infrastructure. There is clearly significant value to be gained from the kind of behaviour and data monitoring that would (rightly) have been viewed as a significant breach of privacy only two weeks ago. Tracking the locations of those who have tested positive, as well as those they have come into contact with, could prove critical. But how do we build out this capacity without entrenching these new powers?
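For a sense of what that capacity involves technically, here is a minimal sketch of location-based contact matching. The data, the two-metre radius, and the fifteen-minute window are all hypothetical; real systems use GPS or Bluetooth signals and require far more careful privacy engineering, which is precisely the point.

```python
from datetime import datetime, timedelta
from math import hypot

# Hypothetical (user_id, x, y, timestamp) pings; coordinates in metres on a local grid.
pings = [
    ("patient_a", 0.0, 0.0, datetime(2020, 3, 20, 9, 0)),
    ("user_b",    1.5, 0.5, datetime(2020, 3, 20, 9, 5)),
    ("user_c",  500.0, 9.0, datetime(2020, 3, 20, 9, 5)),
]

def close_contacts(positive_id, pings, radius_m=2.0, window=timedelta(minutes=15)):
    """Return users whose pings fall within radius_m and window of a positive case."""
    positives = [p for p in pings if p[0] == positive_id]
    contacts = set()
    for _, px, py, pt in positives:
        for uid, x, y, t in pings:
            if (uid != positive_id
                    and hypot(x - px, y - py) <= radius_m
                    and abs(t - pt) <= window):
                contacts.add(uid)
    return contacts

print(close_contacts("patient_a", pings))  # {'user_b'}
```

Even this toy version makes the governance problem visible: to work at all, it requires a continuous, identifiable record of where everyone has been.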
Governments are discussing this with the platforms, as well as with pernicious surveillance-tech companies such as Palantir and Clearview AI. The Israeli government has approved emergency measures allowing its security agencies to track the mobile-phone data of people with suspected coronavirus. We have seen even more draconian measures in other countries (temperature-screening drones, forced app downloads): will we see these here too? If so, we must ensure these provisions are sunsetted, and remain vigilant against shifting important norms around surveillance and privacy.
Sixth, we are establishing new society-wide technology norms in real time. The divide between our digital and offline lives was always a fiction: social, economic, and political interactions using technology have always been real, a part of our lives. But we are now experimenting en masse with the rapid adoption of a technology-mediated society. Our social interactions, our digital economy, our employment, and our politics are moving online. And we are doing so via commercial platforms designed with a very particular set of incentives. These design decisions and these incentives are going to have a profound effect on us all. If ever we were to think about and build public digital infrastructure, now would be the time.
Finally, we need to be thinking about the mental health implications of the ways we are adopting digital technologies. Online community is a double-edged sword: in addition to all the amazing things social media enables, it can also cause real mental health problems. We need to be aware of this as we move more of our information diet, our work, and our social interactions online. (TO)
Related Reading
A Lawfare analysis of the Israeli Emergency Regulations for location tracking of Coronavirus Carriers
No, dolphins haven’t returned to the canals of Venice, from National Geographic
What else is going on?
How Canada approved an Assad loyalist to serve Canada’s terrorized Syrian refugees, from the WaPo
A new study confirms that electric cars beat gas-powered vehicles on CO2 emissions
Turns out all 16 of the Museum of the Bible’s Dead Sea Scroll fragments are fakes
Distractions
Feeling cooped up? Watch the entire 5-hour director’s cut of Das Boot
We are made of stars, so check out this time-lapse of the Earth rotating beneath a stationary Milky Way
And finally:
(Image by Allan Harding MacKay)
_____
Policy for Pandemics is edited by Andrew Potter (AP). If you enjoyed this newsletter, feel free to share it with friends or on social media. If you have any feedback or would like to contribute, send me an email: andrew2.potter@mcgill.ca