Opinion, Berkeley Blogs

Four strategies to combat disinformation before the election

By Brandie Nonnecke

Earlier in October, the U.S. Department of Homeland Security (DHS) released its “Homeland Threat Assessment.” The report cautions that Russian and Chinese influence actors continue to employ coordinated campaigns that spread mis- and disinformation to amplify socio-political divisions and suppress voter turnout. With the 2020 U.S. presidential election just two weeks away, these tactics are likely to ramp up.

The California state government has taken steps to mitigate these tactics by passing legislation targeting bots and deepfakes, tools that are increasingly part of the arsenal of coordinated influence campaigns. Unfortunately, these laws will likely be ineffective.

Passed into law in 2018, the California “BOT Bill” requires the disclosure of bots that attempt to influence California residents’ voting or purchasing behaviors. Renée DiResta, Technical Research Manager at the Stanford Internet Observatory, warns that the law has significant flaws, including a lack of clarity on the very definition of “bot.” The BOT Bill defines a bot as “an automated online account where all or substantially all of the actions or posts of that account are not the result of a person.”

Defining “substantially” is not straightforward, and focusing on heavily automated accounts misses a critical finding from much of her research: accounts that have some automated features but are largely human-operated are the most effective at executing influence operations. Focusing attention solely on automation risks overlooking the accounts causing the most damage. Additionally, because the law removes liability from platforms to identify bots and instead places responsibility on bot creators to publicly identify them, DiResta warns that malicious political bots will likely go unchecked.

In October 2019, California became the second state to ban malicious political deepfakes before an election. While well intentioned, California’s “Anti-Deepfake Law” suffers from four major flaws that will significantly impede its success: timing, misplaced responsibility, burden of proof, and inadequate remedies. The law applies only to malicious deepfakes distributed within 60 days of an election, an arbitrary time constraint that doesn’t reflect the enduring nature of online content.

Rather than asking platforms to identify malicious deepfakes, the law places responsibility on the creator and on the public to identify them. This is like asking a swindler to kindly inform their victims before tricking them. Overreliance on the public to flag malicious deepfakes will also likely produce both false positives and false negatives. Moreover, the law applies only to “malicious” deepfakes, and what counts as “malicious” is unclear. While individuals debate the intent of a deepfake, it will likely remain available online, allowing it to go viral and shape public opinion long before a determination is made.

Like the BOT Bill, the Anti-Deepfake Law’s misplaced focus on technical sophistication diverts attention from simpler tactics that can wreak greater havoc. By targeting deepfakes, the law overlooks cheapfakes, which use less sophisticated technology to edit images and videos but can still be spread widely to manipulate, confuse, and influence voters. As Hany Farid, a professor and digital forensics expert at UC Berkeley who works on deepfake detection, put it, “You can think of the deepfake as the bazooka and the video splicing as a slingshot. And it turns out the slingshot works.”

While legislation plays an important role in stopping malicious coordinated influence campaigns, social media platforms can pursue a variety of tactics to blunt their effectiveness: supporting public education on the threat and nature of these campaigns; detecting bots, cheapfakes, deepfakes, and disinformation; implementing interventions to mitigate the spread of malicious content; and supporting media content authentication.

First, expanded public education is needed on the nature and tactics of coordinated influence campaigns. A more informed public can become conscientious consumers of social media, better able to help platforms accurately flag malicious content and slow its spread. Social media platforms should integrate education programs like the nonpartisan “News Literacy Project” to raise the public’s awareness of mis- and disinformation tactics and Microsoft’s “Spot the Deepfake Quiz” to build greater understanding of how deepfake technology works.

Second, social media platforms need to develop more public-facing communication when bots and deepfakes are detected. While these platforms routinely issue reports indicating the number of inauthentic accounts suspended and harmful posts removed, they should also publicly flag bots and deepfakes still in circulation to increase transparency and accountability. This will have the secondary benefit of educating the public on what bots and deepfakes look like.

Third, once malicious or dangerous content is identified, platforms should intervene. Content that violates community standards should be removed from public view but not deleted; it should be stored for further research in collaboration with independent researchers. This step is critical to facilitate public understanding of platforms’ role in, and effects on, democracy. For problematic content that should nevertheless stay online, such as posts from elected leaders that are of public interest, platforms should remove features for interaction (e.g., the ability to like and reshare). By keeping the content online, platforms can help correct inaccurate information and provide transparency into false or misleading narratives.

Finally, one reason malicious influence campaigns succeed is that the public cannot readily verify the veracity of the content they spread. Authenticating media is critical to enabling the public to differentiate fact from fiction. San Diego–based startup Truepic has partnered with Qualcomm to develop a feature that securely tags images and videos captured by a smartphone with date and location information. This technology is part of a larger effort led by the Content Authenticity Initiative, which includes Adobe, Twitter, and The New York Times, to develop a standard to authenticate digital imagery. Social media platforms play an important role in helping to develop and support media verification techniques, and their involvement is critical to ensuring such features are used to verify images and videos on their platforms.
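To make the underlying idea concrete, here is a minimal sketch in Python of how capture-time provenance signing can work in principle: hash the media bytes together with capture metadata and sign the result with a device-held key, so that any later edit to the image or its metadata breaks verification. This is a conceptual illustration only, using the open-source cryptography library; the function names and metadata fields are invented for this example and do not describe Truepic’s, Qualcomm’s, or the Content Authenticity Initiative’s actual implementations.

    # Conceptual sketch only; not the Truepic/Qualcomm or CAI implementation.
    import hashlib
    import json

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign_capture(image_bytes, metadata, private_key):
        # Bind the image hash and its capture metadata (date, location) into one signed record.
        record = json.dumps(
            {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
            sort_keys=True,
        ).encode()
        return private_key.sign(record)

    def verify_capture(image_bytes, metadata, signature, public_key):
        # Recompute the record and check it against the capture-time signature.
        record = json.dumps(
            {"sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
            sort_keys=True,
        ).encode()
        try:
            public_key.verify(signature, record)
            return True
        except InvalidSignature:
            return False

    # Hypothetical usage: a key held on the capture device signs at capture time;
    # anyone with the matching public key can later check the photo's provenance.
    device_key = Ed25519PrivateKey.generate()
    photo = b"raw image bytes from the camera sensor"
    meta = {"captured_at": "2020-10-20T14:32:00Z", "location": "37.87,-122.26"}
    sig = sign_capture(photo, meta, device_key)
    print(verify_capture(photo, meta, sig, device_key.public_key()))              # True
    print(verify_capture(photo + b"edited", meta, sig, device_key.public_key()))  # False

In a real deployment the signing key would presumably live in the device’s secure hardware and the signed record would travel with the file as standard metadata, but those details are beyond this sketch.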

Developing and implementing legislation is a long process that may not have the intended results. Instead, social media platforms are better positioned to implement responsible tactics now: taking a more active role in mitigating the spread of malicious content by supporting education about influence campaigns and their techniques, detecting and flagging inauthentic accounts and behavior, implementing appropriate interventions to correct mis- and disinformation, and supporting the development of secure authentication techniques to verify the provenance of media.

Brandie Nonnecke is Director of the CITRIS Policy Lab at UC Berkeley and a Fellow at Harvard’s Carr Center for Human Rights Policy. Find her on Twitter at @BNonnecke.

This piece originally appeared on Protego Press.