Here’s a scary thought: In Avira’s recent Global Elections Survey, 37% of 2,000 respondents across three regions (the US, Germany, and Hong Kong) strongly or somewhat agreed that they “get most of my political news on social media.”
Given what we’ve learned about internet-enabled meddling in the 2016 elections and the spread of misinformation and disinformation via social media, it’s sobering to realize that many of us are still drinking from this tainted firehose. How much should we worry? On one hand, an overwhelming majority of us have in fact learned our lesson about the danger of trusting whatever gunk we happen to step into online: 72% of respondents strongly or somewhat agreed that they check the credibility of news before they reshare it. On the other hand, we’re not great at vetting our sources. According to a PBS NewsHour/NPR/Marist poll, about a third of American voters worry that they can’t spot misleading stories on social media, and that such misinformation poses the biggest threat to the security of elections. (Not sure how to suss out news sources? Here are 5 tips on how to identify fake news and misinformation.)
To understand how national elections affect digital privacy, and vice versa, Avira surveyed a representative sample of citizens in three regions with elections on the horizon. In this article, we take a look at the changes several online platforms have made in response to election-related misinformation, whether via content moderation, technology changes, or ad policies.
How Twitter’s fighting back
In January 2018, Twitter announced that it would email notifications to 677,775 users in the US: the number of people who had followed one of the many accounts created by Russian election trolls. Less than two weeks later, Twitter announced that the number had more than doubled. Twitter went on to find still more automated, election-related activity originating in Russia during the election period, ultimately identifying a total of 50,258 automated, Russia-linked accounts tweeting election-related content.
Twitter was never the favorite target for those who try to tinker with elections, mind you. That dubious honor goes to Instagram, according to a report prepared for the US Senate Intelligence Committee. Still, Twitter has been an important tool in propaganda warfare. Congress has demanded answers, and Twitter has taken steps to clean up its act.
In 2018, it promised to …
- Invest further in machine learning to help detect and mitigate fake, coordinated, and automated account activity (a rough sketch of what this kind of detection can look like appears after this list).
- Limit the ability of users to perform coordinated actions across multiple accounts in TweetDeck and via the Twitter API.
- Expand its developer onboarding process to better manage the use cases of developers building on Twitter’s API, improving how it enforces policies on restricted uses of developer products, including rules on the appropriate use of bots and automation.
- Verify major party candidates for all statewide and federal elective offices, and major national party accounts, as a hedge against impersonation.
- Maintain open lines of communication to federal and state election officials to quickly escalate issues that arise.
- Address escalations of account issues with respect to violations of Twitter rules or applicable laws.
- Continually improve and apply anti-spam technology to address networks of malicious automation targeting election-related matters.
- Monitor trends and spikes in conversations relating to the 2018 elections for potential manipulation activity.
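To give a concrete flavor of what machine-assisted detection of automated account activity might look like, here is a minimal, hypothetical sketch in Python. Every feature name and threshold is invented for illustration; Twitter has not published its detection logic, and its production systems rely on far richer machine-learning signals than a handful of if-statements.

```python
# Hypothetical heuristic for scoring how bot-like an account's behavior is.
# All features and thresholds are illustrative assumptions, not Twitter's
# actual (unpublished) detection criteria.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    tweets_per_day: float    # average posting rate
    duplicate_ratio: float   # fraction of tweets identical to other accounts' (0..1)
    account_age_days: int
    followers: int
    following: int

def automation_score(a: AccountActivity) -> float:
    """Return a 0..1 score; higher suggests automated or coordinated behavior."""
    score = 0.0
    if a.tweets_per_day > 100:    # humans rarely sustain this posting rate
        score += 0.35
    if a.duplicate_ratio > 0.5:   # mostly copy-pasted content
        score += 0.35
    if a.account_age_days < 30:   # freshly created account
        score += 0.15
    if a.following > 0 and a.followers / a.following < 0.01:
        score += 0.15             # follows many accounts, followed by almost none
    return min(score, 1.0)

suspect = AccountActivity(tweets_per_day=240, duplicate_ratio=0.8,
                          account_age_days=12, followers=3, following=900)
if automation_score(suspect) > 0.7:
    print("flag account for human review")   # escalate rather than auto-suspend
```

In a real system, a score like this would feed a trained classifier and a human review queue rather than trigger automatic suspension, which matches the escalate-and-review posture described in the list above.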
That’s just a partial list. In October 2019, Twitter went on to take the bottom-line-bruising step of banning political ads, including both candidate ads and issue ads: an announcement that hit right before Facebook’s earnings call (and that came exactly two weeks after Facebook said it might well allow false or misleading political ads to run without taking them down). Twitter’s election housecleaning has also included adding context to tweets, with labels for candidates and for government and state-affiliated media accounts, and notices on tweets with manipulated media.
Since then, Pinterest has also disallowed ads on election-related content.
In mid-September—with less than 50 days to go until election day—Twitter Public Policy Director Bridget Coyne announced that in order to help users find accurate US election news and information, the platform would launch a 2020 US election hub at the top of users’ Explore tab. To be included: news from reputable sources in English and Spanish; live streams of major election events such as debates; a tool that shows candidates for US House, US Senate, and governor with an Election Label in the user’s state; localized news and resources by state; and voter education PSAs, which are already live. It also announced expanded policies to “further protect the civic conversation.”
How Facebook’s fighting back
As far as Facebook goes, a year ago, CEO Mark Zuckerberg said that the platform would “probably” allow candidates to buy ads that lie about their opponents. Facebook doesn’t fact-check such ads because it thinks that in a democracy, “people should decide what’s credible, not tech companies,” Zuckerberg told Congress. But that was then, this is now: in early September, Zuckerberg announced that Facebook would block new political and issue ads, though only during the final week of the campaign. Other measures Facebook announced include:
- Removal of posts that claim that people will get COVID-19 if they take part in voting, with a link to authoritative information about the coronavirus added to posts that might use COVID-19 to discourage voting.
- Informational labels for “content that seeks to delegitimize the outcome of the election or discuss the legitimacy of voting methods, for example, by claiming that lawful methods of voting will lead to fraud.”
- A label directing people to the official results from Reuters and the National Election Pool, to be added to premature victory proclamations.
In addition, in August, Facebook launched a hub to help users with US election information. Facebook said that the Voting Information Center connects Facebook and Instagram users to accurate, easy-to-find information about voting wherever they live and will “help them hold their elected officials accountable.”
How Google’s fighting back
As far as the search behemoth goes, researchers have noted that there are loopholes in Google’s policies that have let misleading voting- and election-related ads slip through. The Election Integrity Partnership (EIP)—a coalition of research entities focused on supporting real-time information exchange between the research community, election officials, government agencies, civil society organizations, and social media platforms—said in September that analysts had come across a campaign that apparently sought to undermine voter confidence in voting by mail.
One example of what Google’s ad policies have let through: An ad that read “MIT Election Lab says mail-in voter fraud ‘more frequent’ than…” directed users to a Washington Times article and four other Washington Times pages that seemed to misrepresent the findings of the MIT publication to which it referred. Five other similar ads appeared around the US, including in some battleground states, when users searched for terms such as “electoral fraud,” “mail-in voting,” and “voter fraud.” Some of the headlines on the ads:
- “No, voter fraud isn’t a myth: 10 cases where it’s all too real”
- “Millions of mail-in ballots went missing in 2018: Report”
- “Unraveling the problems with mail-in voting – Washington Times”
- “Donald Trump: Mail-in voting ‘corrupt’ – Washington Times”
- “Election fraud is no myth – Washington Times”
The Stanford Internet Observatory’s Daniel Bush, writing on behalf of the EIP, said that Google could fix the problem in two ways. First, it should enforce its prohibition on clickbait ads and unreliable claims in political advertising. “Since the advertisers pay for the words that appear in the ads, these words should have some reasonable connection to the underlying content they are advertising,” he said. Second, the EIP suggested that Google repeal its policy exempting media outlets from its transparency report. As it stands, an organization that classifies itself as “media” can evade reporting requirements, Bush said, calling it “a loophole that is being used to spread partisan content without accountability.”
Facebook and Twitter both have policies that mitigate advertisers’ ability to manipulate ad services in this way, he said.
Google’s also been trying to address misinformation and disinformation by muzzling its search suggestions. The company said in September that, among other changes meant to fight election tampering, it’s eliminating autocomplete suggestions that target candidates or voting. Some examples: predictions such as “you can vote by phone” or “you can’t vote by phone,” or a prediction that says “donate to” any party or candidate, won’t appear in autocomplete. That doesn’t mean you can’t search for whatever you like, of course, and still find results.
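To make that mechanism concrete, here’s a minimal, hypothetical sketch of deny-pattern filtering over autocomplete candidates. The patterns, function name, and example queries are all invented for illustration; Google hasn’t published how its suppression system actually works, and it is certainly far more sophisticated than a regex list.

```python
import re

# Illustrative stand-ins for the kinds of election-related predictions
# Google said it would suppress; the real rules are not public.
BLOCKED_PATTERNS = [
    re.compile(r"\byou can(?:'t| not)? vote by (phone|text|email)\b", re.I),
    re.compile(r"\bdonate to\b", re.I),
]

def filter_suggestions(candidates: list[str]) -> list[str]:
    """Drop autocomplete candidates that match a blocked pattern.
    Users can still type and search the full query themselves."""
    return [c for c in candidates
            if not any(p.search(c) for p in BLOCKED_PATTERNS)]

print(filter_suggestions([
    "you can vote by phone",
    "you can vote by mail in my state",
    "donate to a campaign",
]))
# -> ['you can vote by mail in my state']
```

The key design point is that the filtering happens at the suggestion layer, not the results layer: a suppressed prediction never appears, but the underlying query remains fully searchable.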
As well, like the other major platforms, Google’s been working on getting out the vote. When users type “how to vote” into Google, the search engine instantly populates the search results with location-specific instructions.
Is it enough?
The above is just a sampling of what the bigger online platforms are trying to do to strip out disinformation and misinformation in the run-up to the elections, as well as what they’re doing to try to get out the vote and provide voters with the reliable voting information they need. But will it be enough to make up for the gunk that’s saturated social media?
Carly Miller, a research analyst at the Stanford Internet Observatory who’s been tracking how different social media platforms are addressing election misinformation, said that the platforms have taken decent first steps, but time will tell if they enforce policies effectively. “The next step is to enforce these policies in a clear, transparent, and timely manner, which we have seen really makes a difference in preventing the spread of election-related misinformation,” she told Time in late September.
Time also quoted Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, which tracks misinformation. The devil’s going to be in the details, he predicted: “It will depend on how strongly and quickly they enforce these policies.”