Bots, trolls, cyborgs…it’s a brave new world for democratic politics and social discourse. Social media is the new battleground for ideologies and ideas. Even if you don’t have a Twitter account, you’ve almost certainly come across news stories that were amplified, or even created, on social media. Many of them are misleading, or worse.
My colleagues and I at New Knowledge work to protect brands and the larger community from disinformation and data attacks. As part of our work, we have been studying influence operations around a number of elections, in the U.S. and abroad, for the past two years. We’ve watched tactics emerge and evolve in eight different countries. Here’s what we’ve learned, and what we expect to see in the future.
Who’s behind the operations?
We’ve been hearing a lot about Russia these days. Intelligence experts warn that Russia has returned to “active measures” status, using the same information-warfare tactics it employed during the Cold War, and the internet and social media now make those operations both easier and cheaper. But it’s not just the Russians. Some of the disinformation we encounter is state-sponsored, some is citizen-generated. Some is created by citizens and boosted by a state actor. Some of it is ideologically motivated. Some is “for the lulz” (i.e., a prank). All of it is dangerous.
What’s the purpose of these operations?
Sometimes these operations aim to get a particular candidate elected to a particular office, but many of the influence operations we’ve seen are aimed primarily at sowing domestic discord and distrust of democratic institutions. This sort of manipulation is increasingly recognized in press coverage, especially coverage of what happened during the 2016 U.S. presidential election, but it is very common elsewhere as well.
One network we tracked in Latin America this summer amplified anti-government sentiment and pro-revolution activity — including advocating violence against states — in at least five different countries. Similarly, the Russian operations directed against the U.S. in 2016 focused more on socially divisive issues — race relations, gun rights, police brutality, LGBTQ+ rights, etc. — than on particular candidates. By encouraging us to turn our focus inward, malicious state actors can minimize the attention Americans pay to international issues. That may give those actors more latitude to pursue their policy objectives, even when those objectives threaten us or our allies.
Disinformation is more than “fake news”.
Falsehoods make up only a small portion of internet propaganda. Propaganda is primarily about manipulating behavior, and if a propagandist can use cherry-picked facts to do so, their job is much easier. Nazi propagandist Joseph Goebbels stressed that party dispatches must always be as factually accurate as possible. We call this weaponized truth: using partial truths, half-truths, and decontextualized truth to manipulate people’s behavior or turn our gaze away from other truths that require our attention. Social media makes it easier than ever to share truth stripped of its context, and to keep our attention focused on certain truths at the expense of others.
Multi-platform campaigns are growing more common.
The propaganda problem is bigger than any one platform. Sometimes memes that circulate on 4chan get picked up on Reddit and then make their way to Twitter and Facebook. The more sophisticated actors, like Russia’s Internet Research Agency, conduct multi-platform campaigns, with messages tailored to specific audiences in specific online spaces. Countering, and even observing, those operations requires a much broader approach than bot-scanning, fact-checking, deploying new artificial-intelligence tools, or tightening each platform’s content-moderation standards (though all of those things are important).
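To make “observing cross-platform operations” a bit more concrete, here is a minimal, hypothetical sketch in Python — not a description of New Knowledge’s or any platform’s actual tooling. Given posts collected from several platforms, it flags near-identical messaging that appears in more than one place, the kind of signal an analyst might then investigate. The platform names, sample posts, and fingerprinting scheme are all illustrative assumptions.

```python
# Hypothetical sketch: flag text that appears, near-verbatim, on multiple platforms.
# Platform names and sample posts are illustrative, not real data.
from collections import defaultdict
import hashlib
import re

def fingerprint(text: str) -> str:
    """Normalize a post (lowercase, drop URLs and punctuation) and hash it."""
    text = re.sub(r"https?://\S+", "", text.lower())
    text = re.sub(r"[^a-z0-9 ]+", "", text)
    return hashlib.sha256(" ".join(text.split()).encode("utf-8")).hexdigest()

def cross_platform_clusters(posts):
    """posts: iterable of (platform, text) pairs.
    Returns {fingerprint: platforms} for content seen on two or more platforms."""
    seen = defaultdict(set)
    for platform, text in posts:
        seen[fingerprint(text)].add(platform)
    return {fp: p for fp, p in seen.items() if len(p) >= 2}

if __name__ == "__main__":
    sample = [
        ("4chan",   "They don't want you to see THIS: http://example.com/meme"),
        ("reddit",  "They don't want you to see THIS"),
        ("twitter", "An unrelated post about the weather"),
    ]
    for fp, platforms in cross_platform_clusters(sample).items():
        print(fp[:12], sorted(platforms))  # fingerprint of the shared message and where it appeared
```

Real campaigns, of course, reuse themes and images more than exact wording, so serious analysis relies on far richer similarity signals; the point of the sketch is simply that the comparison has to happen across platforms, not within one.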
“Dark” advertising is increasingly targeted at all of us.
The ability to algorithmically target social media ads to specific groups of people means that political ads can be run with little to no public oversight. Many are designed not to look like political ads at all. Because they are shown only to narrow audiences rather than broadcast widely, they are hard to detect (hence the name “dark”).
Many such ads would fall afoul of governments’ campaign laws if regulators were aware of them. Ireland’s constitutional referendum this year on the legalization of abortion was a good example: the overall volume of social media content was low, and most of the disinformation came in the form of targeted advertisements, many of them paid for by organizations outside Ireland. We’ve seen these tactics before, but if Ireland tells us anything, it’s that dark political advertising is playing a growing role despite increased attention from some platforms and governments.
It’s a digital arms race.
This isn’t an easy problem to solve, but it’s incredibly important. It’s not just a “fake news” issue, or a “truth in messaging” issue, that can be combated with fact-checking and education. It’s not just a social media issue, to be combated by retreating from certain online spaces. This is a national security issue, a cybersecurity issue, and a First Amendment issue. The foundation of our democracy — the integrity of information flowing freely via personal speech and the press — is being attacked, and with it the integrity of the electoral process. As New Knowledge’s Director of Research, Renee DiResta, testified to the Senate Intelligence Committee, this is “one of the defining threats of our generation.”
The internet has brought this panoply of threats to all of our doors. These are cross-platform, multi-industry threats, and the solutions will be cross-platform and collaborative. They will include new tech practices, expanded intelligence capabilities, and modernized legislation. Just as after 9/11 U.S. government agencies expanded their collection and sharing of counterterrorism intelligence, we need to do the same for foreign influence operations. At the same time, users need to be slower to share conspiratorial content, and quicker to boost the corrections and the fact-checkers’ responses. It can’t go viral if we don’t click “share.” And like government agencies, tech platforms need to seriously think about sharing information with each other, with the government, or with “white-hat hackers” — researchers who can spot cross-platform influence operations, and share their insights with the appropriate authorities.
In an encouraging development, some platforms, including Facebook, have begun forming partnerships with researchers or legislators, and we’ll need to accelerate such efforts going forward. We will need ethical tech companies, indefatigable researchers, informed lawmakers and intelligence officers, and educated users. It will take all of us working together to solve the propaganda problem. It won’t get solved any time soon, but we can make significant progress improving the integrity of our information ecosystem, protecting our brands, and safeguarding our democracy.
Kris Shaffer is Senior Computational Disinformation Analyst at New Knowledge, a cybersecurity company specializing in disinformation defense.
We will continue the discussion about technology’s impact on democracy at Techonomy 2018, this November, in Half Moon Bay, California.