As the 2020 U.S. presidential election draws near, memories of Russia’s interference in the 2016 election loom large. U.S. voters passed through the last election cycle largely unaware of Moscow’s extensive use of social media to spread disinformation and create divisive content, and learned of this manipulation long after they had cast their ballots. However, social media companies, policymakers, and the intelligence community headed into 2020 with much greater clarity about the threat Russian disinformation poses and the forms it can take.

Russian disinformation has not let up since the last election ended. In September, FBI Director Christopher Wray confirmed U.S. intelligence assessments that Russia is endeavoring to influence the U.S. presidential election and is “very, very active” on social media and through its state-run media and other proxies. This may sound like déjà vu — and, indeed, many of the disinformation narratives draw on themes similar to those of 2016 — but Russia is also evolving and adapting its tactics to avoid detection amid heightened awareness of its previous operations.

For a time, Russian actors had largely free rein to spread disinformation, creating fake accounts in bulk and promulgating divisive content in large volumes. Today, social media platforms and the U.S. government are much better informed about Russian tactics and have implemented a number of safeguards. Thus, to hide its hand and avoid rapid takedowns of its accounts and posts, Russia has had to evolve and experiment with new ways of spreading its narratives:

Employing real people. In 2016, Russia used trolls, bots and other proxies, primarily located in Russia itself, to spread disinformation and generate divisive content. Social media platforms have since become more adept at taking down accounts traced to Russia, so Moscow has increasingly begun employing real people, in targeted or third countries, to knowingly or unknowingly spread its narratives on its behalf. This helps Russia avoid the obvious signs of fake accounts that have led to quick account deactivations in the past. As Renee DiResta of the Stanford Internet Observatory stated, “Hiring people who are fluent in the language and culture avoids the kind of tells that can expose an operation.”

In October 2019, Facebook took down a Russia-based information operation that employed real Africans to post Russian propaganda and disinformation within African countries. The Russians even used some hacked Facebook accounts that had belonged to real people. More recently, the bogus news site Peace Data employed Ghanaian activists to write content targeting Black communities in the U.S. and hired freelance writers in the West to post articles espousing a far-left point of view. These writers did not appear to know that Peace Data was based in Russia; Facebook deactivated the site’s social media accounts in September. This approach also shows that the Russians are focusing less on spreading wholly made-up disinformation and more on sponsoring opinion articles written to generate shares, stir emotions, and cause division.

Using artificial intelligence. Previously, many Russian accounts had no profile picture or used pictures copied from the internet, which were easily traceable with a reverse image search. By contrast, the accounts set up for the people purporting to operate the Peace Data site used profile pictures generated by artificial intelligence. These images are much harder to identify as fake: they return no results in a reverse image search and require a close scan to spot the subtle signs of AI generation. Machine learning thus gives those behind Russian influence operations another way to obscure the foreign origin of their efforts. While telltale signs of AI use still exist, these indicators will likely become more subtle or disappear completely as the technology advances.

Creating fewer but more elaborate and focused accounts. Leading up to the 2016 election, Russian actors created thousands of fake accounts as they attempted to build a huge following and reach large numbers of potential voters. Because they created such a large volume of profiles, they put little work into fleshing out each account’s supposed biography and history or making it appear authentically human. Now, as awareness of how to spot these fake accounts has grown, the Russians have focused instead on a smaller number of accounts with more carefully crafted fake personas. In addition to AI-generated profile pictures, operators have put more effort into building a robust social media presence for these accounts.

Those behind the Peace Data operation created just 13 fake accounts and two Facebook pages, but they also built a presence for their fake personas across several different sites. They engaged a more targeted set of users rather than trying to spread their narratives as far and wide as possible: the operation seemed to focus on left-leaning Democratic Socialists, environmentalists and progressive Democrats, whom it hoped to persuade to break from the establishment Democratic Party. Another recently observed Russian tactic for avoiding detection is the creation of “burner” accounts, used to post a single piece of disinformation on social media and then promptly abandoned.

Despite these advances in Russian tactics since the last presidential election, Moscow’s efforts are not guaranteed to succeed. At the time of its detection, the Peace Data operation had attracted only 200 followers to its main Facebook page after four months in existence, and even some of those followers may have been fake accounts.

Overall, though, disinformation has proven a cheap and low-risk way for Russia to further some of its objectives. Even the mere awareness that Russia has engaged in election interference and could do so again aids Moscow in its objective of appearing to be — and ultimately becoming — a great power with the ability to affect global affairs. It is thus highly likely that Russia’s methods will continue to evolve and adapt beyond the current election cycle and into the future.


Kasey Stricklin is a research analyst with CNA's Adversary Analytics team, where she is a member of the Russia Studies Program. Her primary research interests are information warfare and disinformation as well as Russian naval leadership, personnel and demographics, though her work at CNA has spanned a range of other Russian military topics as well.