A year ahead of the US presidential election, the world’s biggest social media companies are still failing to tackle manipulation on their platforms, an exercise by NATO StratCom has found.
To test the ability of Facebook, Twitter, YouTube and Instagram to detect potentially malicious activity, researchers at the NATO Strategic Communication Centre of Excellence ran a four-month experiment starting in May.
They purchased social media engagement on 105 different posts across the four social media platforms from manipulation service providers (MSPs), a type of company that allows clients to buy clicks and inflate their social media presence.
At a cost of just 300 euros (about $333), NATO StratCom bought 3,530 comments, 25,750 likes, 20,000 views and 5,100 followers across the four platforms.
Researchers were able to identify the accounts — 18,739 in total — that were being used to deliver the purchased interactions. This in turn allowed them to assess what other pages these inauthentic accounts were interacting with on behalf of other clients.
The results of the experiment are startling: Four weeks after the purchase, 4 in 5 of the purchased engagements were still online, and three weeks after a sample of fake accounts was reported to the companies, 95% of the accounts were still active.
The findings, contained in a report released today and shared with a small number of media outlets including BuzzFeed News, suggest that malicious and inauthentic activity enabled by MSPs often goes unnoticed. This considerably increases the risk that ill-intentioned state and non-governmental actors seeking to interfere in democratic processes will not be effectively detected and tackled.
“Social media manipulation is the new frontier for antagonists seeking to influence elections, polarise public opinion, and side-track legitimate political discussions,” the report states. The NATO Strategic Communications Centre of Excellence is a NATO-accredited international military organisation. It is not part of the NATO Command Structure.
The vast majority of interactions driven by the inauthentic accounts identified by the researchers were commercial in nature and on pages for businesses and brands. But NATO StratCom observed the same accounts engage with 721 political pages, including 52 official government profiles and the accounts of two heads of state.
Experts are concerned that trends similar to those seen in Europe are emerging in the US election. Trevor Davis, a professor at George Washington University’s Institute for Data, Democracy and Politics, told BuzzFeed News that “accounts observed during the European parliamentary elections and identified as fraudulent have now been repurposed and relocated with the purpose of the 2020 US presidential elections, and specifically the democratic primaries.”
Professor Davis added: “This appears to not be on behalf of a particular campaign. The goal may be simply to sow distrust and division.”
Interactions, such as likes, were also noted by NATO StratCom analysts on the pages of leaders from major countries, political parties in the European Parliament, and individual candidates competing at all levels in elections across Europe, as well as on political pages in Russia, Ukraine and India. The researchers also identified inauthentic engagement with accounts focussed on politics in Armenia, Georgia, Israel, Taiwan and Tunisia, suggesting the use of MSPs is a global issue.
It is not known who is behind the interactions on these accounts. The owners of the pages being boosted could be paying MSPs for engagement themselves, but it could also be driven by supporters, or even opponents trying to smear a politician or political group.
MSPs are at the heart of a growing cottage industry mostly originating in Russia, spawned to sell clicks and comments, and inflate social media engagement. Their activity is technically not illegal, and they operate openly.
The NATO StratCom report lays bare what it describes as an extensive “black market” for social media manipulation.
Researchers identified hundreds of service providers, virtually all of Russian origin. Their activities range from the use of bots for viewing videos and retweeting posts on Twitter to more elaborate accounts that require direct human involvement — and can remain online for years before they’re discovered.
As part of the experiment, researchers set up their own inauthentic accounts as well, which were used to upload content to be manipulated using MSPs. The report notes that social media platforms have become better at identifying efforts to set up fake accounts. Facebook suspended 80% of the accounts created by NATO StratCom, Twitter suspended 66%, and Instagram suspended 50%. YouTube, however, didn’t suspend any of the profiles.
The four platforms also varied significantly in how much of the purchased engagement they removed. YouTube was the only service that corrected all of the manipulated view counts, while Instagram managed to remove only 5% of the inauthentic comments and did not correct any inauthentic likes or views.
YouTube, owned by Google, was also the most expensive to buy views on, which the researchers say makes it harder to manipulate. For 10 euros (about $11) you can get 3,267 views on YouTube compared to more than 13,000 views on Instagram.
Twitter is described as the most effective platform at countering abuse of its services because it took longer for bought engagement to appear on the website.
Ultimately, the MSPs delivered on the services purchased by the NATO StratCom researchers. Despite specific differences between the social media platforms, all four scored poorly overall against the seven criteria used by the researchers.
The majority of the inauthentic accounts identified during the test were found on Instagram. But Instagram, Facebook, YouTube and Twitter all failed when it came to removing the specific accounts that had been flagged by the NATO StratCom researchers. The researchers reported 100 such accounts to each company, yet only 4.5% were removed overall: Facebook removed 12, YouTube none, and Twitter and Instagram three each.
The report concludes that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms and the threat posed by the growing manipulation industry.
Existing approaches, such as self-regulation, are not working effectively, the report claims.