
Troll Farms: What Are They and How Do They Impact Elections?


Troll farms have become a symbol of the internet’s darkest underbelly. Once obscure, these operations have erupted into the limelight, most notably for their roles in major political events and elections around the globe.

However, the impact of troll farms stretches far beyond election interference. Shady social media accounts have been found infiltrating daily online interactions, swaying public opinions, and even influencing marketing strategies and business outcomes.

What are Troll Farms?

Troll farms are organized groups of human operators, automated bots, or both, used for the specific purpose of spreading disinformation.

While they are often leveraged for activities such as posting comments on social media platforms or sharing news, they can also be used to manage pages and communities on sites such as Facebook, as well as to amplify genuine organic content that suits the troll’s narrative.

Every discussion about troll farms starts with the Internet Research Agency (IRA), a Russia-based organization infamous for its role in the 2016 United States presidential election. Though the IRA is but one example, it epitomizes the tactics and objectives of troll farms, also known as troll factories, worldwide.

Troll farms like the Internet Research Agency operate by employing individuals to create fake accounts on social media, news sites, and online forums. These employees, or trolls, are then tasked with disseminating disinformation, amplifying specific narratives, and sowing discord among users.
The operations of the IRA and troll farms in other countries are sophisticated and well-funded, harnessing complex networks of bots and humans to execute large-scale misinformation campaigns.
Typically, the content spread by these troll farms is designed to be inflammatory and divisive, and usually targets vulnerable sectors of the population. One example is the Russia-backed Internet Research Agency's targeting of Christians, Black Americans, and Native Americans in the run-up to Donald Trump's successful 2016 election campaign. (MIT Technology Review article)

Here the campaign effectively exploited existing societal fractures to tip the odds firmly in Trump's favor (as well as dragging Hillary Clinton's name further through the mud).

It is a prime example of how the strategic dissemination of content can sway public opinion, manipulate political outcomes, and even erode trust in democratic institutions.

Troll factories have been found to have had a hand in many recent global political processes, including the election of Trump in 2016, the UK's Brexit vote in the same year, the promotion of President Jair Bolsonaro in Brazil, and pro-Russian propaganda during the ongoing war in Ukraine.

While the Russian trolls are perhaps the most famous, centrally managed troll accounts have also been documented in countries such as Ukraine, China, Myanmar, Nicaragua, Macedonia, Brazil, and even Poland. (Source)

How do Troll Farms work?

One of the clever tricks troll farms use is leveraging both bots and real humans to maximize their reach and impact; accounts run this way are sometimes called cyborg accounts because of their half-human, half-bot operation.

A single human operator can manage hundreds or potentially even thousands of troll accounts across multiple social media platforms. Once fake accounts have been set up, the human element is often responsible for creating and posting content, as well as engaging with real users to lend credibility to their fake personas.

The bots, on the other hand, are automated programs designed to perform repetitive tasks at a scale no human could achieve, such as liking, sharing, and commenting on posts to inflate their visibility and apparent endorsement. Many troll bots retweet and re-share content, or are dispatched to post comments (often copy-pasted) designed to inflame viewers and trick recommendation algorithms into surfacing the content across the platform.
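
To make that concrete, here is a minimal Python sketch of one detection signal implied above: flagging comment text that many distinct accounts post near-verbatim. The data format, function name, and threshold are illustrative assumptions, not any platform's real API.

from collections import defaultdict

def flag_copy_pasted_comments(comments, min_accounts=3):
    """comments: iterable of (account_id, text) pairs.
    Returns texts posted near-verbatim by at least min_accounts accounts."""
    accounts_by_text = defaultdict(set)
    for account_id, text in comments:
        # Normalize case and whitespace so trivial edits don't hide duplicates.
        normalized = " ".join(text.lower().split())
        accounts_by_text[normalized].add(account_id)
    return {t: a for t, a in accounts_by_text.items() if len(a) >= min_accounts}

sample = [
    ("acct_1", "They are LYING to you. Wake up!"),
    ("acct_2", "They are lying to you.  Wake up!"),
    ("acct_3", "They are lying to you. Wake up!"),
    ("acct_4", "Interesting article, thanks."),
]
# Three accounts posting the same line is a classic copy-paste bot signal.
print(flag_copy_pasted_comments(sample))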

This partnership between trolls and bots creates a potent force capable of manipulating online narratives. The terms fake news and hate speech both emerged and proliferated in the troll factory era – a symptom of the destabilizing effect of these organizations.

Bots amplify the trolls’ content, ensuring it reaches a wide audience quickly, while the human touch of trolls makes the disinformation campaigns more credible and effective in influencing real users.

One tragic example is the use of troll farms to spread coordinated misinformation and hatred on social media in Myanmar, which eventually contributed to genocide and forced displacement.

What are Troll Farms trying to achieve?

Troll farms tend to be heavily financially backed, which usually indicates state-level sponsorship or, at the very least, a private entity for hire by powerful actors.

As such, the goal of most troll factory setups is to spread and promote their own brand of propaganda. This can include:

  1. Political Manipulation: Many troll farms are funded by state actors seeking to disrupt foreign elections, undermine confidence in democratic institutions, and sow discord among political opponents.
  2. Social Division: By amplifying contentious issues, troll farms aim to deepen societal divisions, creating an environment of distrust and hostility.
  3. Economic Gain: Some operations are motivated by profit, using their tactics to manipulate stock prices, sabotage competitors, or promote certain products and services.
  4. Influencing Public Opinion: Beyond politics, troll farms seek to sway public opinion on various topics, from environmental policies to human rights issues, shaping the narrative in favor of their benefactors.

Shockingly, a report found that 140 million American voters per month were exposed to troll factory content in the run-up to the 2020 election. The impact was particularly felt on Facebook and Twitter, and similar content has since been famously amplified by smaller independent social media platforms such as Trump's own Truth Social.

This is also evidenced by the number of fake accounts created on Facebook: the platform detects approximately 1-2 billion fake accounts created by bots and troll farms every quarter. This data is available in Facebook's Transparency Report on Fake Accounts.

Do Troll Farms affect businesses too?

The influence of troll farms affects not just individuals and society at large; it has also crossed into the realm of business.

Primarily, the divisive content they proliferate can inadvertently associate brands with political or social controversies, damaging reputations.

For digital marketers, the manipulation of social media platforms by troll farms presents a significant challenge. The fake engagement generated by these operations can skew analytics, leading to misinformed strategies and wasted ad spend on reaching non-existent audiences. Moreover, the negative sentiment spread by trolls can result in brand damage, affecting customer loyalty and sales.

While this might not directly affect a brand's marketing efforts, the broader impact on platforms such as Facebook highlights the need to reduce exposure to fake accounts and bad actors.

Prevent fake and malicious traffic from bots

In response to the pervasive threat posed by troll farms and their close cousins, click farms, business marketers need to become adept at identifying and mitigating their influence.

Typical techniques to monitor your website traffic for trolls, spammers and other bad actors include:

  • Analytical vigilance: Pay close attention to analytics for unusual spikes in traffic or engagement from regions that do not align with your target audience (see the sketch after this list).
  • Content moderation: Implement strict moderation policies on your platforms to quickly identify and remove inflammatory or suspicious content.
  • Educate your audience: Inform your community about the signs of troll farm activity and encourage them to report suspicious behavior – especially if you operate in a high-risk sector such as government, not-for-profit, or energy.
  • Invest in security: Use anti-bot and anti-spam tools to filter out malicious traffic and protect your online presence. If using Meta ads on Facebook or Instagram make sure to use click fraud prevention software such as Fraud Blocker.
  • Verification practices: Encourage or require account verification for users engaging in discussions or leaving reviews on your platforms, reducing the impact of fake accounts.
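
As a rough illustration of the analytical vigilance point above, the following Python sketch compares each region's current traffic to a historical baseline and flags outsized spikes. The field names, 3x ratio, and minimum-volume cutoff are assumptions for the example; adapt them to whatever your analytics tool exports.

def flag_region_spikes(baseline, current, ratio=3.0, min_visits=100):
    """baseline, current: dicts mapping region code -> visit count."""
    flagged = {}
    for region, visits in current.items():
        expected = baseline.get(region, 0)
        # Flag regions with meaningful volume running several times above
        # their usual traffic; these deserve a manual look before you
        # trust the engagement numbers.
        if visits >= min_visits and visits > ratio * max(expected, 1):
            flagged[region] = {"expected": expected, "observed": visits}
    return flagged

baseline = {"US": 8000, "GB": 1200, "XX": 10}
current = {"US": 8500, "GB": 1100, "XX": 4200}
print(flag_region_spikes(baseline, current))  # flags only "XX"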

The best way to avoid bad clicks on your paid ads is by using click fraud prevention tools such as Fraud Blocker.

Facebook is one of the most popular targets for Russian troll factory accounts, as well as for bot accounts. And while Meta campaigns can be very successful, keeping an eye on your traffic quality and avoiding fraudulent clicks is key to campaign success. In fact, an estimated $15 billion was lost to fraudulent clicks on social media ads in 2023 alone.

Fraud Blocker now offers protection for Meta ad campaigns, including Instagram and Facebook. This works by tracking and blocking bad actors by their IP addresses, ensuring they can’t click on your Meta ads.
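
For illustration only, a generic IP blocklist check might look like the Python sketch below. This is not Fraud Blocker's actual implementation, and the blocked ranges are documentation addresses standing in for known fraudulent sources.

import ipaddress

# Documentation-only IP ranges standing in for known fraudulent sources;
# a real blocklist would be built from observed click fraud data.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_blocked(ip: str) -> bool:
    """Return True if the click's source IP falls inside a blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in BLOCKED_NETWORKS)

# A click from a blocklisted range would be filtered out before it can
# consume ad budget.
print(is_blocked("203.0.113.45"))  # True
print(is_blocked("192.0.2.1"))     # False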

Try Fraud Blocker for free for 7 days and find out how much you could save by blocking bot clicks.

