
A Reddit bot is a script designed to automatically post, comment, or moderate conversations on Reddit. Bots are often used to spam users, whether that’s promoting links, selling shady products, or spreading phishing scams. Reddit bots can also be programmed to target specific threads and keywords.
According to Reddit’s Transparency Report, 410 million pieces of content were removed from the platform in 2024, representing 3.6% of total content.
It’s important to note that while the total amount of content created on Reddit has increased significantly in the last few years, the percentage of content removed due to violations has remained flat or decreased. This is striking and could suggest that bots and AI-generated content are becoming harder to detect.
While Reddit’s reports provide detailed information, its recent reports obscure the true volume of spam: they present critical data that “excludes spam,” group spam together with “other content manipulation,” or omit the actual number of spam violations and provide only a percentage.
However, in 2021, Reddit’s reports showed that 91.8% of content violations were due to content manipulation and spam, which are predominantly bot-driven activities. While many bots on Reddit are helpful, such as RemindMeBot, some are malicious, promoting spammy links and looking to scam users.
Bots on Reddit appear to be increasing and users are noticing:
Spam bots on Reddit are built to blend in, post en masse, and generate traction for scams without getting detected. They use automated scripts tuned to specific keywords to hijack trends, hold human-sounding conversations, and deceive users at scale.
Spam bots can automatically upvote, comment, and post across Reddit, targeting trending topics or specific subreddits to hijack conversations and lure real users.
Bots rack up karma — Reddit’s engagement point system — to appear trustworthy. They reshare popular posts, create their own subreddits, and upvote each other’s content to quickly build fake credibility.
Bots automatically send private messages packed with promotions, phishing links, or malware, with the goal of driving traffic, stealing data, or manipulating users.
Bots can create thousands of accounts in minutes, staging fake discussions, pushing specific narratives, or downvoting opposing views.
Modern bots build life-like Reddit profiles using AI, complete with realistic post histories and casual comments, making them harder to spot.
Some bots scrape Reddit for user comments, post histories, and engagement metrics, either to resell or to power content farms and influence campaigns.
Spam bots can interact with Reddit Ads, artificially boosting engagement metrics and misleading advertisers about campaign performance.
Read more: How to spot YouTube view bots.
We’ve seen this behavior with Instagram bots, where fake accounts auto-generate comments that could apply to almost any post. This type of commenting takes very low effort and is easy to propagate.
Generic comments like “Nice post,” “Awesome pic 🔥🔥🔥” or “Love this ❤️” attempt to fake engagement.
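As a rough sketch of how such comments could be flagged (the blocklist and function name below are illustrative assumptions, not part of any Reddit tooling), a few lines of Python can catch comments that reduce to a known low-effort phrase:

```python
# Illustrative heuristic; the blocklist is an assumption based on the
# generic comments quoted above, not an official spam list.
GENERIC_COMMENTS = {"nice post", "awesome pic", "love this", "great share"}

def is_generic_comment(text: str) -> bool:
    """Return True if a comment reduces to a known low-effort phrase."""
    # Drop emoji and punctuation, keep letters and spaces, then normalize whitespace.
    cleaned = "".join(ch for ch in text.lower() if ch.isalpha() or ch.isspace())
    return " ".join(cleaned.split()) in GENERIC_COMMENTS

print(is_generic_comment("Love this ❤️"))                   # True
print(is_generic_comment("Great write-up on rate limits"))  # False
```

In practice a real filter would need a much larger phrase list and fuzzier matching, but the idea of normalizing away emoji and comparing against known filler phrases is the core of it.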
Sometimes, the bots may also generate comments based on keywords in the post title, which appears less generic and blends in better.
This is often a strategy to farm karma and gain access to more subreddits. The behavior is also harder to flag as spammy because the bots are simply posting interesting content. Bots using this strategy tend to target large subs like r/oddlysatisfying and r/todayilearned.
Many bots will identify comments that received a lot of karma in an attempt to capitalize on their popularity. This is especially true for popular comments on news-driven topics, which tend to have a more visceral and viral nature.
Nonsensical usernames are also popular with Twitter spam bots. Since these accounts are auto-generated, they often have a random alphanumeric username or a jumble of English words that almost make sense.
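As an illustration of the username patterns described above, a simple heuristic can flag names that look machine-generated. The patterns and thresholds here are assumptions for the sake of the example, not anything Reddit publishes:

```python
import re

def looks_auto_generated(username: str) -> bool:
    """Heuristic check for common auto-generated username shapes.
    Patterns and thresholds are illustrative assumptions only."""
    # Reddit-suggested style: Word-Word-1234 (capitalized words plus digits)
    if re.fullmatch(r"[A-Z][a-z]+[-_][A-Z][a-z]+[-_]?\d{2,6}", username):
        return True
    # Long runs of random-looking lowercase alphanumerics, e.g. "xk7q9p2mz4"
    if re.fullmatch(r"[a-z0-9]{10,}", username) and sum(c.isdigit() for c in username) >= 3:
        return True
    return False

suspects = [u for u in ["Brave-Walrus-4821", "xk7q9p2mz4", "science_fan"]
            if looks_auto_generated(u)]
print(suspects)  # ['Brave-Walrus-4821', 'xk7q9p2mz4']
```

A heuristic like this produces false positives (plenty of real users accept Reddit’s suggested names), so it is best treated as one signal among several, not a verdict on its own.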
Many spam bot accounts use ChatGPT to generate their human-sounding responses, so you can use prompt engineering to uncover the underlying LLM, its instruction set, and more.
Examples of prompts to try:
The scammers may also have their ChatGPT accounts suspended. But because these Reddit bot scripts run automatically, you can end up with comical posts and comments like these:
🚨Disclaimer: This is for informational purposes only. We condemn the use of Reddit bots for spam or malicious purposes.
Here is how to create Reddit bots:
Reddit monitors IPs to determine where requests come from. To avoid getting rate-limited or IP banned, you need to:
Besides the script, bad actors need accounts to send their spam comments. Reddit blocks new users from posting freely, so spammers create accounts automatically with tools like Selenium or Puppeteer and warm them up using the karma-farming techniques mentioned earlier.
💡Special Note: Unlike on other social media apps such as Facebook and Instagram, users have very little ability to block spam on Reddit.
Unfortunately, Reddit doesn’t provide spam filters for users, so the best way to help reduce spam bots on Reddit is to report and block the accounts when you encounter them.
Block bot accounts
If you find a spam bot, you can stop the account from interacting with you using these steps:
Use Reddit’s report feature to flag all spam accounts you come across. To do this:
In addition to reporting spam accounts, use these steps to keep bot accounts out of your subreddits.
Post, comment, or subreddit-specific karma filters are a great way to block bots. Here’s how each of them would work.
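As an example, a minimal AutoModerator rule that holds posts and comments from low-karma, brand-new accounts for moderator review might look like the following. The thresholds are illustrative; tune them to your community’s norms:

```yaml
type: any
author:
    comment_karma: "< 10"
    account_age: "< 7 days"
action: filter  # hold in the modqueue for review rather than removing outright
action_reason: "Low-karma new account"
```

Setting the rule to `filter` rather than `remove` keeps false positives (genuine new users) recoverable, since a moderator still sees and can approve the held content.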
You can use Reddit’s Automoderator to block and remove posts or comments based on patterns – specific keywords, spammy domains, etc.
You can choose to flag them, which sends the posts/comments to the modqueue for review, or, you can outright block them.
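A keyword-based AutoModerator rule of this kind might look like the sketch below. The phrases are illustrative placeholders, not a vetted spam list; swap in the terms you actually see in your subreddit:

```yaml
type: any
title+body (includes): ["dm me for", "free giveaway", "limited time offer"]
action: remove
action_reason: "Matched spam keyword"
```

Changing `action: remove` to `action: filter` sends matches to the modqueue for review instead of removing them outright, which is the safer choice while you are still tuning the keyword list.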
Here are some keywords commonly used by spam bots on Reddit you should consider flagging/blocking:
Spam bots aren’t just a problem on Reddit; they’re everywhere online, inflating engagement metrics, spreading scams, and draining ad budgets. In fact, advertisers lost an estimated $84 billion to ad fraud driven by bots and bad actors in 2023.
With Fraud Blocker, you can stop these bots before they ever touch your paid campaigns. While Reddit moderates its platform, your ads need a stronger defense.
Start a 7-day free trial and see how much better your traffic looks without the bots.


