What is Bot Traffic?
When most people hear the term “bot traffic,” they think of cybercrime or fake social media views. These are both common uses for bot traffic, but bots themselves are not inherently bad. Depending on the purpose of the bots, they can be beneficial for your site and your user experience, or they can be completely neutral. Bots are just programmed scripts that do the job they were made to do.
What is bot traffic?
Bot traffic is any non-human traffic to websites and apps. Some bots are designed for good, and some for evil.
Some bots are just there to collect essential data so that search engines can provide better results. Other bots can be malicious and disruptive to your analytics. Over 40 percent of all internet traffic in 2020 was attributed to bots, good and bad, so it’s essential to know who is actually on your site and take action to prevent bad bot mischief.
Chart source: Imperva, Bad Bot Report, 2021
What are good bots?
Good bots legitimately gather information to improve the user experience. One of the best bots you can have on your site is a Google crawler. Crawlers catalog and index your web pages so Google can serve up your content in the search results. Without Google crawlers, the internet would be a vastly different place than it is today.
Some bots are used for website monitoring. They monitor loading times, downtimes, and other metrics that provide a health assessment of the site and tell relevant parties where there are bottlenecks and other issues that can be addressed. There are also aggregation bots that gather information from multiple locations to collate in one place.
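To make the monitoring idea concrete, here is a minimal Python sketch of how a monitoring bot might summarize a series of health probes. The status codes, field names, and the two-second "slow" threshold are illustrative assumptions, not any particular vendor's API:

```python
def summarize_checks(checks, slow_threshold_s=2.0):
    """Summarize (status_code, load_time_s) probes the way a monitoring bot might.

    A probe whose status code is None means the site was unreachable (downtime).
    Thresholds and field names are illustrative, not industry standards.
    """
    up = [(code, t) for code, t in checks if code is not None and 200 <= code < 400]
    uptime_pct = 100.0 * len(up) / len(checks)
    avg_load_s = sum(t for _, t in up) / len(up) if up else None
    return {
        "uptime_pct": uptime_pct,                          # share of successful probes
        "avg_load_s": avg_load_s,                          # mean load time while up
        "slow": any(t > slow_threshold_s for _, t in up),  # any bottleneck probes?
    }
```

A real monitoring bot would feed this from scheduled HTTP requests and alert the relevant parties when uptime drops or load times spike.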
Scraping bots can be good or bad, depending on the intended purpose of the scraping. "Scraping" means collecting information from a website, often contact information. When done legally, the information can be used for research and other wholesome activities, but it's often lifted illegally.
What are bad bots?
Malicious bots usually exist to make someone money, whether that's by stealing your information or taking down your site. Spambots will fill your comment sections with ad comments or scrape contact info for phishing email attacks. If they manage to breach your systems, they may execute a ransomware attack, holding your data hostage for payment.
Ad fraud is one of the most prevalent bot attacks, clicking your ads and driving up your PPC costs. Ad fraud is typically executed or commissioned by competitors who want to disable your campaigns or by fraudulent ad companies looking for a higher payout. In either case, you’re paying big bucks for zero conversion.
One of the worst bot attacks you can get, other than ransomware, is a distributed denial-of-service (DDoS) attack. In a DDoS attack, a network of compromised machines (a botnet) overwhelms your server or network with requests. The goal is to render your website or service inoperable. DDoS attacks are carried out by competitors, disgruntled employees, governments, activists, and hackers, sometimes just for fun.
Good bots vs bad bots [Infographic]
How do you detect bot traffic?
Fortunately, there are a few indicators for bot traffic. The most reliable way is for experienced engineers to monitor network requests. Short of that, an analytics tool can help detect surges of non-human visitors. If you’re watching your analytics for bot traffic indicators, you can look for:
- Unusually high page views and bounce rates
- Inexplicable changes in session duration
- Phony conversions like forms filled with fake info
- Spikes in failed login attempts and validations
- Traffic spikes from unexpected locations
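To show how these indicators might combine in practice, here is a simple Python sketch that scores a single analytics session against a few of them. The field names and thresholds are assumptions for illustration; real detection tools weigh many more signals:

```python
def looks_like_bot(session):
    """Count how many common bot-traffic indicators one session trips.

    `session` is a dict of analytics fields; names and thresholds here
    are illustrative assumptions, not a real analytics schema.
    """
    signals = 0
    if session.get("pages_viewed", 0) / max(session.get("duration_s", 1), 1) > 2:
        signals += 1                       # inhumanly fast page views
    if session.get("duration_s", 60) < 2:
        signals += 1                       # near-zero session duration
    if session.get("failed_logins", 0) > 5:
        signals += 1                       # credential-stuffing pattern
    if session.get("form_filled_in_s", 999) < 3:
        signals += 1                       # form filled faster than a human types
    return signals >= 2                    # demand two signals to limit false positives
```

Requiring two independent signals before flagging a session is one way to avoid punishing unusual but genuine visitors.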
How do I block bad bots?
The problem with blocking bots is that you have to figure out how to stop bad bots while allowing good bots. If all bot traffic were blocked, you wouldn’t be able to rank on Google, collect analytics, or maintain the health of your site.
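One concrete way to let good bots through is Google's documented reverse-then-forward DNS check for verifying a genuine Googlebot. The Python sketch below takes the resolver functions as parameters so it can be tested without network access; by default it would use the standard `socket` module:

```python
import socket

def is_genuine_googlebot(ip,
                         reverse_dns=lambda ip: socket.gethostbyaddr(ip)[0],
                         forward_dns=lambda host: socket.gethostbyname(host)):
    """Verify a crawler claiming to be Googlebot via reverse-then-forward DNS.

    The resolver callables are injectable for testing; in production the
    socket-based defaults perform real DNS lookups.
    """
    try:
        host = reverse_dns(ip)             # PTR lookup on the crawler's IP
    except OSError:
        return False
    if not host.endswith((".googlebot.com", ".google.com")):
        return False                       # hostname isn't in Google's crawler domains
    try:
        return forward_dns(host) == ip     # forward-confirm to defeat spoofed PTR records
    except OSError:
        return False
```

The forward-confirmation step matters: anyone can set a reverse DNS record that claims to be Googlebot, but only Google controls the forward records for googlebot.com.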
You can weed out less advanced attackers by putting CAPTCHAs (completely automated public Turing test to tell computers and humans apart) on outdated user agents and browsers and blocking known hosting providers and proxy services. You should also be protecting your secondary access points like exposed APIs and mobile apps, sharing blocking information between systems when possible.
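As a rough sketch of that first line of defense, the Python function below decides whether to allow, challenge, or block a request based on its user agent and IP address. The agent signatures and the IP range (a reserved documentation range) are placeholders; in practice you would maintain these lists from an up-to-date threat feed:

```python
import ipaddress

# Placeholder lists for illustration only -- real deployments pull these
# from maintained threat intelligence feeds.
OUTDATED_AGENTS = ("MSIE 6", "MSIE 7", "PhantomJS")        # long-retired browsers / headless tools
HOSTING_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # example documentation range

def triage_request(user_agent, ip):
    """Return 'captcha', 'block', or 'allow' for an incoming request."""
    if any(sig in user_agent for sig in OUTDATED_AGENTS):
        return "captcha"                   # challenge outdated agents rather than hard-block
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in HOSTING_RANGES):
        return "block"                     # hosting/proxy ranges rarely carry real visitors
    return "allow"
```

Challenging with a CAPTCHA instead of hard-blocking gives any real human on an old browser a way through, while still stopping less advanced scripts.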
Advanced attackers won’t be put off by most of the measures you can take on your own to tighten your web security. The volume and sophistication of cybercrime are increasing daily. New bots are programmed to closely mimic human behavior and slip right past traditional security tools. Fraud Blocker’s proprietary algorithms detect malicious behaviors and automatically block bad traffic sources to keep your site and data secure. Get started with our 7-day free trial.