The Bot Fraud Toolbox: Part 1
In our last article, we mentioned that bots have a number of tools they can use to trick your detection service into thinking it's human – or even a lot of different humans. Let's take a closer look at the main ones to see how they work to commit fraud. Note: there are SO many different tools worth knowing about that we're going to break this topic into two parts so it doesn't get overwhelming.
The first tool is anti-fingerprinting. Device or browser fingerprinting creates a unique signature that can be used to track a device across the internet. Since a bot is really just one device running one browser, it doesn't want to be tracked by that single fingerprint. So the bot's browser uses an anti-fingerprinting setting that changes the user agent and browser features while it's running. This lets the browser appear genuine when it visits a site.
In other words, if the browser claims to be an iPhone that uses Safari, the setting will change all its values to make it look like an iPhone. The list is long and technical, but to show you how specific they get, these settings include changing the values of canvas rendering, screen resolution, audio fingerprinting, clientRects, plugins, webGL, time zone and many more. The unique combination of all these values will perfectly match values returned by real devices. And all it takes is telling the browser to “look like an iPhone”.
Fraud can also be detected at the network level using TLS fingerprinting, which helps determine whether a browser really is the browser it claims to be. But to prevent detection, bots can generate TLS fingerprints that match the browser engine they emulate.
Another tool bots use to trick you and commit fraud is an anti-fraud detection setting. Knowing how fraud detection works lets fraudsters program bots to get around it. Since these bots are basically modified browsers (see our last post, where we talk about headless browsers, for more details), the fraudster is running the detection script inside their own attack. That means they can make it return whatever values they want, so it looks like something the fraud detection will like.
This starts a vicious cycle – the detection JavaScript has to be reverse engineered to learn what it looks for, so the bot can return exactly the data it expects. Then, once the fraud detection rate drops below a certain threshold, the JavaScript gets updated and has to be reverse engineered all over again, and so on.
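The mechanics of that cycle can be sketched in a few lines. Below, a toy detection check reads telltale browser properties, and the bot – having reverse engineered the check – patches exactly those properties before the script runs. The `navigator` object here is a stand-in so the sketch runs outside a browser, and the checks are simplified assumptions, not any vendor's real logic:

```javascript
// Stand-in for the browser's navigator object, with the telltale values
// a headless browser typically exposes.
const navigator = { webdriver: true, plugins: [] };

// A (highly simplified) detection script the fraudster has reverse engineered.
function looksLikeBot(nav) {
  return nav.webdriver === true || nav.plugins.length === 0;
}

// The bot patches exactly the values the script checks: hide the
// webdriver flag behind a getter and populate a plausible plugin list.
Object.defineProperty(navigator, 'webdriver', { get: () => false });
navigator.plugins = [{ name: 'PDF Viewer' }];

console.log(looksLikeBot(navigator)); // false
```

When the defender ships a new script that checks different properties, the patch stops working, the detection rate recovers, and the fraudster starts reverse engineering again – which is the cycle described above.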
In our next installment, we’ll continue looking through the bot fraud toolbox, so make sure you subscribe to get the rest of the story!