Recently, there has been growing media coverage of Russian advertising (and influence) during the run-up to the 2016 election. Some of this was driven by fake users (bots) on Facebook. While I'm thinking about writing a longer post about the election influence operation and online marketing in a broader context, one thing jumped out at me recently that I think is worth highlighting, as it's somewhat misrepresented in the media.
In this USA Today article, the author wants Facebook to "Guarantee that bots will no longer be able to impersonate humans on the platform."
This fundamentally misunderstands the problem with "bots". It's not like Facebook ever sat down and invited bots onto the platform. Bots are carefully crafted bits of code, scripted to mimic humans going through typical online activities. Bots are not obviously bots.
On one of the web sites I run, bots have submitted inquiry forms thousands of times over the years (Dear Salesforce.com: Web-to-Lead/Case Spam Sucks). Usually, the submission is some variety of spam. Even an inquiry form on a site in a niche industry can be a target for this type of activity. But what was actually interesting, in a way, was watching the form bots evolve. Even when you make the form more difficult for a bot to fill out, the bot keeps probing the parameters and requirements until you see it coming through again.
Initially, my best defense against the form spam bots was to look for aspects of a submission that seemed non-human and filter against those. But eventually you reach a point where, if the bot fills out the form the way a human would, you can't tell the difference between an automated form engine and a human.
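To make the "filter on non-human aspects" idea concrete, here is a minimal sketch of the kind of heuristics involved. The field names, thresholds, and checks are illustrative assumptions, not the actual filters used on my site:

```python
def looks_like_bot(form_data: dict, render_time: float, submit_time: float) -> bool:
    """Return True if a form submission trips simple non-human heuristics."""
    # Honeypot: a field hidden via CSS that real users never see or fill in.
    # "website_url" is a hypothetical hidden field name.
    if form_data.get("website_url"):
        return True
    # Timing: humans rarely complete a multi-field form in under 2 seconds.
    if submit_time - render_time < 2.0:
        return True
    # Content: raw URLs in a free-text field are a common spam signature.
    comments = form_data.get("comments", "")
    if "http://" in comments or "https://" in comments:
        return True
    return False
```

Each of these checks only works until the bot adapts to it, which is exactly the arms race described above.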
As you'll note in my spam post, at the time Salesforce.com recommended that I install a CAPTCHA, one of those image-recognition challenge tools, on the form. You know the tests: sometimes they're difficult to solve even as a human. Sure, they raise the barrier for bot traffic, but they also raise a significant barrier to user engagement. Imagine if every time you wanted to post something on Facebook, you had to face a CAPTCHA test.
And this is the fundamental problem with the "don't allow bots on your platform" argument. It's just BS, promoted by people with a very simplistic view of the problem.