With software robots (bots) settling in as a seemingly permanent and major share of website traffic, a Detroit-based company is out with a report that takes a different angle.
Instead of focusing on identifying and warding off bots, says “State of the Human Internet Report, 2015,” why not focus on verifying which visitors are human?
That’s the mission of Are You A Human (AYAH), Inc., a company that tracks visitors on over three million websites in nearly 200 countries and determines which ones are clearly exhibiting human behavior. Its new report quantifies the results.
It’s like a country setting up border security, CEO Ben Trenda told me. It “starts by issuing passports” so its citizens can cross, and focuses less on identifying everyone who can’t cross.
Besides, he said, bots are rapidly changing targets. The average lifespan of a bot is four to six days — and then its makers issue a different one.
With a Verified Human whitelist, websites might need fewer annoying screening procedures, which could mean doing away with captchas, extra security questions and other filters.
Not All Bad Guys
This orientation toward humans is certainly an optimistic strategy, in light of the breakdown in the AYAH report.
Only 42 percent of 3.2 billion impressions from more than 600 million devices could be verified as coming from our species.
The remaining 58 percent could not be verified. They could all be bots, although at least some may be humans whose behavior simply wasn't distinctive enough to verify.
And the ones that are bots may not all be bad guys.
Cofounder and COO Reid Tatoris pointed out that some of those bots are Google spiders helping to populate its search engine, retailers' pricing bots checking competitors' price tags, or other benign software agents. While Google's spider identifies itself to sites, not all good bots do.
Malicious bots, of course, include those carrying out attacks, posting fake reviews, fraudulently loading pages to get ad impressions, automating content piracy and conducting other nefarious occupations.
Tatoris said his company doesn’t break down the non-human total because its assignment is to tell sites which visitors are, you know, decaying biological entities.
But among the bots it did identify within that 58 percent, the split was nearly even: about 48 percent were considered helpful to human experiences and about 52 percent were thought to be harmful.
The report also points out that this 42-58 percent Verified Human-to-other split is not consistent across sites. Only about 30 percent of traffic on dating sites, for instance, is certifiably human — a statistic that many singles looking for companions on those sites might not find surprising.
Education sites show relatively high Verified Human Activity (VHA), at 64 percent, while job search sites come in very low, at 6.2 percent. Dot-coms average a VHA of 53 percent, .gov sites show a whopping 71 percent and .info sites exhibit only 3 percent.
Blacklists Not Much Help
Traffic also varies by U.S. state. Oregon and Virginia have abnormally low levels of verified impressions, with only 23.8 percent of Oregon's pegged as definitely human and a mere 28.6 percent of Virginia's. Most other states have Verified Humans in the 40 to 54 percent range.
Not coincidentally, the report notes, Oregon and Virginia are home to Amazon Web Services' hosting facilities. That major cloud platform, it points out, is used by some companies to host "programs for malicious purposes."
The report also shoots down the idea that bot blacklists help much. For instance, the Interactive Advertising Bureau (IAB) maintains a list of known spiders and bots — but only about 0.14 percent of the non-verified impressions were on that list.
AYAH detects humans through a variety of techniques, including device and OS types, patterns of mouse movements and clicks, and non-duplicating behavior across sites.
Visitors are tracked between sites via cookies or device fingerprinting. Tatoris noted that a significant portion of bots mimic some forms of human behavior, such as one bot that follows a hexagonal pattern as it browses and clicks.
This is supposed to simulate random, human-like movement, he indicated, but when that visitor displays the same hexagonal pattern across a variety of sites, its cover is blown.
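To make that cross-site check concrete, here is a minimal sketch in Python of how a repeating movement signature, such as that hexagonal path, might give a bot away. The function names, the turn-angle signature, the similarity threshold and the example data are all assumptions for illustration; nothing here reflects AYAH's actual system.

# Hypothetical sketch only: the signature scheme, threshold and example data
# are invented for illustration and do not reflect AYAH's actual system.
import math
from collections import Counter

def turn_angle_signature(points, bucket_deg=15):
    """Summarize a mouse path as a histogram of quantized turn angles."""
    sig = Counter()
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        heading_in = math.atan2(y1 - y0, x1 - x0)
        heading_out = math.atan2(y2 - y1, x2 - x1)
        turn = math.degrees(heading_out - heading_in) % 360
        sig[round(turn / bucket_deg) * bucket_deg % 360] += 1
    return sig

def similarity(sig_a, sig_b):
    """Cosine similarity between two turn-angle histograms (1.0 = identical shape)."""
    keys = set(sig_a) | set(sig_b)
    dot = sum(sig_a[k] * sig_b[k] for k in keys)
    norm = math.sqrt(sum(v * v for v in sig_a.values())) * \
           math.sqrt(sum(v * v for v in sig_b.values()))
    return dot / norm if norm else 0.0

def looks_scripted(paths_by_site, threshold=0.98):
    """Flag a visitor whose movement signature repeats almost exactly across sites.
    Human paths differ from page to page; a bot replaying the same routine
    (like the hexagonal walk described above) looks the same everywhere."""
    sigs = [turn_angle_signature(path) for path in paths_by_site.values()]
    pairs = [(a, b) for i, a in enumerate(sigs) for b in sigs[i + 1:]]
    return bool(pairs) and all(similarity(a, b) >= threshold for a, b in pairs)

if __name__ == "__main__":
    # A bot tracing the same hexagon on every site: every turn is roughly 60 degrees.
    hexagon = [(round(100 * math.cos(math.radians(60 * i))),
                round(100 * math.sin(math.radians(60 * i)))) for i in range(13)]
    print(looks_scripted({"site-a.example": hexagon,
                          "site-b.example": hexagon}))  # True: its cover is blown

In practice a verifier would fold in many more of the signals the company mentions, such as device and OS types, click patterns and cookie or fingerprint continuity, but the core idea is the same: humans are inconsistent, while replayed scripts are not.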
I asked if bots could become so adept at mimicking human behavior that they could eventually pass as Verified Humans.
It’s very difficult to do that across sites, Tatoris pointed out, since “most bots are written to do one thing specifically” — and their makers don’t have much incentive to make them more human-like across sites.
But what if AYAH and other human detectors become the norm? Wouldn’t bot-makers then have an incentive?
“The more successful we are,” Trenda admitted, “the greater that incentive will be.”