Google has been charging advertisers for commercials placed on YouTube even when the video platform’s fraud detection system identifies the “viewer” as a robot and not a human being.
The discovery, revealed in an experiment conducted by European researchers, casts doubt on whether Google is doing enough to protect advertisers from fraud.
Ads falsely viewed by “bots”—computer programs that mimic the behavior of internet users—have become a major problem for advertisers as they shift an increasing share of their advertising budgets to online media.
In the experiment, the first of its kind, the researchers uploaded videos to YouTube, then bought ads on the platform to run against the videos they had uploaded themselves. Finally, they created bots and directed them to the videos. The three steps allowed them to monitor how Google’s various systems handled each fake visit to an ad.
When the researchers used bots to make 150 visits to two specific videos, YouTube’s view counter recorded only 25 of those visits as real. But AdWords, Google’s advertiser service, charged the researchers for 91 of the bot visits.
In other words, Google’s core advertising service charged the researchers for visits that YouTube itself was able to identify as fake.
The experiment was conducted by computer network experts from four research institutions: UC3M, IMDEA, NEC Labs Europe and Polito. Their findings were published in a study entitled “Understanding the Detection of View Fraud in Video Content Portals.”
Google said it would contact the researchers to discuss the findings. The company said it took “invalid traffic very seriously” and that it had invested significantly in technology and personnel to “keep it out of our systems,” adding: “The vast majority of invalid traffic is filtered and removed from our systems before advertisers are ever charged.”
Rubén Cuevas, an assistant professor at UC3M who led the research, said the bots used in the experiment were not sophisticated. “It’s not as if we found some strange vulnerability or exceptional case,” he said. “Anyone who knows a little about coding could have done the same.”
In January, the computer security company Symantec discovered a piece of malware called Tubrosa that had infected computers around the world and made repeated visits to certain YouTube videos without the owners of the infected machines noticing.
It is easy to find online services offering tens of thousands of YouTube views for prices starting at $5. The more views a video receives, the more money the person who uploaded it can earn from YouTube’s revenue-sharing scheme.
YouTube is the most popular online video platform on the planet, and is estimated to generate more than $4 billion in revenue per year.
The National Association of Advertisers in the United States, whose members invest more than $250 billion in advertising each year, and White Ops, a fraud detection company, found in a study last year that a quarter of commercials associated with online videos were “watched” by bots.
The association estimated that advertisers would lose more than $6 billion worldwide because of the use of bots in both display and video online advertising.
While YouTube is vulnerable to fake visits, Cuevas of UC3M found that the platform’s fraud detection mechanism was superior to those of rivals such as Vivendi’s Dailymotion.
Mikko Kotila, a researcher at Botlab.io, an internet research foundation, said criminals were increasingly trying to defraud advertisers via YouTube because “hiding” behind such a large and well-known platform gives them a veneer of “legitimacy.” It also gives them access to one of the fastest-growing segments of the market.
According to estimates by Magna Global, a media-buying agency, spending on advertising tied to digital video will rise by nearly 40 percent this year to about $15 billion.