Some of the nation’s biggest advertisers, including AT&T (T), Verizon (VZ) and Johnson & Johnson (JNJ), are pulling their ads from Google’s display network and YouTube videos until the search giant can assure them that their precious brands will not be associated with extremist, offensive or fake content. They may be waiting a very long time.
In an exclusive interview on FOX Business, Eric Schmidt, executive chairman of Google parent Alphabet (GOOGL), seemed to brush the growing problem aside, implying that the search giant can simply tweak an algorithm here and tighten a policy there and all will be right with the world.
Not only do I think Schmidt is wrong, but in my view, this is just the tip of the iceberg. The internet is being consumed by low-quality content at such a rapid rate that aggregators like Google and Facebook (FB) will likely never catch up.
When Schmidt says, “It should be possible for computers to detect malicious, misleading and incorrect information,” he’s talking about making that sort of content harder to find in the search rankings. In my view, search isn’t the problem. Marketers aren’t pulling their ads from Google search, but from its network of millions of third-party websites and videos. That’s a whole different story.
In the digital world, display advertising is programmatic, meaning companies like Google and Facebook use software to match ads with websites and videos viewed by certain demographics that marketers specify when they place their ads. The quality of content is determined by automated scans of metadata, including title and description, for certain keywords and phrases.
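To make the mechanism concrete, here is a toy sketch of what metadata-based brand-safety screening might look like. The keyword list and the scoring are invented for illustration; real ad platforms use far richer signals than this.

```python
# Toy sketch of metadata-based brand-safety screening.
# The flagged-keyword list below is hypothetical, not any
# platform's actual policy.

FLAGGED_KEYWORDS = {"extremist", "hoax", "violence"}

def is_brand_safe(metadata: dict) -> bool:
    """Scan a page's title and description for flagged terms."""
    text = (metadata.get("title", "") + " " +
            metadata.get("description", "")).lower()
    return not any(word in text for word in FLAGGED_KEYWORDS)

page = {"title": "Top 10 hoax stories of 2017", "description": ""}
print(is_brand_safe(page))  # False: "hoax" appears in the title
```

Notice that the check never looks at the page itself, only its self-reported metadata, which is exactly why so much junk slips through.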
That’s one of the reasons why big online advertisers are so profitable. Besides the ongoing trend of ad dollars flowing from TV and print to digital media, Google and Facebook use machines to do all the work. Advances in artificial intelligence are making it possible for computers to scan images and videos, as well as text, and therein lies the rub.
Computer programs – even those the machines generate from their own neural networks that mimic the way the human brain works – are far from perfect. Content can be highly subjective and nuanced, and computers don’t do subjectivity and nuance very well. Come to think of it, neither do most people, but I digress.
With millions of websites and videos to police, an enormous amount of low-quality content still manages to squeak through, as you and I know all too well. And if the rules are too stringent, there’s a risk of legitimate content getting blocked. That’s the dilemma that Google faces. And as Schmidt was careful to point out, the problem may be far worse for his biggest competitor, Facebook.
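The dilemma shows up even in a toy filter: tighten the keyword list and you catch more bad pages, but you also snag legitimate journalism that mentions the same words. The keyword sets and headline below are made up for illustration.

```python
# Toy demonstration of the over-blocking dilemma.
# Both keyword lists are hypothetical examples.

LOOSE = {"beheading"}
STRICT = {"beheading", "attack", "shooting"}

def blocked(title: str, keywords: set) -> bool:
    """Return True if any flagged keyword appears in the title."""
    return any(k in title.lower() for k in keywords)

legit_news = "Reporters cover the aftermath of the museum attack"
print(blocked(legit_news, LOOSE))   # False: slips through the loose filter
print(blocked(legit_news, STRICT))  # True: legitimate coverage gets blocked
```

Loosen the rules and offensive content gets through; tighten them and real news gets caught. A keyword match alone can't tell the two apart.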
Sadly, most people get at least some of their news from their Facebook news feed – a misnomer if there ever was one. If you follow legitimate media sites, then some of what appears in your stream may actually be news, but the feed is mostly a mashup of posts from friends, sites you follow and some recommended by the social network. How much of that is legitimate journalism? For most of us, I’m guessing not much.
Facebook has more than 1.8 billion users, so all those feeds are aggregated by computer programs. And what determines what ends up in your feed? I’m not privy to the company’s algorithms, but I’ll take a wild guess and say it probably has something to do with the number of likes, shares and views among those of similar demographics and interests. Last time I checked, popular and factual are two different things.
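A purely engagement-driven ranker – my guess at the general shape, not Facebook's actual algorithm, with weights and posts invented for illustration – would look something like this. Note that nothing in the score measures whether a post is true.

```python
# Hypothetical engagement-only feed ranking. The weights and
# sample posts are invented; accuracy never enters the score.

posts = [
    {"headline": "Fact-checked policy analysis",
     "likes": 40, "shares": 5, "views": 900},
    {"headline": "Shocking celebrity hoax",
     "likes": 9000, "shares": 2500, "views": 80000},
]

def engagement_score(post: dict) -> float:
    """Weight shares most heavily, as a guess at what drives reach."""
    return post["likes"] + 2 * post["shares"] + 0.01 * post["views"]

feed = sorted(posts, key=engagement_score, reverse=True)
print(feed[0]["headline"])  # The hoax outranks the analysis on engagement alone
```

However the real weights are tuned, a ranker built only on likes, shares and views will surface whatever spreads fastest, factual or not.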
No wonder Verizon has taken the extraordinary step of suspending all its non-search digital advertising until the Googles and Facebooks of the world get their act together. The telecom giant can’t afford to have its brand associated with terrorist recruiting videos or made-up stories from sites that traffic entirely in clickbait.
Schmidt can try to sweep this mess under the rug, but he’s going to have a pretty hard time finding a rug big enough to hide all the internet’s crappy content.