Fake news is big business. Ad tech companies, like a start-up called Cheq, are trying to make it a less lucrative enterprise.
Cheq is led by Guy Tytunovich, a former member of the Israel Defense Forces' Unit 8200, which deals with military cybersecurity. Drawing on the cybersecurity and natural-language-processing knowledge he picked up in his past life, the CEO wants to keep advertisers from appearing alongside harmful content, like fake news.
In the lead-up to the 2020 presidential election, Cheq is using artificial intelligence to try to identify fake news and make sure brands and agencies don’t place ads on it. The start-up, founded in 2017, made the 2019 CNBC Upstart 100 list announced on Nov. 12.
The fake news economy gained notoriety in 2016 after fabricated stories spread on Facebook in an attempt to influence the presidential election. It made headlines again recently when presidential candidate Elizabeth Warren sparred with Facebook over ads containing misleading information.
Though the problem is well known, its exact scale is hard to measure. The Global Disinformation Index, formed about a year ago, has taken a stab at the question. It recently released a study that sampled 20,000 domains known to publish disinformation, analyzed their advertising and traffic, and estimated that those sites bring in $235 million a year. The organization seeks to provide advertisers with risk ratings for online news domains.
“Of course, there are far more domains that disinform,” Global Disinformation Index co-founder Clare Melford said. “It’s a very rough number … a massive underestimate.”
Tytunovich, along with co-founders Chairman Ehud Levy and Chief Technology Officer Asaf Butovsky, founded Cheq in July 2017 to protect advertisers from “everything that’s bad in the digital advertising ecosystem.” That includes identifying fake news spread by bots and fake users, and shielding brands from potentially damaging content.
Brands might elect to avoid the obvious red flags, like pornography, hate speech or graphic violence, but may also choose to steer clear of news articles that could conflict with their image. For instance, a burger brand might consider a news article about obesity “unsafe” for its ads, Cheq says.
A brand watchdog
Cheq is based in Tel Aviv and has 60 employees, 25 of whom are engineers with defense or cybersecurity backgrounds, the company said. Cheq claims to be working with some of the world’s biggest advertisers and ad agencies but said it could only name Dentsu‘s Cyber Communications, which it is working with in Japan. The company announced a $5 million Series A round of funding in June 2018.
“I may be the only person in New York who has enjoyed the Trump effect,” Tytunovich said. “I’m of course saying it tongue-in-cheek and jokingly,” he quickly adds.
Cheq’s technology tries to digest a news article the way a human brain would, examining aspects such as whether the piece is written at a high level. “Obviously, anything we can recognize as [user-generated content] is less reliable,” Tytunovich said.

The company claims its technology can also discern whether content is posted on a social network or a different kind of site, and gauge a site’s reputation. It will also corroborate news against various sources to help validate the information. The company said its model was trained on millions of pieces of content across 14 languages.
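Cheq has not published how its model works, but the signals described above can be illustrated with a minimal sketch. The feature names, weights and scoring logic here are assumptions for illustration only, not the company's actual system.

```python
# Illustrative only: combine the trust signals mentioned in the article
# (writing quality, user-generated content, site reputation, cross-source
# corroboration) into a single score. Weights are invented for the example.

def score_article(features: dict) -> float:
    """Return a 0-1 trust score (higher = more trustworthy)."""
    score = 0.5                                        # neutral starting point
    if features.get("is_user_generated"):              # UGC treated as less reliable
        score -= 0.2
    if features.get("reputable_domain"):               # site reputation signal
        score += 0.2
    if features.get("corroborated_sources", 0) >= 2:   # validated against other sources
        score += 0.2
    if features.get("poor_writing_quality"):           # low-quality writing flags risk
        score -= 0.2
    return round(max(0.0, min(1.0, score)), 2)         # clamp to [0, 1]

article = {
    "is_user_generated": False,
    "reputable_domain": True,
    "corroborated_sources": 3,
    "poor_writing_quality": False,
}
print(score_article(article))  # 0.9
```

A production system would of course derive such signals from trained language models rather than hand-set weights; the sketch only shows how individual signals might roll up into a page-level verdict.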
Cheq’s technology also examines how that fake news is proliferated. This can happen when bots are sharing given content, Tytunovich said.
“When we are able to see that something has been distributed massively or retweeted or reposted by a user that is a bot, obviously it raises red flags,” he said.
Cheq tries to identify this content and who’s distributing it anywhere it is, whether it’s on a platform like Facebook, Google or Twitter, or on another publisher. When the company’s technology has flagged a piece of content, it automatically prevents its clients from appearing as advertisers on that page.
If the content merely trips a brand’s safety filters, Cheq blocks the placement and it ends there. If the content is believed to be fake news or something more sinister, the company alerts the publisher or platform. Tytunovich said that if there seems to be a criminal aspect, like a fraudulent or bot attack on a certain publisher, the company will provide data and technological help to combat the issue, and in some cases will also talk to law enforcement.
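The tiered response described above can be sketched as a simple decision flow. The category labels and action names are hypothetical, chosen to mirror the article's description, and are not Cheq's actual API.

```python
# Illustrative sketch of the escalation flow: every flagged page is blocked;
# fake news additionally triggers a publisher/platform alert; suspected
# criminal activity (fraud, bot attacks) also involves law enforcement.

def handle_flagged_page(url: str, category: str) -> list:
    """Return the list of actions taken for a flagged page."""
    actions = ["block_ad_placement"]              # all flagged pages lose ads
    if category in ("fake_news", "criminal"):
        actions.append("alert_publisher")         # notify the publisher/platform
    if category == "criminal":
        actions.append("notify_law_enforcement")  # fraud or bot attacks
    return actions

print(handle_flagged_page("http://example-disinfo.site/story", "brand_safety"))
# ['block_ad_placement']
print(handle_flagged_page("http://example-disinfo.site/story", "criminal"))
# ['block_ad_placement', 'alert_publisher', 'notify_law_enforcement']
```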
Is automation enough?
There’s some hesitation in the advertising industry about using artificial intelligence alone to discern the safety of a given page or site.
GDI’s Melford said commercial companies operating in this space face a few difficulties. When companies claim to use AI to assess a domain, she said, they likely can’t bring the kind of context a human reviewer can. For example, if a site covers a region in the midst of a coup but doesn’t mention the coup at all, the site could be deemed “disinforming” even though it is leaving out information rather than publishing false information.
“You can actually disinform with no information at all,” she said. “If you’re not covering a coup, how is an algorithm going to know?” She added, “Any commercial entity is going to struggle to be totally free from conflicts of interest in this space.”
Josh Braun, an associate professor of journalism at the University of Massachusetts who has studied the business of fake news, said other verification services and brand-safety vendors in the ad tech market typically use a mixture of human review and simpler algorithmic filters. Companies like Moat, DoubleVerify, IAS and TrustMetrics offer different takes on this, he said.
But is it fair to charge brands for this kind of service when some would argue ad tech partners shouldn’t place ads on questionable content in the first place?
“If we had a working ad tech ecosystem, then this kind of brand safety would be a baseline assurance and not a premium product,” Braun said.
And if the platforms were pressured to take more responsibility for the content within their own walls, fraudsters would have a harder time, Braun said. At this point, Google’s business model involves minimizing effort in these areas while maximizing profit margins, he said.
“If they were forced into a more responsible business model through regulation, they could afford to do a lot more,” Braun said. That might lead them to be less profitable and less appealing to shareholders, but they could hire more human moderators and “act more responsibly,” he said.
Cheq said it uses human quality testing to make sure its AI is accurately making decisions and identifying concepts. The company gives clients full lists of the URLs its algorithms have blocked so they can verify how the system is performing.
Looking into 2020 and beyond
Tytunovich predicts that big brands are going to shy away from news content in the coming years. He said he hopes the company’s technology can make news content a better bet for big brands by making it safer and thus keep dollars flowing to good journalism.
“When you think about it at the global scale, this is very, very, very scary,” he said. “We’re not just talking about news publishers and their livelihoods. I’m talking about the fact that we need to support good, impartial, real journalism.”
Looking ahead to the 2020 election and beyond, Tytunovich believes it will be even harder to discern real from fake.
“I think currently it’s very easy to fact-check the different potentially deep-fake videos that might arise,” he said. “My worry with deep-faking is actually as technology evolves. … There is going to come a point in time … where deep-faking technology is so advanced that even technology will have a very tough time in understanding that it’s not legit.”
Things are going to get even weirder, he added: “We’re going to see a lot of things that even Hollywood couldn’t think of 20 years or 30 years ago.”