“Fake news” — news content that either misleads people with half-truths, or outright lies — has become a permanent fixture of the internet. Now, as tech and media platforms continue to search for the best way to combat it, Factmata — a London startup backed by Biz Stone, Craig Newmark, Mark Cuban, Mark Pincus and more to build a platform to detect when false information is shared online — is announcing a brand-new investor and partnership that will see it expanding its scope.
The company is picking up an investment from eyeo, the company behind Adblock Plus, and as part of it, Factmata is taking over the running of Trusted News, the Chrome extension that eyeo launched last year to give those browsing content on the web a nudge indicating whether a story is legitimate or not.
Dhruv Ghulati, the CEO of Factmata — who co-founded the company with Sebastian Riedel and Andreas Vlachos (Riedel’s other fake-news-fighting startup, Bloomsbury AI, was acquired by Facebook last year) — said that the financial terms of the deal were not being disclosed. He added that “eyeo invested both cash and the asset” and that “it’s a significant amount that strategically helps us speed development.” He pointed out that Factmata has yet to raise money from any VCs.
Trusted News today — you can see how it looks in the screenshot above — has “tens of thousands” of users, Ghulati said, and the aim will be to continue developing the product and taking those numbers to the next level, to hundreds of thousands of users. The plan will be to build extensions for other browsers — “You can imagine a number of platforms across browsers (e.g. Brave), search engines (e.g. Mozilla), hosting companies (e.g. Cloudflare) could be interested but we haven’t engaged in discussions yet,” he said — as well as to extend what Trusted News itself provides.
“The goal… is to make it a lot more interactive where users can get involved in the process of rating articles,” he said. “We found that young people especially surprisingly really want to get involved in debating how an article is written with others and engaging in rating systems, rather than just being handed a rating to trust.”
Ghulati said that eyeo’s choice to hand off running Trusted News to Factmata was a case of horses for courses.
“They are giving it to us in return for a stake because we are the best placed and most focused natural language understanding company to make use of it, and move it forward quickly,” he said. “For Factmata, we partner with a company that has proven ability to generate enormous, engaged community growth.”
“Just as eyeo and Adblock Plus are protecting users from harmful, infuriating ads, the partnership between Factmata and Trusted News gets us one step closer to a safer, more transparent internet. Content that is harmful gets flagged automatically, giving users more control over what kind of content they trust and want to read,” said Till Faida, CEO and co-founder of eyeo, in a statement.
Factmata has already started thinking about how it can put some of its own technology into the product, for instance by adding in the eight detection algorithms that it has built (detailed in the screenshot above, which include clickbait, hate speech, racism, etc.). Ghulati added that it will be changing the way that Trusted News looks up information. Up to now, it’s been using an API from MetaCert to power the app — a database of information that’s used to provide a steer on bias.
“We will replace MetaCert and make the system work at the content level rather than a list lookup, using machine learning,” he said, also noting that Factmata plans to add other signals “beyond just if the content is politically hyperpartisan or hate speech, and more things like if it is opinionated, one-sided, and/or could be deemed controversial.” “We won’t deploy anything into the app until it reaches 90% accuracy,” Ghulati said. “Hopefully from there, humans get it more accurate, per a public testing set we will make available for all signals.”
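To make the distinction concrete, here is a minimal sketch of the two approaches described above: a domain-list lookup (the MetaCert-style database) versus content-level scoring across several signals. Everything here — the domain names, trigger words, signal names and the 90% gate constant — is illustrative, not Factmata’s actual implementation; a real content-level system would use trained classifiers rather than word counts.

```python
# Hypothetical comparison of list-lookup vs content-level trust scoring.
# All names, words and thresholds are invented for illustration.

FLAGGED_DOMAINS = {"example-hoax.com", "example-partisan.net"}

def list_lookup(url: str) -> bool:
    """Old approach: trust is decided by where the page is hosted."""
    domain = url.split("//")[-1].split("/")[0]
    return domain in FLAGGED_DOMAINS

def content_level_score(text: str) -> dict:
    """New approach: score the article text itself against several signals.
    A real system would use trained ML classifiers; this stub just counts
    trigger words to show the shape of the per-signal output."""
    signals = {
        "hyperpartisan": ["traitor", "sheeple"],
        "opinionated": ["obviously", "clearly", "everyone knows"],
        "controversial": ["scandal", "outrage"],
    }
    lowered = text.lower()
    n_words = max(len(lowered.split()), 1)
    return {
        name: sum(lowered.count(w) for w in words) / n_words
        for name, words in signals.items()
    }

ACCURACY_GATE = 0.90  # per the article: nothing ships below 90% accuracy

def ready_to_deploy(signal_accuracy: float) -> bool:
    """Gate a signal on its measured accuracy before it reaches the app."""
    return signal_accuracy >= ACCURACY_GATE
```

The key difference is that the lookup returns one verdict per site, while the content-level scorer returns a score per signal per article — which is what lets flags like “opinionated” or “controversial” be added without rebuilding a domain database.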
Ghulati himself is a machine learning specialist, and while we haven’t heard a lot from Factmata in the last year, part of that is likely because building a platform from scratch to detect a problem that seems to have infinite tentacles (like the web itself) can be a challenge (just ask Facebook, which is heavily resourced and still seems to let things slip through).
He said that the eight algorithms it’s built “work well” — more specifically, he said they are rating at more than 90% accuracy on Factmata’s evaluation sets of U.S. English-language news articles. Meanwhile, it’s been refining the algorithms on short-form content using YouTube video transcripts, tweets and blog posts, and is moving into adding more languages, starting with French.
“The results are promising on the expanded types of content because we have been developing proprietary techniques to allow the models to generalise across domains,” he said.
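The evaluation described above — checking each classifier against a labelled test set before trusting its ratings — can be sketched in a few lines. The classifier output and gold labels below are placeholders, not Factmata’s data; the point is only the shape of the measurement.

```python
# Hypothetical accuracy check of classifier predictions against a
# labelled evaluation set; data here is invented for illustration.

def accuracy(predictions: list, labels: list) -> float:
    """Fraction of predictions that match the gold labels."""
    assert len(predictions) == len(labels), "sets must be the same size"
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

# e.g. 19 correct calls out of 20 articles clears a 90% bar
acc = accuracy([1] * 19 + [0], [1] * 20)
```

Generalising “across domains,” as the quote puts it, would mean holding this accuracy when the evaluation set is drawn from a different distribution (tweets, transcripts, French) than the training data.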
Factmata also has been working with ad exchanges — as we noted back when Factmata first raised $1 million, this was one of the big frontiers it wanted to tackle, since ad networks are so often used to disseminate false information. It’s now completed case studies with 14 major ad exchanges, SSPs and DSPs, and found that up to 4.92% of a sample of pages served on some ad exchanges contain high levels of hate speech or hyperpartisan language, “despite them thinking they were clean and them using a number of sophisticated tools with bigger teams than us.”
“This showed us there is a lot of this type of language out there that is being inadvertently funded by brands,” he noted.
It’s also been gathering more training data to help classify content, working with people who are “experts in the fields of hate speech or journalistic bias.” He said that Factmata has “proven our hypothesis that using ‘expert-driven AI’ makes sense for classifying things that are inherently subjective.” But that is in conjunction with humans: using experts leads to inter-annotator agreement rates above 68%, whereas using non-experts, agreement on what is or is not hate speech, or what is or is not bias, is lower than 50%.
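Inter-annotator agreement of the kind quoted above is typically measured with a chance-corrected statistic such as Cohen’s kappa rather than raw percent agreement. Here is a minimal two-rater sketch; the article does not say which statistic Factmata uses, and the labels below are invented for illustration.

```python
# Cohen's kappa for two raters labelling the same items, a common way
# to quantify inter-annotator agreement on subjective labels.
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b), "raters must label the same items"
    n = len(rater_a)
    # Observed: fraction of items where the raters gave the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected: agreement two raters with these label frequencies would
    # reach by chance alone.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical experts labelling ten snippets as hate speech (1) or not (0):
kappa = cohens_kappa([1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
                     [1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
```

Chance correction matters for exactly the reason in the article: with subjective labels, non-experts can show superficially high raw agreement simply by over-using the majority label, which kappa discounts.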
“The eyeo deal along with other commercial partnerships we’re working on are a sign: though the system is not 100% accurate yet, within a year of building and testing, our tech is ready to start commercialising,” Ghulati added.