
Astroscreen raises $1M to detect social media manipulation with machine learning

In an era of social media manipulation and disinformation, we could sure use some help from innovative entrepreneurs. Social networks are now critical to how the public consumes and shares the news. But these networks were never built for an informed debate about the news. They were built to reward virality. That means they are open to manipulation for commercial and political gain.

Fake social media accounts — bots (automated) and “sock-puppets” (human-run) — can be used in a highly organized way to spread and amplify minor controversies or fabricated and misleading content, eventually influencing other influencers and even news organizations. And brands are hugely exposed to this risk. The use of such disinformation to discredit brands has the potential for very costly and damaging disruption when up to 60 percent of a company’s market value can lie in its brand.

Astroscreen is a startup that uses machine learning and disinformation analysts to detect social media manipulation. It has now secured $1 million in seed funding to further develop its technology. And it has a pedigree that suggests it at least has a shot at achieving this.

Its techniques include coordinated activity detection, linguistic fingerprinting and fake account and botnet detection.
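To make the first of these concrete, here is a minimal Python sketch of what coordinated activity detection can look like in principle: cluster accounts that post near-identical text within a short time window. The data, similarity measure and thresholds here are illustrative assumptions for this article, not Astroscreen’s actual pipeline.

```python
from difflib import SequenceMatcher

# Toy posts: (account, timestamp in seconds, text).
# Purely illustrative data, not real accounts.
posts = [
    ("acct_a", 100, "Brand X is covering up a safety scandal!"),
    ("acct_b", 130, "Brand X is covering up a safety scandal!!"),
    ("acct_c", 160, "brand x is covering up a safety scandal"),
    ("acct_d", 9000, "Loving my new phone from Brand X."),
]

def similar(a, b, threshold=0.9):
    """Crude text similarity via difflib; a real system would use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def coordinated_groups(posts, window=300):
    """Group accounts that post near-duplicate text within `window` seconds."""
    groups = []
    for acct, ts, text in posts:
        placed = False
        for group in groups:
            _, ts0, text0 = group[0]
            if abs(ts - ts0) <= window and similar(text, text0):
                group.append((acct, ts, text))
                placed = True
                break
        if not placed:
            groups.append([(acct, ts, text)])
    return [g for g in groups if len(g) > 1]

for group in coordinated_groups(posts):
    print("Possible coordinated cluster:", [acct for acct, _, _ in group])
```

Running this flags the three accounts pushing the same claim within minutes of each other, while the unrelated post much later is ignored.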

The funding round was led by Speedinvest, Luminous Ventures, UCL Technology Fund (which is managed by AlbionVC in collaboration with UCLB), AI Seed and the London Co-investment Fund.

Astroscreen CEO Ali Tehrani previously founded a machine-learning news analytics company, which he sold in 2015, before fake news gained widespread attention. He said: “While I was building my previous startup I saw first-hand how biased, polarising news articles were shared and artificially amplified by vast numbers of fake accounts. This gave the stories high levels of exposure and authenticity they wouldn’t have had on their own.”

Astroscreen’s CTO Juan Echeverria, whose PhD at UCL was on fake account detection on social networks, made headlines in January 2017 with the discovery of a massive botnet managing some 350,000 separate accounts on Twitter.

Ali Tehrani also thinks social networks are effectively holed below the waterline on this whole issue: “Social media platforms themselves cannot solve this problem because they’re looking for scalable solutions to maintain their software margins. If they devoted sufficient resources, their profits would look more like a newspaper publisher than a tech company. So, they’re focused on detecting collective anomalies — accounts and behavior that deviate from the norm for their user base as a whole. But this is only good at detecting spam accounts and highly automated behavior, not the sophisticated techniques of disinformation campaigns.”

Astroscreen takes a different approach, combining machine learning and human intelligence to detect contextual (rather than collective) anomalies — behavior that deviates from the norm for a specific topic. It monitors social networks for signs of disinformation attacks, informing brands if they’re under attack at the earliest stages and giving them enough time to mitigate the negative effects.
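As a rough illustration of the contextual-anomaly idea (a toy sketch, not Astroscreen’s model), the snippet below scores accounts against the baseline activity for one specific topic rather than against a platform-wide norm; the figures and the cutoff are invented for the example.

```python
import statistics

# Hypothetical posts-per-day on a single topic (e.g. a brand hashtag) for a
# sample of accounts. Purely illustrative data.
topic_activity = {
    "acct_1": 2, "acct_2": 1, "acct_3": 3, "acct_4": 2,
    "acct_5": 1, "acct_6": 2, "suspect_1": 40, "suspect_2": 38,
}

def contextual_anomalies(activity, z_threshold=1.5):
    """Flag accounts whose activity deviates sharply from this topic's own
    baseline (a contextual anomaly), rather than from the platform-wide
    norm (a collective anomaly). The z-score cutoff is a toy value."""
    values = list(activity.values())
    mean = statistics.mean(values)
    spread = statistics.pstdev(values) or 1.0  # avoid division by zero
    return {
        acct: round((count - mean) / spread, 2)
        for acct, count in activity.items()
        if (count - mean) / spread >= z_threshold
    }

print(contextual_anomalies(topic_activity))
# e.g. {'suspect_1': 1.79, 'suspect_2': 1.67}
```

Accounts that look unremarkable against the platform as a whole can still stand out sharply against the norm for a single topic, which is the distinction Tehrani draws above.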

Lomax Ward, partner at Luminous Ventures, said: “The abuse of social media is a significant societal issue and Astroscreen’s defence mechanisms are a key part of the solution.”

Source: TechCrunch