New York-based Blackbird.AI has closed a $10 million Series A as it prepares to launch the next version of its disinformation intelligence platform this fall.
The Series A is led by Dorilton Ventures, along with new investors including Generation Ventures, Trousdale Ventures, StartFast Ventures and Richard Clarke, former chief counter-terrorism advisor for the National Security Council. Existing investor NetX also participated.
Blackbird says the funding will be used to scale up to meet demand in new and existing markets, including by expanding its team and spending more on product development.
The 2017-founded startup sells software as a service targeted at brands and enterprises managing risks related to malicious and manipulative information -- touting the notion of defending the "authenticity" of corporate marketing.
It's applying a range of AI technologies to tackle the challenge of filtering and interpreting emergent narratives from across the internet to identify disinformation risks targeting its customers. (And, for the record, this Blackbird is no relation to an earlier NLP startup, called Blackbird, which was acquired by Etsy back in 2016.)
Blackbird is focused on applying automation technologies to detect malicious/manipulative narratives -- so the service aims to surface emerging disinformation threats for its clients, rather than delving into the tricky task of attribution. On that front it's only looking at what it calls "cohorts" (or "tribes") of online users -- who may be manipulating information collectively, for a shared interest or common goal (talking in terms of groups like antivaxxers or "bitcoin bros").
Blackbird CEO and co-founder Wasim Khaled says the team has chalked up five years of R&D and "granular model development" to get the product to where it is now.
"In terms of technology the way we think about the company today is an AI-driven disinformation and narrative intelligence platform," he tells TechCrunch. "This is essentially the efforts of five years of very in-depth, ears to the ground research and development that has really spanned people everywhere from the comms industry to national security to enterprise and Fortune 500, psychologists, journalists.
"We've just been non-stop talking to the stakeholders, the people in the trenches -- to understand where their problem sets really are. And, from a scientific empirical method, how do you break those down into its discrete parts? Automate pieces of it, empower and enable the individuals that are trying to make decisions out of all of the information disorder that we see happening."
The first version of Blackbird's SaaS was released in November 2020, but the startup isn't disclosing customer numbers as yet. V2 of the platform will be launched this November, per Khaled.
Also today it's announcing a partnership with PR firm Weber Shandwick, to provide support to customers on how to respond to specific malicious messaging that could impact their businesses and which its platform has flagged up as an emerging risk.
Disinformation has of course become a much-discussed feature of online life in recent years, although it's hardly a new (human) phenomenon. (See, for example, the orchestrated airborne leaflet propaganda drops used during wartime to spread unease among enemy combatants and populations.) However it's fair to say that the internet has supercharged the ability of intentionally bad/bogus content to spread and cause reputational and other types of harm.
Studies show that "fake news" (as this stuff is sometimes also called) travels online far faster than truthful information. The ad-funded business models of mainstream social media platforms are implicated here, since their commercial content-sorting algorithms are incentivized to amplify whatever is most engaging to eyeballs -- which isn't usually the grey and nuanced truth.
Stock and crypto trading is another growing incentive for spreading disinformation -- just look at the recent example of Walmart targeted with a fake press release suggesting the retailer was about to accept litecoin.
All of which makes countering disinformation look like a growing business opportunity.
Earlier this summer, for example, another stealthy startup in this area, ActiveFence, uncloaked to announce a $100 million funding round. Others in the space include Primer and Yonder (previously New Knowledge), to name a few.
Some earlier players in the space were acquired by tech giants wrestling with how to clean up their own disinformation-ridden platforms -- such as U.K.-based Fabula AI, which was bought by Twitter in 2019.
Another -- Bloomsbury AI -- was acquired by Facebook. And the tech giant now routinely tries to put its own spin on its disinformation problem by publishing reports that contain a snapshot of what it dubs "coordinated inauthentic behavior" that it's found happening on its platforms (although Facebook's selective transparency often raises more questions than it answers).
The problems created by bogus online narratives ripple far beyond key host and spreader platforms like Facebook -- with the potential to impact scores of companies and organizations, as well as democratic processes.
But while disinformation is a problem that can now scale everywhere online and affect almost anything and anyone, Blackbird is concentrating on selling its counter tech to brands and enterprises -- targeting entities with the resources to pay to shrink reputational risks posed by targeted disinformation.
Per Khaled, Blackbird's product -- which consists of an enterprise dashboard and an underlying data processing engine -- is not just doing data aggregation; the startup is in the business of intelligently structuring the threat data its engine gathers, he says. He also argues it goes further than some rival offerings that do NLP (natural language processing) plus maybe some "light sentiment analysis", as he puts it.
NLP is also a key area of focus for Blackbird, along with network analysis -- doing things like examining the structure of botnets.
But the suggestion is Blackbird goes further than the competition by virtue of considering a wider range of factors to help identify threats to the "integrity" of corporate messaging. (Or, at least, that's its marketing pitch.)
Khaled says the platform focuses on five "signals" to help it deconstruct the flow of online chatter related to a particular client and their interests -- which he breaks down thusly: Narratives, networks, cohorts, manipulation and deception. And for each area of focus Blackbird is applying a cluster of AI technologies, according to Khaled.
But while the aim is to leverage the power of automation to tackle the scale of the disinformation challenge that businesses now face, Blackbird isn't able to do this with AI alone; expert human analysis remains a component of the service -- and Khaled notes that, for example, it can offer customers (human) disinformation analysts to help them drill further into their disinformation threat landscape.
"What really differentiates our platform is we process all five of these signals in tandem and in near real-time to generate what you can think of almost as a composite risk index that our clients can weigh, based on what might be most important to them, to rank the most important action-oriented information for their organization," he says.
"Really it's this tandem processing -- quantifying the attack on human perception that we see happening; what we think of as a cyberattack on human perception -- how do you understand when someone is trying to shift the public's perception? About a topic, a person, an idea. Or when we look at corporate risk, more and more, we see when is a group or an organization or a set of accounts trying to drive public scrutiny against a company for a particular topic.
"Sometimes those topics are already in the news but the property that we want our customers or anybody to understand is when is something being driven in a manipulative manner? Because that means there's an incentive, a motive, or an unnatural set of forces... acting upon the narrative being spread far and fast."
"We've been working on this, and only this, and early on decided to do a purpose-built system to look at this problem. And that's one of the things that really set us apart," he also suggests, adding: "There are a handful of companies that are in what is shaping up to be a new space -- but often some of them were in some other line of work, like marketing or social and they've tried to build some models on top of it.
"For bots -- and for all of the signals we talked about -- I think the biggest challenge for many organizations if they haven't completely purpose built from scratch like we have... you end up against certain problems down the road that prevent you from being scalable. Speed becomes one of the biggest issues.
"Some of the largest organizations we've talked to could in theory produce the signals -- some of the signals that I talked about before -- but the lift might take them 10 to 12 days. Which makes it really unsuited for anything but the most forensic reporting, after things have kinda gone south... What you really need it in is two minutes or two seconds. And that's where -- from day one -- we've been looking to get."
As well as brands and enterprises with reputational concerns -- such as those whose activity intersects with the ESG space; aka "environmental, social and governance" -- Khaled claims investors are also interested in using the tool for decision support, adding: "They want to get the full picture and make sure they're not being manipulated."
At present, Blackbird's analysis focuses on emergent disinformation threats -- aka "nowcasting" -- but the goal is also to push into predictive threat detection, to help prepare clients for information-related manipulation before it occurs. There's no time frame for launching that component yet, though.
"In terms of counter measurement/mitigation, today we are by and large a detection platform, starting to bridge into predictive detection as well," says Khaled, adding: "We don't take the word 'predictive' lightly. We don't just throw it around so we're slowly launching the pieces that really are going to be helpful as predictive.
"Our AI engine trying to tell [customers] where things are headed, rather than just telling them the moment it happens... based on -- at least from our platform's perspective -- having ingested billions of posts and events and instances to then pattern match to something similar to that that might happen in the future."
"A lot of people just plot a path based on timestamps -- based on how quickly something is picking up. That's not prediction for Blackbird," he also argues. "We've seen other organizations call that predictive; we're not going to call that predictive."
In the nearer term, Blackbird has some "interesting" countermeasure tech in its pipeline to assist teams, coming in Q1 and Q2 of 2022, Khaled adds.