AITech Interview With Wasim Khaled, Co-Founder and CEO of Blackbird.AI


In this interview with Wasim Khaled, Co-Founder and CEO of Blackbird.AI, we delve into the world of disinformation, its impact on organizations, and the role of artificial intelligence in combating this growing threat. Mr. Khaled is a computer scientist with extensive knowledge and experience in information operations, computational propaganda, behavioral science, AI, and their applications to defense, cyber, and risk intelligence. He has been a consultant and advisor to government agencies and Fortune 500 companies on the risks and mitigation of information warfare.

Below are the interview highlights:

Having extensive knowledge of the industry, can you enlighten our audience about disinformation and misinformation as the next cybersecurity threat that organizational leaders need to pay attention to?

At Blackbird.AI, we see AI-enabled disinformation as one of the most significant challenges faced by organizations today. Each day we see new risks driven by an emerging breed of bad actors who use sophisticated tradecraft and technologies to drive harmful narratives, impacting organizations, employees, and executives.

With the democratization of AI, access to such technology will no longer be solely in the hands of nation-states or large organizations but available to any fringe group or anyone with an axe to grind. This means that any threat actor can flood the zone with any number of harmful narratives containing misinformation and deepfakes, which can then be amplified by bot networks. What used to be the purview of a nation-state can now be executed by a lone actor.

In your opinion, what are the ways to prevent disinformation?

There is no way to prevent disinformation, as there will always be actors who are incentivized to create it. The key is early detection and rapid mitigation. Risk and communications teams need to be able to identify toxic narratives before they surface and know what to do with that information as part of their mitigation strategy. Speed to insight is key, especially when a potential crisis is looming.

Often, teams of analysts spend hundreds of hours a week reading 100K+ words a day just to understand the dominant narrative, without ever seeing invisible threats like bot activity and other anomalous behavior. In the words of a client, current methods are like bringing a knife to a gunfight. Organizations need technology that is purpose-built to understand new risks in an evolving digital media ecosystem. Additionally, tapping advanced technology such as AI can help reduce tedious manual tasks and rapidly improve response times.

Once a brand understands the source of its problem, it can then identify how online threats are being spread both on a surface level and under the radar. By understanding the pattern, the company can swiftly identify the channels used to target specific audiences and where they need to focus their efforts so that they can get in front of the narrative or misinformation quickly and effectively.
