

Dealing With Disinformation: How AI Products Can Help During a War

Logically's team has been hard at work helping our clients and the general public understand and interpret what's going on in the Russia-Ukraine war. 

Numerous outlets, including ours, have reported on how Russian disinformation and propaganda affect the rest of the world. Since the start of the invasion, our fact-checking teams have been verifying images and debunking the mis- and disinformation that appears on social media. 

I sat down with Anil Bandhakavi, Logically's Head of Data Science, to understand how our technology has been helping counter disinformation and propaganda during the war, and how it might be used in national security crises.

How could we protect digital infrastructure from disinformation attacks by hostile foreign powers? Can our products be used during wartime to protect regular people?

Our product, Logically Intelligence, facilitates the monitoring of trends and insights about disinformation, misinformation, and online harms. Any organization can use the product, which combines machine learning and AI to automatically extract insights from large-scale data sets that are representative of the information space. We can do this across platforms, whether major or minor. On top of that, we collate all this information and layer on insights from our experts, so that our clients can make decisions about strategic operations. 

This means that if a particular company or government wants to understand the kinds of information threats or risks it is being targeted with, we have a product in place that can monitor that specific information space. Using it, our clients can then formulate strategies to tackle the disinformation targeted at them. 

When we're talking about monitoring these information spaces and being able to extract insights from them, what are some of the problems that you and the rest of the team, as data scientists, might encounter? What kinds of solutions have you built to those problems?

The data can be highly heterogeneous, because we are talking about content from multiple platforms. Understanding English content is already challenging for machines; bring in multilingualism and the problem becomes even more complex. We also need to understand various linguistic and cultural nuances when modeling language to interpret whether something is a misinformation claim or carries a threat. This is especially true in the current crisis, where information may be shared in both Ukrainian and Russian.
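One concrete example of such a nuance: Ukrainian and Russian share the Cyrillic script, but each uses a handful of letters the other does not. As a toy sketch (not Logically's actual method), a crude first-pass router could exploit that before handing text to a proper trained language-identification model:

```python
# Toy heuristic only -- real systems use trained language-ID models.
# Ukrainian uses і, ї, є, ґ; Russian uses ы, ъ, э, ё.
UKRAINIAN_ONLY = set("іїєґІЇЄҐ")
RUSSIAN_ONLY = set("ыъэёЫЪЭЁ")

def guess_language(text: str) -> str:
    """Guess 'uk' or 'ru' from script-exclusive letters, else 'und'."""
    uk = sum(ch in UKRAINIAN_ONLY for ch in text)
    ru = sum(ch in RUSSIAN_ONLY for ch in text)
    if uk > ru:
        return "uk"
    if ru > uk:
        return "ru"
    return "und"  # undetermined: shared Cyrillic or non-Cyrillic text
```

A router like this only narrows the search space; text containing none of the distinguishing letters still needs a statistical model to resolve.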

Understanding the context in which something is said – as in, what are the various threatening subtexts within the discourse – is a challenge. And suppose the medium used to spread misinformation is not text but multimedia, like images or memes. Then the problem gets even more complicated, because we are talking about understanding not just text, but also text embedded within images and memes, which may contain subtle references to ideologies that are hard to decipher using technology alone. So we bring in our experts, who have domain intelligence on the current crisis. 

Then we combine all this with artificial intelligence pipelines that process the data as a whole to produce reliable insights about the particular information space we are monitoring. 
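The overall shape of such a pipeline (ingest heterogeneous posts, apply per-item models, then aggregate into space-level insights) can be sketched in a few lines. The `Post` fields, the `risk_score` stand-in, and the summary keys below are illustrative assumptions, not Logically Intelligence's real interfaces:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    platform: str   # e.g. "twitter", "telegram"
    text: str
    narrative: str  # label assigned upstream by a claim-matching model

def risk_score(post: Post) -> float:
    # Stand-in for a trained classifier: flag posts matched to known claims.
    return 1.0 if post.narrative != "benign" else 0.0

def summarize(posts: list[Post]) -> dict:
    """Aggregate per-item model outputs into space-level insights."""
    flagged = [p for p in posts if risk_score(p) > 0.5]
    return {
        "total": len(posts),
        "flagged": len(flagged),
        "by_platform": Counter(p.platform for p in flagged),
        "top_narratives": Counter(p.narrative for p in flagged).most_common(3),
    }
```

In a production system the per-item scoring would be a model ensemble and the aggregation would feed dashboards and analyst review queues, but the ingest-score-aggregate structure is the same.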

During a conflict, it's hard to trust social media, because it's hard to know what's deliberate, malicious propaganda and what's misinformation. Why should people trust our AI to sort what's real from what's fake? How can AI and machine learning help both fact checking and analysis of the information space?

There are two kinds of trust here. The first is trust in the platform or medium through which information is consumed. Each platform commands a different level of trust when it comes to users relying on it for credible information. Platforms provide lots of information, but what percentage of it is credible is questionable.

Secondly, there is a hesitancy, a kind of skepticism: here is a new pipeline claiming to always provide credible information, so how trustworthy is it? Once we contextualize things around a particular topic or claim, users can start reviewing it themselves and come to their own understanding.

AI and machine learning can speed up the process, so you don't spend an inordinate amount of time researching and reviewing material. Human intervention and supervision are still required, but some of the laborious steps in the process can be handled by technology. The time it takes to arrive at a timely insight can be significantly reduced through machine learning and AI-based pipelines. We don't necessarily tell people what to believe; we just give them more tools to make better decisions. 

Are we succeeding in places that other companies haven't been able to?

When we talk about countering misinformation and problematic online harms, the problem space itself is very broad. As a company, there are many areas within it you could focus on: building technology, or putting together a workforce to tackle specific aspects of misinformation and disinformation. 

For us, I would say the real strengths are our expert teams in disinformation analysis, OSINT research, and fact checking. We also have data scientists skilled at developing machine learning models to handle text, images, and other kinds of multi-modal data. Our products draw on the intelligence of these expert teams, as well as our world-class engineers and data scientists, and are built as large-scale technology pipelines that can ingest large volumes of data and facilitate monitoring of misinformation at scale.

We also have interdisciplinary teams with unique experience, drawn from culturally and professionally diverse backgrounds, who are able to give input. We are constantly improving our technology, our processes, and our products, so that we raise the quality of insights and reduce the turnaround time to generate them. Other companies could be doing similar things, but they might be focused exclusively on technology, or exclusively on expertise. You can only succeed if your approach is scalable and addresses the pain points of customers from different segments, in the private and public sectors, across different geographies. You can do this only by putting together interdisciplinary solutions built by highly collaborative and innovative teams.

The problem space is so dynamic that technology alone cannot solve these problems, nor can experts alone tackle them; precisely because it is so dynamic, neither on its own could offer insights at scale. Logically's unique value is the combination of technology and experts.
