
How AI Helps Us Fact-Check in a Crisis

During the Russian invasion of Ukraine, social media was flooded with false or misleading claims: some deliberate, drawn from the Russian disinformation ecosystem; some half-reported rumors; and some misunderstood or misattributed photos and videos. The sheer volume of posts makes it a considerable challenge for our fact-checking teams to sort through the noise and identify claims before they spread.

I talked with Hella Hoffmann, Logically's Client Services Technical Program Manager, about how Logically's data science and fact-checking teams came together during this crisis. Over the past year, Hella has been leading Logically's Veracity AI team, which improves the quality of our misinformation analysis and fact-checking technology. When the Russia-Ukraine war broke out, Hella gathered feedback from the fact-checking team about claim discovery and built a brand-new tool to help sort through the vast quantity of posts and extract claims. Because of this collaboration, Logically's teams are able to sort through 2,000,000 pieces of content per day.

How have we been using AI to help our fact-checking team?

In other fact-checking operations, claims have to be sourced manually by fact-checkers, whereas at Logically, we combine our AI capabilities with the fact-checking team's expertise. The AI team might ask the fact-checkers what topics in particular they're working on at the moment, and we can build queries from there.

Here, we can run a basic query using Boolean searches on Logically Intelligence, an advanced AI platform for at-scale analysis of mis/disinformation. We're looking at 2,000,000 content items coming into Logically Intelligence every day, and simple queries reduce these to a topic that might be of interest to the fact-checkers. From each topic, we identify the claims that might be fact-check-worthy. We usually boil this down to about 2,000 items, then run them through our misinformation detection model, which is based on linguistic patterns, and match them against a fact-check database. With the help of this tool, fact-checkers were able to find 40 fact-check-worthy claims in two days, whereas previously we were looking for claims manually, which was much more laborious.
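To make the funnel concrete, here is a minimal sketch of the Boolean-query stage in Python. The post data, query terms, and function names are hypothetical; Logically Intelligence's actual implementation is not public.

```python
# Illustrative funnel: a Boolean keyword query narrows the daily stream
# to a topic of interest before any model-based claim detection runs.
# All names and data here are hypothetical.

def matches_query(text: str, must_have: list[str], any_of: list[str]) -> bool:
    """Boolean search: every must_have term AND at least one any_of term."""
    t = text.lower()
    return all(term in t for term in must_have) and any(term in t for term in any_of)

daily_stream = [
    "Breaking: several biolabs destroyed in Ukraine, officials claim.",
    "Recipe: the best borscht you will ever make.",
    "NATO leaders meet to discuss further support for Ukraine.",
]

topic_items = [p for p in daily_stream if matches_query(p, ["ukraine"], ["biolab", "nato"])]
print(len(topic_items), "items pass the topic query")  # the recipe is filtered out
```

The point of this cheap first pass is throughput: simple string matching can run over millions of items per day, leaving only a small candidate set for the more expensive models downstream.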

What's the advantage of using an AI technology like Logically Intelligence to find claims?

One of the major advantages is bringing together content from all sorts of different platforms. When we trialed this tool with users, one of the first things fact-checkers said was, "oh, this is awesome," because it provides an overview across different platforms. We have content from Twitter, other social media, articles, forum posts, and Telegram, for example. Monitoring all those channels is really hard to do; it's one of the things people in our sector have to spend a lot of time on, identifying specific accounts to monitor. AI can help aggregate the right data and surface it in a way that speeds up the process.

In a crisis, you have an expert monitoring an ongoing situation paired with an AI system that can help them be more effective at their job, so we can ultimately flatten the curve of online harm.

The first step is to get all the content into Logically Intelligence that matches a query someone on the fact-checking team has defined. Let's take the Bucha massacre in Ukraine as an example, as we refined the technology for this situation. We collated some keywords from fact-checkers, and that gave us a query to run. Such a pipeline really speeds up the review process and allows expert users to indirectly process a volume of content that would otherwise never be possible. Even a team of experts would not have time to review two million items.

This is a key benefit of AI and human experts working in conjunction in the field of fact-checking. People might say to us, "hey, your team has identified that this is happening right now; let's monitor the situation." So whenever misinformation is being spread, we can engage with it and process it quickly by using AI to see what the trends are.

Is the aim to train the AI to the point where it can find potential misinformation? How does the process work?

Claim discovery is a two-stage process. First, we reduce the content corpus to "likely fact-check-worthy" based on linguistic pattern recognition. Second, we rank these claims by a combination of frequency, reach, and semantic similarity to existing fact-checks.
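Before drilling into each stage, here is a rough sketch of how the second-stage ranking might combine those three signals into a single score. The field names and weights are illustrative assumptions, not Logically's actual scoring:

```python
# Hypothetical stage-two ranking: combine frequency, reach, and semantic
# similarity to existing fact-checks. Weights are made up for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    frequency: int    # how often the claim appears in the filtered corpus
    reach: int        # audience size of the accounts posting it
    similarity: float # 0..1 semantic similarity to an existing fact-check

def rank(candidates: list[Candidate], weights=(0.3, 0.3, 0.4)) -> list[Candidate]:
    max_f = max(c.frequency for c in candidates)  # normalize to 0..1
    max_r = max(c.reach for c in candidates)
    def score(c: Candidate) -> float:
        return (weights[0] * c.frequency / max_f
                + weights[1] * c.reach / max_r
                + weights[2] * c.similarity)
    return sorted(candidates, key=score, reverse=True)

ranked = rank([
    Candidate("Biolabs claim", frequency=900, reach=50_000, similarity=0.92),
    Candidate("Vaccine claim", frequency=300, reach=80_000, similarity=0.40),
])
print(ranked[0].text)  # the widely repeated claim closest to a known fact-check
```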

The first step of the process is all about identifying the kind of sentences that incorporate linguistic patterns similar to those used in historical misinformation claims. This includes grammar and syntax, as well as a common vocabulary. A claim about a particular event or entity that uses assertive language and includes temporal or numeric quantifications is much more likely to be fact-check-worthy than someone expressing a personal opinion or making a future or hypothetical reference.

Examples of fact-check-worthy claims:

  • The Russian Army is destroying several U.S.-controlled biolabs in Ukraine.
  • The COVID-19 vaccine doubles your chances of catching the virus.

Non-fact-check-worthy sentences:

  • I think NATO should take more action in supporting Ukraine.
  • The weather will be great tomorrow.
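As a toy illustration of these pattern rules, the sketch below classifies the example sentences above with a few regular expressions. The production system uses a learned model; this only mirrors the stated heuristics (assertive language, numeric or temporal detail, no opinion or future/hypothetical markers):

```python
import re

# Toy rule-based version of the stage-one filter: favor assertive claims
# with numeric or temporal detail, and screen out opinions and
# future/hypothetical statements. Pattern lists are illustrative only.
OPINION = re.compile(r"\b(i think|i believe|in my opinion|should)\b", re.I)
FUTURE = re.compile(r"\b(will|tomorrow|could|might|would)\b", re.I)
QUANTIFIED = re.compile(r"\b(\d+|doubles|several|yesterday|today)\b", re.I)

def likely_fact_check_worthy(sentence: str) -> bool:
    if OPINION.search(sentence) or FUTURE.search(sentence):
        return False  # personal opinion or future/hypothetical reference
    return bool(QUANTIFIED.search(sentence))  # assertive + quantified

for s in [
    "The COVID-19 vaccine doubles your chances of catching the virus.",
    "I think NATO should take more action in supporting Ukraine.",
    "The weather will be great tomorrow.",
]:
    print(likely_fact_check_worthy(s), "-", s)  # True, False, False
```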

It's about language: linguistic patterns that hold regardless of the topic being used to spread misinformation. We have worked with fact-checkers to train models on the linguistic patterns that appear, and now we have a deep learning model that learns those patterns, such as grammar and syntax, as well as a common vocabulary, from a corpus of language collected from texts online.
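To show what learning such patterns from labeled examples can look like in code, here is a heavily simplified stand-in: a TF-IDF plus logistic-regression baseline trained on the four example sentences above, rather than the deep learning model described in the interview:

```python
# Simplified stand-in for the learned stage-one model: a classical
# TF-IDF + logistic regression baseline, NOT the deep model described.
# Training data reuses the example sentences from this article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "The Russian Army is destroying several U.S.-controlled biolabs in Ukraine.",
    "The COVID-19 vaccine doubles your chances of catching the virus.",
    "I think NATO should take more action in supporting Ukraine.",
    "The weather will be great tomorrow.",
]
train_labels = [1, 1, 0, 0]  # 1 = fact-check-worthy, 0 = not

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# Predict on an unseen, assertive, quantified claim (hypothetical text).
print(model.predict(["Officials say 40 schools were hit by missiles on Tuesday."]))
```

A real system would of course need far more training data and a genre-aware model, which is exactly the point Hella makes next.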

Let's say you have a general news headline: there's no fact check in the headline itself, and it will have a different linguistic pattern, because the genre of writing in a short tweet differs from that in a headline. This is why we have models trained on different texts; for example, we have one trained on articles and one trained on tweets. We've applied the one trained on tweets to Facebook content, and it works relatively well, but it requires further customization for platforms like Reddit.

What can our AI detect about a claim?

At the moment, you can give our AI a claim, and it can tell you whether this claim has been fact-checked before. The way we're doing this is like an automated signal: if there are other very similar fact-checks, the claim is likely to be misinformation, because most fact-checks debunk false claims rather than confirm true ones.
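A common way to implement this kind of "has this been fact-checked before?" lookup is semantic similarity over sentence embeddings. The sketch below uses the open-source sentence-transformers library; the model choice, threshold, and database contents are assumptions, since the interview does not specify Logically's stack:

```python
# Sketch of matching a new claim against a fact-check database with
# sentence embeddings and cosine similarity. All data is hypothetical.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

fact_checks = [
    "False: the COVID-19 vaccine does not increase your chance of infection.",
    "False: there are no U.S.-controlled biolabs in Ukraine.",
]
claim = "The COVID-19 vaccine doubles your chances of catching the virus."

db_emb = model.encode(fact_checks, convert_to_tensor=True)
claim_emb = model.encode(claim, convert_to_tensor=True)

scores = util.cos_sim(claim_emb, db_emb)[0]  # similarity to each fact-check
best = scores.argmax().item()
if scores[best] > 0.5:  # illustrative threshold, would be tuned in practice
    print("Likely already fact-checked:", fact_checks[best])
```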

That's one of the ways our AI recognizes what is a claim and what isn't, along with the comparisons against data and language. The linguistic patterns matter a lot, though: our models can now recognize these patterns because we're using deep learning, and as we feed them more data, they learn more patterns.

We've got all this amazing AI and technology; how does it work in conjunction with people? How important are humans to this process?

Throughout the process, I think the human is still the pivot point. Actually, I see AI as more of a helper or servant to the human expert. Any final decision in this process is, at the moment, still made by the expert analyst.

The power of AI lies in processing large amounts of information at a scale that humans cannot. This works well mainly for repetitive and clearly defined tasks within a more extensive process; the decision making, customization, and interpretation are still up to the human expert.

For claims discovery, the fact-checking team guides and initiates the process, contributing its expert knowledge about current events. The AI's job is mainly to reduce noise and point experts to the most relevant content items. The final assessment and action still lie with the expert.

I think this example demonstrates how, in a crisis like the Russian invasion, you can get the best of both worlds: an expert monitoring an ongoing crisis situation paired with an AI system that can help them be more effective at their job, and ultimately, we can flatten the curve of online harm.
