
Using HAMLET to Detect Coordinated Inauthentic Behavior


HAMLET, our human-in-the-loop AI conceptual framework, enables machines and experts to work together to counter misinformation. In this article, we dive into a potential use case for HAMLET: detecting coordinated inauthentic behavior (CIB) and thereby tackling the risks associated with influence operations.

What is HAMLET?

Logically’s conceptual human-in-the-loop AI framework, HAMLET (Human and Machine in the Loop Evaluation and Training), brings machines and experts together to design effective large-scale systems to counter mis- and disinformation. The framework provides mechanisms for collecting expert data annotations and feedback, monitoring AI system performance, and managing system life cycles. Reflecting the complexity of the information environment, HAMLET supports different modes of data (text, speech, and multimedia) to enable the implementation of interdisciplinary counter-misinformation solutions. For a detailed overview of how HAMLET works, download our white paper, Tackling Misinformation with HAMLET, an Expert-in-the-loop AI Framework.

What is Coordinated Inauthentic Behavior (CIB)?

As we explained in a recent blog post, a clear and widely accepted definition of CIB is currently missing. However, the concept of networks of malicious accounts interacting with each other in varying ways is central to any attempt to identify CIB. There are ongoing efforts from researchers and investigators to expose CIB, and the methods sit at the crossroads of manual investigation and data science.

At Logically, we define CIB as a group of accounts coordinating to interact with and ultimately amplify or push certain types of content. Our data science teams have been working with our analysts to investigate and understand this concept. The proposed process for detecting and analyzing CIB starts with detection through AI models: human analysts cannot map and analyze activity at the speed and scale required to find coordination patterns, so this is where technology comes in. Detection uses coordination analysis built on state-of-the-art graph analysis. Narrative analysis can then be carried out separately by our OSINT teams, who look at the data itself and at what content is being amplified.

Monitoring CIB with AI and Machine Learning

The objective here is to monitor correlated activities concerning known or potential campaigns and predict whether a group of accounts related to a campaign is exhibiting coordinated behavior.

We propose a 3-step methodology that applies state-of-the-art graph analysis techniques, feature engineering, community detection, and machine learning to detect coordinated inauthentic behavior (CIB) in online social networks.
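Before walking through each step, here is a high-level sketch of how the three stages could chain together in code. The function names are placeholders for the stages described below (each is sketched in turn later in this article), not an actual Logically API.

```python
# Hypothetical end-to-end pipeline; each helper is sketched in the
# corresponding step below. Assumes `posts` is a table of social media posts.
def detect_cib(posts):
    edges = co_hashtag_edges(posts)                        # step 1: coordination patterns
    graph, communities = coordination_communities(edges)   # step 2: graph communities
    features = [community_features(graph, c) for c in communities]  # step 3: classify
    return graph, communities, features
```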

CIB Computational Solution Overview

Firstly, we computationally model user groups and communities using several coordination patterns, such as co-reposts or co-hashtags, within specific time windows and also over time (daily, weekly, and monthly). Coordination pattern detection has been implemented for eight pre-defined coordination activities, listed below (our coordination pattern modeling approach is a variant of Vargas et al., which we extended to capture multi-modal content interactions); a minimal sketch of one such detector follows the list.

  • Repost: accounts reposting or retweeting each other’s messages, which helps increase overall reach when the individual accounts have different sets of followers.
  • Co-repost: multiple accounts jointly amplifying a message by reposting the exact same content.
  • Co-post: different accounts, managed by one person or a team, broadcasting the same message seemingly independently of one another.
  • Co-hashtag: two accounts tweeting the same hashtag within a time threshold. It is a common tactic in disinformation operations to use generic hashtags, such as #corona and #covid19, likely in an attempt to inject content into the mainstream pandemic-related conversation online (for more information, see Facebook’s July 2021 report).
  • Co-hashtagseq: coordinated hashtag-sequence behavior. Accounts may try to obfuscate their coordination by paraphrasing similar text across messages; because a coordinated campaign has fixed targets, the paraphrased text is still likely to include the same sequence of hashtags.
  • Co-mention: two accounts mentioning the same user/handle within a time threshold. Including one or more other users’ screen names (@) in a post allows an account to inject a disinformation narrative into the context of a discussion.
  • Co-url: two accounts sharing the same URL within a time threshold. Co-sharing the same URLs can expose traces of accounts, controlled by one or multiple entities, working to amplify the exposure of a disinformation source.
  • Co-image: a new image-coordination behavior involving the coordinated posting of many of the same or similar images or internet memes (for more information, see the European Commission report “It’s not funny anymore. Far-right extremists’ use of humour”).
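To make the pattern modeling concrete, here is a minimal sketch of a co-hashtag detector: it links two accounts whenever they post the same hashtag within a time threshold and counts how often that happens. The pandas DataFrame layout, the 15-minute window, and the function name are illustrative assumptions, not our production implementation.

```python
from collections import Counter

import pandas as pd

# Assumed co-occurrence window; real deployments would tune this per platform.
THRESHOLD = pd.Timedelta(minutes=15)

def co_hashtag_edges(posts: pd.DataFrame) -> Counter:
    """Count account pairs that used the same hashtag within THRESHOLD.

    Assumes `posts` has columns: account, hashtag, timestamp (datetime).
    Returns a Counter mapping (account_a, account_b) -> co-activity count.
    """
    edges = Counter()
    for _, group in posts.groupby("hashtag"):
        rows = list(group.sort_values("timestamp").itertuples())
        for i, a in enumerate(rows):
            # Compare each post against the ones that follow it in time,
            # stopping once the gap exceeds the threshold.
            for b in rows[i + 1:]:
                if b.timestamp - a.timestamp > THRESHOLD:
                    break
                if a.account != b.account:
                    edges[tuple(sorted((a.account, b.account)))] += 1
    return edges
```

The same pairwise-counting skeleton extends to the other patterns by swapping the grouping key: group on message text for co-posts, on URLs for co-urls, or on image hashes for co-images.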

Secondly, we build user coordination graphs and apply advanced graph algorithms to detect high-overlap subgraphs and communities using community detection techniques. Graph algorithms help identify what communities look like: they find the cohesive sections of the graph where users share enough behavioral traits to suggest coordination according to our coordination pattern modeling approach.
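As an illustration of this step, the sketch below turns the pairwise co-activity counts from the previous sketch into a weighted coordination graph and extracts communities with NetworkX’s Louvain implementation. Louvain here is a stand-in for the community detection techniques mentioned above, and the minimum edge weight is an assumed parameter.

```python
import networkx as nx

# Assumed noise floor: ignore pairs that co-acted fewer than 3 times.
MIN_WEIGHT = 3

def coordination_communities(edges):
    """Build a weighted user coordination graph and return its communities."""
    g = nx.Graph()
    for (u, v), w in edges.items():
        if w >= MIN_WEIGHT:
            g.add_edge(u, v, weight=w)
    # Louvain maximizes modularity, grouping accounts whose co-activity is
    # far denser inside the group than outside it.
    communities = nx.community.louvain_communities(g, weight="weight", seed=42)
    return g, communities
```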

Finally, we extract the subgraph of each coordination pattern type and apply feature extraction in order to classify it as authentic or inauthentic. By taking out sections of the graph and focusing on the subgraphs where communities sit, the analysis concentrates on meaningful data.
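A hedged sketch of this final step: extract each community’s subgraph, compute a few structural features, and feed them to a classifier. The feature set and the random forest choice are illustrative assumptions; this post does not specify which features or model are used in production.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def community_features(g: nx.Graph, community) -> list:
    """Simple structural features of one coordination subgraph (illustrative)."""
    sub = g.subgraph(community)
    weights = [d["weight"] for _, _, d in sub.edges(data=True)]
    return [
        sub.number_of_nodes(),
        nx.density(sub),                               # how tightly connected
        float(np.mean(weights)) if weights else 0.0,   # typical co-activity strength
        float(np.max(weights)) if weights else 0.0,    # strongest pairwise link
    ]

# Training would use communities previously labeled by analysts
# (1 = inauthentic, 0 = authentic), i.e. the expert annotations HAMLET collects:
#   X = [community_features(g, c) for c in labeled_communities]
#   clf = RandomForestClassifier().fit(X, labels)
#   scores = clf.predict_proba(X_new)[:, 1]
```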

Our Expert-in-the-loop Workflow for CIB Detection

In addition to the computational solution, our coordination strategies involve subject matter experts. Disinformation knowledge graphs are continually curated by our in-house subject matter experts (OSINT analysts), and iterative feedback loops facilitate expert and machine interaction for a holistic analysis of the information environment. The workflow is illustrated below; its development is motivated by the reasons discussed throughout this article, reflecting the nature of the online information environment.

Logically’s Expert-in-the-loop Workflow for CIB Detection

Adversaries design and deploy a range of tactics, techniques, and procedures (TTPs) to launch disinformation campaigns and run influence operations that affect the decision making of individuals, institutions, and communities. Furthermore, TTPs evolve rapidly and vary across platforms to achieve effective outcomes. In order to stay attuned to the dynamic nature of these TTPs and other influencing factors in the information environment, we have developed a hybrid CIB workflow involving both technical and subject matter experts.

Due to the nature of coordination analysis, you will never get 100% accuracy in detecting CIB. Social media interaction is diverse, vast, and constantly changing, and it would be very difficult to model every way users interact. We use machine learning to hint and prioritize, but ultimately you still need experts in the loop to review and make a judgment on relevancy and inauthenticity. This is really important for dealing with the nuances of social interaction and deciding what is worth investigating. For example, coordinated hashtag usage in one context might be fandom, marketing, or support for a meaningful cause, but in a different context, something much more harmful.
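One simple way this hinting and prioritizing could surface work to analysts is a ranked review queue: communities are ordered by classifier score so experts see the most suspicious cases first, and their verdicts can flow back as training labels. The data structure below is a hypothetical illustration, not part of HAMLET itself.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, Optional

@dataclass(order=True)
class ReviewItem:
    priority: float  # negated score, so the highest-risk item sorts first
    community: FrozenSet[str] = field(compare=False)
    verdict: Optional[str] = field(default=None, compare=False)  # set by the analyst

def build_review_queue(communities, scores):
    """Rank detected communities for expert review, most suspicious first."""
    return sorted(ReviewItem(-s, frozenset(c)) for c, s in zip(communities, scores))
```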

The hybrid workflow offers us the flexibility to continually curate and maintain a catalog of adversarial actors and their affiliates responsible for running harmful, problematic influence operations and other threatening online activities. This step ensures our knowledge of the online operational space stays current and enables us to devise reliable solutions to counter adversaries.

A formal knowledge curation step gives us the ability to create formal schemas to represent adversarial actors, agendas, narratives, and activities. The outcome of this step is a disinformation knowledge graph that can potentially complement the data-driven insights extracted from the CIB pipeline. Furthermore, the feedback loops between the computational components and the experts help our OSINT/information analysts monitor and investigate the information environment and the operational spaces within it to detect influence signals and their associated impact. Such hybrid workflows offer holistic insights, which are essential to gaining a competitive edge in conducting an in-depth analysis of the information environment.

Conclusion

The hybrid workflow illustrates how the application of our framework, HAMLET, could help researchers and investigators tackle the rising issue of coordinated inauthentic behavior (CIB). A hybrid, expert-based workflow offers holistic insights, which are essential to gaining a competitive edge in conducting an in-depth analysis of the information environment.

For more information on HAMLET, download our paper 'Tackling Misinformation with HAMLET, an Expert-in-the-loop AI Framework'.

 
