What is Coordinated Inauthentic Behavior (CIB)?

In recent years, there has been a proliferation of attempts to define, understand, and fight the spread of problematic information in contemporary media ecosystems. Most of these attempts focus on detecting false content or individual bad actors. By contrast, detecting Coordinated Inauthentic Behavior (CIB) means identifying patterns across campaigns and attacks rather than in the behavior of single actors.
This is a shift in perspective from detecting simple account properties ("micro-level") towards identifying coordinated strategies ("macro-level"), i.e., the orchestrated activities of multiple automated, semi-automated, or human-steered accounts. Detecting strategy at the macro level of group behavior is a far greater research challenge than micro-level social bot detection, but certainly of greater importance.
Defining Coordinated Inauthentic Behavior (CIB)
CIB is a term coined by Facebook that has shaped our understanding of disinformation, but one that has been publicly criticized for its ambiguity. Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, explains it as when "groups of pages or people work together to mislead others about who they are or what they are doing." This definition moves away from the need to assess whether content is true or false: activity can be determined to be Coordinated Inauthentic Behavior even when it spreads true or authentic content.
Nathaniel Gleicher recently tweeted that "CIB is to capture a specific (and fairly narrow) but very serious set of harms," and that online threats and emerging harms have become broader. Distinct from CIB, inauthentic behavior (IB) networks are another prevalent emerging harm. According to Meta's 2022 Second Quarter Report, and as detailed in their latest Community Standards, IB is: "An effort to mislead people or Facebook about the popularity of content. It is primarily centered around amplifying and increasing the distribution of content. IB operators typically focus on quantity rather than the quality of engagement."
What is still missing is a wider cross-platform definition of CIB, e.g., a clear line separating the activities that count as inauthentic or coordinated from those that do not. At Logically, we have observed that CIB campaigns typically share six common characteristics (a toy scoring sketch follows the list):
- Serves a common agenda
- Aims for massive reach quickly
- Posts the same messages from multiple accounts
- Involves influencers
- Tags specific accounts
- Uses bots
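To make the checklist concrete, here is a toy scorer that counts how many of the six indicators a campaign exhibits. The flag names and the equal weighting are illustrative assumptions for this sketch, not a description of Logically's detection pipeline.

```python
# Toy checklist scorer for the six indicators above. The flag names and
# equal weights are illustrative assumptions, not a production schema.
CIB_INDICATORS = [
    "common_agenda",          # serves a common agenda
    "rapid_mass_reach",       # aims for massive reach quickly
    "duplicate_messaging",    # same messages from multiple accounts
    "influencer_involvement", # involves influencers
    "targeted_tagging",       # tags specific accounts
    "bot_activity",           # uses bots
]

def cib_indicator_score(campaign: dict) -> float:
    """Return the fraction (0.0-1.0) of indicators a campaign exhibits."""
    hits = sum(bool(campaign.get(flag)) for flag in CIB_INDICATORS)
    return hits / len(CIB_INDICATORS)

# A hypothetical campaign showing four of the six indicators scores ~0.67.
example = {"common_agenda": True, "rapid_mass_reach": True,
           "duplicate_messaging": True, "bot_activity": True}
print(round(cib_indicator_score(example), 2))  # 0.67
```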
Although a clear and widely accepted definition of CIB is currently missing, the concept of networks of malicious accounts interacting in varying ways is central to any attempt to identify CIB. There are ongoing efforts from researchers and investigators to expose CIB, and their methods sit at the crossroads of manual investigation and data science. It's worth noting that CIB is often used interchangeably with other terms in this field, including "information operations", "coordinated information operations", "disinformation campaign", "astroturfing", and "organised trolling".
How adversaries are coordinating to spread disinformation
Adversaries have a large arsenal of coordination tactics for spreading and amplifying disinformation online. While significant public attention has focused on foreign governments employing bots in organized influence operations, non-state actors, domestic groups, and commercial companies have all engaged in these tactics as well. Below we highlight some of the coordination tactics currently in play.
Coordinated posting is when multiple accounts, managed by a person or a team, broadcast the same message seemingly independently of one another. A study by EU DisinfoLab shows that CIB accounts usually post the same message a few seconds apart. Posting many identical or similar images in a coordinated fashion is a popular strategy in mixed-media information dissemination campaigns, which use multiple social media channels and multimedia formats to spread a narrative (as discussed by Agarwal). Information campaign agents commonly use applications such as scheduling platforms to control their multiple accounts, or manually copy and paste the messages into the accounts under their control. This strategy has been largely constrained on the Twitter platform.
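As a minimal illustration of how coordinated posting can be surfaced, the Python sketch below groups posts by normalized text and flags any message that several distinct accounts published within a few seconds of each other. The record format, normalization, and thresholds are assumptions for illustration; the five-second window echoes the few-seconds pattern EU DisinfoLab observed.

```python
from collections import defaultdict
from datetime import datetime

def find_coordinated_posting(posts, window_seconds=5, min_accounts=3):
    """Flag message texts posted by several distinct accounts within a
    short window. `posts` is assumed to be (account_id, text, timestamp)
    tuples with datetime timestamps; both thresholds should be tuned."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        # Light normalization so trivial case/whitespace edits still match.
        by_text[" ".join(text.lower().split())].append((ts, account))

    flagged = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[0])
        for i, (ts, _) in enumerate(entries):
            burst = {acct for t, acct in entries[i:]
                     if (t - ts).total_seconds() <= window_seconds}
            if len(burst) >= min_accounts:
                flagged.append((text, sorted(burst)))
                break  # one flag per message text is enough for triage
    return flagged

# Hypothetical example: three accounts pushing the same line seconds apart.
posts = [
    ("acct_a", "Vote YES on the bill!", datetime(2022, 8, 1, 12, 0, 0)),
    ("acct_b", "vote yes on the bill!", datetime(2022, 8, 1, 12, 0, 2)),
    ("acct_c", "Vote YES on the bill!", datetime(2022, 8, 1, 12, 0, 4)),
]
print(find_coordinated_posting(posts))
```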
Coordinated reposting is the simplest way for a fake or astroturfing account to amplify a campaign message, requiring only one click. Simple reposting of a campaign message is fairly conspicuous, however, and therefore rarely used on its own in real-world campaigns. A more common phenomenon is multiple accounts jointly amplifying messages by co-reposting the exact same second-party message. Coordinated reposting is a strategy employed by many information campaign agents and accounts, even heavily automated ones.
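One hedged sketch of how co-reposting can be surfaced: compare accounts' repost histories pairwise and flag pairs whose overlap looks too large to be coincidental. The input format and thresholds are illustrative assumptions; the pairwise comparison is O(n²) in the number of accounts, so real pipelines would typically build an inverted index from message to reposters first.

```python
from collections import defaultdict
from itertools import combinations

def find_co_repost_pairs(reposts, min_shared=10, min_jaccard=0.5):
    """Flag account pairs whose repost histories overlap suspiciously.

    `reposts` is assumed to be (account_id, reposted_message_id) pairs;
    both thresholds are illustrative and should be tuned per platform."""
    history = defaultdict(set)
    for account, message_id in reposts:
        history[account].add(message_id)

    flagged = []
    for a, b in combinations(sorted(history), 2):
        shared = history[a] & history[b]
        if len(shared) < min_shared:
            continue  # too little overlap to be interesting
        jaccard = len(shared) / len(history[a] | history[b])
        if jaccard >= min_jaccard:
            flagged.append((a, b, len(shared), round(jaccard, 2)))
    return flagged
```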
Coordinated hashtagging is when multiple accounts use the same hashtag within a time threshold. A common tactic in disinformation campaigns is to post slight text variations under the same hashtag (see examples in this recent DFRLab report), paraphrasing similar text in an attempt to obfuscate the coordination. Another common tactic is to use generic hashtags, such as #corona and #covid19, likely in an attempt to inject content into the mainstream pandemic-related conversation online.
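The time-threshold idea translates directly into a sliding-window check. The sketch below, under the same assumed (account, text, timestamp) record format as earlier, flags any hashtag used by many distinct accounts inside a rolling one-hour window; the window size, minimum account count, and regex are all illustrative assumptions.

```python
import re
from collections import Counter, defaultdict
from datetime import timedelta

HASHTAG = re.compile(r"#\w+")

def find_coordinated_entities(posts, extract,
                              window=timedelta(hours=1), min_accounts=5):
    """Flag entities (hashtags, mentions, ...) used by `min_accounts`
    or more distinct accounts inside a sliding time window."""
    by_entity = defaultdict(list)
    for account, text, ts in posts:
        for entity in set(extract(text)):
            by_entity[entity.lower()].append((ts, account))

    flagged = []
    for entity, entries in by_entity.items():
        entries.sort(key=lambda e: e[0])
        in_window = Counter()  # accounts currently inside the window
        start = 0
        for ts, account in entries:
            in_window[account] += 1
            # Evict entries that have fallen out of the window.
            while entries[start][0] < ts - window:
                old = entries[start][1]
                in_window[old] -= 1
                if in_window[old] == 0:
                    del in_window[old]
                start += 1
            if len(in_window) >= min_accounts:
                flagged.append(entity)
                break  # one flag per entity is enough for triage
    return flagged

# Example: surface hashtags pushed by many accounts in a short burst.
# suspicious_tags = find_coordinated_entities(posts, HASHTAG.findall)
```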
Coordinated mentioning is when multiple accounts mention the same user within a time threshold. Including another user's screen name (or several) in a post allows agents to inject a disinformation narrative into the context of a discussion. This commonly targets celebrities or political entities in political disinformation campaigns (see examples in 'Detecting and Tracking Political Abuse in Social Media' by Ratkiewicz et al.).
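Coordinated mentioning can reuse the sliding-window detector sketched above for hashtags: only the extraction pattern changes. The handle regex below is an illustrative assumption about screen-name syntax.

```python
MENTION = re.compile(r"@\w+")

# `posts` is the same assumed (account_id, text, timestamp) record list;
# the flagged "entities" are now the mentioned screen names.
suspicious_targets = find_coordinated_entities(posts, MENTION.findall)
```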
What are social platforms doing to combat CIB?
Social media platforms are creating more specific guidelines against coordination and manipulation. For example, Meta has a strict policy against coordinated inauthentic behavior, and its guidelines prohibit content from disinformation campaigns. Twitter prohibits all forms of technical coordination under the Twitter Rules. TikTok's guidelines prohibit the use of deepfakes and the coordinated, strategic use of the platform to influence opinion. The latter form of campaigning would be more difficult to operate on TikTok because of the way its algorithm works, but TikTok is looking to ensure it has clear enforcement measures in place in case it becomes a problem.
However, platforms have shared only a small number of instances showing their efforts to detect and mitigate CIB. In August 2022, Facebook and Twitter shared that they had taken down a series of accounts that had been engaging in pro-Western propaganda for five years; Twitter said the accounts violated its policies on platform manipulation and spam, while Meta described their activity as 'coordinated inauthentic behavior'. Meta has also disrupted a Chinese-origin influence operation that ran across multiple social media platforms and was the first to target US domestic politics ahead of the 2022 midterms, as well as Czechia's foreign policy toward China and Ukraine, and a Russian network that primarily targeted Germany, France, Italy, Ukraine, and the UK with narratives focused on the war and its impact, operated through a sprawling network of over 60 websites impersonating legitimate news organizations. Even more recently, Twitter revealed that it had disrupted three China-based operations that were covertly trying to influence American politics in the months leading up to the midterm elections by amplifying politically polarizing topics.
There is always tension over the strategy for dealing with misinformation and disinformation threats. Removing or exposing accounts early can limit what you learn about the adversary, their playbook, and the extent of their network, and this is vital information for improving resilience over the long term. However, platforms must act in a timely manner when adversaries abuse their services, in order to protect their users and reduce the impact of disinformation campaigns.