
India’s Fight Against Foreign Disinformation Operations

Public support for Indian official after misinformation campaign

Foreign actors increasingly attempt to influence Indian citizens online, using both covert and overt methods to sow distrust and heighten tensions between communities. Neighboring states, in particular, continue to deploy a range of techniques to sway Indian citizens and further their own agendas. In response, India has had to adopt hybrid approaches, combining manual and automated techniques, to identify, monitor, and mitigate disinformation operations and election interference.

Growth of Cross-border Influence Operations via Social Media

China and India, two regional powers in Asia and the world's two most populous countries, hold divergent socio-economic and political positions that inevitably create friction. Although the two bordering nations have pursued economic cooperation, their frequent border disputes have grown more acute in recent years.

China’s most recent occupation of contested border areas with India resulted in the first lethal border conflict between the neighbors since 1975. Ladakh’s Galwan Valley witnessed the most violent clash in 45 years, resulting in the deaths of 20 Indian army personnel. On January 1, 2022, the Twitter handle of a Chinese state-affiliated media outlet posted a video purportedly showing People’s Liberation Army (PLA) soldiers hoisting the Chinese national flag in “Galwan Valley, South-west China.” On January 2, the same account posted another tweet captioned “never yield an inch of land” alongside a video it claimed was from the Galwan Valley, further inflaming the dispute over the contested area.

These posts spread harmful rumors in India and heightened tensions between the two countries, in an apparent attempt to rally nationalist sentiment in China, assert dominance, and cast India in a poor light. The attempt largely succeeded: the tweet drew replies from agitated Indian users and was covered by India Today and the Global Times.

Using geolocation OSINT techniques, we were able to confirm that the flag was raised by PLA soldiers in Chinese-controlled territory. Without the quick deployment of such verification techniques, situations like this can spiral into waves of misinformation that feed sophisticated disinformation campaigns, especially if citizens and governments act on malinformation: information that stems from the truth but is exaggerated in ways that mislead and cause harm.
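As an illustrative sketch only (not Logically's actual pipeline), one basic step in geolocation verification is cross-checking coordinates recovered from a photo or video, for example from EXIF metadata or landmark matching, against the location a post claims. The function names below are hypothetical helpers written for this example.

```python
import math

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref.upper() in ("S", "W") else value

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def location_claim_plausible(photo_lat, photo_lon, claim_lat, claim_lon, max_km=25):
    """Flag a claim when recovered coordinates sit far from the claimed spot."""
    return haversine_km(photo_lat, photo_lon, claim_lat, claim_lon) <= max_km
```

In practice a distance check like this is only one signal; analysts combine it with terrain matching, shadow analysis, and known imagery of the area before drawing conclusions.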

India has been wary of Pakistan and China colluding to weaponize disinformation in pursuit of their political agendas. Logically has conducted several investigations into Pakistani state misinformation and influence operations. For instance, a Logically investigation into the helicopter crash that killed India’s Chief of Defence Staff, General Bipin Rawat, revealed a bot network of more than 480 accounts amplifying narratives that Tamil rebels were responsible for the crash. An analysis of the top retweeters revealed that these accounts originated from Pakistan.
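A first-pass analysis of amplification like the one described above can be sketched in a few lines. This is a simplified illustration, not the methodology of the investigation itself: it ranks accounts by retweet volume and scores how tightly retweets cluster in time, since near-simultaneous bursts are a common coordination signal.

```python
from collections import Counter

def top_amplifiers(retweets, n=5):
    """retweets: iterable of (account, tweet_id) pairs.
    Returns the n accounts with the highest retweet volume."""
    return Counter(account for account, _ in retweets).most_common(n)

def burst_score(timestamps, window_s=60):
    """Fraction of consecutive retweets landing within window_s seconds
    of the previous one. Scores near 1.0 across many accounts suggest
    automated or coordinated amplification."""
    ts = sorted(timestamps)
    if len(ts) < 2:
        return 0.0
    close = sum(1 for a, b in zip(ts, ts[1:]) if b - a <= window_s)
    return close / (len(ts) - 1)
```

Signals like these only surface candidates; attributing accounts to a country or operator requires further evidence such as profile metadata, language patterns, and network overlap.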

In another investigation, we identified Pakistani Twitter accounts posing as Chinese influencers, spreading pro-Chinese propaganda, promoting Chinese-Pakistani relations, and disseminating narratives about the China–India border skirmishes between 2020 and 2022. These accounts interacted with Chinese government officials and Chinese-affiliated media, and were followed by Chinese government officials such as the Cultural Counsellor at the Chinese Embassy in Pakistan. You can read about this investigation here.

Election Interference and Disinformation

The deployment of mis- and disinformation during Indian elections, through social media platforms and political advertisements, is seen as an effective method of influencing voters. We have noticed an uptick in digital campaigns carrying misleading and inaccurate content that violates the Election Commission of India’s Model Code of Conduct.

The size and scale of influence campaigns, the variety of disinformation techniques used, and the number of actors pushing harmful narratives for individual and collective agendas make election disinformation incredibly difficult to identify: it is tailor-made to target specific voter bases and leverage communal fault lines. Malicious actors create targeted propaganda to exacerbate these tensions and spread it widely across the major social media platforms frequented by domestic users, and foreign actors then exploit the resulting domestic crises.

Election disinformation can have a tangible impact on the electorate and undermine democratic processes. Unfortunately, by the time influence campaigns can be identified, it is often too late to act, leaving agencies to enter damage limitation mode instead of tackling the core issue. 

Combating Mis- and Disinformation With Human and Artificial Intelligence

Given the complex and ever-changing online environment and the constantly growing scale of information, law enforcement agencies the world over have limited specialist capability and resources to prevent the spread of harmful and misleading content.

Using advanced AI in combination with human expertise, Logically can identify, monitor, and mitigate misinformation at a mass scale. We’re able to monitor misinformation in multiple languages, including English, Hindi, Arabic, and Mandarin, while securely retaining evidence when content or accounts are deleted.
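Evidence retention of the kind mentioned above can be illustrated with a minimal sketch. This is not Logically's implementation; it simply shows the underlying idea that capturing a post together with a cryptographic digest and timestamp lets an analyst later demonstrate the archived text is unaltered, even after the original post or account is deleted.

```python
import hashlib
import json
import time

def archive_record(platform, post_id, author, text):
    """Build a minimal evidence record for a captured post.
    The SHA-256 digest binds the record to the exact text captured."""
    record = {
        "platform": platform,
        "post_id": post_id,
        "author": author,
        "text": text,
        "captured_at": time.time(),
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    # Serialize for storage; ensure_ascii=False preserves Hindi,
    # Arabic, or Mandarin text as-is rather than escaping it.
    return json.dumps(record, ensure_ascii=False)
```

A production system would add signed timestamps, screenshots, and chain-of-custody logging on top of a record like this.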

Logically significantly enhances law enforcement agencies’ threat intelligence capabilities by ensuring greater speed, a wide breadth of monitoring, depth of analysis, and accuracy of information. We leverage artificial and human intelligence to allow analysts to effectively manage and contain harmful real-world consequences such as communal violence or terrorist propaganda.

“Disguising coordinated inauthentic behaviour as unsolicited public commentary is a niche aspect of disinformation and propaganda,” said Ishaana Aiyanna, Senior Analyst at Logically. 

“Techniques such as creating troll farms and sock puppets can be used to influence a specific demographic or astroturf consensus on an issue. They can also be used to prompt and justify action in an actor’s interests. We also see the spread of misinformation from public and official channels to co-opt and coerce audiences. Without specialist training and advanced tools, these techniques often go unnoticed, causing people to unwittingly act or react in harmful ways.”
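One telltale of the sock-puppet behaviour Aiyanna describes is many nominally independent accounts posting near-identical text. As a hedged, simplified sketch (real coordinated-inauthentic-behaviour detection uses far richer features), pairwise similarity of word shingles can surface such copypasta clusters:

```python
import re

def shingles(text, k=3):
    """Set of lower-cased k-word shingles from a post's text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def copypasta_similarity(a, b, k=3):
    """Jaccard overlap of word shingles between two posts.
    Scores near 1.0 across many 'independent' accounts suggest
    coordinated copy-and-paste posting."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

High similarity alone does not prove coordination (genuine users also share slogans verbatim), which is why such signals are combined with account-creation patterns, posting times, and network structure.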

Our enhanced intelligence capabilities allow for more cases to be closed successfully and more quickly than before. Our OSINT team members can also assist state police departments in linking real-world identities to online malicious activity by successfully attributing and de-anonymizing bad actors.  

As the disinformation problem continues to grow, it's paramount that law enforcement agencies are equipped with the right tools and knowledge to identify and mitigate misinformation to protect Indian citizens, critical infrastructure, and democracy. 

To find out more about how Logically’s expertise can help you, contact us here.

