

How to Use Social Media During the Russian Invasion of Ukraine


Russia’s war against Ukraine is being fought both physically and digitally. There have been waves of cyberattacks against Ukraine’s internet infrastructure, and disinformation operations are targeting the citizens of Russia, Ukraine, and the world at large.

During a conflict, it can be harder than usual to navigate social media, and harder still to recognize propaganda and disinformation. This guide offers an overview of how to use social media responsibly during the Russian invasion of Ukraine.

Wider Narratives

To better identify propaganda, you have to know what narratives are at play. We have seen three major narratives that Putin and the Russian propaganda networks are using.

The first false narrative concerns the U.S. and NATO. Putin frames the conflict as one of NATO aggression: the argument goes that Russia has to engage in military action to protect itself. Putin seeks to portray Ukrainian President Volodymyr Zelenskyy as a puppet of the U.S. and to claim that NATO expansion caused this fight. The international community recognizes, however, that Putin has been illegally occupying parts of Ukraine for eight years and is the aggressor in this conflict.

Closely tied to this first narrative is the claim of “de-Nazification.” Despite Ukraine’s government containing no Nazis, and Nazi ideology having no popular support in Ukrainian civil society, Putin said he wanted to “de-Nazify” Ukraine in his declaration of war. This often extends to accusations that Ukraine is committing war crimes, especially against ethnic Russians. Pro-Russia outlets often point to the presence of neo-Nazis at the 2013 Euromaidan protests as a way of delegitimizing the current government, or to Nazis in the current Ukrainian armed forces. If this narrative were true, however, Putin would not be providing support to some of the Russia-led separatists in Donbas who espouse Nazi ideology.

The second false narrative seeks to demoralize and delegitimize the Ukrainian military. Often this appears as claims of war crimes. “We’ve seen false accusations of the Ukrainian military killing civilians or allegations of a genocide against Russian-speaking Ukrainians,” said Al Baker, Logically’s Editorial Director.

There are also stories about the Ukrainian military fleeing, being defeated, or giving up entirely. Liubov Tsybulska, founder of Ukraine’s StratCom Centre, discovered a story on Telegram that purported to give updates about the front line of the conflict but likely originated within the GRU, Russia’s military intelligence agency.

The last narrative falsely frames Ukraine as the foreign oppressor of the Donetsk and Luhansk regions of Ukraine. The DFR Lab found numerous examples of this narrative; for instance, the debunked claim that Ukrainian Special Services planted a bomb in an administrative building in Donetsk. Together, these false narratives frame Russia as a liberator in a regional conflict — a protector rather than an aggressor. 

Why propaganda?

But what is the purpose of Russian propaganda? In short, divide and conquer. Disinformation operations from Russia seek to “pollute the information environment in order to influence what information is available to policymakers or affects them via democratic pressures or to erode trust in institutions, such as host governments and traditional media, often by proliferating multiple false narratives,” according to a 2018 report about Russian social media operations conducted by the RAND Corporation, a global policy think tank.

Logically’s Brian Murphy explained this in the American context: “The more polarized Americans become, and the less trusted America's institutions are, the harder it is for the U.S. to exercise soft power in foreign affairs.”

Russia’s disinformation and propaganda apparatus has a sophisticated structure. Russian-affiliated media outlets, such as RT and Sputnik, create misleading or false content, which is often drawn from a seething Telegram ecosystem. This content is then repackaged and disseminated by Russia’s networks of paid trolls, botnets, and so-called “grey propaganda” sites: groups of websites and publications that parrot the Kremlin line but maintain some plausible deniability or have unclear attribution. While some of them are directly funded by the Russian government, others might be “news aggregators, far-right or far-left sites, blogs, [or] users drawn in by clickbait headlines that reinforce their previously held beliefs,” according to RAND.

Multiple contradicting narratives can serve as an advantage to the Russian propaganda apparatus, according to a report from the U.S. Department of State. Propagandists can spin local narratives to suit their target audience without needing to be consistent. The contradictions provide cover in the form of plausible deniability for Kremlin officials and can create a kind of paralysis in audiences unsure of what’s true or how to act. It creates a “media multiplier effect among the different pillars of the ecosystem that boost their reach and resonance,” the report reads.

As the European Research Council-funded COMPROP project puts it, “By laying out multiple conflicting stories, authoritarian regimes prevent their citizens from knowing which narrative to respond to.” The point of all this propaganda, even the baldly obvious propaganda, is not just to get people to believe Russia’s false narratives, but to get them to disbelieve everyone else’s true ones. Disinformation campaigns are so destabilizing because they make real information harder to locate and harder to trust.

Fake videos and photos

Social media platforms can provide immediate access to information about a conflict as it unfolds. People within conflict zones rely on them to get vital information about friends, family, and safe areas.

That said, they are also the sites where false videos, pictures, and harmful narratives can spread, and the sites that countries use to conduct information operations. Flooding these channels with falsehoods can impair access to lifesaving information.

As Logically reported last year, platforms such as Twitter, Facebook, Instagram, and TikTok prohibit content that glorifies violence or terror groups, and all of them prohibit illegal activity, but none has a public policy on how content from an open conflict is moderated. Instead, moderation is driven by algorithmic responses and user-generated reports.

The International Fact-Checking Network (IFCN) has issued a plea to refrain from sharing unverified posts on social media.

Logically, alongside other IFCN signatories worldwide, is working to verify images and videos as they come in. The investigative journalism outlet Bellingcat has created a public database of debunked claims taken from Russian media outlets. Old footage from unrelated conflicts often goes viral during new conflicts, and this time is no exception.

While some of these old videos may have been distributed by Russian trolls or bots, it’s also common for unrelated bad actors to take advantage of a tragedy for money, clicks, or clout. For instance, Ben Collins at NBC found several TikTok users in the U.K. who had layered audio of gunshots over livestreams from their homes to make it look like they were in danger, then scammed viewers out of donations.

It might be tempting to share information on social media that could help someone or raise awareness. However, it is vital that users don’t inadvertently boost something incorrect or damaging. The best method for citizen fact-checking is the SIFT methodology, developed by misinformation researcher Mike Caulfield: stop; investigate the source; find better coverage; and trace claims, quotes, and media back to their original context.

Rachel Schraer at the BBC noted that stopping isn’t just about assessing whether you understand what a piece of content is, but also about noticing the emotional reaction you’re having. 

So when it comes to interacting with social media during a conflict, less is best. Sharing, liking, or replying to bad content can all inadvertently increase its reach. Even talking critically about false information can amplify it, if not framed carefully.

The nonprofit organization Defend Democracy has created an extremely useful list of dos and don’ts of dealing with lies and disinformation on Twitter. It advises that the best way to combat bad information is with good information — amplify verified and trusted sources, and post about narratives based on truth. When you want to talk about bad or unverified content, screenshot it rather than engage with it, and clearly label the screenshot as false or misleading.

Al Baker adds another point. “Lack of certainty about facts on the ground is one factor encouraging wilder misinformation narratives to spread,” Baker says. “I would add a plea not to amplify any speculation about specific events.”


Image Credit: Reuters
