From Micro to Macro: How Counter-COVID Influencers Evade Moderation

When Louise Hampton posted a video appearing to suggest the COVID-19 pandemic didn’t exist, she was mostly unknown to the world. She worked for Care UK, a major NHS service provider that operates the 111 call center service.

The video, in which she claimed the service "wasn’t getting calls" and had been "dead" throughout the pandemic, quickly gained traction, pulling in half a million views across Facebook and Twitter. At the time, a spokesperson for Care UK told the BBC: "We are aware of this video, which we consider to be materially inaccurate in a number of ways," adding, "our call centres were, in fact, exceptionally busy, handling a peak of 400% more calls than usual."

Hampton later resigned before she could be sacked for gross misconduct. She then began encouraging other medical professionals to follow her lead, posting a list of doctors, nurses, and professors who "had spoken out against the current global narrative on COVID." Shortly after, another post followed, this time lamenting that the list had been removed: "Those who speak the truth are shut down," she wrote.

In the months since the video in which she came out as an “NHS whistleblower,” Hampton’s Facebook following grew to more than 22,000 people. Over this time, her posts followed a growing trend of moderation-avoidance tactics used by disinformation influencers to push their chosen narratives.

Anti-vaccine microinfluencer Louise Hampton. Image: Facebook

"You must watch this before it’s removed," she posted alongside a video claiming that people are being forced to take vaccinations, "guess what...[my post has] been REMOVED!!!!!" she writes in another, ending with #FreeSpeechIsDead.

In doing so, Hampton joined a league of microinfluencers who oppose any and all moderation. She leveraged an identity that followers of COVID-19 conspiracy theories feel they can relate to: the sense that their opinions are not mainstream and that they must therefore speak out against being silenced.

Leveraging the whistleblower narrative

The whistleblower narrative involves current or former healthcare workers purporting to "reveal the truth" about the pandemic, almost always on social media. This route is more direct and personal than posting in groups or on pages, and it centers the discourse on the struggle of a particular individual and their pursuit of truth.

When one such influencer is banned or otherwise has action taken against them, such as labels on their posts, fact checks, or removal of their demonstrably false claims, the initial response is often not to admit fault but to frame the corrective action as an attack on free speech. In themselves, these posts support a narrative of self-proclaimed oppression, appeasing an audience that claims to believe in free speech without consequences.

Positioning oneself as oppressed or silenced helps build an audience

In many cases, there is a direct monetary motivation for this behavior. When someone loses their job for posting damaging false claims online, or has been struck off for medical malpractice or making false communications about the pandemic, there is an obvious incentive to "double down" and monetize their influence to replace lost income. 

Even though some crowdfunding platforms, such as GoFundMe, have banned fundraisers for anti-vax campaigns, influencers looking to raise money turn to more direct sources of funding. A simple search for “IBAN” in German disinformation Telegram groups reveals hundreds of posts soliciting “financial support.” Donations received via PayPal likewise help sustain disinformation influencers.
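That kind of search is simple to reproduce programmatically against public groups. Below is a minimal sketch using the open source Telethon library; the group username and API credentials are hypothetical placeholders, not real channels.

```python
# Sketch: find posts soliciting donations ("IBAN") in public Telegram groups.
# Assumes Telethon (pip install telethon); credentials and group names are
# placeholders for illustration only.
import asyncio
from telethon import TelegramClient

API_ID = 12345                        # placeholder: issued at my.telegram.org
API_HASH = "0123456789abcdef"         # placeholder
GROUPS = ["example_disinfo_group"]    # hypothetical public group usernames

async def find_fundraising_posts():
    async with TelegramClient("osint-session", API_ID, API_HASH) as client:
        for group in GROUPS:
            # Server-side search of the group's message history
            async for msg in client.iter_messages(group, search="IBAN", limit=200):
                print(msg.date, msg.sender_id, (msg.text or "")[:120])

asyncio.run(find_fundraising_posts())
```

Because these groups are publicly viewable, no special access is needed beyond an ordinary Telegram account.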

This is especially true when there are so many well-trodden paths to profiting from social media influence. An industry of "alternative news outlets" readily accepts self-proclaimed proponents of free speech, who are then supported by a ready-made audience that sees itself as fighting a war for the truth.

He avoided censorship with this one weird trick! Social media giants HATE him!

Microinfluencers are different in how they share content: instead of posting to a Facebook page or group, they address sometimes tens of thousands of followers from their own personal profiles. On Facebook, using a personal profile instead of a verified "brand" page means that microinfluencers can avoid a swathe of existing moderation policies entirely. And since these profiles revolve around a single person, they can become living documents of an individual's radicalization towards more divisive and damaging content, a process passed on to their followers.

Personal profiles create an effective space to spread disinformation in plain sight, avoiding moderation under Facebook’s content policy for verified pages. Furthermore, these accounts aren’t surfaced by Facebook’s public analytics tools and are, as such, more difficult to track and monitor.

Another technique that allows content to fly under the radar of tools like Facebook’s CrowdTangle is encouraging users to download and repost content under the premise that "Facebook doesn’t want you to see this." Not only does this method create further intrigue, it also prevents reliable moderation of any given piece of content. Unique native posts (posted directly to a platform) cannot be tracked in the same way as posts shared from other people and pages, or even links to external websites.
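The gap this exploits is easy to illustrate. A shared post or posted link carries a stable identifier that a blocklist can match, whereas a downloaded and re-uploaded copy arrives as an entirely new object that can only be caught by fingerprinting the content itself. A minimal sketch with hypothetical post data follows; the exact-match hash is deliberately naive, where production systems use perceptual hashes that survive re-encoding.

```python
import hashlib

# Hypothetical moderation state: known bad URLs and media fingerprints
BLOCKED_URLS = {"https://example-disinfo.com/video1"}
KNOWN_BAD_HASHES: set[str] = set()

def is_blocked(post: dict) -> bool:
    # Shares and link posts expose a URL the blocklist can match directly
    if post.get("shared_url") in BLOCKED_URLS:
        return True
    # A native re-upload exposes no URL; the only handle on it is a
    # fingerprint of the media bytes themselves
    media = post.get("media_bytes")
    if media is not None:
        return hashlib.sha256(media).hexdigest() in KNOWN_BAD_HASHES
    return False

# The link share is caught at once; the same video re-uploaded natively
# passes until its fingerprint has been recorded from an actioned post.
print(is_blocked({"shared_url": "https://example-disinfo.com/video1"}))  # True
print(is_blocked({"media_bytes": b"<re-encoded copy of video1>"}))       # False
```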

With Facebook’s limits on what data it shares, content from personal accounts that use its followed/follower feature is rarely flagged by current analytics tools. Unless these accounts are monitored manually, action can be taken only when their posts spill out into monitorable spaces, such as existing groups and pages or other social media platforms.

The problem of re-platforming

Some techniques even allow propagators of disinformation to avoid interacting with platforms directly at all; instead, their content is shared by people who follow their work off-platform. Efforts to deplatform influencers who habitually spread misinformation can easily be frustrated if they are "re-platformed" by other channels or users of that platform.

Facebook and YouTube profiles for Indian anti-vax “celebrity” Biswaroop Roy Choudhary were removed in June 2020, but his content still appears across social media via links to external websites. Choudhary’s claims are as familiar as they are outlandish: “COVID-19 doesn’t exist,” “masks are actively harmful to the wearer,” and so on. Following the ban, Choudhary’s book about COVID-19 was publicized across other pages, and he gave interviews on other YouTube channels and Facebook pages.

Despite the ban, Choudhary’s official website continued to gain traction on Facebook. The website, which contains false and misleading claims about COVID-19, was shared to a combined audience of more than 26 million people on the site throughout 2020. Roughly half of that reach came after Choudhary was removed from the platform.

Attempts to prevent this kind of action can result in a game of cat and mouse. Even if a platform blocks a link to a website, it’s still possible to share the same website via a URL shortener like Tinyurl or Bit.ly. Unless a platform actively monitors posters, finding a way around blocks on posting a link is as simple as registering a new web domain. 
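One partial countermeasure is to judge a link by where its redirect chain ends rather than by the URL as posted. The sketch below does this with a hypothetical blocked domain; note that it does nothing against the simpler evasion of registering a fresh domain.

```python
# Sketch: resolve a shortened URL to its final destination before checking
# it against a blocklist. The blocked domain is hypothetical.
import requests
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"example-disinfo.com"}

def final_domain(url: str) -> str:
    # Follow the redirect chain (e.g. tinyurl.com/... -> target site);
    # HEAD keeps it lightweight, though some servers only answer GET
    resp = requests.head(url, allow_redirects=True, timeout=5)
    return urlparse(resp.url).netloc.lower()

def should_block(url: str) -> bool:
    return final_domain(url) in BLOCKED_DOMAINS

# A shortened link that ultimately redirects to the blocked site is caught
# just as the bare link would be:
# should_block("https://tinyurl.com/some-alias")  -> True if it resolves there
```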

Going deeper down the rabbit hole

If there’s still one place where disinformation can spread almost entirely unchecked, it’s Telegram. With the looming threat of being “silenced” for their views on traditional social media platforms, misinformation influencers seek to build a following elsewhere on the internet.

Telegram has a certain allure to people posting misinformation and conspiracy theories. The platform is privacy-focused, touts strong encryption as a feature, and has terms of service that allow almost anything on the platform, so long as it isn’t posted in publicly viewable chats.

Influencers usually begin building a following on Telegram when they notice restrictions placed on their accounts or assume they will soon be banned from mainstream platforms.

Not every user makes the jump to closed platforms like Telegram, but moving further from public view and deeper into the rabbit hole goes hand in hand with more extreme claims becoming easier to find. What can start as vaccine skepticism quickly evolves into more dangerous beliefs: that the vaccine is a ploy by elites to control people, that potentially life-threatening alternative remedies are more effective, or even that the pandemic doesn’t exist at all.

Social media platforms have protections in place, though not always comprehensive ones, to provide context, check facts, and moderate content to help better inform their users. Microinfluencers are concerning not only because they can circumvent moderation attempts, but because in many cases they have benefited from those attempts directly. When Facebook and other platforms cracked down on groups spreading disinformation, leagues of microinfluencers and anti-vax celebrities sprang up through the fractures the crackdown left behind. In this case, it seems the cure was perhaps as risky as the disease.
