A Year in Terms and Conditions

On 12 May 2020, 62 days after the World Health Organization (WHO) declared COVID-19 a pandemic, Facebook updated its COVID-19 policy. In this update, they included a particularly tedious sentence written in corporate legalese — a soul-crushingly generic statement full of the turns of phrase familiar to those of us who read terms of use before bed. In short, a sentence to put you to sleep:

“We’re working to remove content that contributes to the risk of real-world harm including through our policies prohibiting coordination of harm, sale of medical masks and related goods, hate speech, bullying and harassment and misinformation that contributes to the risk of imminent violence or physical harm, the details of which we’ve outlined below.”

Versions of this sentence have since appeared in subsequent iterations of Facebook’s riveting COVID-19 misinformation policy. It is there today — one year and 12 days after the WHO declared COVID-19 a pandemic. But when Facebook says that it is “working to remove COVID-19 content that contributes to the risk of real-world harm,” what do they mean exactly? Why don’t they just say that they are removing harmful COVID-19 content? And when Facebook says that they are taking down “misinformation that contributes to the risk of imminent violence or physical harm,” what does “imminent” violence mean exactly? And what of this “real-world harm”? Is it really the case that there is harm that happens in real life and harm that happens not in real life? Is the division between online and offline life so sharp? Or did the distinction between online and offline actions become unstuck some time ago? And if Facebook says it is “working to remove COVID-19 content that contributes to the risk of real-world harm” rather than something more succinct — for instance, “Facebook removes harmful COVID-19 content” — is there a reason for this?

A gigantic task

Facebook is enormous. 2.8 billion of the world’s 7.8 billion people use the platform every month. This means that any move Facebook makes is a cumbersome one, and any tweak to its terms of use is sweeping. For the social media giant to classify a user’s post as misinformation — and for any particular piece of misinformation to then be taken down — the post has to meet a mind-boggling number of seemingly arbitrary criteria. And the more criteria a piece of content has to meet to be classified as misinformation, the more room there is for ambiguity, the less likely it is that the content will be classified as misinformation, and the less likely it is that Facebook will have to do anything other than think giant thoughts on a giant scale, without those thoughts translating into small but necessary actions.

On 13 December 2020, a relatively small Facebook group made its debut. The group’s title — “Against COVID-19 Vaccine, We are not “Anti-Vax” in general, We are Anti-RNA” — is presumably a reference to the messenger RNA (mRNA) technology used in some COVID-19 vaccines, and not to RNA in general, which is a nucleic acid found in all life forms.

The title of this Facebook group points to a classic stance adopted by COVID-19 vaccine doubters. Logically has previously reported on the phenomenon of vaccine hesitancy towards COVID-19 vaccines specifically but not vaccines generally.

[Screenshot: a comment posted in the group]

The conspiracy theories in this particular group — which has existed for four months now — are not subtle or in any way difficult to find. One user, who we shall call Jason, is the author of the above comment. He talks about Bill Gates — a longstanding favorite among conspiracy theorists — claims that a second wave will originate from vaccinated people, and urges everyone to go out in the sunshine, take zinc, and look after “your immune system IE fruit yogurt.”

While Jason’s claims are easily debunkable, it’s hard to see whether they contravene either Facebook’s terms of use or community standards. For instance, Facebook’s COVID-19 vaccine policy states that users cannot claim that, “for the average person, something can guarantee prevention from getting COVID-19 or can guarantee recovery from COVID-19.” The policy then states that this would include medicinal or herbal remedies, vitamin C, and topical creams.

If we view Jason's comment charitably, he is simply encouraging people to do uncontroversially healthy things: to get out in the sun, and to make sure they are consuming enough vitamins in their diet. He doesn’t say that such things will “guarantee” immunity, even if he implies that such measures might help. Similarly, though Jason erroneously says the “second spike” will come from vaccinated people, as of March 2021 there is little evidence that being vaccinated prevents COVID-19 transmission entirely, and organizations such as the Centers for Disease Control and Prevention (CDC) still recommend that vaccinated people continue to maintain physical distance because of the ongoing risk of transmission.

[Screenshot: a post shared in the group]

Another post (depicted above), shared by a user we are going to call Pedro, links to dailyexpose.co.uk: a website that apparently went live in November 2020. The linked article claims that women are miscarrying as a result of being vaccinated against COVID-19. Unsurprisingly, there is no established causal link between receiving a COVID-19 vaccine and miscarriage. However, pregnant women are routinely excluded from clinical trials, and as such, health services such as the NHS advise that "those who are pregnant should not routinely have [a COVID-19] vaccine."

Pedro's post is one of the few in the group that bears a misinformation label — a new addition to Facebook and something CEO Mark Zuckerberg wrote about in a March 15 blog post. Presumably, as Pedro’s post has been labeled as misinformation, one of Facebook's 58,000 full-time employees knows about it and knows that it contravenes some rule or other. Regardless, the post still exists.

[Screenshot: a post asking for arguments against vaccination]

A final example is from a user asking for "good arguments against vaccination." The user — who we shall dub Phyllis — says that she has two problems. The first problem is her fear about her sister getting vaccinated. The second problem is her lack of fluency when it comes to speaking in the language of pseudoscience. With this in mind, she asks the group to help her learn more about anti-COVID-19-vax arguments.

This is probably one of the posts in this group that does not contravene the terms of use or community guidelines. People like Phyllis — who are apparently walking with open minds, hearts, and arms into the world of conspiracy thinking — are not contravening the terms of use or the community guidelines by asking for assistance in spreading pseudoscience. They are just contravening common sense. 

This particular group has existed since December 2020, without any apparent interference from Facebook other than some content labels. The group is, after all, a group — a feature of the platform the social media giant has previously been reluctant to tamper with, despite groups being among the most successful ways that people promote vaccine falsehoods.

As of March 2021, this anti-vax group has only one thousand members. This means that if it is influential, it is only on a micro-scale. Facebook is a giant. Its giant eyes either cannot or will not focus on something so small.

A pat on the back

On February 8 of this year, Facebook announced new misinformation measures in a blog post titled Reaching Billions of People With COVID-19 Vaccine Information. Instead of adopting a somber tone, the February 8 blog post boasts a self-congratulatory one, aided and abetted by a sprinkling of superlative adjectives. Referring to a survey supported by Facebook, the author — Head of Health Kang-Xing Jin — states that this survey is “one of the largest ever conducted.” When talking about the scale of misinformation, Kang-Xing Jin claims that Facebook is running “the largest worldwide campaign to promote authoritative information about COVID-19 vaccines.”

This tone is echoed in Mark Zuckerberg's more recent blog post, where he writes that Facebook has "already connected over 2 billion people to authoritative COVID-19 information," and that "3 billion messages have been sent by governments, nonprofits and international organizations to citizens through official WhatsApp chatbots on COVID-19."

Mark Zuckerberg's and Kang-Xing Jin's announcements are written in a sort of corporate PR-ese. Roughly translated to plain English, their respective statements read as follows: We're doing loads, lads.

Facebook is not doing loads, lads. It was not until October 2020 that Facebook banned using the platform to place advertisements that promote vaccine falsehoods, including ads that encouraged people not to get vaccinated. And before February of this year, Facebook would demote “the ranking of groups and Pages that spread misinformation about vaccinations in News Feed and Search.” This meant that the misinforming groups and pages wouldn't be included in recommendations or in predictions when you type into Search, but it didn’t mean these misinforming groups and pages weren’t there, to be linked to and promoted by members. This is not loads.

Even more surprising is that on February 8 of this year, Facebook finally banned vaccine misinformation in general. This came 334 days after the WHO declared COVID-19 a pandemic and two centuries after the first vaccine was developed.

We can only speculate as to why Facebook has been so slow off these marks. By this, I mean that we can only have hunches. My hunch is this: On the front page of the community guidelines, Facebook reiterates its commitment to “voice” — its term for what would commonly be called “freedom of speech.” Under the commitment to voice section, Facebook writes that the goal of its Community Standards “has always been to create a place for expression and give people a voice. This has not and will not change.” Facebook’s “commitment to voice” may well be the thing that has prevented it from silencing those who spread harm on its platform. If this is the case, the tech giant would do well to remember how loud its voice really is.
