
How Can Platforms Improve Moderation Without Affecting Free Speech?

“You click report and it's like sending out a prayer, you hope for the best,” Nathalie Van Raemdonck, a doctoral researcher at the Vrije Universiteit Brussel, tells me.

As part of her research into how online spaces are designed, Van Raemdonck thinks a lot about how the user experience on social media platforms can be improved. She sees two main levers: what she calls context specificity, and slowing down the speed at which content spreads and is amplified.

Delete or de-amplify?

“There's a huge difference between saying leave it up or don't leave it up, which is a very binary choice, while you can also say, okay, but does this need to go viral?” Van Raemdonck says.

“In traditional media, we have gatekeeping systems, like editorial boards. We don't have that with platforms, but we also don't really want a Facebook moderation team to be that speed bump. So we do need some sort of way of creating that friction.”

Creating more friction to help prevent mis- and disinformation isn’t a new concept. In fact, if you’ve ever tried retweeting an article without clicking the link first, you’ve already seen attempts to slow how quickly information spreads on social media.

During the 2020 U.S. election, Twitter took this a step further, trialing a system of prompts that asked users whether they wanted to share a tweet that had already been labeled as potentially misleading, while also pointing them to sources of credible information. Tweets carrying such labels had already been de-amplified, resulting in fewer people seeing them. According to Twitter, this was to “give individuals more context on labeled Tweets so they can make more informed decisions on whether or not they want to amplify them to their followers.”
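To make the mechanics concrete, here is a minimal sketch of how such a system could fit together, combining a ranking penalty for labeled tweets with a confirmation prompt at share time. The label names, weight values, and prompt wording are invented for illustration; this is not Twitter's actual implementation.

```python
# Hypothetical sketch of label-based de-amplification plus a share-time
# friction prompt. Label names, weights, and prompt text are invented.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Tweet:
    tweet_id: str
    text: str
    label: Optional[str] = None  # e.g. "potentially_misleading"


def ranking_weight(tweet: Tweet) -> float:
    """Labeled tweets get a reduced weight so fewer timelines surface them."""
    return 0.2 if tweet.label == "potentially_misleading" else 1.0


def share_flow(tweet: Tweet, confirm) -> bool:
    """Gate resharing of a labeled tweet behind a confirmation prompt.

    `confirm` is a callback that displays a prompt and returns True/False.
    """
    if tweet.label == "potentially_misleading":
        return confirm(
            "This Tweet has been labeled as potentially misleading. "
            "Review the linked context before sharing. Share anyway?"
        )
    return True  # unlabeled tweets are shared without friction


# Example: a labeled tweet is down-weighted and gated behind a prompt.
t = Tweet("1", "Miracle cure found!", label="potentially_misleading")
print(ranking_weight(t))                         # 0.2
print(share_flow(t, confirm=lambda msg: False))  # user declines -> False
```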

Whether these measures will remain in place is uncertain, but considering that content provoking anger is the most likely to be shared, nudging users to think before they amplify it could lead to healthier internet discourse. Once content is shared, the effect compounds: on many platforms, engagement breeds more engagement, drawing more people into the discussion and boosting the content further.

When we share something impulsively, it can feel like we are helping, whether by fighting injustice or amplifying newsworthy content, but impulse sharing can also amplify misinformation and create problems where there were none before.

On April 24, a wave of TikTok users created videos to raise awareness of something called “National Rape Day,” a day that came to light when a TikTok user posted a video “revealing” that a number of men were organizing sexual assaults in a group chat. In reality, there was no such group chat and, more surprisingly, no evidence that the video that supposedly started it all ever existed.

“TikTok users created this panic alone, we awareness-videoed it into existence,” Abbie Richards, who researches disinformation and debunks conspiracy theories on TikTok, tells me. “It was entirely out of good intentions. People thought that they needed to spread this message so it just spread like the plague, even though it wasn’t true.”

When it comes to TikTok, Richards identifies another issue that comes from direct amplification of content — livestreams. TikTok allows users to broadcast to the entire platform, reaching beyond their follower count to anyone who stumbles across the stream on the “For You” page, where users land upon opening the app.

“The kind of livestreams that do well are the weird ones,” Richards says. “You get a lot of people role-playing as murderers, hunting ghosts, or exploring abandoned areas. I once saw one where a girl had been sucking her toes for 15 minutes.”

The fact that livestreams are, well, live, makes them more difficult to moderate. Unlike regular posts, which can be caught by algorithms that detect problematic content, a livestream that violates a platform’s terms of service may have already been seen and shared by hundreds, if not thousands, of people by the time the platform becomes aware of it.

A potential solution? Moderators who are part of the online communities that first see this kind of content, enabling moderation that’s grounded in the context it aims to police.

Context specificity

Some recent thinking about content moderation suggests we should learn from theories about public spaces. Well-designed public spaces host social activities, signpost accepted behaviors, are accessible, engage people to help steward and maintain the space, and are built in partnership with the people who use them.

Van Raemdonck is particularly positive about one content moderation model along these lines: the system built into Reddit’s platform architecture. On the site, users join interest-based communities that can revolve around anything, be it furniture building, fashion, or having a beer in the shower. The landing page of Reddit, known as “the front page of the internet,” is an amalgamation of the most popular content from across the whole platform.

When it comes to moderation, Reddit’s system relies on moderators from within each community deciding what is acceptable and what is not. Van Raemdonck says this means people who understand a community well can help judge whether something belongs there, and it provides a first line of defense against bad information being shared platform-wide.
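As a rough illustration of that layered model, the sketch below checks a post against platform-wide policy first and then against the rules of its specific community. The rule names, tags, and communities are invented; this is not Reddit's actual moderation pipeline.

```python
# Illustrative sketch of layered moderation: platform-wide policy first,
# then community-specific rules maintained by that community's moderators.

PLATFORM_POLICY = {"no_doxxing", "no_violent_threats"}

COMMUNITY_RULES = {
    "r/furniturebuilding": {"posts_must_show_the_build"},
    "r/fashion": {"no_affiliate_links"},
}


def violations(post_tags: set[str], rules: set[str]) -> set[str]:
    """Return the subset of rules this post breaks (tags supplied by mods/filters)."""
    return post_tags & rules


def moderate(community: str, post_tags: set[str]) -> str:
    platform_hits = violations(post_tags, PLATFORM_POLICY)
    if platform_hits:
        return f"removed site-wide: {platform_hits}"
    community_hits = violations(post_tags, COMMUNITY_RULES.get(community, set()))
    if community_hits:
        return f"removed by {community} moderators: {community_hits}"
    return "visible"


print(moderate("r/fashion", {"no_affiliate_links"}))  # community-level removal
print(moderate("r/fashion", {"no_doxxing"}))          # platform-level removal
```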

There are obvious drawbacks to relying too heavily on community moderation, though, especially when it’s unpaid work. The system that works so well on Reddit can leave moderators overwhelmed on other platforms, such as Facebook, where the community moderation culture isn’t as established. “I was talking to one of the admins of a group that has 100,000 members. He couldn’t keep on top of it,” Marianna Spring, the BBC’s specialist reporter covering disinformation and social media, recalls. “He didn’t know what posts were conspiracy theories anymore. People are just being left to handle it themselves.”

Sometimes it’s not just the moderators who are out of their depth. Spring says some of the conversations she’s had have made her wonder how familiar tech executives are with the culture on their own platforms. “I remember being on a call with Nick Clegg in around September and I mentioned Save Our Children, and this was the first he had heard about it,” Spring says, referring to the QAnon-adjacent anti-child-trafficking movement, which proved a concerning vector for online radicalization. “It feels as though the platforms have no concept of how radicalization happens or works and what it leads to, whether it's inspiring violence or any other form of real-world harm. The problem is platforms can argue that that's not their responsibility, but they ultimately are a causal factor.”

Spring says that experts regularly point out that the fundamental business model and nature of social media makes this a difficult problem to resolve, but “there are mitigating things that can be done and I think proactive moderation of Facebook groups would be a really big one.”

“I think that there are solutions, but we know that [tech platforms’] business models and the way that they work doesn't necessarily mean that they are wholly invested in pursuing those solutions because if they were, this probably would have been dealt with much more quickly.”

International contexts

These community contexts are only one part of the picture; local geographic contexts also play a role in what is acceptable in one part of the internet compared to another.

In the U.S., this is hotly contested due, in part, to a fundamental expectation of free speech under the First Amendment. Elsewhere in the world, free speech isn’t held up as a cornerstone of democracy in quite the same way: Germany’s Basic Law protects human dignity, while in France it is the Republic and the equality of its people that are protected.

These political and legal structures are one reason that watertight moderation policies are difficult to implement at scale, resulting in a patchwork of rules that differ depending on where a person is located. In Germany, for example, the Network Enforcement Act (NetzDG) requires platforms to “take down or block access to manifestly unlawful content within 24 hours of receiving a complaint,” with the result that content is sometimes simply blocked within Germany while remaining accessible elsewhere.
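Below is a minimal sketch of what that country-scoped blocking might look like in practice. The data model and country codes are assumptions for the example; the 24-hour deadline is the only detail taken from NetzDG itself.

```python
# Illustrative sketch of country-scoped blocking, as described for NetzDG:
# a post judged manifestly unlawful under German law is withheld for viewers
# in Germany but stays visible elsewhere.

from datetime import datetime, timedelta, timezone


class Post:
    def __init__(self, post_id: str, text: str):
        self.post_id = post_id
        self.text = text
        self.blocked_in: set[str] = set()  # ISO country codes


def handle_netzdg_complaint(post: Post, received_at: datetime) -> datetime:
    """Block the post for German viewers and return the statutory deadline.

    NetzDG requires manifestly unlawful content to be taken down or blocked
    within 24 hours of receiving a complaint.
    """
    post.blocked_in.add("DE")
    return received_at + timedelta(hours=24)


def is_visible(post: Post, viewer_country: str) -> bool:
    return viewer_country not in post.blocked_in


p = Post("42", "example post")
deadline = handle_netzdg_complaint(p, datetime.now(timezone.utc))
print(is_visible(p, "DE"))  # False: withheld in Germany
print(is_visible(p, "FR"))  # True: still visible elsewhere
```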

With this in mind, it seems almost impossible to expect platforms to be moderated in a way that keeps everyone happy, and it’s probably the case that one region will end up setting standards for others. Logically CEO Lyric Jain thinks that region might be the EU, or even the U.K., giving the example of GDPR as a regulation that became the global standard. “There’s an opportunity for the U.K. and for Europe in particular, I think because even when GDPR got introduced, platforms used it around the world because that was technically the strongest standard that existed at the time, and still is today,” Jain says. “So if they move sensibly and fast enough, the U.K. or the European paradigm that is likely to appear might end up being the global standard.”

Jain also highlights the language around duty of care in the U.K.’s Online Safety Bill, which gives the communications regulator Ofcom the power to fine companies that fail to uphold their duty of care to users up to £18 million or ten percent of annual global turnover.
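As a rough back-of-the-envelope illustration of that ceiling (assuming the cap is read as the greater of the two figures, which is an assumption of this sketch rather than a quote from the bill):

```python
# Rough illustration of the Online Safety Bill's fine ceiling as described
# above: up to £18 million or ten percent of annual global turnover
# (taken here as whichever is greater, an assumption of this sketch).

FLAT_CAP_GBP = 18_000_000
TURNOVER_SHARE = 0.10


def max_fine(annual_global_turnover_gbp: float) -> float:
    return max(FLAT_CAP_GBP, TURNOVER_SHARE * annual_global_turnover_gbp)


print(f"£{max_fine(50_000_000):,.0f}")      # smaller firm: the flat £18,000,000 cap applies
print(f"£{max_fine(85_000_000_000):,.0f}")  # large platform: £8,500,000,000
```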

This innovative legal framework “could be vital,” Jain says, “because right now, although we all use social media platforms, there's nothing in regulation that's directing them to look out for their users' best interests. Commercially, they should be doing that because otherwise in the long-term they're going to hemorrhage users, but it's really the first time where language around specific duties that platforms have is being introduced.”

And if platforms really want to help improve moderation and fulfill that duty of care? It would make a big difference if platforms released more of their data to researchers who specialize in countering extremism. “Only very few communities and experts have access to the underlying data from platforms that shows what's harmful, and there’s only very opaque reporting from platforms themselves,” Jain says. “The research community could certainly be better supported and empowered with better data access.”
