Twitter is currently rolling out a new feature granting bad actors a larger and mostly unmoderated platform to spread hate speech and misinformation: Twitter Spaces.
Over Thanksgiving week in the U.S., Twitter expanded the presence of Twitter Spaces, a Clubhouse-like feature that Twitter first introduced earlier in 2021 to a limited set of users. Like Clubhouse, Twitter Spaces is now acting as a vector for misinformation and hate speech.
Through Twitter Spaces, users can host a live “audio room,” with the ability to set a topic for the room, designate up to two other co-hosts, and invite up to 10 “speakers” at a time – allowing for 13 active speakers and an unlimited audience. Invites to join the Space can be sent through DMs, tweeted out as a link, and shared as a direct link allowing potential attendees without Twitter accounts to join in.
As of October 21, all Twitter users can create their own Space through the Android or iOS mobile app. In late November, Twitter rolled out an update giving Spaces its own central tab at the bottom of the app, directly between the search function and notifications; Twitter first rolled this tab out to beta testers in June 2021. Additionally, when a user enters a Space, that Space now appears at the top of their followers’ Twitter feeds on the mobile app and at the top of the Spaces tab.
Twitter announced Spaces in late 2020, describing the feature as “a small experiment focused on the intimacy of the human voice.” The company has repeatedly likened the feature to a “dinner party” where users could connect and chat. On the official Twitter Spaces account, the company listed “Notable Hosts,” highlighting discussions around journalism, K-Pop, TV shows, cryptocurrency/NFTs, and other topics from around the world, including topics of local interest. Twitter also pitched Spaces as a way for experts to share their knowledge in new ways with an even wider audience. The Spaces account even recommended several Spaces hosted by people involved in combating the COVID-19 crisis. Unfortunately, a very different crowd immediately took a liking to the new feature.
As Twitter rolled the feature out to all users, an extremely wide array of actors spreading disinformation and hate took advantage of the expansion of Spaces. On both my main account and a right-wing sock-puppet account, I came across chats with names such as “Why do black people exist?”; “You are a man dressed up like a woman sir not a real woman!”; “Controlling the population, vaccine discussion”; “Covid/conspiracy hour”; and “What conspiracy theory do you believe in?”
In the conspiracy-focused Spaces, which I spent the most time listening to, most speakers devoted time to sharing disinformation about COVID-19. Speakers claimed that the virus was manufactured in a lab, that the vaccine includes a microchip meant to control you, that ivermectin or a million other things can cure or prevent the disease, and that the virus is an excuse for the government to control all of us. Nearly constantly, speakers violated Twitter’s COVID-19 misleading information policy.
Misinformation has a long history of spreading on Twitter. In the past, Twitter’s algorithm pushed politically extreme content to users. Multiple governments have used Twitter to boost disinformation campaigns. Numerous harassment campaigns have played out on the platform, such as the one against actor Leslie Jones. During the COVID-19 pandemic, Twitter implemented labels marking tweets that may contain misinformation about the pandemic but still allowed those tweets to remain on the platform. Twitter has also allowed political figures who spread misinformation to their followers to remain on the platform, notably not suspending President Donald Trump until the last weeks of his presidency, and only after the January 6 storming of the U.S. Capitol.
Researchers who monitor online disinformation are worried. Although researchers had already raised concerns about Twitter Spaces, and about Clubhouse before it, Twitter nonetheless expanded the feature. It remains unclear how content on Spaces is monitored, and moderating voice-centered platforms is much more difficult, according to disinformation researcher Sara Aniano.
“My big worry with Spaces is how, and if, those conversations are monitored by any content moderators on Twitter. It's much simpler with the written word. Recently, a journalist I follow was listening to a Space for anti-vax conspiracy theories, and so I just sort of wandered in. The first thing I heard was people laughing about prominent Democrats being publicly executed by hanging or firing squad. To find that rhetoric flowing so freely on Twitter should alarm people. This isn't 4chan or Gab. This is a popular, highly accessible platform with extensive reach.”
Unfortunately, overt white supremacists also made clever use of Twitter Spaces. The “why do black people exist” Space was one example. A similar Space, titled “Why haven’t black built a successful society” [sic], drew in over 1,200 listeners. All of the co-hosts used different Pepes as their profile pictures, and a quick glance at their profiles revealed neo-Nazi-like beliefs. The co-hosts invited listeners to “argue” with them, but wound up speaking over or even insulting those who disagreed with them while regurgitating white supremacist talking points.
Due to the opacity of Twitter’s algorithms, it remains unclear how Twitter orders Spaces for users in the Spaces tab. The Space’s provocative and offensive name drew people in to hear what speakers were saying, and as those people joined, the Space appeared at the top of their followers’ feeds and Spaces tabs. As a result, other Spaces sprang up in reaction, bearing similar titles or discussing the original Space’s growth.
The far right’s use of Spaces is a trend, not an aberration. “Twitter Spaces has seen an influx of far-right users over the past few months, many of whom are exploiting the feature for the purpose of spreading hateful and violent rhetoric,” says Alex Newhouse, Deputy Director of the MIIS Center on Terrorism, Extremism, and Counterterrorism. Like Aniano, Newhouse is concerned about how Twitter is moderating Spaces. “While algorithmic detection of far-right, hateful, and violent content in videos, images, and text is reasonably effective at scale, moderation for voice—especially live—is extremely difficult to do. As a result, a feature like Twitter Spaces almost certainly relies on user-generated reports and manual post-hoc review, meaning that Spaces presents a vulnerability—and an opportunity for propagandists.”
The current Spaces algorithm allows extremists to make their beliefs appear much more widespread than they may be. “The important thing here is that they want you to see the titles and the numbers more than what’s in them,” says independent extremism rhetoric researcher Emmi Kuhn. “They want to seem bigger and more popular than they are. It’s smart and low effort. Twitter wants people to use their new service, so they gave it lots of visibility and engagement metrics. Unfortunately, bad actors figured out how to game it before Twitter figured out how to effectively moderate it.”
Kuhn also believes the voice-only format poses less risk for extremists. “The other factor that’s critical is it lets them communicate with their regular large Twitter networks without the risk of screenshots. Someone has to devote their time to listening to their shit if they hope to counter it.” For researchers, Spaces’s ephemerality poses a huge challenge: unless they are present and actively watching the listener count for the duration of a Space, it’s impossible to know how many people tuned in.
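To illustrate the monitoring burden Kuhn describes: Twitter’s public API v2 does offer a Spaces lookup endpoint that exposes a participant_count field, so a researcher who knows a live Space’s ID can at least log its listener count over time, but only while the Space is live. The sketch below shows one way to do that in Python; the Space ID, polling interval, and output file are hypothetical placeholders, and the endpoint and field names follow Twitter’s public v2 documentation.

```python
# Minimal sketch: periodically log a live Space's listener count via the
# Twitter API v2 Spaces lookup endpoint. Assumes a v2 Bearer token in the
# environment; SPACE_ID and POLL_SECONDS are hypothetical placeholders.
import csv
import os
import time

import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]  # app-only v2 token
SPACE_ID = "1YqKDqWqdPLKV"  # hypothetical Space ID
POLL_SECONDS = 30  # sampling interval; counts can change quickly


def fetch_space(space_id: str) -> dict:
    """Look up a Space and return its state and current participant count."""
    resp = requests.get(
        f"https://api.twitter.com/2/spaces/{space_id}",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"space.fields": "state,participant_count,title"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]


with open("space_listeners.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "state", "participant_count"])
    while True:
        space = fetch_space(SPACE_ID)
        writer.writerow(
            [int(time.time()), space["state"], space.get("participant_count", 0)]
        )
        f.flush()
        if space["state"] != "live":  # Space ended; counts stop updating
            break
        time.sleep(POLL_SECONDS)
```

Even this only captures snapshots of the instantaneous count, not unique listeners, and nothing of what was actually said, which underscores the researchers’ point: once a Space ends, the record is gone.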
In an update on December 3, the Twitter team behind Spaces acknowledged that it was aware of “spaces used to cause harm” and of “bugs that allowed harmful spaces to remain visible” over the weekend. Yet Spaces that clearly violated Twitter’s Terms of Service appeared in the Spaces tab well past the weekend; even as the Twitter team answered questions about Spaces, I browsed the Spaces tab and found multiple Spaces centered on conspiracy theories.
Despite Twitter’s claims that it is taking measures to reduce harmful content, questions remain about how that content will be moderated. While the Spaces now appearing in the Spaces tab carry less egregious names, accounts pushing misinformation about COVID-19 and the 2020 U.S. presidential election remain there. Therein lies the problem: unless one is constantly listening to all Spaces, it’s impossible to know what is being said.


W. F. Thomas is a disinformation researcher working in the U.S. and Germany. Thomas’s research broadly follows social movements and political violence, with a specific focus on the expansion of QAnon cultic ideology in Germany and its spread across digital platforms.