For Twitter’s more than 200 million daily active users, it has been a long and strange week and a half. Billionaire, CEO and Twitter superfan Elon Musk has spent $44 billion on a highly contested, drawn-out and sometimes dubious acquisition of his favorite corner of the internet. Beyond firing Twitter’s senior management and half of the remaining staff, exactly what he plans to do with his new toy is murky at best.
With one exception: Among Musk’s many antics and public statements since acquisition talks began in April, he has consistently cited rolling back Twitter’s content moderation as a primary reason for the purchase.
For those who work in the social media industry and those, like me, who study it, what Musk considers content moderation seems remarkable for its narrowness. It focuses mainly on removals of controversial claims, language or account holders, ignoring the moderation work that clears out spam and destructive bots. To social media experts, Musk’s disdain for Twitter’s rules comes across as naïve, and his desire for near-absolute “freedom of speech” on the site as a misguided impossibility.
Despite what critics like Musk seem to think, content moderation is far from being a partisan tool for the woke crowd.
Done right, content moderation requires an extensive, interdependent, cross-enterprise system of people, policies and practices. It must comply with legal mandates that differ from country to country and whose violation can lead to costly fines. It should encourage the widest possible user participation while reducing the potential for user harm resulting from that participation. And it must constantly hone the technological tools, automation and human judgment needed to achieve those goals.
The night before Musk officially took over Twitter, he crowed to his 115 million followers, “the bird is freed.” It was as good as announcing that he didn’t know what he didn’t know.
Twitter’s moderation standards have long erred on the side of permissiveness, especially compared with its closest peers in the market. For example, unlike many other platforms, Twitter allows users to post consensual adult sexual content, giving an outlet to people who enjoy that material while removing anything that crosses a legal line or seriously violates policies against such things as gratuitous violence, threats, self-harm, and animal abuse or torture.
Former Twitter lawyer and policy chief Vijaya Gadde played an outsized role in establishing and maintaining those expansive but safety-minded rules. She is as well known for fighting in court for users’ right to post as she is for banning @realDonaldTrump after the Jan. 6 attack on the U.S. Capitol.
Musk fired Gadde in his first round of cuts.
In total, Twitter’s new CEO cut the staff roughly in half on Friday morning. In a thread posted afterward, Yoel Roth, the company’s still-serving head of content moderation, tried to reassure skeptics that the site’s core practices remained in place. The internal Trust & Safety team, he tweeted, had been reduced by just 15%, and frontline workers – the outside contractors scattered around the world who do most of Twitter’s actual moderation – by less than that.
Most of us couldn’t stomach what these human moderators see over and over, every day. It’s a sad but universal truth that enough people are intent on uploading and spreading this material that a social media company must employ a small army of low-wage, low-status workers to deal with it. Twitter’s small army just got smaller.
Even before the corporate bloodshed, Musk’s Twitter had begun to show its dark side. Montclair State University researchers recorded an “immediate, visible and measurable spike” in hate speech on the site within the first 12 hours of Musk’s ownership.
Musk sent an open letter to advertisers in an attempt to allay their concerns about the reputational risk of staying on the site. Twitter, he wrote, would not sink into a “free-for-all hellscape.” Nevertheless, major advertisers – General Mills, Volkswagen and General Motors among them – have suspended their advertising.
Last Thursday, Musk again tried to calm advertisers’ fears. “Elon, great conversation yesterday,” marketing executive Lou Paskalis tweeted on Friday. “As you heard overwhelmingly from senior advertisers on the call, the issue that concerns us all is content moderation and its impact on BRAND SAFETY/SUITABILITY. You say you are committed to moderation, but you just fired 75% of the moderation team!”
Musk didn’t tweet a correction to that percentage; he simply blocked Paskalis. He also threatened to “name and shame” brands that had pulled their ads, and he blamed “activist” groups for the loss of advertising dollars.
A fitting analogy for the new Elonian Twitter might be a car with dodgy brakes speeding down a road without guardrails. But that would likely be lost on Musk; he has so far been relatively unfazed by spontaneously combusting Teslas and by troubled Autopilot software that, in a series of tests, allegedly failed to recognize the shape of a moving child in its path.
Just before Musk’s takeover of Twitter was finalized, sharp-eyed users noticed that he had changed his profile bio to “Chief Twit.” After 12 days of staff bloodshed, revenue missteps, abrupt policy changes and general Twitter mayhem, we can all now say: Hail, Chief.
Sarah T. Roberts is faculty director of the Center for Critical Internet Inquiry at UCLA.