Opinion: Preserving the Public Conversation

Elon Musk's recent acquisition of Twitter raises questions about balancing free speech with the need for a healthy public forum.

May 25, 2022

Vanessa Hong

Content moderation has challenged the ideal of absolute free speech on social media.

On March 25th, Elon Musk took to Twitter to broadcast a poll.

“Free speech is essential to a functioning democracy,” he wrote. “Do you believe Twitter rigorously adheres to this principle?”

In a reply to his own tweet, Musk added, “The consequences of this poll will be important. Please vote carefully.” 

70.4% of over two million users said no. One month later, Twitter accepted Musk’s $44 billion buyout offer. 

Musk, a self-proclaimed “free speech absolutist,” has hinted at his plans for the platform. “I think it’s very important for there to be an inclusive arena for free speech,” he said in a recent TED interview. He’s also clashed with top executives over Twitter’s future, endorsing complaints that the company’s general counsel, Vijaya Gadde, is a “top censorship advocate.”

It's been a long time since Tony Wang, then the general manager of Twitter UK, called Twitter "the free speech wing of the free speech party." It's been less time since Twitter's former CEO Jack Dorsey said in an interview that the quote "was never a mission of the company" and was, in fact, "a joke."

Dorsey and his co-founders learned the hard way why relatively unrestrained free speech on social media platforms is a pipe dream. They learned to curb their naivete and dig into the realities of content moderation. But Musk’s acquisition of Twitter could wind the platform back to when that “joke” of a mission was taken seriously.

Rules

"Facebook headquarters" by Frank Kehren is marked with CC BY-NC-ND 2.0.

Social media giants all eventually realized that their platforms need specific content moderation rules.

Section 230(c) of the 1996 Communications Decency Act is lauded as the “most important law on the internet.” It absolves social media companies of liability for the content they host, save for that which violates federal law. It gives them a remarkable amount of freedom not afforded to traditional media, immunizing them from responsibility for libel, infliction of emotional distress, commercial disparagement, threats, and distribution of sexually explicit material. 

At first, social media companies embraced the laissez-faire approach allowed under Section 230; the earliest content moderators were unpaid and largely unrecognized. But when platforms became flooded with hateful speech, companies stepped in. CompuServe, an early online service provider that hosted chat systems and forums, developed one of the earliest "acceptable use" policies prohibiting racist speech after a user filled a popular forum with anti-Semitic messages. In 2001, eBay banned symbols of racial, religious, and ethnic hatred, including Nazi and Ku Klux Klan memorabilia.

Catherine Buni and Soraya Chemaly recount for The Verge the story of Julie Mora-Blanco, who worked on YouTube's content moderation team during the site's early years. Her ten-person team was guided by a single sheet of paper with bullet points listing the types of videos that should be removed: anything containing animal abuse, blood, or visible nudity. That list evolved into YouTube's first booklet of content moderation rules, just six pages long.

Now, the internal content moderation rules for the biggest social media companies are dozens of pages long, updated constantly, and cover a wide range of topics. Twitter, for example, doesn’t allow content containing “[h]ateful conduct,” “[s]uicide or self-harm,” or “[t]errorism/violent extremism.”

These rules exist for a reason. They ensure that a given social media platform remains a place where people want to spend their time.

“[W]e want Twitter to continue to be a place where the expression of diverse viewpoints is encouraged and aired. To do that, we have to keep Twitter safe for the widest possible range of information and opinions to be shared, even when we ourselves vehemently disagree with some of them,” Gadde wrote in a 2015 op-ed for The Washington Post. “Freedom of expression means little as our underlying philosophy if we continue to allow voices to be silenced because they are afraid to speak up.”

Social media companies don’t regulate what’s allowed to stay up on their sites because they’re held legally accountable for the content of their users—Section 230 ensures that they largely aren’t. They invest in content moderation because a platform without any sort of moderation quickly devolves into an unpleasant, borderline unusable cesspool.

4chan, for example, is a site infamous for dark content. It has been the source of a leak of dozens of stolen celebrity nude photos, fake bomb threats, self-harm campaigns, and extensive cyberbullying. It's a breeding ground for racism, misogyny, homophobia, and other types of hatred. And its content moderation rules are incredibly minimal compared to those of other social media sites. Users are prohibited only from violating U.S. law, posting people's personal information, impersonating site administrators, and using bots. Racist, graphic, and "grotesque" material is permitted on one message board. Moderators, known as "janitors," work for free.

Evelyn Douek, a lecturer at Harvard who researches online speech, posits that all online platforms are eventually pushed to regulate content, usually in response to public pressure when someone discovers abhorrent material. Even Peloton, the exercise equipment and media company famous for its stationary bikes, faced a content moderation scandal when a Washington Post editor found QAnon hashtags on the site that connects its cyclists.

“If you’re going to have users generating content, you’re going to have users generating harmful content,” said Douek. That has to be confronted eventually.

But content moderation is hard. Really, really hard. Hard enough that years of work by engineers and leaps in progress on artificial intelligence have still left gaping holes in our content moderation infrastructure. 

Still, Musk has taken it upon himself to broadcast a simple solution via the platform he now owns.

“By ‘free speech,’” he tweeted a few weeks ago, “I simply mean that which matches the law.”

The logic of Musk's enlightening eleven-word solution is, unfortunately, painfully superficial. It makes sense on the surface to hold that if a tweet doesn't break the law of the user's country, it should stay up. But that only makes sense if you've never actually dug into content moderation.

Moderation that goes only as far as U.S. law allows a lot of unsavory content. Musk has said that he wants to rid Twitter of spam, but spam is legal under the First Amendment. So are hate speech, pornography, misinformation, and even the incitement of violence (so long as it is not directed to, and likely to produce, "imminent lawless action").

In 2016, a man was found not guilty of making violent threats after posting images in which he Photoshopped the crosshairs of a rifle scope over pictures of FBI agents. His speech was protected by the First Amendment, so under Musk's definition of how Twitter's content moderation should operate, that content should be left up. But rules that loose would arguably make the platform worse and discourage users from logging on.

For Twitter to be “maximally trusted and broadly inclusive,” as Musk says in the TED interview, a level of moderation is necessary. Hate speech may be legal in the U.S., but leaving it up isn’t the sort of action that causes people to trust an online platform—or attracts advertisers, on which free social media platforms heavily depend.

In a 2015 leaked company memo, then-Twitter CEO Dick Costolo bluntly admitted the platform’s failings in content moderation. “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” it reads. “It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.…There’s no excuse for it.”

All social media companies started at the same place: with less moderation and a loud adherence to free speech. And now, they’re all reaching the same conclusion: You can’t just let people post whatever they want. The rule of “no rules” isn’t a realistic option.

The problem is not people posting content with which others disagree. The problem is harassment, spam, violent threats, graphic content, dangerous misinformation, and hate speech—things that make social media much worse for users and ultimately stifle free speech.

Mike Masnick, founder and CEO of Floor64, the company behind the technology blog Techdirt, suggests that online platforms are not "town squares," a trite analogy that misses the point. Instead, "[t]he internet itself is the town square," and social media platforms are each private shops with their own rules—rules that exist for a reason.

A recent shift toward more moderation came in 2020, when many companies started to tackle misinformation with fact-checks and then removals, a responsibility they had previously avoided. In the later months of that year, Facebook (now known as Meta) and Twitter started banning Holocaust denial content.

“I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency,” said Reddit CEO Steve Huffman in June of 2020.

Social media companies have realized that they have a role to play in society. The hands-off approach just isn’t enough anymore. 

Other vague maxims Musk has offered, like "if it's a gray area…let the tweet exist," don't do much to guide content moderation either, because they don't work at scale. Twitter and its counterparts are global platforms that span many cultures, which makes content moderation extremely difficult.

As Michael Karanicolas writes for Slate, “A racially charged statement in Canada might cause psychological harm, but in Sri Lanka, it might lead to lynchings and communal violence. As recently as August, violent clashes in Bengaluru, India, were triggered by a Facebook post about the Prophet Muhammad. The potential harms, in other words, vary enormously.”

In 2012, a Syrian protester posted a picture of herself holding a sign advocating for women's rights. The image shows the woman in a tank top, without a veil, and it was removed after conservatives who considered it obscene reported it. Facebook later apologized, but the incident demonstrates how complicated it is to judge content that is open to wildly different interpretations depending on cultural context.

“Breaking the code for context—nailing down the ineffable question of why one piece of content is acceptable but a slight variation breaks policy—remains the holy grail of moderation,” write Buni and Chemaly.

Sometimes content moderation has gone too far. YouTube's campaign to remove extremist content resulted in the destruction of ten percent of an archive documenting human rights abuses in Syria. And autocratic governments can exploit content moderation to suppress material unfavorable to their regimes.

These difficulties force companies to perform a constant cost-benefit analysis and undermine the applicability of a set of worldwide rules—rules that are forced to contend with the contradictions inherent in content from a chaotic world. 

Buni and Chemaly write, “Content flagged as violent—a beating or beheading—may be newsworthy. Content flagged as ‘pornographic’ might be political in nature, or as innocent as breastfeeding or sunbathing. Content posted as comedy might get flagged for overt racism, anti-Semitism, misogyny, homophobia, or transphobia. Meanwhile content that may not explicitly violate rules is sometimes posted by users to perpetrate abuse or vendettas, terrorize political opponents, or out sex workers or trans people.” 

According to Facebook's internal moderation policies, the phrase "Autistic people should be sterilized" stays up, while "Men should be sterilized" is taken down—autism is not a "protected characteristic" in the same way race and gender are. In 2018, the company caught flak when one of its moderators removed a post quoting the Declaration of Independence that contained the phrase "Indian savages."

Facebook’s Oversight Board, made up of about 20 former political leaders, human rights activists, and journalists, was created to sort out these ambiguities. A sort of Supreme Court, it processes a very small number of appeals submitted by users who feel their content was unfairly removed. In 2020 it took up its first five cases, which included one post containing two photos of a dead child with commentary about China’s treatment of Uyghur Muslims and another sharing a quote from Joseph Goebbels, the head of Nazi propaganda.

Whether certain content stays up or is taken down has the power to influence social movements. During her early months at YouTube, Mora-Blanco and her team dealt with a video depicting the death of a young woman in Iran during protests against the election of Mahmoud Ahmadinejad. It was violent and graphic, but it was newsworthy and of enormous political significance. They decided to keep it up.

Enforcement

"Seaside Manila" by Storm Crypt is marked with CC BY-NC-ND 2.0.

Manila, the capital of the Philippines, attracts content moderation contractors.

Content doesn't just "get moderated" in a passive way—reported posts aren't fed into a computer that immediately spits out a verdict to keep them up or take them down. The job has not been handed off entirely, or even mostly, to AI, nor is it conducted by law enforcement or social media experts.

The task falls to human moderators: people, largely overseas, who work for contractors of big social media companies and enable moderation to churn around the clock while earning very little.

The median Facebook employee has an annual income of $240,000, while moderators working for one contractor in Arizona make just under $29,000 per year. They are outpaced by the enormous volume of posts they have to screen, have to apply a single set of rules that is confusing and changes nearly constantly, often lack the necessary context to make a clear call, are frequently forced to review content in languages they don’t speak, and have just a few seconds to stamp each post with a binary decision.

The Philippines, whose historical connection to the U.S. and large English-speaking population have made it ideal for call centers, is one popular location for contractors to set up shop. In the country's capital, Wired journalist Adrian Chen witnessed an "army of workers employed to soak up the worst of humanity in order to protect the rest of us."

These people are exposed to an unrelenting stream of graphic, traumatic content: violent killings, child exploitation, animal abuse. They make decisions on several hundred pieces of content a day that have been reported, racing to meet quotas that demand accuracy. They are offered little in the way of breaks or mental health support. Most people quit after a year or less, driven away by traumatic experiences and unethical working conditions. 

“I don’t think it’s possible to do the job and not come out of it with some acute stress disorder or PTSD,” said one worker for the Facebook contractor Cognizant.

Another estimated that during his time working for Cognizant he reviewed roughly ten murders each month and 1,000 pieces of content related to self-harm.

Automation just hasn't made enough progress to take over the job. Mark Zuckerberg, the co-founder of Facebook, said in a 2019 interview, "I think there will always be people" moderating content.

“The result of this admission,” a team of journalists for The Washington Post points out, “is that tech companies are creating a permanent job—dependent on invisible outsourced workers—with the possibility for serious trauma, with little oversight and few answers about how to address it.”

Social media users “were being sold on this notion that you could blast your thoughts around the world in this unfettered way, and in fact it turned out that there were big buildings filled with people making really important decisions about whether or not that self-expression was appropriate,” writes Isaac Chotiner for The New Yorker. “And the platforms themselves were saying nothing about it. They weren’t saying who those people were, where they were, and they weren’t really saying on what grounds they were making their decisions. There was just total opacity in every direction.”

The darkest secret about content moderation is that whether or not a site chooses to take down unsavory content, someone has to contend with it. On more laissez-faire sites, that’s a large number of users who have to get comfortable with unpleasant material. On mainstream platforms, that’s initially the users who report the content; Buni and Chemaly point out that because “almost every content moderation system depends on users flagging content and filing complaints…users are not so much customers as uncompensated digital laborers who play dynamic and indispensable functions.” Then, it’s the human moderators who process these complaints.

“Online access was supposed to unleash positive and creative human potential, not provide a venue for sadists, child molesters, rapists, or racial supremacists,” write Buni and Chemaly. “Yet this radically free internet quickly became a terrifying home to heinous content and the users who posted and consumed it.”

People will always post awful things. Until AI is advanced enough, humans will have to deal with the awful things that other humans do and say. There is no realistic model of content moderation that sidesteps this fact or deals elegantly with its implications.

There's a misconception that the only reason social media platforms don't have perfectly balanced content moderation rules that err on the side of freer speech is some "Big Tech" anti-conservative bias or a lack of effort from CEOs and engineers. But that's not true. Social media companies have been trying to solve the problem of content moderation since they began, and it's only gotten more complicated.

“We will never come to a stable agreement on perfect content moderation rules,” said Douek. “So we need to construct better ways to disagree about them. Forever.”

About the Writer
Sam Podnar, Staff Writer

Sam Podnar is a senior at NASH. When she's not writing, she enjoys baking, reading, and talking too much about local politics.
