Rules

May 25, 2022


"Facebook headquarters" by Frank Kehren is marked with CC BY-NC-ND 2.0.

Social media giants all eventually realized that their platforms need specific content moderation rules.

Section 230(c) of the 1996 Communications Decency Act is lauded as the “most important law on the internet.” It absolves social media companies of liability for the content they host, save for that which violates federal law. It gives them a remarkable amount of freedom not afforded to traditional media, immunizing them from responsibility for libel, infliction of emotional distress, commercial disparagement, threats, and distribution of sexually explicit material. 

At first, social media companies embraced the laissez-faire approach allowed under Section 230; the earliest content moderators were unpaid and largely unrecognized. But when platforms became flooded with hateful speech, companies stepped in. CompuServe, an early online service provider that hosted chat systems and forums, developed one of the earliest “acceptable use” policies prohibiting racist speech after a user filled a popular forum with antisemitic messages. In 2001, eBay banned symbols of racial, religious, and ethnic hatred, including Nazi and Ku Klux Klan memorabilia.

Catherine Buni and Soraya Chemaly recount for The Verge the story of Julie Mora-Blanco, who worked on YouTube’s content moderation team during the site’s early years. Her ten-person team was guided by a single sheet of paper with bullet points listing the types of videos that should be removed: anything containing animal abuse, blood, or visible nudity. That list evolved into YouTube’s first booklet of content moderation rules, just six pages long.

Now, the internal content moderation rules for the biggest social media companies are dozens of pages long, updated constantly, and cover a wide range of topics. Twitter, for example, doesn’t allow content containing “[h]ateful conduct,” “[s]uicide or self-harm,” or “[t]errorism/violent extremism.”

These rules exist for a reason. They ensure that any given social media platform remains a place where people want to spend their time.

“[W]e want Twitter to continue to be a place where the expression of diverse viewpoints is encouraged and aired. To do that, we have to keep Twitter safe for the widest possible range of information and opinions to be shared, even when we ourselves vehemently disagree with some of them,” writes Vijaya Gadde, a Twitter executive, in a 2015 op-ed for The Washington Post. “Freedom of expression means little as our underlying philosophy if we continue to allow voices to be silenced because they are afraid to speak up.”

Social media companies don’t regulate what’s allowed to stay up on their sites because they’re held legally accountable for the content of their users—Section 230 ensures that they largely aren’t. They invest in content moderation because a platform without any sort of moderation quickly devolves into an unpleasant, borderline unusable cesspool.

4chan, for example, is a site infamous for dark content. It has been the source of a leak of dozens of stolen celebrity nude photos, as well as fake bomb threats, self-harm campaigns, and extensive cyberbullying. It’s a breeding ground for racism, misogyny, homophobia, and other types of hatred. And its content moderation rules are minimal compared to those of other social media sites. Users are only prohibited from violating U.S. law, posting people’s personal information, impersonating site administrators, and using bots. Racist, graphic, and “grotesque” material is permitted on one message board. Moderators, known as “janitors,” work for free.

Evelyn Douek, a lecturer at Harvard who researches online speech, posits that all online platforms are pushed to regulate content eventually, usually in response to public pressure when someone discovers abhorrent content. Even Peloton, the exercise equipment and media company famous for their stationary bikes, faced a content moderation scandal when a Washington Post editor found QAnon hashtags on the site connecting cyclists.

“If you’re going to have users generating content, you’re going to have users generating harmful content,” said Douek. That has to be confronted eventually.

But content moderation is hard. Really, really hard. Hard enough that years of work by engineers and leaps in progress on artificial intelligence have still left gaping holes in our content moderation infrastructure. 

Still, Musk has taken it upon himself to broadcast a simple solution via the platform he now owns.

“By ‘free speech,’” he tweeted a few weeks ago, “I simply mean that which matches the law.”

The logic of Musk’s enlightening eleven-word solution is, unfortunately, painfully superficial. It makes sense on the surface to hold that if a tweet doesn’t break the law of the country of the user, it should stay up. But it only makes sense if you’ve never actually dug into content moderation.

Moderation that goes only as far as U.S. law allows a lot of unsavory content. Musk has said that he wants to rid Twitter of spam, but spam is legal under the First Amendment. So are hate speech, pornography, misinformation, and even the incitement of violence (so long as it is not directed to, and likely to, incite or produce “imminent lawless action”).

In 2016, a man was found not guilty after posting violent threats and images in which he had Photoshopped the crosshairs of a rifle scope over pictures of FBI agents. His speech was protected by the First Amendment, so under Musk’s definition of how Twitter’s content moderation should operate, that content should be left up. But rules that loose would arguably make the platform worse and discourage users from logging on.

For Twitter to be “maximally trusted and broadly inclusive,” as Musk says in the TED interview, a level of moderation is necessary. Hate speech may be legal in the U.S., but leaving it up isn’t the sort of action that causes people to trust an online platform—or attracts advertisers, on which free social media platforms heavily depend.

In a 2015 leaked company memo, then-Twitter CEO Dick Costolo bluntly admitted the platform’s failings in content moderation. “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” it reads. “It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.…There’s no excuse for it.”

All social media companies started at the same place: with less moderation and a loud adherence to free speech. And now, they’re all reaching the same conclusion: You can’t just let people post whatever they want. The rule of “no rules” isn’t a realistic option.

The problem is not people posting content with which others disagree. The problem is harassment, spam, violent threats, graphic content, dangerous misinformation, and hate speech—things that make social media much worse for users and ultimately stifle free speech.

Mike Masnick, CEO of the information technology services company Floor64, suggests that online platforms are not “town squares,” a trite analogy that misses the point. Instead, “[t]he internet itself is the town square,” and social media platforms are each private shops with their own rules—rules that exist for a reason. 

A recent shift toward more moderation came in 2020, when many companies started to tackle misinformation, first with fact-checks and then with removals, a responsibility they had previously shrugged off. In the later months of that year, Facebook (now known as Meta) and Twitter began banning Holocaust denial content.

“I have to admit that I’ve struggled with balancing my values as an American, and around free speech and free expression, with my values and the company’s values around common human decency,” said Reddit CEO Steve Huffman in June of 2020.

Social media companies have realized that they have a role to play in society. The hands-off approach just isn’t enough anymore. 

Other vague maxims offered by Musk, like “if it’s a gray area…let the tweet exist,” also do little to guide content moderation because they don’t work at scale. Twitter and its counterparts are global platforms that span many cultures, which makes content moderation extremely difficult.

As Michael Karanicolas writes for Slate, “A racially charged statement in Canada might cause psychological harm, but in Sri Lanka, it might lead to lynchings and communal violence. As recently as August, violent clashes in Bengaluru, India, were triggered by a Facebook post about the Prophet Muhammad. The potential harms, in other words, vary enormously.”

In 2012, a Syrian protester posted a picture of herself with a sign advocating for women’s rights. The image shows the woman in a tank top, without a veil, and it was removed after conservatives who considered the image obscene reported it. Facebook later apologized, but the incident demonstrates how complicated it is to judge content when it is up to wildly different interpretations depending on the cultural context.

“Breaking the code for context—nailing down the ineffable question of why one piece of content is acceptable but a slight variation breaks policy—remains the holy grail of moderation,” write Buni and Chemaly.

Sometimes content moderation has gone too far. YouTube’s campaign to remove extremist content resulted in the destruction of ten percent of an archive documenting human rights abuses in Syria. And autocratic governments can exploit moderation systems to suppress content unfavorable to their regimes.

These difficulties force companies to perform a constant cost-benefit analysis and undermine the applicability of a set of worldwide rules—rules that are forced to contend with the contradictions inherent in content from a chaotic world. 

Buni and Chemaly write, “Content flagged as violent—a beating or beheading—may be newsworthy. Content flagged as ‘pornographic’ might be political in nature, or as innocent as breastfeeding or sunbathing. Content posted as comedy might get flagged for overt racism, anti-Semitism, misogyny, homophobia, or transphobia. Meanwhile content that may not explicitly violate rules is sometimes posted by users to perpetrate abuse or vendettas, terrorize political opponents, or out sex workers or trans people.” 

According to Facebook’s internal moderation policies, the phrase “Autistic people should be sterilized” stays up, while “Men should be sterilized” is taken down, because autism is not a “protected characteristic” in the same way race and gender are. In 2018, the company caught flak when one of its moderators removed a post quoting the Declaration of Independence because it contained the phrase “Indian savages.”

Facebook’s Oversight Board, made up of about 20 former political leaders, human rights activists, and journalists, was created to sort out these ambiguities. A sort of Supreme Court, it processes a very small number of appeals submitted by users who feel their content was unfairly removed. In 2020 it took up its first five cases, which included one post containing two photos of a dead child with commentary about China’s treatment of Uyghur Muslims and another sharing a quote from Joseph Goebbels, the head of Nazi propaganda.

Whether certain content stays up or is taken down has the power to influence social movements. During the early months of her time at Youtube, Mora-Blanco and her team dealt with a video depicting the death of a young woman in Iran during protests against the election of Mahmoud Ahmadinejad. It was violent and graphic, but it was newsworthy and of enormous political significance. They decided to keep it up.
