Enforcement

May 25, 2022

Manila, the capital of the Philippines, attracts content moderation contractors.

"Seaside Manila" by Storm Crypt is marked with CC BY-NC-ND 2.0.

Content doesn’t just “get moderated” in a passive way; reported posts aren’t fed into a computer that immediately spits out a verdict to keep them up or take them down. The job has not been handed off to AI, entirely or even mostly, nor is it conducted by law enforcement or social media experts.

The task falls to human moderators: people, largely overseas, who work for contractors of big social media companies and enable moderation to churn around the clock while earning very little.

The median Facebook employee has an annual income of $240,000, while moderators working for one contractor in Arizona make just under $29,000 per year. They are outpaced by the enormous volume of posts they have to screen, have to apply a single set of rules that is confusing and changes nearly constantly, often lack the necessary context to make a clear call, are frequently forced to review content in languages they don’t speak, and have just a few seconds to stamp each post with a binary decision.

The Philippines, whose historical connection to the U.S. and large English-speaking population have made it ideal for call centers, is one popular location for contractors to set up shop. In the country’s capital, Wired journalist Adrian Chen witnessed an “army of workers employed to soak up the worst of humanity in order to protect the rest of us.”

These people are exposed to an unrelenting stream of graphic, traumatic content: violent killings, child exploitation, animal abuse. They make decisions on several hundred pieces of content a day that have been reported, racing to meet quotas that demand accuracy. They are offered little in the way of breaks or mental health support. Most people quit after a year or less, driven away by traumatic experiences and unethical working conditions. 

“I don’t think it’s possible to do the job and not come out of it with some acute stress disorder or PTSD,” said one worker for the Facebook contractor Cognizant.

Another estimates that during his time working for Cognizant he reviewed roughly ten murders each month and 1,000 pieces of content related to self-harm.

Automation just hasn’t made enough progress yet to take over the job. Mark Zuckerberg, the co-founder of Facebook, said in a 2019 interview that “I think there will always be people” moderating content.

“The result of this admission,” a team of journalists for The Washington Post points out, “is that tech companies are creating a permanent job—dependent on invisible outsourced workers—with the possibility for serious trauma, with little oversight and few answers about how to address it.”

Social media users “were being sold on this notion that you could blast your thoughts around the world in this unfettered way, and in fact it turned out that there were big buildings filled with people making really important decisions about whether or not that self-expression was appropriate,” writes Isaac Chotiner for The New Yorker. “And the platforms themselves were saying nothing about it. They weren’t saying who those people were, where they were, and they weren’t really saying on what grounds they were making their decisions. There was just total opacity in every direction.”

The darkest secret about content moderation is that whether or not a site chooses to take down unsavory content, someone has to contend with it. On more laissez-faire sites, that’s a large number of users who have to get comfortable with unpleasant material. On mainstream platforms, that’s initially the users who report the content; Buni and Chemaly point out that because “almost every content moderation system depends on users flagging content and filing complaints…users are not so much customers as uncompensated digital laborers who play dynamic and indispensable functions.” Then, it’s the human moderators who process these complaints.

“Online access was supposed to unleash positive and creative human potential, not provide a venue for sadists, child molesters, rapists, or racial supremacists,” write the Verge journalists. “Yet this radically free internet quickly became a terrifying home to heinous content and the users who posted and consumed it.”

People will always post awful things. Until AI is advanced enough, humans will have to deal with the awful things that other humans do and say. There is no realistic model of content moderation that sidesteps this fact or deals elegantly with its implications.

There’s a misconception that the reason social media platforms don’t have perfectly balanced content moderation rules that err on the side of freer speech is some “Big Tech” anti-conservative bias, or a lack of effort on the part of CEOs and engineers. But that’s not true. Social media companies have been trying to solve the problem of content moderation since they began, and it has only gotten more complicated.

“We will never come to a stable agreement on perfect content moderation rules,” said Douek. “So we need to construct better ways to disagree about them. Forever.”
