Don’t Let Your Children Grow Up to Be Content Moderators

Would you encourage a colleague, a friend, or your children to work in a profession that is extremely damaging to their mental health? No, I didn’t think so. However, that’s exactly what we’re doing when hiring people to moderate extremely high volumes of (multilingual) web and social media content, hour after hour, day after day. Being a content moderator (sometimes called a “process executive”) has been cited as “the worst job in technology,” whether you’re contracted at arm’s length from the social media platform in question by BPO outsourcers such as Accenture (WhatsApp in Dublin, Ireland), Cognizant (Facebook in Austin, Texas), Genpact (Facebook in Hyderabad, India), or Sama (Facebook in Nairobi, Kenya), or directly by the brands themselves.

The challenge of how to police harmful material while avoiding censorship has only gotten (much) worse – especially for non-English content – since we wrote about it back in 2018, when Meta was still Facebook. To cite two examples: the 1,500 Twitter employees and contractors (probably all gone after Elon Musk’s house-cleaning exercise) and the 15,000 staff directly and indirectly employed by Meta to moderate content are nowhere near enough – especially since these companies admit that they focus almost solely on policing content in English. And yet, more than two-thirds of Meta’s users log on from outside of the United States. Unfortunately, the application of AI is not making nearly as much of a dent as Silicon Valley engineers had hoped – not even in English, let alone in other languages.
 

Cleaning Up Toxic Digital Waste

Cleaning up others’ toxic waste – albeit digital – often means watching videos of murders, rapes, suicides, and child sexual abuse while consuming horrific hate speech hour after hour, day after day. Companies such as Alphabet (YouTube), Meta (Facebook, Instagram, Messenger, WhatsApp), and Twitter have adopted the same outsourced model for “content clean-up” that many first-world countries have implemented for their physical toxic waste recycling. Obviously, this is not the “dignified digital work” that some US-based outsourcing companies would have you believe it is – especially as they fire multilingual staff for attempting to unionize halfway around the world from Silicon Valley.

There are far too many instances over the past few years of content moderation going awry – or simply being AWOL – to cite them all. However, here are just a few to jog your memory.
 

  • Violence – and possible genocide – in Myanmar, India, the Philippines, and Ethiopia (Tigray). All of these instances (most of them ongoing) extend way beyond “lost in translation” to “killed due to the lack of proper content moderation.” As of the end of 2021, 2.82 billion people were logging onto one of Meta’s applications somewhere in the world every single day, with 806 million in Asia-Pacific alone. Yet Meta’s Facebook – 90% of whose monthly users are from outside North America, thanks to its aggressive expansion to bolster its non-US userbase – covers only 63% of the languages its platform offers (around 70 out of 111) with either artificial intelligence-powered “classifiers” or human content moderators. This has led directly to violence against: 1) the Tigrayan people in Ethiopia from 2019 onward, 2) the Rohingya in Myanmar from 2017, 3) the citizens of the Philippines during former President Duterte’s drug war against them, and 4) Muslim communities in India (Facebook’s largest market by audience size).
     
  • Weaponization of Twitter’s meltdown by Chinese bots. With the entire human rights team fired by Elon Musk and staff departures continuing – including many employees responsible for moderation and misinformation policies – Twitter recently served up a constant barrage of tweets offering escort services, pornography, and gambling opportunities. They were the result of Chinese bots deliberately created to obscure legitimate tweets about protests against lockdowns imposed as part of China’s zero-COVID strategy. One more note: Musk’s content moderation council, which he announced at the end of October, has yet to be set up.
     
  • “Just the cost of doing business in certain places.” Frances Haugen, the whistleblower behind The Facebook Papers (here and here), has stated that one of her main reasons for sharing internal documents was to highlight the huge gap between the company’s integrity and security systems in the US versus the rest of the world. At the time the Facebook Papers were released, one of its former vice presidents told the Wall Street Journal that the company considers potential harm to people in foreign countries due to its actions as “simply the cost of doing business” in those markets.
     
  • Reactive mitigation strategies that aren’t nearly enough. Relying on human content moderators supported by AI-driven filters and (under-staffed) third-party fact-checking organizations has delivered woefully inadequate results to date. Taking Meta’s resources as an example: 15,000 people divided by 111 languages equals just 135 people per language to monitor gargantuan volumes of written, audio, and multimedia content 24/7 (see the quick arithmetic sketch after this list). Exacerbating the issue is that the vast majority of these people – and the AI tools – can only handle English-language content. Add to that the need to appropriately monitor dialects within languages such as Arabic and Chinese – dialects so different that they are not always mutually intelligible to all Arabic- or Chinese-speaking content moderators or automated classifiers.
     
  • And if you had any doubts … Meta’s own engineers question whether AI can do an adequate job of protecting its users into the future.
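
To make the staffing math above concrete, here is a quick back-of-the-envelope sketch in Python. The figures (15,000 moderators, 111 languages, 63% coverage) come straight from the bullets above; the even split across languages is purely illustrative, since real staffing and tooling are heavily skewed toward English.

    # Back-of-the-envelope staffing arithmetic (illustrative only; real staffing
    # is heavily skewed toward English rather than spread evenly).
    total_moderators = 15_000     # staff directly and indirectly employed by Meta
    languages_offered = 111       # languages the platform offers
    coverage_share = 0.63         # share of languages with classifiers or moderators

    moderators_per_language = total_moderators / languages_offered
    languages_covered = round(languages_offered * coverage_share)

    print(f"Moderators per language (even split): {moderators_per_language:.0f}")   # ~135
    print(f"Languages with any coverage: {languages_covered} of {languages_offered}")  # ~70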


Why You Should Care

The (multilingual) content moderation challenge doesn’t appear to be dramatically improving. It certainly won’t be going away anytime soon.
 

  • Working as a content moderator can cause psychological and emotional damage. A 2021 study by NYU/Stern (New York University Stern School of Business) estimates that content moderators had an average of just under 150 seconds to decide whether posts complied with or violated community standards. One “content analyst” engaged by Accenture for Facebook in Austin, Texas, described his function in April 2021: “Content analysts are paid to look at the worst of humanity for eight hours a day. We’re the tonsils of the internet, a constantly bombarded first line of defense against potential trauma to the userbase.” And all for very low pay, regardless of geographic location. Meta has even had to pay out US$52 million to settle a class-action suit by content moderators as compensation for PTSD.

[Image source: Meta/Facebook]
 

  • The problem has metastasized far, far beyond Meta’s four brands. It isn’t only Facebook, Twitter, Twitter-wannabes, and their local equivalents, but also Instagram, Snapchat, TikTok, WhatsApp, and YouTube, along with their local and future equivalents – not to mention all of their successive generations of associated bots. The list goes on – Discord, Hive Social, Mastodon, Reddit, Twitch, Weibo, WeChat. And it’s not only written content (obviously), but also images and audio – Spotify, for example, recently acquired Dublin-based Kinzen to focus on combating misinformation in 28 languages for its podcasting content.
     
  • Languages that lack enough human-labeled content make it difficult for AI engineers to build filters. This raw material is required to train machine learning algorithms to flag similar content. Without it, AI designers and engineers are severely restricted in their ability to deliver machine learning-based solutions (see the sketch after this list).
     
  • The metaverse is sneaking up on us. Whatever its contribution(s) eventually turn out to be, users are already reporting and documenting issues that involve sexual harassment and violence within the metaverse. If people are allowed to build avatars based on the full range of human nature, we all know where that will most probably lead.
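
To illustrate that dependency on labeled data, here is a minimal, hypothetical sketch in Python (using scikit-learn) of the kind of text classifier that underpins automated filters. The handful of posts and labels below are invented; the point is simply that without human-labeled examples in a given language or dialect, there is nothing for a model of this kind to learn from.

    # Toy classifier that flags policy-violating posts. Everything here is
    # invented for illustration; real systems need large, carefully labeled
    # corpora in every language and dialect they are expected to cover.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical human-labeled training data: 1 = violates policy, 0 = acceptable.
    posts = [
        "we should hurt those people",      # labeled harmful by a human reviewer
        "what a lovely day at the market",  # labeled acceptable
        "they deserve to be attacked",      # labeled harmful
        "sharing photos from our trip",     # labeled acceptable
    ]
    labels = [1, 0, 1, 0]

    # Without rows like these in a given language, there is nothing to fit.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(posts, labels)

    print(classifier.predict(["everyone from that group deserves to be attacked"]))  # likely [1]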


Why Do We Have an Added Responsibility to Address Multilingual Content Moderation?

The answer to this question is fairly simple and obvious. The most at-risk places in the world when it comes to content moderation are almost always linguistically diverse, with languages spoken by smaller communities of speakers – communities that often become the target of hate speech and violence. We already have tools to identify those smaller communities.

We know multilingual content – and all of its ins and outs. We produce, manage, manipulate, transform, and deliver huge volumes of it on a daily basis – including user-generated content, which is the source of most of the problems for moderators. In addition, many of us serve or work for the platforms that are most in need of better policing (or perhaps even regulation).
 

Soooo, What Can We Do?

There are multiple ways to shine a brighter light on the multilingual content moderation challenge and apply pressure for real change. One place to start is to familiarize yourself with the issues and possible solutions through resources such as the June 2020 report from NYU/Stern entitled “Who Moderates the Social Media Giants?,” which includes a set of recommendations.

Other possible actions to get started:
 

  • If you work for, collaborate with, or sell to any of the platforms in need of better solutions for content moderation, look for ways to partner to ensure ethical investment in AI-powered solutions for multilingual content moderation.
     
  • Volunteer to educate people in minority language communities on how to protect themselves from hate speech.
     
  • If you’re politically so inclined and live in a country where political pressure can lead to positive change, contact your political representatives. Support the efforts of content moderators and non-profits that focus on ensuring equitable working conditions and compensation for multilingual content moderators.
     
  • Talk to your children if you have them. Let them know when to switch off and to inform you immediately if they see anything strange or unkind. They should not have to deal with the fallout from improper or non-existent (multilingual) content moderation – not today, not tomorrow, and not in their future careers.

About the Author

Rebecca Ray

Director of Buyers Service

Focuses on global digital transformation, enterprise globalization, localization maturity, social media, global product development, crowdsourcing, transcreation, and internationalization
