Content moderators in Kenya who handle Facebook content related to Ethiopia report receiving threats from the Oromo Liberation Army (OLA). Their employer dismissed those concerns, prompting a legal challenge that questions Meta's moderation practices globally.
Content moderators in Kenya tasked with reviewing sensitive content related to Ethiopia have alleged receiving threats from the Oromo Liberation Army (OLA) for removing the group’s videos from Facebook. According to court documents filed on December 4, these threats have raised significant concerns about the safety of individuals working in content moderation roles.
Abdikadir Alio Guyo, one of the affected moderators, said he received direct threats from OLA members warning of dire consequences if his team continued to remove the group's content from Facebook. Another moderator, Hamza Diba Tubi, reported receiving messages from the OLA that listed the moderators' names and addresses, leaving them fearing for their safety in Ethiopia.
Their employer, Sama, initially dismissed these reports, accusing the moderators of fabricating the threats. Only after facing public pressure, and after one of the moderators was publicly identified by the rebels, did Sama agree to investigate and relocate that moderator to a safehouse.
The Oromo Liberation Army, a banned splinter group of the Oromo Liberation Front with grievances rooted in the alleged marginalization of Ethiopia's Oromo community, has been linked to recent violence in the Oromiya region following failed peace talks in 2023. The Ethiopian regional government has accused the OLA of killing civilians and escalating tensions in the region.
The moderators also claimed that Meta, the parent company of Facebook, ignored advice from experts it hired to tackle hate speech effectively on its platform. Alewiya Mohammed, a supervisor of dozens of content moderators, stated in her affidavit that she felt trapped in an "endless loop" of reviewing hateful content that Meta's policies did not allow her to remove. The case could shape how Meta works with content moderators globally, as the U.S. giant continues to face scrutiny over its handling of sensitive content.
In a separate case filed in Kenya in 2022, Meta was accused of allowing violent and hateful posts from Ethiopia to proliferate on Facebook, fueling the civil war between the federal government and Tigrayan regional authorities. That case underscores the challenges global tech companies face in moderating content effectively amid volatile conflicts.