In March 2023, a wave of racial violence against, and forced evictions of, Black migrants in the North African nation of Tunisia escalated following the spread of misleading videos on social media. The digital campaign grew so large that it became the subject of a statement from Tunisia’s president, Kais Saied, who called migration a “plan” to change the country’s profile from Arab to Black. Yet, according to Reality Check and BBC Monitoring, nearly all of the videos claiming to show African migrants in Tunisia were filmed elsewhere.
The scale and speed of the online racial hatred campaign that swept Tunisia are certainly unprecedented for the country, which sparked the Arab Spring a decade ago. Yet the far-reaching and severe offline consequences of digital platforms in the Middle East and North Africa (MENA) are nothing new. Tech companies in the region sit at the heart of pivotal issues relating to free speech, nondiscrimination, and human rights. Recent reports document how security forces in Tunisia, Jordan, and Morocco have used digital targeting on social media to gather and create evidence supporting the prosecution, arbitrary detention, and torture of LGBTQI+ people. In Kuwait and Saudi Arabia, social media has reportedly served as a black market for selling domestic workers and a platform enabling modern slavery. Platforms such as Facebook (Meta) and YouTube (Alphabet) have previously drawn attention for hosting content that contributed to the rise of extremist violence in the region.
Certainly, issues at the intersection of human rights and content moderation are not exclusive to the MENA region: in Myanmar, Meta’s algorithm came under the spotlight for allegedly promoting violence against the Rohingya community. The same company was sued in Kenya for fanning violence and hate speech in relation to the Tigray conflict in Ethiopia. While it is welcome progress that digital companies have started engaging on these issues globally, they have done too little to mitigate the risks associated with their activities at the local level.
For a long time, activists in the MENA region have deplored the absence of dialogue with digital platforms and tech corporations’ lack of effective engagement with their demands. Such concerns have been echoed consistently at Bread&Net, the region’s leading conference on digital rights. Beyond gaps in stakeholder engagement, corporate mistakes in Arabic content moderation are all too common and carry significant human rights costs. Research from 2020 found that Meta’s algorithms incorrectly deleted nonviolent Arabic content 77% of the time.
Even where moderation is performed by people from the region, it has been tainted by labor rights abuses: a recent investigation exposed how TikTok allegedly imposes a workplace environment of near-constant surveillance and near-impossible metric goals on its subcontracted workers in Morocco.
During the current wave of racial hatred seizing Tunisia, activists have deplored the ineffectiveness of the platforms’ reporting mechanisms and their failure to promptly or effectively remove content threatening Black migrants.
Under the United Nations’ Guiding Principles on Business and Human Rights, companies have a responsibility to respect human rights, including the rights to nondiscrimination, privacy, and freedom of expression. They must pay heightened attention to contexts where human rights are at greatest risk.
This is the case for the MENA region, which consistently ranks at the bottom of global indices on human rights and governance. Adding to the egregious abuses prevailing across the Gulf region and in conflict zones, countries such as Tunisia and Jordan have recently taken a turn for the worse on a wide range of liberties, including digital rights; in 2022, seven countries in the MENA region registered declines in internet freedom. The governments of Tunisia, Egypt, and Turkey have been criticized for promulgating controversial legislation on broadcasting false rumors via social media platforms as a tool to silence dissidents. These recent developments, alongside long-standing political trends in the region, make the corporate need for heightened due diligence all the more significant.
Beyond being the right thing to do from a human rights perspective, big tech’s increased focus on the human rights risks associated with its operations in the MENA region might also be a smart investment going forward. MENA users are among the most prolific users of social media platforms worldwide, and Arabic is the third most widely used language on Facebook. As Meta faces slower global growth, the company continues to benefit from a powerful social media presence in the region. With 56 million users, Egypt was Meta’s ninth-largest market as of May 2022.
Conversely, as governments continue their slide toward digital authoritarianism, communities and individuals are likely to turn to digital platforms as a unique avenue for accountability. Building a social license to operate is therefore key for tech companies to maintain and grow their activities in the region.
For digital platforms, the MENA region can serve as a crucial test case for adopting a regionally rooted approach to human rights due diligence. The relative similarity of Arabic dialects, compared with the linguistic diversity of other regions such as sub-Saharan Africa and Asia, means that companies can play a more visible and proactive role in averting the negative consequences of their activities on free speech, equality, and beyond.
To get there, they should start by dedicating adequate funding, expertise, and human resources to assessing, and then responding to, the likely growing demands for transparency and accountability from their millions of users in the region. They should invest in training their regional corporate teams on human rights topics and in hiring staff fluent in Arabic and familiar with the specificities of local contexts.
To avoid severe negative offline consequences, digital platforms need to build a refined, contextual picture of the risks associated with their operations and act on it. To that end, they should immediately undertake, and continually update, a human rights risk assessment with participation from the affected groups in the region. At its heart should lie a corporate willingness to build alliances with civil society groups and communities, with which they can codesign policies and remedial mechanisms.