Risks and responsibilities of content moderation in online platforms


For many decades, the international legal order has recognised that states have a duty to protect, respect, and promote human rights. Since the development and adoption of the UN Guiding Principles on Business and Human Rights, governments and companies have paid more attention to the responsibilities of businesses in this arena, particularly in industries with longstanding and tangible human rights concerns like the extractive and garment sectors. However, in these debates, one sector has received notably less attention: online platforms.

This omission is striking considering the importance of online platforms to the modern-day exercise of freedom of expression. Every day, billions of people depend on search engines like Google to seek and receive information and ideas, and use Facebook, Twitter, and Snapchat to build relationships, share stories, and connect with others. These platforms also play a key role in organising dissent and mobilisation, from the Tunisian revolution to #MeToo. In countries where public discourse is closely monitored or repressed—and for marginalised or vulnerable groups such as LGBTI individuals and human rights defenders—online platforms can serve as an everyday lifeline. As the UN Special Rapporteur on freedom of expression stated in 2016, “The contemporary exercise of freedom of opinion and expression owes much of its strength to private industry, which wields enormous power over digital space, acting as a gateway for information and an intermediary for expression.”

With this power come serious responsibilities. Platforms have a duty both to respect the freedom of expression of their users and to ensure that the harmful and unlawful content that can arise on their platforms—such as harassment, hate speech and child sexual abuse imagery—is properly dealt with. At the moment, many are not consistently fulfilling these responsibilities, and nowhere is this failure more apparent than in their approach to content moderation.

In 2017, for example, YouTube took down thousands of videos containing evidence of atrocities and war crimes in Syria after deeming them a terms of service violation, following a move by Facebook to delete a post containing a famous photograph of a napalm victim in the Vietnam War. Facebook has also come under criticism for its policies on content removal and profile and group suspensions, which have led to the censorship of LGBTI individuals referring to themselves as “dyke” or “queer”, and the deletion of posts highlighting violence against Rohingya Muslims in Myanmar. At the same time, the alarming trend of laws which make tech companies legally liable for the material which is posted on their platforms—like Germany’s NetzDG law and the Anti-Fake News Act in Malaysia—is exacerbating this problem, incentivising tech companies to remove controversial content rather than risk fines or other sanctions.

Both trends—overzealous moderation of content by platforms, and heavy-handed legislative responses from governments—are reflections of a general lack of consensus over the specific responsibilities of online platforms, and how they should be exercised. While the UN Guiding Principles offer a useful framework for the responsibilities of businesses in general, they do not address the specific activities of online platforms. And sector-led efforts to translate the Principles into this context, like the Global Network Initiative Principles on Freedom of Expression and Privacy, while valuable, only take us so far. They are still set at a very high level, and do not, for example, address key issues such as how platforms should develop and implement terms of service.

In the absence of clear, actionable guidance, some tech companies are already taking proactive steps to improve their processes—from setting out more detail about their terms of service and how they are implemented, to increasing transparency over the number of content removals. And in June 2018, UN Special Rapporteur on freedom of expression David Kaye presented a landmark report to the Human Rights Council, looking specifically at online content regulation and setting out, for the first time, recommendations for tech companies on how to ensure they respect the right to freedom of expression.

These are promising signs. But to address the challenges set out above, much more work needs to be done by a range of actors within the business and human rights community. At the national level, states need to ensure that national legal and regulatory frameworks support, rather than undermine, online platforms in respecting the right to freedom of expression. This could mean, for example, including tech sector and freedom of expression issues in National Action Plans which implement the UN Guiding Principles. Those of us calling for businesses to respect human rights also need to consider how rights-respecting processes for content moderation could be established and monitored globally—a key consideration, given the global reach of online platforms. In a recent white paper—building on other valuable interventions by Article 19 and Public Knowledge—Global Partners Digital proposes one possible solution: the creation of a global, multistakeholder body with a mandate to review and assess compliance with international human rights standards.

Under this model, interested online platforms would appoint a set of independent experts who, following a multistakeholder consultation, would develop a set of Online Platform Standards. These would set out, among other things, how platforms should respond to harmful content, establish rules on accountability and transparency, and require grievance and remedial mechanisms to be put in place allowing users to challenge decisions. Adherence to these Standards would be monitored by a global multistakeholder oversight body, composed of representatives from online platforms, civil society organisations, academia and, potentially, relevant national bodies such as national human rights institutions. Platforms that failed to meet the Standards would be publicly called out and provided with recommendations for improvement.

This is by no means the only possible approach. Article 19 recently proposed a similar but distinct multistakeholder Social Media Council, which would function as a kind of ombudsman, presiding over a code of ethics for content removal and assessing complaints about online platforms. These proposals share the recognition that some form of independent oversight is necessary, but all face similar questions: how to ensure the oversight body's inclusivity and legitimacy, how to avoid stifling innovation and competition, and how to account for the different forms and sizes of the platforms being overseen.

The issues noted above are not abstract or theoretical, nor can we afford to delay discussing them. Online platforms, and the complex role they play in society, are coming under unprecedented scrutiny, and the regulatory environment in many countries is already starting to change—with serious implications for freedom of expression. All of us in the business and human rights community—whether governments, the private sector, or civil society—have a critical role to play in developing and implementing standards which address legitimate concerns while also ensuring the protection of human rights. The costs of inaction may be steep.

*** This article is part of a series on technology and human rights co-sponsored with Business & Human Rights Resource Centre and University of Washington Rule of Law Initiative.