On January 9, 2020, Mark Zuckerberg addressed Facebook’s global population of 2.45 billion users with a post on his personal profile, setting out his focus for the coming decade. The themes ranged from the role Facebook could play in the next generational change, to the need to rediscover the “intimacy” of interpersonal relationships, to the economic growth that Facebook’s products could offer small and mid-size enterprises (SMEs) around the world, to new forms of governance for digital communities.
Barlow’s “A Declaration of the Independence of Cyberspace”—with its idea that technology will bring about a better world—was strongly evident throughout Zuckerberg’s recipe for the next decade. Unsurprisingly, human rights were never mentioned, not even with respect to the allegedly beneficial outcomes his projects could bring. This glaring absence of rights is all the more striking given the recent scandals involving Big Tech companies and prominent calls by the international community for a “human rights by default” approach to platform governance. Such scandals include the investigations and leaked documents now confirming that “the Facebook data scandal was part of a much bigger global operation that worked with governments, intelligence agencies, commercial companies and political campaigns to manipulate and influence people, and that raised huge national security implications”.
One reason Zuckerberg never mentioned rights could be that Facebook does not consider itself bound by international human rights law, as stated on its website—indeed, there is ongoing debate about whether corporations should have direct international human rights obligations. Another hypothesis, however, stems from the self-perception and internal human rights narratives that reign within the platform: human rights are product features, inherently contained in the services provided. Freedom of expression is equated with the ability to speak, to connect, and to share content. This presumption that rights are “built in” allows Facebook and other social media platforms to avoid any explicit discourse about human rights standards.
This idea that human rights are a product feature of the services these platforms provide has been confirmed in a study by Rikke Frank Jørgensen, who found that “there is a disconnect between the company discourse on freedom of expression and the external discourse and concern related to these issues”. In fact, while companies like Google and Facebook internally perceive themselves as strongly committed to and actively promoting human rights, this commitment is translated externally into the idea that they must protect users against external (e.g., governmental) threats. At Facebook, the promotion of freedom of expression presents itself in the guise of the perceived link between freedom of expression and the ability to connect and share. This effectively blinds the company to how its own business practices negatively affect its users’ rights and freedoms.
For example, when a government issues a content removal request, Facebook assesses it against human rights standards as well; for content reported by a user, however, the takedown is measured only against internal content moderation policies. Interestingly, Facebook can, in some cases, allow content which would otherwise go against [their] Community Standards if deemed newsworthy and in the public interest—in these derogation instances, the company looks to “international human rights standards to make these judgments”.
But while Facebook considers itself inherently human rights-friendly, it is also very careful to avoid the term human rights. Even when referring explicitly to free speech or privacy, it labels these not as human rights but as social values.
As evident in Zuckerberg’s Georgetown speech, and as underscored by Kate Klonick, these social values are deeply rooted in American values. The term values, presented as neutral but in fact representing American values, is actively promoted and used as a shield against human rights discourses. If one of Zuckerberg’s proposed governance solutions is more regulation and “clearer rules” established by governments, a more revolutionary one is the creation of an Oversight Board that will allow users to appeal content decisions. Here again, human rights are carefully avoided. According to the Charter, “the Board will review content enforcement decisions and determine whether they were consistent with Facebook’s content policies and values”—not human rights standards. As also discussed here, content policies offer lower protection to free speech than human rights standards. The real-name requirements, for example, have raised significant human rights concerns for vulnerable users. Vague rules and inconsistent enforcement can also result in the instrumentalization of social media for spreading hate speech, as reported by the Independent International Fact-Finding Mission on Myanmar.
But why not use the international human rights framework that is already in place? Human rights are an internationally agreed set of norms, and arguably more suitable than a vague notion of social values for governing global digital communities. These companies’ strong preference for an “ethical” framework seems to be grounded in the idea that ethical principles such as fairness or prevention of harm remain open to flexible interpretation.
Despite its insistence that it is not bound by human rights law, Facebook still has a responsibility to respect human rights under the UN Guiding Principles on Business and Human Rights, by virtue of which it should “avoid infringing on the human rights of others and […] address adverse human rights impacts with which [it is] involved” (Principle 11).
In fact, the UN Special Rapporteurs on Freedom of Opinion and Expression and on Privacy have already produced a series of reports outlining normative frameworks for the online sphere. Human rights should become the explicit standards upon which platforms’ governance systems are based. Because social media platforms dominate public forums worldwide, a governance system rooted in social values may be convenient for companies, but it is deeply unsatisfactory. As underscored by David Kaye, a human rights discourse would be extremely powerful for achieving the goals that a company such as Facebook is pursuing, since “human rights law gives companies a language to articulate their positions worldwide in ways that respect democratic norms and counter authoritarian demands”. While it is commendable that these platforms are starting to acknowledge their social responsibility and the power they exercise, their current responses and proposed solutions remain inadequate.
If Facebook is to truly address the challenges it has set out to tackle, human rights are a necessary ingredient. As far as freedom of expression is concerned, social media have become an integral part of what Jack Balkin calls “the free speech triangle”. If individual human rights are to be protected effectively, it is necessary to recognize that the relationship that needs to be regulated is not only the one between the individual and the State, but also the one between the individual and digital companies, which now effectively regulate speech in an unprecedented manner.
Human rights law already provides a framework for balancing the competing interests that Facebook seeks to reconcile. Human rights also provide global standards for governing a global digital public sphere, and they would establish predictable and consistent standards for user behavior.
If internal narratives at the company do not evolve, Facebook’s recipe for the next decade will not only be disappointing but will leave users increasingly vulnerable.