The flawed case against more-than-human rights
Steps have been taken to widen the scope of the “human” part of human rights, and to rethink how more-than-human entities also shape human rights.
Over the past fifteen years, movements to extend legal rights to different kinds of non-human entities have achieved notable victories. In particular, the world has witnessed the arrival of concrete expressions of rights for nature, animals, and even robots, as evidenced by laws and court decisions across jurisdictions.
Reasons vary as to why these innovations have occurred. Explanations include the incorporation of Indigenous perspectives into domestic legal systems, judges adopting holistic interpretations of nature that include animals, and legislative attempts to address practical issues like who (or what) has access to public spaces. Despite the diversity found in recent origin stories, these examples of more-than-human rights all suggest a major shift is afoot in the way we think about rights.
Yet, these developments have drawn considerable ire from across the political spectrum. Some on the left have derided these advancements as diverting attention away from so-called “real” human rights concerns. Others on the right have decried the shift away from a Eurocentric, human-centered vision of society. Opponents of more-than-human rights who normally disagree on most policy issues seem oddly united on this topic. Antipathy towards rights for non-humans has made for some strange bedfellows, indeed.
What critics of all stripes share in common is a fundamental misunderstanding of how, when, and why rights change. There are three common objections to more-than-human rights, and each of them relies on deeply flawed logic. Pursuing a truly robust participatory discourse on the evolution of rights requires identifying and overcoming these erroneous lines of reasoning.
First, some claim that the idea of rights for non-humans, specifically technological entities, is dangerous. Proponents of this argument believe that discussing the rights of robots in particular ignores the perils of a capitalist system that actively exploits vulnerable groups and only serves the interests of corporations seeking to evade liability for the actions of their products. Interestingly, this same group is often mum on the subject of rights for nature and animals.
A more thoughtful perspective maintains that, instead of burying our heads in the sand, we should prepare for an uncertain future by anticipating potential challenges that may shake up our moral and legal systems. It seems intuitively unwise to risk being caught flat-footed given the pace at which social and technological innovations occur these days.
Second, another attack on rights for the more-than-human world alleges that such protections amount to a distraction from more pressing concerns facing humanity. This contention implies that scarce intellectual resources should be allocated towards addressing certain issues and not others. Many commentators raised this objection during the Lemoine controversy, in which a Google engineer sought legal representation for an artificial intelligence system on the basis that it was sentient and therefore deserving of rights.
But this economic approach to scientific inquiry is inaccurate at best and elitist at worst. As Sætra and Fosch-Villaronga argue, the subjective task of determining which problems society should prioritize is better handled in the realm of politics (hopefully through deliberative democratic means) than science. Broadly participatory conversations about societal goals enhance the likelihood that outcomes obtained by decision-makers enjoy a degree of democratic legitimacy. Attempts at gatekeeping may reveal more about the biases of the one advocating the restrictive position than the topic at hand.
Third, some assert that rights constitute a zero-sum game. In other words, rights extended to one group necessarily lead to the elimination of rights for another. Perhaps ironically, this argument has been used in the past to deny rights to groups we now unequivocally agree to protect. Proponents of this argument imagine that rights constitute a pie that can only be cut into so many slices; the pie itself can never be made larger.
However, the history of human rights offers a strong rejoinder to this sort of thinking. The struggle for human rights has often proceeded from the initial exclusion of certain groups, to contestation surrounding their inclusion, to the eventual expansion of rights (with plenty of blood, sweat, and tears along the way). As Schulz and Raman write, the “addition of a newly recognized right does not vitiate a previously recognized right.”
As new rights emerge, so too may new conflicts between existing rights-holders and new subjects of rights. But legislatures and courts can adjudicate these disputes. The prospect of new conflicts need not and should not preemptively foreclose the evolution of rights and the list of entities to which they apply.
Of course, implementing more-than-human rights will require careful thinking about how they might work in practice. For instance, as Arpitha Kodiveri inquires, who possesses the authority to defend nature’s interests? Avoiding these kinds of questions altogether risks stifling the entire project of rights, which is constantly under assault, continuously reinvented by those seeking to decolonize it, and urgently in need of recalibration in the Anthropocene. Arguably the most pressing existential question for human rights is whether the current conception of rights can effectively combat the kind of global ecological destruction made possible by the very same human supremacy that inspired its emergence. Resolving this quandary will take debate, discourse, and demonstration. Not discussing this issue is not an option.
Far from needing “less science fiction and more philosophy,” perhaps human rights needs more science fiction in order to cope with the speed and intensity of social, environmental, and technological changes affecting our world. Creative destruction comes to mind. To make this conversation productive, we must begin with a clear and shared understanding of the conditions under which rights grow to accommodate new realities, new entrants into the moral circle, and new obligations towards those with whom we share this fragile planet.
Joshua C. Gellers is Associate Professor of Political Science at the University of North Florida, Research Fellow of the Earth System Governance Project, and Expert with the Global AI Ethics Institute. He is the author of Rights for Robots: Artificial Intelligence, Animal and Environmental Law (Routledge 2020). Follow him on Twitter @JoshGellers.