Editorial: Will human rights guide technological development?

Credit: Alejandro Ospina

Inside a mountain in the Sierra Diablo range of West Texas, a clock is under construction that will keep time for 10,000 years. Jeff Bezos has put $42 million into the project. Meanwhile, Elon Musk regularly appears on TV demanding that billions of dollars be funneled into building colonies on Mars. Tech billionaires seek to present themselves not just as shaping the future but as masters of the very dimensions that define human existence: time and space. Will the human rights field reclaim the driver’s seat? 

This editorial introduces a five-part series exploring how technological development is reshaping human relations and human rights, and what we can do about it. Each op-ed is a wake-up call for the human rights community to reclaim the narrative of technological development from tech elites and to ensure that innovation serves collective well-being rather than private profit.

The problem

Technology is reshaping the world economy to the point where tech CEOs act like pharaohs, building clocks in pyramids and claiming to expand the reaches of the known realm into the stars.

Technology is reshaping our culture to the point where over 500 hours of video are uploaded to YouTube every minute, and 57% of US teenagers want to be influencers: professionals who put the intimacy of their lives online through a handful of privately owned platforms. Meanwhile, 57% of teen girls in the US (up from 36% in 2011) and 29% of boys report persistent feelings of sadness or hopelessness, with 30% of teen girls seriously considering suicide and 13% attempting it, according to the American Psychological Association (APA). We need to change course.

The challenge tech is posing for human rights and its defenders

Digital technologies impact what is visible and concrete (documents, borders, walls, institutions), while simultaneously unleashing an invisible layer that radically modifies existing relationships between people, states, and infrastructures. In this series, experts from around the world present concrete examples of how the duality triggered by the digital transformation challenges and reshapes our human rights system.

Digital ID: The digitization of identities is reshaping how states see and interact with the public. Grace Mutung’u highlights how programs in Kenya and Uganda exclude vulnerable communities from essential services. For example, incorrect birth dates are denying elders pensions, and thousands are being stripped of citizenship, all under the guise of increasing efficiency.

“Smart” borders: Despite the promises of a borderless digital world, the border walls are growing taller—and smarter. Petra Molnar reveals how “smart borders” have become systems of surveillance, exclusion, and violence. From drones patrolling the Mediterranean to AI lie detectors at checkpoints, “smart” border tech is turning humans into “problems to be solved.” Moreover, the right to asylum is being eroded at a moment in which the collapse of ecosystems and wars make the movement of humans inevitable.

E-carceration apps: Carceral technologies, often marketed as humane alternatives to incarceration, are extending psychological torture beyond prison walls while stripping individuals of autonomy and dignity. Nedah Nemati and Dasha Pruss explain how ankle monitors and facial recognition systems transform everyday life into a state of constant surveillance and control. Incessant device alerts induce sleep deprivation, and bulky monitors lead to medieval-style public humiliation.

Automated bureaucracy: The government adoption of opaque AI systems is enabling systemic harm. Juan David Gutiérrez argues that the secrecy surrounding these algorithms denies citizens the ability to challenge arbitrary and unfair automated decisions. In Colombia, governments hid welfare algorithm details, forcing civil society into legal battles; in the Netherlands, a biased AI falsely accused thousands of welfare fraud, ruining reputations and livelihoods. With the growing adoption of AI by governments, opacity risks becoming the default for policy deployment.

Networked activism: Authoritarianism is rising at a time of rapid digital innovation. The communication infrastructure necessary to nurture interpersonal trust and coordinate civic resistance to such authoritarianism is at risk. Claudio Ruiz argues that networked responses and coalition-building are becoming critical for sustaining global digital rights advocacy and bringing resilience to our digital infrastructure amidst democratic backsliding, austerity, and funding cuts.

Corporate tech leaders have presented each innovation as world-redefining, declaring prior institutions, rules, and norms obsolete. As a result, over the past decades, tech companies have downgraded human rights discourse to the role of thin, albeit glossy, guardrails that deliver little of the guidance or protection they were originally expected to offer.

A path forward 

The great challenge before us is reclaiming the narrative of human utopia from the powerful players that are taking over our creative capacity and imagination. The future has been captured by tech giants that, in their drive to secure predictable cash flow, are moving from predicting the future to organizing it. 

Amazon’s subscription models helped optimize its warehouses; now the company is extending the conveyor belt into the drawers of your home in order to turn your behavior into a predictable revenue stream. Facebook’s algorithms curate content and organize notifications to trigger addiction mechanisms that reorganize your free time into labor time, which can be sold to advertisers. The goal of these companies is to extend the automation practices they have perfected on the production side to the consumption side. In doing so, they are transforming the individual and society. Their innovations aim to capture and compress human existence into the cells of a quarterly earnings report.

Meanwhile, the daily decisions regarding technological development are in the hands of an ever-shrinking number of billionaires. The decision-making processes are themselves increasingly opaque, both because of the complexity of the systems that power them (like large language models) and because retreating public interest actors, including universities and regulatory agencies, have allowed this opacity to fester and become normalized. 

The human rights field has spent decades studying and understanding human nature. Now, it must show that it offers more than glossy guardrails for private innovation. It needs to provide a program that helps determine what needs to be built and how. It needs to rearrange resources so that we can effectively tackle urgent collective problems, such as climate change, while providing individuals with the necessary means to advance their dreams.

The human rights field needs to build on its organizing heritage to reclaim the dimension of time. To do so, researchers and practitioners must take back the narrative of the future from those who see only paralyzing dystopias ahead, and the tech billionaires who believe that building those dystopias is a legitimate and sustainable business model. We need to reclaim the dimension of space by rallying ourselves out of the kaleidoscopic slumber we have been herded into, so that we can once again see each other as humans deserving a dignified life.

Go to the series landing page.