The future of human rights is inseparable from our evolving relationship with, and societal dependence on, technology. Algorithmic systems that retain nearly everything in the data sets they are built on pose complex and ever-shifting moral, legal, and philosophical dilemmas. As artificial intelligence (AI) and algorithmic systems increasingly shape reputations, opportunities, and even identities, the right to “algorithmic forgettability” has become a crucial new frontier for both justice and privacy. This right empowers individuals, particularly youth and future generations, to exercise agency and reclaim narrative control over their digital footprints, preventing algorithmic memory from being used for profiling or suppression.
Technological innovation, surveillance, and human rights
Technological innovation has dramatically expanded both the promise and the peril of human rights. Amnesty International emphasizes that technology must serve the interests of humanity as a whole, not merely reinforce the power of a small, privileged elite. The organization advocates for digital tools designed and used in a rights-respecting manner, demanding concrete, accountable regulatory frameworks to safeguard human rights in an era of digital proliferation.
Echoing this concern, the United Nations warns that the future of rights depends on our ability to steer technological innovation in a human-centered direction, so that privacy, agency, and democracy are not eroded by the pursuit of efficiency and the unchecked exploitation of data.
The dangers of unrestricted surveillance are well documented. Edward Snowden’s 2013 leaks of classified information about the United States National Security Agency (NSA) exposed the extent of government overreach, affecting not only the data and privacy of citizens of other countries but those of US citizens as well. Snowden’s disclosures revealed how the NSA, with cooperation from intelligence partners and tech giants, gathered massive amounts of data from individuals worldwide. Emails, calls, locations, and browsing data were all subject to collection and storage without meaningful public consent or judicial oversight. Programs like PRISM allowed intelligence agencies to collect data directly from the servers of tech companies, including Google, Facebook, Microsoft, and Apple.
Snowden warned that “I, sitting at my desk, certainly had the authorities to wiretap anyone, from you or your accountant, to a federal judge or even the president, if I had a personal email.” These revelations exposed a global surveillance apparatus that violated privacy rights, creating a digital environment in which information could be weaponized not only against domestic politicians and officials but against ordinary citizens and political leaders worldwide.
The role of algorithmic memory
Modern algorithms act as the architects of the world’s collective memory. Their capacity to store, organize, and relay information profoundly influences not just what is visible, but who is seen, accepted, or understood. In her book Algorithms of Oppression, Safiya Umoja Noble warns that algorithmic oppression is deeply embedded in the web. Search engines often reinforce racial profiling and what she terms “technological redlining,” whereby inequalities become encoded into information systems themselves. In advancing these claims, Noble further argues that “we have to ask what is lost, who is harmed, and what should be forgotten with the embrace of artificial intelligence in decision making.”
These consequences extend beyond mere misrepresentation to create violations of rights. Digital histories, once encoded in algorithmic data sets, can define individuals long after they have changed in meaningful ways.
Technology philosopher Shannon Vallor deepens this critique in The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking, observing that AI systems “mirror our own intelligence back to us,” essentially trapping us in “endless permutations of a reflected past.” This mirroring effect, advanced by Vallor throughout her work, shows how AI makes it difficult to evolve beyond algorithmically mediated identities. Without forgettability, or the human right to be forgotten, individuals lose the power to shape their own stories, trapped by past errors and perpetually defined by damaging algorithmic traces that neither they nor their communities can escape.
As Noble and Vallor both stress, the battle for algorithmic forgettability is not simply a question of privacy; it is a fight to regain narrative control over one’s community and personal life. For marginalized groups and future generations, algorithmic memory threatens the possibility of being recognized as dynamic, evolving human beings. Forgettability is ultimately about agency—not erasing history, but restoring the freedom to move beyond past and current harms.
The impact of algorithmic permanence weighs heaviest on young people, who often come of age surrounded by and socialized within digital and algorithmic technologies that subject them to relentless scrutiny, all before they can understand the potential consequences of their actions.
Approaches to forgettability
Legal and ethical frameworks are beginning to recognize the necessity of algorithmic forgettability. The European Union’s General Data Protection Regulation (GDPR) enshrines the “right to erasure” (Article 17), inspiring the development of machine learning algorithms that can “forget” data upon request. In the United States, California’s “eraser law” grants minors the ability to remove content they posted online. Such protections, however, remain rare. The prevalence of digital and AI systems built on stored data sets subjects everyone’s past to constant analysis, with real repercussions for privacy and professional futures.
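The idea of a system that can “forget” data on request can be made concrete with a minimal sketch of exact machine unlearning: the deleted record is removed from storage and the model is rebuilt from the remaining data, so nothing of the erased record survives in the learned parameters. The class name, method names, and the toy per-label-average “model” below are all illustrative assumptions, not any real library’s API.

```python
# A toy sketch of exact unlearning: erase a record, then retrain from
# scratch on what remains. All names here are hypothetical.
from statistics import mean


class ErasableModel:
    """A trivial model (per-label feature averages) supporting erasure."""

    def __init__(self):
        self._records = {}  # record_id -> (label, value)
        self._model = {}    # label -> mean value, rebuilt on every change

    def add(self, record_id, label, value):
        self._records[record_id] = (label, value)
        self._retrain()

    def erase(self, record_id):
        # Erasure-style forgetting: drop the stored record, then rebuild
        # the model so the deleted data no longer shapes its parameters.
        self._records.pop(record_id, None)
        self._retrain()

    def _retrain(self):
        by_label = {}
        for label, value in self._records.values():
            by_label.setdefault(label, []).append(value)
        self._model = {lbl: mean(vals) for lbl, vals in by_label.items()}

    def predict(self, label):
        return self._model.get(label)


m = ErasableModel()
m.add("u1", "score", 0.9)
m.add("u2", "score", 0.5)
m.erase("u1")            # u1 exercises the right to erasure
print(m.predict("score"))  # only u2's data remains: prints 0.5
```

Retraining from scratch is the only approach that guarantees the model retains no trace of the erased data; research on approximate unlearning aims to achieve a similar effect without the full retraining cost.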
The potential harms of AI are not questions for philosophers alone; they must also be addressed by the legislators and political theorists of our time. Current scholarship has done well to identify the political, social, and ethical constraints of AI, but further work is needed to design policies and implement strategies that right these wrongs and build better, more sustainable, and more just systems for future generations. Although answers may not come soon, the call for a human right to forgettability should remain central to the approach of policy makers and legislative bodies worldwide.