What a datafied worldview means for human rights
Understanding how datafication affects the rights and interests of people, and power relationships at large, is key for an effective defense of human rights.
A Huawei employee introduces face recognition and tracking technology to a client in Huawei headquarters' showroom in Shenzhen, Guangdong Province, China, in March 2019. EPA/ALEKSANDAR PLAVEVSKI
What do we mean when we say datafication? The term loosely refers to the process of turning phenomena into data. Over the past decades, the increasing reliance on statistics for decision-making in both the public and private sectors has been paired with an explosion in the use of computers, which have powered processes of digitization and automation.
As implied by this proposed definition, datafication is a process whereby a series of characteristics are identified as relevant and then synthesized and translated into a structured format to enable downstream tabulations, classifications, analysis, and actions. As such, datafication is a deep driver of change, one that is likely to have a structural impact on rights and governance practices.
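To make the definition concrete, here is a minimal illustrative sketch of that pipeline: raw phenomena reduced to structured records, which then enable tabulation and classification. All names and data are invented for illustration.

```python
# Illustrative sketch (hypothetical data): "datafication" as translating
# observed phenomena into structured records that enable tabulation,
# classification, and downstream action.
from collections import Counter
from dataclasses import dataclass

@dataclass
class VisitRecord:
    # Characteristics deemed "relevant" are selected and given structure.
    visitor_id: str
    entrance: str
    hour: int

# Raw phenomena (people walking into a building) reduced to records.
observations = [
    VisitRecord("a1", "north", 9),
    VisitRecord("b2", "north", 9),
    VisitRecord("c3", "south", 17),
]

# Once structured, the data supports tabulation...
visits_by_entrance = Counter(r.entrance for r in observations)

# ...and classification, which can then drive automated action.
time_of_day = {r.visitor_id: ("morning" if r.hour < 12 else "evening")
               for r in observations}

print(visits_by_entrance)  # Counter({'north': 2, 'south': 1})
print(time_of_day)         # {'a1': 'morning', 'b2': 'morning', 'c3': 'evening'}
```

The point of the sketch is the reduction itself: everything about those visitors that was not captured in the record's fields is invisible to every downstream tabulation and decision.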
As with any process of change, the final outcome ultimately depends on the balance of power shaping it. Thus, the degree to which human rights practitioners are capable of understanding the ways in which such processes of change might affect the rights and interests of people, and power relationships at large, is key for an effective defense of human rights in the years to come.
The process of datafication of public and intimate spaces
The clearest example of how datafication takes place in public spaces is the set of processes taking place under the umbrella term “smart city.”
This is a commercial term that is also loosely defined, but it is used in reference to the processes and subset of markets focused on the adoption of digital technologies for city services. These technologies collect vast amounts of data about the people and firms working in cities, and about how they interact with municipal governments and with the cities’ infrastructure. The term, though, tends to spark heated debate, shaped by competing conceptions of rights and ethics.
On a parallel front, technological development has allowed us to move powerful computers from warehouses to intimate spaces like homes, pockets, wrists, and sometimes even under people’s skin. Every possible appliance seems to be getting a chip inserted into it. Following this process of technological adoption, the spaces that used to be intimate and private are now increasingly datafied and then absorbed into the digital realm of the internet.
As of 2020, over half the world’s population has access to the web, and with it, access to a human-made, virtual universe where much of a person’s activity is tracked and analyzed, both to ensure things operate smoothly and to develop synthetic profiles of users that facilitate targeted advertising. Not only is the proportion of people engaging with the web growing, but so is the amount of time spent online by those who are connected. Whereas in 2010 the average person was estimated to spend just over an hour a day online, those same estimates now place us at over three hours. Tracking capabilities in the online space have also increased, all of which means more data, and more detailed data.
As the depth and breadth of the delegation of tasks onto data-powered algorithms increases and becomes commonplace, a set of basic assumptions regarding who we are and how the world works are likely to change. Algorithms operate at a scale that is different from that of humans.
On the one hand, datafication appears to operate on a shorter timescale than that of our deeply rooted collective culture. Datafied systems are, for example, potentially quicker to react to changes in the data stream than humans, who typically allow their actions to be informed by values and culture that may have been forged over centuries, centuries during which data was not produced at the scale it is today and therefore did not inform the analysis of datafied systems.
On the other hand, datafication operates on a scale broader than that of our individual memories: these datafied systems process and react to billions of data points every second. For these reasons, making space for the type of systems datafication provides, for better or for worse, is likely to either require or trigger a shift in our worldview.
Why is it relevant for human rights?
The world will be a very different place in 2030 compared to the times in which most of the core human rights declarations and conventions were discussed and agreed on.
Since the days in which the core conventions and declarations were agreed upon, many African states have gained their independence and now coordinate to advance their interests. And countries like Brazil, China, Indonesia, Mexico and Nigeria are increasingly capable of shaping global technologies, regulation and markets in line with their cultural views.
Running in parallel, but also exerting influence over the way technologies are designed and datafication takes place, are the regions and communities that power has long treated as its peripheries. With the advancement of communication technologies, people from the periphery are not only receiving information, but also adding their voices, cultural perspectives, and technologies in ways that reshape the hegemonic views and the tech landscape driving datafication. This includes the peripheries within every country, whose communities often hold radically different traditions and value systems than the majority, and who are more actively shaping the global stage through soft and hard power.
The combination of political, geopolitical, technological and socio-cultural shifts suggests a change in the way we understand rights is most likely already underway. And one undercurrent fueling this process is most likely the shift in worldview fueled by datafication. The tensions at this fault line are becoming apparent: be it when the Indian executive defends its extensive data collection scheme by claiming that privacy is a Western construct, or when the EU requires that companies processing data about its citizens not send it to the US, its historical ally, because it considers that the US violates basic privacy rights.
As this process continues to evolve, categories that might have been cornerstones of our past and present might very well become outdated. A key category that is likely to come under pressure is that of the individual. Since datafication is typically leveraged to segment and group, it is likely that such groupings become increasingly relevant, perhaps at the expense of the notion of the individual, which might become but another collection of varied characteristics: a unit of analysis considered at times too broad, and at other times too narrow, to be relevant or useful.
Some of the early expressions of such pressure points were perhaps first visible through the coining of concepts such as context collapse to describe how what used to be separate practical identities (father, son, colleague) merged into one in the context of centralized social media, as people suddenly had to face their varied audiences all at once. This exemplified the power tech had to force a rearticulation of personal identity.
The way in which computers dissect and process people’s faces to define their relevant and unique characteristics again involves a consequential redefinition and rearticulation of identity. Because we share so many characteristics with others, some of these systems can construct synthetic variables about us, representing characteristics we have not disclosed but which can be extrapolated onto us from information disclosed by people considered similar to us in relevant ways. This suggests that we may no longer be in control of who we are in the face of these systems.
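The extrapolation described above can be sketched in a few lines. This is a deliberately crude, hypothetical nearest-neighbor example, not any real system's method: the target never discloses an attribute, yet the system assigns one by copying it from the most similar profile that did disclose it. All profile data and field names are invented.

```python
# Hypothetical sketch of attribute inference: a system estimates a
# characteristic a person never disclosed, using people "similar" to them.

def similarity(a: dict, b: dict) -> int:
    # Crude similarity: count the disclosed characteristics two profiles share.
    return sum(1 for k in a if k in b and a[k] == b[k])

def infer(target: dict, others: list, attribute: str):
    # Among profiles that disclosed `attribute`, find the one most similar
    # to the target and copy its value. The target is never asked.
    disclosed = [p for p in others if attribute in p]
    best = max(disclosed, key=lambda p: similarity(target, p))
    return best[attribute]

profiles = [
    {"city": "Lagos", "likes": "jazz", "income_bracket": "mid"},
    {"city": "Lagos", "likes": "rock", "income_bracket": "high"},
    {"city": "Quito", "likes": "jazz", "income_bracket": "low"},
]

# The target disclosed only a city and a music taste...
me = {"city": "Lagos", "likes": "jazz"}

# ...yet the system synthesizes an income estimate for them.
print(infer(me, profiles, "income_bracket"))  # -> mid
```

Even this toy version shows why the individual loses control: the inferred value depends entirely on what *other* people disclosed and on how the system chose to define "similar."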
This further suggests that datafied systems do not understand our identity as something cohesive, but rather as a collection of characteristics. If this construct continues to develop, we will need to re-evaluate the boundaries between individual autonomy and group rights.
In synthesis, the ways in which we will define and redefine our existence are inextricably linked to the processes of datafication and automation. The value systems promoted by such technologies can become normalized, adopted, and thereafter difficult to observe as such, and even more difficult to push back against. It is crucial for human rights practitioners to understand these processes and help shape them, guiding or combatting their design and deployment as we move into the future.
This post is a revised extract from an upcoming report by JustLabs and OpenGlobalRights on the impacts that the process of datafication has had on public and intimate spaces over these past decades, and how the way in which the risks and opportunities played out in the past can inform our thinking about human rights in the near future.
Juan Ortiz Freuler is an associate at JustLabs, an affiliate at the Berkman Klein Center, a PhD fellow at Annenberg School of Communication and Journalism at USC, and a rower for the non-aligned tech movement.