What “datafication” means for the right to development

Breakthroughs in technology—including artificial intelligence—can help fulfill the right to development, but digital technologies are not magic bullets; there is a strong role for governance. 


By: Anita Gurumurthy & Deepti Bharthur
September 27, 2018



Photo: Pixabay/geralt/18399 (CC0 Creative Commons)


The digital paradigm, with its arsenal of technologies, promises to solve the most pressing social and economic challenges of our time. Breakthroughs in technologies such as artificial intelligence, the Internet of Things, and blockchain are being heralded as possible solutions for a diverse range of problems. Yet technology-based decision-making also raises important questions about how the right to development will be realized in this new digital paradigm.

The Declaration on the Right to Development defines development as “a comprehensive economic, social, cultural and political process, which aims at the constant improvement of the well-being of the entire population on the basis of their active, free and meaningful participation in development and in the fair distribution of benefits resulting therefrom.”

But the development discourse today, inextricably situated within the political economy of data—where a North-led paradigm of “datafication” functions as a key mode of value generation and economic restructuring—bears little resemblance to this inclusive articulation. Instead, the discourse has become increasingly technical, and the idea of big data for sustainable development seems to raise more questions than it provides answers.

First, global policy discourses and frameworks around data have skewed the digital innovation tide in favor of developed countries. This discourse does not account for differing data capacities of countries, allowing the global North to devise and deploy frames of evaluation selectively. For example, the cherry picking of indicators and benchmarks in the SDGs to measure highly complex development goals can lead to a situation where “counting the trees can hide the forest instead of understanding it”.

Second, data and technological arrangements in the global South and North worryingly point to a wholesale private capture and consolidation of critical data regimes in the developing world: trade, agriculture, health and education. Not only does this leave citizens in developing countries vulnerable to acute privacy violations, but it also bears decisively on their economic, social and cultural rights (ESCRs). In India, for example, the acquisition of homegrown successes, such as Wal-Mart’s purchase of the domestic e-commerce unicorn Flipkart, poses serious risks to the livelihoods of small producers and traders.

A third trend is the influx of “data-led” developmental solutions pouring into the African continent, with states, corporations and non-governmental actors engineering various interventions in social and economic sectors. As scholar Linnet Taylor has observed, this can lead to “a new scramble for Africa: a digital resource grab that may have implications as great as the original scramble amongst the colonial powers in the late 19th century.”

Last but not least, the global “platformization” of content (i.e., the rise of video on demand and online streaming platforms) shows us that guarantees of people’s access to culture of their choosing may now become beholden to digital intermediaries. In Brazil, for example, the widespread expansion of Netflix is eviscerating the local audio-visual market. As Mariana Valente from InternetLab in Brazil notes, Netflix makes no monetary contribution to Brazil’s largely levy-supported media industry. The platform’s predictive “search” and “suggest” recommendations (the primary way in which audiences make viewing decisions) are not open to scrutiny. Moreover, longstanding policies of positive discrimination in traditional media to promote local content do not apply in the Netflix context. Brazilian independent media thus risks being relegated to the back pages of a catalog, at the mercy of a private data model that favors popular cultural content from the North.

Despite these threats to the economic, social and cultural rights of citizens, the scenario is not one of complete gloom. Some “AI for good” initiatives are seeking social justice solutions rather than creating techno-fixes for the world’s problems. For instance, a new machine learning algorithm from Stanford’s Immigration Policy Lab analyzes historical refugee settlement data in the US and Switzerland to predict and optimize placement and integration policies for refugees. The algorithm predicts refugees’ chances of success and matches them with placement opportunities accordingly. This solution, free to government agencies, non-profits and administrations in the US and Europe, could expand social and economic opportunities for refugees.

However, while this algorithm is meant to complement and not necessarily displace human discernment, it is not hard to imagine a future where humanitarian assistance to refugees becomes predicated on their (technology-proven) ability to viably assimilate and contribute to their host economies. Could the trade-off for a smoother resettlement process be the exclusion of those that the algorithm will one day write off as “inadmissible” and “unsolvable”? These are pressing policy challenges, and such prediction models need to be closely and continuously tracked for possible social distortions and subjected to institutional audit. As the digital paradigm evolves, the pathway for human rights is likely to become more complicated, making appropriate regulation more important than ever. The realization of ESCRs and the right to development centers on data democracies that are accountable.

Earlier this year, the Toronto Declaration on “equality and non-discrimination in machine learning systems” was launched at RightsCon by Amnesty International and Access Now. While the declaration is an important step, global democracy in the 21st century hinges on taming runaway machine intelligence through a binding global code of ethics that rejects the use of AI for purposes that contravene international law and human rights obligations.

States have a more proactive role to play in developing frameworks around data governance.

First, frameworks of data governance need to move beyond the idea of “informed consent” and also account for public interest. Simplistic assumptions about users’ ability to make rational choices regarding data disregard data subjects’ differing capacities and social locations. For instance, for a citizen critically dependent on the state’s welfare measures, there is no real “opting out.” Therefore, for consent to be a truly effective means of ethical data usage and brokering, it must be tied to real choice.

The second imperative for policymaking is to recognize data, and the AI built on it, as crucial public infrastructure that is key to a new development horizon. States must invest in the idea of “data as a public good” so that it can work to enhance human rights. To begin with, states must create policy for effective data sharing between governments and the private sector in sectors of critical social importance. The municipality of Curitiba in Brazil, for instance, has taken the lead in passing local legislation that mandates anonymized data sharing between the local government and the ride aggregator Uber. The intention is to tap into Uber’s large and rich data sets for better city planning and traffic management outcomes.

More importantly, states must create a data commons with independent oversight. Though still nascent, experiments with models for managing big data repositories are increasing. Such repositories can encourage domestically led innovation, with local start-ups and public agencies taking the lead in developing appropriate AI-based solutions for social problems. The Indian government’s decision to develop a precision and prediction analytics portal of cross-domain agricultural data for farmers is a good example. As a public resource, such a tool can have widespread use among farmers, startups, agro-businesses and public agencies. Of course, the success and long-term sustainability of these initiatives depends on appropriate policy oversight and measures to regulate data sharing, processing and use, predicated on a new vision of the indivisibility of rights.

*** This article is part of a series on technology and human rights co-sponsored with Business & Human Rights Resource Centre and University of Washington Rule of Law Initiative.


Anita Gurumurthy is a founding member and executive director of IT for Change, where she leads research collaborations and projects in relation to the network society, with a focus on governance, democracy and gender justice.

Deepti Bharthur is a research associate at IT for Change. She contributes to academic, action, and policy research in the areas of e-governance and digital citizenship, data economy, platforms and digital exclusions.

