Addressing the potential human rights risks of the “Fourth Industrial Revolution”
Technology has the power to free us from drudgery or to decimate livelihoods, and the choices that governments and companies make will often determine the difference.
A robotic hand developed by a South Korean company holds an apple during the International Conference on Robotics and Automation (ICRA).
Over the last five years, technology has entered the business and human rights sphere with a power that few of us anticipated. This power has both captured our imaginations with unexpected opportunities and created new risks: the potential of automation to free billions of us from drudgery or to decimate livelihoods; the opportunity of the gig economy to create both shared economies and responsive services, or to undermine the essential rights of workers; the power of big data to strengthen corporate due diligence, or to turbo-drive the existing prejudices and bigotry in our societies. These stark options raise a critical question for the business and human rights movement: how can we help to ensure that no one is left behind by this transition? The choices that we, our governments, and tech companies make will determine whether essential human rights will be realised by the “fourth industrial revolution” or whether this “revolution” will result in increased violations of human rights and exacerbate inequality.
Automation, as one example, has long threatened the manual jobs of low-skilled, low-waged workers, and developments in technology only renew and expand this shift. The threat of job losses is real, particularly among workers who are already vulnerable, and could drive new levels of inequality as wages are turned into profit and this capital is concentrated even further in the hands of a few. But this is not inevitable. The same wealth could, with enlightened foresight and a focus on rights, end the slog of assembly lines and provide retraining and work opportunities that pay a living wage. The automation of mundane, repetitive tasks could create new opportunities for workers, while enhancing the provision of rights and services such as health and education for all. For this to happen, it is critical that workers and labour unions be at the centre of discussions and decision-making about automation. Companies increasing their use of automation can help to mitigate negative impacts by encouraging and participating in these discussions and acting on what workers and unions say they need, such as by investing in new skills development opportunities. The business and human rights movement must also bring its powerful insight to the table, upstream, before the norms and rules are set.
While social media and the internet have become increasingly important means through which human rights defenders and activists mobilise and advocate, these same tools are often used by governments and the private sector to restrict and monitor advocates, violating freedom of expression and assembly. These tactics include government-ordered internet shutdowns, which can be used to stop protests, influence elections, and control the dissemination of information; restrictions on virtual private networks (VPNs); and attacks on online activists and journalists. According to a report by the Anti-Defamation League, during the year leading up to the 2016 US presidential election, there were 19,253 overtly anti-Semitic tweets directed at 800 journalists, generating 45 million "impressions" (views of tweets). Just ten journalists, all of whom are Jewish, received 83% of these tweets. This is just one chilling illustration of the power of social media to facilitate and spread hate and direct threats.
In the last year, tech giants like Google and Facebook have undergone a reverse metamorphosis in the minds of some: from the butterfly of democratic flowering, to the worm eating at the sacred fruit of rights and democracy. Significant public trust has been squandered with revelations of political manipulation of these platforms in elections and referendums. The most notable recent example is the allegation that 50 million Facebook profiles were harvested without explicit consent and used by the data company Cambridge Analytica to influence the outcome of the 2016 US presidential election in favour of Donald Trump.
Surveillance, online and through specialised technology, is also a significant concern. Governments now regularly acquire powerful surveillance technology from private firms, which is sometimes used to monitor human rights advocates. The Mexican government's alleged spying on human rights defenders using NSO Group's spyware is just one example: in 2017, civil society organisations alleged that journalists and advocates denouncing forced disappearances and sexual abuses were spied on by the government using NSO Group software that was intended for use against drug cartels and terrorist groups.
There is also an urgent need for action to protect privacy and decision-making from algorithmic bias. The EU's General Data Protection Regulation (GDPR), which came into effect in May 2018, could be the start of a wave of more robust government and multilateral action to insist that tech giants become more socially responsible. At the same time, ICT companies' practices can positively affect users' freedom of expression and privacy, such as Microsoft's five-year partnership with the Office of the UN High Commissioner for Human Rights to develop technology to better predict, analyse, and respond to critical human rights situations.
Given all of these concerns, the Business and Human Rights Resource Centre is increasing its focus on understanding and analysing the nexus of technology, business, and human rights, as evidenced by the launch of our new portal on Tech and Human Rights. The portal is our initial contribution to an emerging group of human rights organisations that believe the global business and human rights movement has to do more in this field. Tech now informs the human rights choices of every other business sector, as well as influencing the ways in which civil society operates. Many in our movement are contributing substantial expertise, but others do not yet have the specialised knowledge and capacity to push for human rights to be at the centre of this transition, and to utilise new technologies to strengthen their work. Human rights organisations need to further incorporate this analysis into our work and focus on how technologies are being used: from the protection of civic freedoms and human rights defenders, to fighting land and water grabs, to preventing modern slavery and ensuring a living wage, to ending corruption and tax evasion.
In this new OpenGlobalRights series on technology and human rights, produced in partnership with the University of Washington Rule of Law Initiative, we encourage readers to join the debate and invite thought leaders to explain the new, positive applications and impacts of tech in our field—and there are many—alongside the allegations of abuse and the threats to rights that emerge. These discussions can demonstrate the potential for a future of shared prosperity and shared security with tech, or conversely one of hyper-inequality, polarisation, and permanent surveillance. We look forward to a robust and enlightening debate that leads to action in business and human rights.
***An earlier version of this article was first published by the Business and Human Rights Resource Centre and is available here.
*** This article is part of a series on technology and human rights co-sponsored with Business & Human Rights Resource Centre and University of Washington Rule of Law Initiative.
Phil Bloomer is the Executive Director at the Business & Human Rights Resource Centre. Follow him on Twitter @pbloomer
Christen Dobson is a senior project lead and researcher at the Business and Human Rights Resource Centre, and she manages the Resource Centre’s work on technology and human rights.