AI insights into human rights are meaningless without action

We need to act upon the insights that we glean from AI: technology is not a replacement for the political will needed to drive change.

Photo: ILO in Asia and the Pacific/Flickr (CC BY-NC-ND 3.0 IGO)

There has been much discussion lately about Artificial Intelligence (AI) and its potential consequences, positive and negative, for human rights. While AI has tremendously improved our ability to process information about the world around us and can be used to promote human rights, governments, multinational corporations and others with the power to drive change rarely realize this potential. That is, while technology can help uncover human rights issues and improve our understanding of them, we, the humans, have to develop the political will to intervene. Unfortunately, that happens far too rarely in the human rights space.

For example, the US Department of Labor estimates that 139 goods from 75 countries may be made with child or forced labor. AI can be used to process the multitude of available data to detect human rights violations that many workers face around the world in factories, farms and mines. Most multinational corporations have codes of conduct that they expect suppliers to abide by, using audits to monitor working conditions. Computing technology is often required to process this vast amount of audit data and flag issues.

However, corrupt auditors can forge audit data, and inexperienced auditors may not communicate directly with workers in their native languages and thus fail to record possible violations. These limitations can skew the picture of whether a supplier is acting ethically. AI-enabled systems can validate audits against other information sources, such as news reports, court filings, public records and other materials. AI can also scan social media, chat forums, message boards and public comment websites for references to those suppliers left by workers. Supply chain managers can use AI to analyze all of these streams of data together to obtain an independent human rights assessment of a supplier’s labor practices.
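To make the cross-validation idea concrete, here is a minimal, hypothetical sketch of how a supply chain system might flag a supplier whose passing audit conflicts with independent external signals. The function name, inputs and thresholds are illustrative assumptions; a real system would derive these signals by applying natural language processing to news reports, court filings and worker posts rather than taking hand-counted tallies.

```python
# Illustrative sketch only: cross-check a supplier's self-reported audit
# result against independent external signals. All names and thresholds
# here are hypothetical, not a real auditing system.

def assess_supplier(audit_passed, negative_mentions, worker_complaints):
    """Flag a supplier for human review when external evidence
    (news coverage, worker posts) contradicts a passing audit.

    audit_passed      -- bool, outcome of the supplier's own audit
    negative_mentions -- count of adverse news/public-record references
    worker_complaints -- count of worker-sourced complaints found online
    """
    # Weight direct worker complaints more heavily than press mentions.
    external_risk = negative_mentions + 2 * worker_complaints
    if not audit_passed:
        return "flag: failed audit"
    if external_risk >= 3:
        return "flag: audit conflicts with external evidence"
    return "no flag"

# A supplier whose audits look clean on paper but whose workers report
# abuses online is surfaced for investigation rather than waved through.
print(assess_supplier(audit_passed=True,
                      negative_mentions=1,
                      worker_complaints=2))
```

The point of the sketch is the article’s argument in miniature: the system only surfaces a flag; a human still has to investigate and act on it.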

However, this AI-enabled processing can also be turned to nefarious purposes. Illicit factory owners can use the technology to comb through workers’ social media posts to target union organizers or “troublemakers”. Corrupt states can couple facial recognition with AI to target migrant workers and human rights defenders who challenge repressive regimes over exploitative labor practices. The same technology can identify workers for harassment or arbitrary detention based on the AI-informed suspicion that they might challenge employment practices and poor working conditions.

Furthermore, increased automation can eliminate certain types of jobs, displace low-wage workers, and depress wages. A 2016 US White House report finds that anywhere from 9% to 47% of jobs could be disrupted by AI and automation over the next two decades. The International Corporate Accountability Roundtable estimates that two-thirds of all jobs in developing countries could face significant automation, mainly in the apparel, electronics and agricultural sectors. A 2016 report by the International Labour Organization identifies the risks that “automation, robots and artificial intelligence” pose to millions of workers in Asia. These impacts will be borne by low-income women and migrants, who are already among the world’s most vulnerable workers. Governments need to create policy frameworks and invest resources to ensure that AI does not simply generate wealth for a select few at the expense of others.

While governments and companies should continue to invest in “AI for good”, they also need to act upon the insights that AI and machine learning deliver. The investment criterion should not just be that the technology was developed or deployed; the human outcomes achieved should also be measured. That is, did the “good” being sought in an “AI for good” application actually happen? For example, AI can help pinpoint exactly which factory might be using child labor—but that insight is wasted if the company does not respond, and the company then risks deliberately ignoring its ethical obligations to those children. Unfortunately, while some progress is being made, many companies are simply not doing enough to act upon the technology-enabled insights on labor abuses that they have access to.

Similarly, government regulatory and oversight agencies can use AI and machine learning to verify how workers in public and private supply chains are being treated. These insights can be used to apply laws already on the books that prohibit goods made with forced labor and child labor from entering consumer markets, or to enforce trade agreements whose labor protections are often ignored. Governments need to act upon such technology-gleaned insights by compelling companies to drive supply chain improvements and by pressing trading partners to uphold workers’ rights.

While AI has, and can have, a tremendous positive impact on human rights, governments need to ensure that the wealth it creates will not exacerbate existing economic disparities. The application of AI should also be measured by the outcomes it produces and whether those violate human rights principles. Most of all, those with the power and ability need to act upon the insights gleaned: technology is just a tool to help understand a problem better—it is not a replacement for the political will that is needed to drive change.

*** This article is part of a series on technology and human rights co-sponsored with Business & Human Rights Resource Centre and University of Washington Rule of Law Initiative.



Samir Goswami is a consultant on business and fair labor issues and teaches CSR in the Division of Global Affairs at Rutgers University. This blog is an excerpt from his recent testimony to the Tom Lantos Human Rights Commission in the US Congress at a hearing titled, “Artificial Intelligence: The Consequences for Human Rights”.

