The “new green”? Business and the responsible use of algorithms

Artificial intelligence (AI) refers to the suite of computer science, engineering and related techniques that are used to build machines capable of “intelligent” behavior, such as solving problems. The patterns and information that can be gleaned from analysis of huge datasets using AI have proven to be extremely commercially valuable: Accenture has estimated that the AI sector will be worth $8.3 trillion in the US alone by 2035.

Algorithms—the step-by-step procedures used by machines for calculations, data processing, and automated reasoning—have long been used to aid decision-making. But in the last few years, the growing ability of AI to interrogate large data sets and make predictions has driven an increase in algorithmic decision-making. Algorithms now take decisions with varying degrees of autonomy in a range of situations, including the criminal justice system, financial services, human resources, advertising (particularly on social media), and healthcare. As algorithms are entrusted with ever more sensitive decisions, this increasing reliance on AI is raising serious human rights and privacy concerns.

For example, the process by which the data powering AI is collected, stored and analyzed is often opaque. Nowhere was this clearer than in the recent scandal concerning the sharing of more than 87 million Facebook users’ data with the political consultancy Cambridge Analytica. Many internet users either don’t know that their data is being collected, or else “consent” to its collection when presented with impenetrable “terms and conditions” forms.

Another concern is that it is not always clear that an algorithm is being used to make a decision, or how the algorithm came to a particular decision. Modern machine learning algorithms, particularly neural networks, are often referred to as “black boxes”. Using algorithms that are not transparent or explainable raises justifiable concerns, especially when the decisions being made could significantly affect a person’s life. One notorious example is the COMPAS algorithm, used by American courts to assess the likelihood of an individual re-offending. The algorithm was twice as likely to falsely flag black people as likely to reoffend as it was white people (the company that produces the algorithm, Equivant, formerly Northpointe Inc., disputes this analysis). Even worse, the manufacturers have refused to disclose the inner workings of the algorithm on intellectual property grounds, and legal challenges to this position have failed.

"Using algorithms that are not transparent or explainable raises justifiable concerns, especially when the decisions being made could significantly affect a person’s life".

The old computer science adage of “garbage in, garbage out” is particularly relevant here. The analysis and predictions produced by AI algorithms depend on the data they are given: if that data is biased in some way, the outputs will reflect that bias. Biases can arise when the available data is not an accurate reflection of the group of people it is meant to represent, whether through inaccurate measurement, incomplete data gathering or other data collection flaws. The process being modelled may also itself be unfair. For example, a biased algorithm meant that Google ads for jobs paying more than $200,000 were shown to significantly fewer women than men, reflecting established gender pay gaps. Kate Crawford, Distinguished Research Professor and co-founder of the AI Now Institute at NYU, has written compellingly about “artificial intelligence’s white guy problem”: the lack of diversity in the AI field clearly matters, and may lead to bias being considered less of a problem, or not being identified at all when it occurs.
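
A minimal sketch of the mechanism, using invented numbers loosely modelled on the job-ad example above: if the historical “training” data already contains a skewed show-rate between groups, a system that learns from that data simply turns the historical skew into future policy.

```python
# Hypothetical historical ad-serving log: (gender, was_shown_high_paying_ad).
# The skew is built into this invented "training" data on purpose.
history = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 30 + [("female", False)] * 70
)

def learn_show_rates(history):
    """'Learn' how often each group was shown the high-paying ad in the past."""
    totals, shown = {}, {}
    for gender, was_shown in history:
        totals[gender] = totals.get(gender, 0) + 1
        shown[gender] = shown.get(gender, 0) + int(was_shown)
    return {gender: shown[gender] / totals[gender] for gender in totals}

model = learn_show_rates(history)

def decide(gender, threshold=0.5):
    """Serve the ad whenever the learned show-rate for the group exceeds the threshold."""
    return model[gender] > threshold

print(model)                             # {'male': 0.8, 'female': 0.3}
print(decide("male"), decide("female"))  # True False -- the historical skew is now policy
```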

Businesses need to work with governments and civil society to avoid biased and discriminatory uses of AI becoming entrenched. Even as regulation around the application of algorithms to business is still being developed, the obligations to protect human rights already apply. Privacy International and Article 19, two organizations that campaign for privacy rights and the right to freedom of expression, have recently argued that companies must ensure that their use of AI at the very least respects, promotes, and protects international human rights standards.

What about regulation? Chapter Three of the General Data Protection Regulation (GDPR), which comes into force in all EU member states in May 2018, deals with transparency in cases of automated decision-making and provides for “meaningful information about the logic involved”, which has been interpreted as a “right to explanation”. Consumers residing in EU states will gain the right to ask businesses for insight into decisions taken about them using algorithms. Although many of the most sensitive aspects of the GDPR are left to Member States’ discretion when implemented in local law, businesses may want to seize this opportunity to show that they are taking algorithmic transparency seriously by offering their customers as much information as possible about their algorithmic decision-making processes. Market research suggests that customers increasingly value transparency when making purchasing choices.

There are also good commercial reasons to work hard to avoid discriminatory outcomes from the application of AI. Consumers are more likely to share their data and interact with institutions that they trust. What would make businesses more trustworthy in the age of AI?

In a white paper published by the World Economic Forum in March, the authors argue that, when undertaking human rights due diligence, businesses need to proactively consider and integrate principles of non-discrimination, empathy, and the primacy of human dignity into their work. More specifically, they call for:

  1. Active Inclusion: when designing AI tools, developers must ensure that they source the views and values of a wide range of stakeholders, particularly those most likely to be affected.
  2. Fairness: developers, and those who commission the development of AI, should think about what being “fair” means when deploying an algorithm. They should then prioritize this definition of fairness when deciding on performance and optimization metrics.
  3. Right to Understanding: businesses should make it clear when AI algorithms are being used to take decisions that affect individual rights. Algorithms should also provide an understandable explanation of their decision-making (a simple sketch of what such an explanation might look like follows this list). If these conditions cannot be met, then businesses should consider whether the algorithm should be used at all.
  4. Access to Redress: businesses should proactively and transparently provide access to redress for anyone who may be negatively affected, and they should quickly make changes to their algorithms to prevent similar cases from recurring.
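
On point 3, one way to make a decision understandable is to use a model whose output can be broken down factor by factor. The sketch below is a deliberately simple, hypothetical scoring model (the weights, threshold and applicant data are invented); for black-box models, producing a comparably faithful explanation is much harder and remains an active area of research.

```python
# Hypothetical, transparent scoring model: approve if the weighted sum of factors
# crosses a threshold. Weights, threshold and applicant data are all invented.
WEIGHTS = {"income": 0.5, "years_at_job": 0.3, "missed_payments": -0.8}
THRESHOLD = 1.0

def decide_with_explanation(applicant):
    """Return the decision plus a per-factor breakdown that could be shown to the customer."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    # Sort factors by the size of their influence on the score, largest first.
    explanation = sorted(contributions.items(), key=lambda item: abs(item[1]), reverse=True)
    return decision, score, explanation

applicant = {"income": 3.2, "years_at_job": 1.0, "missed_payments": 1.0}
decision, score, explanation = decide_with_explanation(applicant)
print(f"Decision: {decision} (score {score:.2f})")
for factor, contribution in explanation:
    print(f"  {factor}: {contribution:+.2f}")
```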

This may be new territory, but businesses can learn lessons from environmental campaigns and the global movement to tackle climate change. Just as consumers expect companies to be environmentally aware (by minimizing their carbon footprint and reducing plastic packaging, for example), there could be a significant commercial advantage for companies that use algorithms responsibly. Adopting the right approach to algorithmic decision-making could become “the new green”. Only then can businesses, governments and citizens work together to maximize the opportunities and minimize the risks of these technologies.

*** This article is part of a series on technology and human rights co-sponsored with Business & Human Rights Resource Centre and University of Washington Rule of Law Initiative.