As artificial intelligence progresses, what does real responsibility look like?

Photo: EFE/Michael Reynolds. While Mark Zuckerberg prepares to testify before the Senate, 100 cardboard cutouts of the Facebook founder and CEO stand outside the U.S. Capitol in Washington on Tuesday, April 10, 2018.

Artificial intelligence (AI) technologies—and the data-driven business models underpinning them—are disrupting how we live, interact, work, do business, and govern. The economic, social, and environmental benefits could be significant, for example in medical research, urban design, fair employment practices, political participation, and public service delivery. But evidence is mounting about the potential negative consequences for society and individuals. These include the erosion of privacy, online hate speech, and the distortion of political engagement. They also include the amplification of socially embedded discrimination when algorithms trained on biased data are used in criminal sentencing or in job advertising and recruitment. Further, certain vulnerable groups will require special attention. UNICEF, for example, has recently published a series of papers on Child Rights and Business in a Digital World, elaborating on the risks and opportunities children face in the areas of access to education and information, freedom of expression, privacy, digital marketing, and online safety.

All of this means we urgently need a robust view of what responsible conduct looks like, and a vision for how markets and governance mechanisms can guide the right behaviors while also encouraging innovation. We believe that the business and human rights field, in particular the UN Guiding Principles on Business and Human Rights (UNGPs), offers a compelling way forward that is well suited to the challenge ahead.

In a recent US Senate hearing, Facebook CEO Mark Zuckerberg returned again and again to the sentiment that the company had not taken a “broad enough view” of its responsibility. This perfectly captures the state of affairs, not just for Silicon Valley, but for the entire global business community in the midst of developing its digital strategy for the so-called fourth industrial revolution. Zuckerberg spoke of a “broader philosophical shift in how we approach our responsibility as a company” and said that “we have to go through all of our relationships with people and make sure that we’re taking a broad enough view of our responsibility”. He said it was a mistake to “view our responsibility as just building tools”, and when questioned about the company’s early “move fast and break things” mantra, admitted: “I do think we made mistakes because of that”.

Yet beyond noting several different legislative proposals (for example around data privacy, online child protection, consent, and campaign advertising), none of the nearly 50 Senators probed for a clear articulation of what “broader responsibility” might mean in practice. We are left with a hopeful sentiment and no clear sense of what society should expect (or require) of organizations and companies developing, selling, buying, and using data-driven and AI technology solutions. Having no framework of responsibility—nor an understanding of how stakeholders can check that responsibility—is not a good result for anyone. Lawmakers and corporations will endure a tug-of-war between the false choice of libertarianism and government intervention. Civil society and affected people will seek accountability when harms occur, but will likely lack the technical know-how or resources to achieve their desired results. And entrepreneurs, engineers, technologists, data scientists, sales teams, and business leaders will have no compass to guide their practices. All the while, innovation will continue and more harm to people (especially vulnerable groups) will likely occur. Lots of noise, but no change.

For these reasons, we believe the UNGPs can meet the challenge ahead, moving steadily forward to fix what is not working for rights-holders. Several recent developments point in that direction. Initiatives such as the Partnership on AI, the Ethics and Governance of AI Fund, AI Now, and the work of Data & Society are exploring the social, ethical, and human rights implications of AI. Governments are moving forward too. The Australian Human Rights Commission, for example, has launched a project that seeks to ensure human rights are prioritized in the design and regulation of new technologies. And numerous organizations, such as the Information Technology Industry Council, the Software and Information Industry Association, and the Institute of Electrical and Electronics Engineers, have published principles for ethics and AI—some explicitly referring to the UN Guiding Principles.

We see three key questions as critical to this debate and will be writing papers that address them in the coming months:

  • In what ways can the business and human rights field address the potentially negative consequences of disruptive technologies? For example, how can society’s governance of adverse impacts on the most vulnerable (such as children, those living in poverty, and ethnic minorities) keep pace with change? Companies should know and show that they respect all human rights in the course of doing business, and there is a global need to provide swift and meaningful remedy when harms occur. Those working in this area need to reflect on the limits of current frameworks, tools and practices, and identify where innovation is needed—such as methods that address human rights opportunities, as well as human rights risks.
     
  • Who needs to be part of realizing a broader practice of responsibility? Large technology firms have a key leadership role to play. But we believe that the human rights issues associated with big data and AI are now a concern for all industries, not just technology companies. The potential impacts on human rights of deploying AI solutions in certain core business activities (such as in sales, marketing, and the workplace) and in diverse industries (such as financial services, retail, healthcare, and transport and logistics) are significant. However, the question of “who” has other important dimensions. Who within large companies should be involved in ensuring respect for human rights (such as technology officers, engineers, data scientists, and lawyers)? What is the role of university research centers, app developers, entrepreneurs, professional bodies, and venture capitalists? How should civil society and rights-holders be involved in defining good corporate practice?
     
  • What tools, methodologies, or guidance will operationalize business respect for human rights in the context of disruptive technologies? How can analysts and activists map the needs of business leaders seeking to integrate respect for human rights into company decision-making? In this aspect of the debate, it is critical to highlight cases of good or emerging practice, whether related to embedding respect for human rights into policies and processes, or related to specific impacts and dilemmas. Policies, guidance, and tools that could help integrate respect for human rights into decision-making about disruptive technologies are also of utmost importance. These might include: tools designed to assess the human rights impacts of products, services, and technologies; key human rights-related questions to address in due diligence when developing or procuring technological solutions; guidance on how to ensure rights-holder perspectives inform technology design; or principles for how to address informed consent and remedy in the context of digital complexity.

It is time to define responsibility in real terms, and to figure out how to embed dignity for all and respect for human rights into the fourth industrial revolution. As this debate develops on OpenGlobalRights, we welcome your responses to these questions, and we look forward to testing and refining our ideas with leaders in business, civil society, and government.

*** This article is part of a series on technology and human rights co-sponsored with Business & Human Rights Resource Centre and University of Washington Rule of Law Initiative.