Addressing the gender bias in artificial intelligence and automation



Twenty-five years after the adoption of the Beijing Declaration and Platform for Action, significant gender bias in existing social norms remains. For example, as recently as February 2020, the Indian Supreme Court had to remind the Indian government that its arguments for denying women command positions in the Army were based on stereotypes. And gender bias is not merely a male problem: a recent UNDP report entitled Tackling Social Norms found that about 90% of people (both men and women) hold some bias against women.

Gender bias and various forms of discrimination against women and girls pervade all spheres of life. Women’s equal access to science and information technology is no exception. While the challenges posed by the digital divide and the under-representation of women in STEM (science, technology, engineering and mathematics) persist, artificial intelligence (AI) and automation are posing new challenges to achieving substantive gender equality in the era of the Fourth Industrial Revolution.

If AI and automation are not developed and applied in a gender-responsive way, they are likely to reproduce and reinforce existing gender stereotypes and discriminatory social norms. In fact, this may already be happening (un)consciously. Let us consider a few examples:  

  • As a 2019 UNESCO report highlights, it is not a coincidence that virtual personal assistants such as Siri, Alexa and Cortana have female names and come with a default female voice. Companies behind these virtual assistants are reinforcing the social reality in which a majority of personal assistants or secretaries in both public and private sectors are women.
  • Gender bias pervades AI algorithms as well. With 78% of AI professionals being men, male experiences inform and dominate algorithm creation. This gender bias can have significant adverse implications for women. For example, algorithms could affect women’s access to jobs and loans by automatically screening out their applications or giving women applicants an unfavourable rating (see the sketch after this list). Similarly, algorithm-based risk assessment in criminal justice systems could work against women if the system did not factor in that women are less likely than men to reoffend.
  • Although the robotization and automation of jobs will affect both men and women, existing gender bias is likely to carry forward and thus hit women disproportionately. Women over-represented in sectors at high risk of automation stand to suffer more: if over 70% of workers in apparel manufacturing are women, automation will affect women more than men. Women’s relative lack of mobility and flexibility is also likely to weaken their bargaining position and narrow their alternative job options.
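One way to detect the kind of algorithmic bias described above is a disparate-impact audit of a screening model’s decisions. The Python sketch below is purely illustrative: the decision data, group labels and the four-fifths (0.8) threshold are assumptions standing in for a real audit of a real model’s outputs.

```python
# A minimal sketch of a disparate-impact check on an automated
# screening model's decisions. All data here is hypothetical.

from collections import Counter

# (gender, hired) pairs standing in for a screening model's decisions
decisions = [
    ("female", False), ("female", True), ("female", False),
    ("male", True), ("male", True), ("male", False),
    ("female", False), ("male", True), ("female", True),
    ("male", False),
]

# Count applicants and positive outcomes per group
totals = Counter(g for g, _ in decisions)
positives = Counter(g for g, hired in decisions if hired)

# Selection rate per group
rates = {g: positives[g] / totals[g] for g in totals}

# Disparate impact ratio: the "four-fifths rule" flags ratios below 0.8
ratio = rates["female"] / rates["male"]
print(f"Selection rates: {rates}")
print(f"Disparate impact ratio (female/male): {ratio:.2f}")
if ratio < 0.8:
    print("Warning: possible adverse impact on women (four-fifths rule).")
```

Checks like this are only a starting point: passing a single statistical test does not make a system gender-responsive, but failing one is a clear signal that the underlying data or model needs scrutiny.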

Despite the potential for such gender bias, the growing crop of AI standards does not adequately integrate a gender perspective. For example, the Montreal Declaration for the Responsible Development of Artificial Intelligence makes no explicit reference to integrating a gender perspective, while the AI4People’s Ethical Framework for a Good AI Society mentions diversity/gender only once. Both the OECD Council Recommendation on AI and the G20 AI Principles stress the importance of AI contributing to reducing gender inequality, but provide no details on how this could be achieved.

The Responsible Machine Learning Principles do embrace “bias evaluation” as one of the principles. This siloed approach to gender is also adopted by companies like Google and Microsoft, whose AI Principles underscore the need to avoid “creating or reinforcing unfair bias” and to treat “all people fairly”, respectively. Companies working in AI and automation should instead adopt a gender-responsive approach across all of their principles to overcome inherent gender bias. Google should, for example, embed a gender perspective in assessing which new technologies are “socially beneficial” or how AI systems are “built and tested for safety”.

What should be done to address the gender bias in AI and automation? The gender framework for the UN Guiding Principles on Business and Human Rights could provide practical guidance to states, companies and other actors. The framework involves a three-step cycle: gender-responsive assessment, gender-transformative measures and gender-transformative remedies. The assessment should be able to respond to differentiated, intersectional and disproportionate adverse impacts on women’s human rights. The consequent measures and remedies should be transformative in that they should be capable of changing patriarchal norms, unequal power relations and gender stereotyping.

States, companies and other actors can take several concrete steps. First, women should be active participants—rather than mere passive beneficiaries—in creating AI and automation. Women and their experiences should be adequately integrated in all steps related to design, development and application of AI and automation. In addition to proactively hiring more women at all levels, AI and automation companies should engage gender experts and women’s organisations from the outset in conducting human rights due diligence.

Second, the data that informs algorithms, AI and automation should be sex-disaggregated; otherwise women’s experiences will not inform these technological tools, which may in turn continue to internalise existing gender biases against women. Moreover, even data related to women should be checked for inherent gender bias. A minimal sketch of such a disaggregation check follows below.
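As an illustration of what a sex-disaggregation check could look like in practice, the Python sketch below flags records that carry no sex label at all and groups that fall below a representation floor. The field name, records and the 30% floor are hypothetical, not a prescribed standard.

```python
# A minimal sketch of checking whether a dataset records sex at all
# and whether each group is adequately represented before the data
# informs a model. Field names and the 30% floor are illustrative.

from collections import Counter

def check_sex_disaggregation(records, field="sex", min_share=0.3):
    """Flag missing sex labels and under-represented groups."""
    counts = Counter(r.get(field, "unrecorded") for r in records)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        flag = " <-- under-represented" if share < min_share else ""
        print(f"{group}: {n}/{total} ({share:.0%}){flag}")

# Illustrative records; a real pipeline would load actual training data
training_data = [
    {"sex": "female", "income": 42_000},
    {"sex": "male", "income": 51_000},
    {"sex": "male", "income": 47_000},
    {"income": 39_000},  # sex unrecorded: data is not disaggregated
]
check_sex_disaggregation(training_data)
```

Surfacing “unrecorded” as its own category matters: data that silently omits sex cannot be disaggregated at all, and under-represented groups signal that women’s experiences may be missing from whatever the model learns.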

Third, states, companies and universities should plan for and invest in building the capacity of women to achieve a smooth transition to AI and automation. This would require vocational and technical training both in education and in the workplace.

Fourth, AI and automation should be designed to overcome gender discrimination and patriarchal social norms. In other words, these technologies should be employed to address challenges faced by women such as unpaid care work, the gender pay gap, cyber bullying, gender-based violence and sexual harassment, trafficking, breaches of sexual and reproductive rights, and under-representation in leadership positions. Similarly, the power of AI and automation should be employed to enhance women’s access to finance, higher education and flexible work opportunities.

Fifth, special steps should be taken to make women aware of their human rights and the impact of AI and automation on their rights. Similar measures are needed to ensure that remedial mechanisms—both judicial and non-judicial—are responsive to gender bias, discrimination, patriarchal power structures, and asymmetries of information and resources.

Sixth, states and companies should keep in mind the intersectional dimensions of gender discrimination; otherwise their responses, despite good intentions, will fall short of using AI and automation to accomplish gender equality. Low-income women, single mothers, women of colour, migrant women, women with disabilities and non-heterosexual women may all be affected differently by AI and automation and will have differentiated needs and expectations.

Finally, all standards related to AI and automation should integrate a gender perspective in a holistic manner, rather than treating gender as merely a bias issue to be managed.

Technologies are rarely gender neutral in practice. If AI and automation continue to ignore women’s experiences or to leave women behind, everyone will be worse off.

This piece is part of a blog series focusing on the gender dimensions of business and human rights. The blog series is in partnership with the Business & Human Rights Resource Centre, the Danish Institute for Human Rights and OpenGlobalRights. The views expressed in the series are those of the authors. For more on the latest news and resources on gender, business and human rights, visit this portal.