Lessons and consequences of the failure to regulate AI for women’s human rights

The global human rights community has woken up to the evident risks that artificial intelligence (AI) poses to human rights. It seems that a new report is released every day on the widespread perils of the digital world, particularly for minority and excluded groups, and on new harms ranging from biometric surveillance to discrimination. Human rights experts now generally agree that States need to regulate this technology given the risks at stake.

In our research, we take this one step further. We argue that the absence of adequate regulation of AI may be, in and of itself, a violation of human rights norms. This gap reflects a state of play in which governments have relinquished their obligations to protect, fulfill, and remedy.

Despite the gendered implications of how AI operates and affects our lives, AI remains largely under-analyzed from a women’s rights perspective, even though such a perspective offers a useful lens for extracting lessons that apply to other groups of rights-holders.

First, let’s look at the gendered stereotypes associated with voice-controlled devices like Alexa or Siri. These voice-based technologies promote a limiting stereotype of women: ‘she assists rather than directs, she pacifies rather than incites’. For example, a now-abandoned project called Nadia used Australian actor Cate Blanchett’s voice to develop an AI-based assistant to deliver virtual support services to people living with disabilities.

By endorsing such stereotypes, gendered AI risks violating Article 5 of the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), which requires state parties to modify practices that promote the ‘idea of the inferiority or the superiority of either of the sexes or on stereotyped roles for men and women.’ ‘Nadia’ is a particularly interesting example because it was driven by the well-intentioned goal of facilitating ‘accessible information and communications technology’, as set out in the Convention on the Rights of Persons with Disabilities. However, it was designed in a way that neglected the rights of women more broadly, as set out in CEDAW.

Expanding on our research, we next look at the issue of gender-based violence (GBV). For more than 10 years, police-led GBV teams in Spain have been assisted by a computer-based system (VioGén) that assesses the levels of protection and assistance needed by victims of GBV, considering, among other factors, the risk of recidivism by the perpetrator. However, many questions have emerged about the use of AI by the public sector and about who should be held accountable when AI goes wrong.

The system currently in place does not yet use an AI-based algorithm, which is still being tested; instead, it relies on an older risk-assessment model based on traditional statistics. In 2020, the Spanish government compensated the family of a woman who was murdered by a former partner after the Spanish police, relying on the non-AI system, denied her a restraining order.

It is possible that the new AI-based model could have identified the risks better. Yet, without adequate regulation of AI, when do governments decide it is the right time to move to AI-based decisions? If an AI system protects victims better overall, how do you weigh the rights of victims (necessarily women, by law in Spain) against the rights of perpetrators (mainly men), for example the right to a family life? If an AI model is in place and automatic, how can the victim or the perpetrator challenge its decision with data that the algorithm does not consider? Finally, who is responsible if things go wrong, and how is that determined?

Finally, and possibly one of the best-known examples of the abuse of AI technology, is the use of deepfakes. A certain website (not identified here to avoid promoting it) generates pornographic photos and videos by superimposing an uploaded photo onto pre-filmed videos and photos of women (and a small number of men) in its database. Pornography-related attacks on figures outside the public eye are more likely to target women than men, partly because deepfakes are an AI tool most often developed with women’s bodies in mind, and partly because a higher proportion of pornographic images involve women rather than men.

The issue here is not whether society considers pornography acceptable. Rather, it is the non-consensual use of women’s images for economic and personal gain. The website stores mainly images of women’s bodies and therefore primarily targets women. Should this obvious discrimination by a private entity, made possible only by AI-driven technology, be allowed? Would it be different if it affected the reputation of political figures, among whom men are overrepresented across the world?

These three examples illustrate how a gendered lens can help us identify the inherent risks that AI poses for women, but we can imagine other groups of rights-holders, such as ethnic and linguistic minorities and children, facing similar challenges and being left unprotected by the law.

The current absence of adequate regulation—that is, a state’s failure to establish legally binding norms to protect human rights from the deployment of AI systems—is, in itself, a violation of human rights. We believe all UN review procedures should begin asking countries whether their legal regimes are ready for these challenges and what they are doing about them.

It is possible that some of these human rights violations could be addressed within the existing legal regime. However, many others cannot. For example, it is difficult to attribute responsibility for the failures of machines that keep learning. If states do not have an adequate legal regime to prevent and remedy these problems, they are relinquishing their obligations under international human rights law. This would not be the first instance in which human rights law recognizes a failure to regulate as a violation of states’ positive duty to protect.

Change could theoretically be achieved with an international treaty. Some regional bodies are already trying to address the issue; for example, a proposal for an EU regulation is currently in the legislative pipeline. But whether the approach is international, regional, national, or all of the above, ultimately there is no reasonable or legitimate case to ‘wait and see’.