Why data science belongs at the heart of international human rights advocacy

Credit: Alejandro Ospina

Many human rights communities feel skeptical about emerging technology. This caution is grounded in a commitment to reducing inequality, protecting personal privacy, and placing human thriving above technological advancement. The world has witnessed Big Tech’s capriciousness, the erosion of digital safeguards, and the weaponization of platforms against the very communities that advocates seek to protect.

Yet in the face of intersecting global crises and a shifting funding landscape for human rights, there is an increasing need and opportunity to harness data science and emerging technologies toward better justice outcomes. How can advocates make use of new technologies, and what are the challenges in doing so? We confronted these questions as we designed a university-based program that integrates legal advocacy with data science and emerging tech.

Over the past decade, open-source intelligence (OSINT) labs have emerged within the human rights field. Primarily housed within universities or international NGOs, these labs gather, verify, and analyze publicly available digital information (social media posts, satellite imagery, and government statistics) to document abuses. Their work has broken new ground, yielding novel forms of evidence and powerful investigative tools. However, the current generation of OSINT labs largely focuses on documentation and verification, with less attention given to turning this knowledge into advocacy tools, uncovering deeper patterns, quantifying harm, or helping establish responsibility at scale through statistics or machine learning.

What data science can offer advocates 

Data science brings rigor and analytical depth that can transform human rights practice: helping to build legal cases, lending powerful support to policy change, and broadening public engagement with community-generated human rights narratives.

Significant human rights cases often involve mass violations of rights and systemic injustice. In many instances, researchers and communities have collected huge amounts of documentation that would require untold hours of manual labor to review. Advanced algorithms can mine these materials to reveal connections between perpetrators, victims, and events that would otherwise remain hidden. Machine learning models can map networks of actors involved in systematic abuses or detect links between violations and civilian harms. As the Human Rights Data Analysis Group (HRDAG) has pioneered, advocates can use statistical modeling to estimate the true scale of violations, even when direct evidence is partial or suppressed. This can be crucial for establishing the magnitude of harm in court.
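To make the statistical idea concrete, the sketch below shows the simplest form of this kind of estimation: a two-list capture-recapture calculation in Python. It is illustrative only; the counts are hypothetical, and HRDAG’s actual multiple systems estimation work relies on three or more lists and models that account for dependence between them.

```python
# Minimal two-list capture-recapture sketch (Chapman estimator).
# Illustrative only: real multiple systems estimation uses additional
# lists and more sophisticated models. All counts below are hypothetical.

def chapman_estimate(n_list_a: int, n_list_b: int, n_overlap: int) -> float:
    """Estimate the total number of victims from two overlapping
    documentation lists, including those recorded on neither list."""
    return (n_list_a + 1) * (n_list_b + 1) / (n_overlap + 1) - 1

# Hypothetical example: two organizations independently document killings.
documented_by_a = 420   # victims named by organization A
documented_by_b = 310   # victims named by organization B
named_on_both = 95      # victims appearing on both lists

total = chapman_estimate(documented_by_a, documented_by_b, named_on_both)
print(f"Estimated total victims (documented and undocumented): {total:.0f}")
```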

By analyzing trends across diverse data sources (public health records, census data, satellite images), data science can also help predict emerging threats, enabling proactive legal and advocacy responses. For example, rising global temperatures are driving health harms such as heat-induced illness, and tracking climate and public health data together can help advocates anticipate where the right to health is most at risk. To maximize its impact, data-based advocacy also requires translating human rights concepts and definitions into data frameworks, so that analyses map onto the elements required to prove violations under international law.
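As a rough illustration of what such translation might look like, the sketch below defines a hypothetical incident schema whose fields mirror the kinds of elements a lawyer would need to establish: protected status, conduct, attribution, and sourcing. The field names are assumptions made for illustration, not an existing standard.

```python
# A minimal, hypothetical schema sketch: structuring incident data so that
# aggregate analysis maps onto the elements of a violation that must be
# proven under international law. Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class IncidentRecord:
    incident_id: str
    incident_date: date
    location: str                  # town name or geocoded coordinates
    protected_status: str          # e.g., "civilian", "medical facility"
    conduct: str                   # e.g., "airstrike", "arbitrary detention"
    alleged_unit: Optional[str]    # attribution, where documented
    sources: list[str] = field(default_factory=list)  # verified open-source links
    verification_level: str = "single-source"         # or "corroborated"

# Hypothetical usage: one structured record per documented incident.
record = IncidentRecord(
    incident_id="2024-0137",
    incident_date=date(2024, 6, 3),
    location="example town",
    protected_status="civilian",
    conduct="airstrike",
    alleged_unit=None,
)
```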

But this is only a first step: data, computations, and abstract statistics need to be brought to life in order to communicate effectively to judges, policymakers, or the general public. Interactive, multimedia platforms can do that and help build narratives that center the voices and experiences of affected communities. Groups like Forensic Architecture, SITU, and Bellingcat are already doing important work in this area, and digital storytelling should be encouraged.

Human rights data ethics

However, the promise of data science comes with profound challenges. Human rights practitioners must navigate a landscape pockmarked by ethical dilemmas and practical constraints. In the spring of 2025, we interviewed almost two dozen practitioners working at the intersection of human rights and digital technology, and they identified several ongoing tensions to manage.

The first is the ethics of using digital evidence, which can elevate the risk profile of victims and communities, exposing them to surveillance, retaliation, or loss of privacy. Moreover, consent is often difficult (if not impossible) to obtain for open-source information, which means that advocates have to exercise heightened precautions. Additionally, the environmental impact of large-scale data processing cannot be ignored. These concerns are not easily solved because they are inherent in the application of digital tools, a classic polarity with which organizations will continue to grapple.

The second challenge centers on the reception of data-driven evidence in legal proceedings. Lawyers we spoke to worry that unproven or poorly explained methodologies, such as facial recognition or generative AI, risk being dismissed, undermining entire cases, or contaminating other evidence. It is crucial, then, to think carefully and creatively about how these methods can be made reliable and legible enough to legal professionals that they become trusted sources of evidence in court.

Finally, resource constraints place real limits on how far data science can be integrated into human rights advocacy. Even well-funded groups based in the Global North told us that, while they see the promise of techniques like generative AI for human rights investigations, they lack the human and computing infrastructure to use them. This shortfall creates a dependency on donations from big technology companies, raising questions about sustainability and values alignment.

The contribution of universities to building the field

The future of human rights advocacy will be built by those who can bridge the worlds of law, technology, and ethics. We are exploring how our university-based human rights clinic can make a distinct contribution. Through hands-on, client-facing projects, multi-disciplinary teams of students from law, data science, and other technical fields will learn not only how to apply scientific tools but also how to translate complex findings into accessible, actionable insights for courts, policymakers, and the public. We believe this will create a pipeline of practitioners who can continue to build the field after graduation.

Universities are uniquely positioned to facilitate experimentation, deep analysis, and the development of practical tools that the broader human rights field can adopt. We have the ability to bring together experts across a range of domains to approach complex challenges, while educating the next generation of human rights advocates.

The integration of data science into legal case-building is not a panacea—technology cannot replace the fundamental work of listening to communities, building trust, and advocating for systemic change. But when used thoughtfully, we believe it can accelerate our ability to document, prove, and remedy violations of international law.