No single dataset is sufficient for understanding human rights, nor should it be

Do cross-national measures of human rights provide meaningful information that could be used to promote better enjoyment of human rights globally and without discrimination? In their recent post, Neve Gordon and Nitza Berkovitch argue that they do not, using as an example the CIRI Human Rights Data Project’s Electoral Self-Determination measure and its failure to account for the United States’ criminal disenfranchisement policies. I am sympathetic to many of Gordon and Berkovitch’s concerns. Like them, I personally believe that the degree of criminal disenfranchisement in the United States constitutes an unacceptable limitation on the right to vote, especially among African-Americans. However, as a human rights scholar and co-director of the now-archived CIRI Project, I believe that quantitative scholars have produced many important findings and are constantly improving our knowledge. The usefulness of those advances, of course, depends on understanding how the data are created and using the right data for the question being asked.

The difficulties surrounding the definition and measurement of the enjoyment of human rights are common to all studies in the field, regardless of approach. When the CIRI Human Rights Data Project began, its founders, David L. Cingranelli and David L. Richards, chose to focus on government respect for human rights—i.e., the degree to which government practices mirror the state’s obligation to respect human rights in international law. As stated in the CIRI coding guidelines (pg. 61), the electoral self-determination measure was grounded primarily in Article 25 of the International Covenant on Civil and Political Rights (ICCPR), which states that citizens have the right to electoral self-determination “without unreasonable restrictions.”


Photo by Ann McCrannie (Some rights reserved): The human rights movement must study many different types of, and interactions between, datasets in order to understand the scope and status of its mission.


However, when the CIRI measure made its first appearance in 2001, what constituted an “unreasonable restriction” was still up for debate. In 1996, the Human Rights Committee released a general comment on the right to vote stating, “If a conviction for an offence is a basis for suspending the right to vote, the period of such suspension should be proportionate to the offence and the sentence.” This suggests that some, but certainly not all, forms of criminal disenfranchisement are inconsistent with Article 25. The Human Rights Committee did not address the United States’ use of criminal disenfranchisement until 2006, when it recommended that the US review its states’ disenfranchisement policies and ensure that they meet the reasonableness test. Even now, while support is growing for the idea that criminal disenfranchisement is an unacceptable violation of Article 25, others argue that Article 25 does not fully prohibit the practice; a global consensus has yet to emerge, and an optional protocol to the ICCPR may be necessary to establish one.

CIRI took Article 25 and its then-current legal interpretation as the foundation for a measure intended for comparing and studying human rights practices across countries and across time. Thus, the measure had to apply Article 25 usefully and in the same way to every country in the world from 1981 forward. Given the continuing debate over criminal disenfranchisement today, not to mention in 2001, CIRI could not justifiably count this limitation on voting as a violation while still maintaining its necessary link to the consensus legal interpretation of Article 25. In other cases, when the legal interpretation of certain acts changed, CIRI updated its coding guidelines; however, the time for such a change to the electoral self-determination measure did not arrive before CIRI was archived.

Gordon and Berkovitch also criticize CIRI’s measures for their inability to capture the lived experience of those suffering from human rights abuses; likewise, they point out that CIRI captures the total amount of abuse in a country rather than its distribution. Both of these points are accurate. CIRI’s data on human rights abuse, while useful for cross-national comparisons, are likely to be far less useful for within-country analysis and completely inappropriate for understanding the lived experience of those suffering from human rights abuse. On the other hand, if we want to determine whether international legal obligations, economic sanctions, foreign economic penetration, INGO publicity efforts, or any of many other state-level or cross-national factors generally lead to better or worse human rights practices, we need data that can be compared across countries and across time. Expecting a single number to capture all important aspects of the frequency and distribution of human rights violations is unfair and unrealistic; different types of data are useful for different questions.

In addition, Gordon and Berkovitch’s portrayal of quantitative human rights scholars as uncritically using problematic data is inaccurate. For instance, in their discussion of CIRI’s inability to capture the distribution of human rights abuse, Gordon and Berkovitch miss the long-running debate over the degree to which such measures should capture the frequency of abuse, the distribution of abuse, or both. They also ignore other datasets that may capture differential respect for human rights across groups and include greater information about the identities of both the victim and the abuser. Further, a growing literature seeks to determine the implications of how we treat human rights data, including recent work on the degree to which the information we receive about human rights practices globally is a biased undercount of the actual level of abuse, the degree to which we can leverage multiple sources of information to better approximate the level of respect for human rights in countries around the world, and the degree to which changing information over time may (or may not) affect our ability to gauge global improvements in respect for human rights.

Nevertheless, I agree that we need more and better human rights data, ideally data that can be broken down in many different ways. In fact, one of the reasons that the CIRI data were archived was the belief among some of the co-directors that, given advances in methodology, technology, and information, better cross-national human rights data are now possible. Currently, I am involved in several projects aimed at collecting human rights data that will provide information about both the frequency and the distribution of human rights violations around the world. One of them, the Sub-National Analysis of Repression Project (SNARP), is an NSF-funded collaborative effort with Christopher Fariss, Reed Wood, and Thorin Wright aimed at providing more precise data on the locations of physical integrity rights abuses around the world. Several other projects with similar aims follow close behind. However, no single dataset will be able to accomplish all of the goals set out by Gordon and Berkovitch. For that, we need several different approaches working at very different levels of analysis and utilizing diverse methods. Only when this occurs, and when scholars utilizing different approaches effectively communicate with one another, will we be able to leverage our findings into a better understanding of global respect for, and local enjoyment of, human rights.