Can rights organizations use low-burden self-reflection for evaluation?

Recently in openGlobalRights, Emma Naughton and Kevin Kelpin wrote that human rights work does not easily lend itself to quantifiable, results-oriented evaluations. Their comments echo those of another expert who examined evaluation at Amnesty International and other organizations in 2014 and likewise found that linear evaluation models are ill-suited to this kind of work.

Complexity at all levels is perhaps the greatest challenge. Human rights issues are multi-faceted, with numerous stakeholders, causes, potential solutions and outcomes. Human rights groups work through many partners and coalitions, on behalf of many different kinds of victims, to influence the behavior of policymakers, perpetrators and other actors. How we conduct our work is equally intricate, with many types and methods of research, communications products designed for different audiences, and multiple advocacy strategies and targets.

Organizational culture, resources and prioritization concerns, along with the case-specific nature of much of our work, are among the other challenges to implementing standardized impact evaluation strategies at HRW.


Although there is real consensus throughout HRW that evaluation is important, and that we must do a better job with it, we have yet to hit upon feasible and effective solutions. Our geographic and thematic Programs and Divisions have a high degree of autonomy and do not share a common view, language or methodology for evaluation. We are still working towards finding evaluative processes that work more broadly. Given this, here are some important themes that guide our current thinking:

“Pathway”

Naughton and Kelpin described a “pathway to change”, which mirrors how we think about impact. The path may begin simply with gaining attention for an issue and getting it on the agenda of actors, and leads, along a series of steps, towards an ultimate goal—changes in the human rights conditions facing people on the ground. Objectives along the path might include setting conditions for international aid, instigating changes in legislation or policy, helping local rights defenders—the list goes on.

Then there are the activities and successes that move us a step further along the path, such as scheduling advocacy meetings, getting op-eds published, attending hearings, securing official statements, and so on. The work that goes into research, communication and advocacy is indeed an impact in and of itself, and deserves to be documented. It can be a relief to accept this—what is important isn’t just whether we have stopped extra-judicial killings in X country, but whether we have been effective in moving towards that ultimate goal.

Quantification or proving causation

The term “monitoring and evaluation” is deeply tied in people’s minds to the concepts of results-based management, randomized controlled trials, or expensive external consultants. Perhaps surprisingly, it can be difficult for people to conceive of a light, nimble and qualitative form of evaluation.

And yet, qualitative documentation is precisely the type of evaluation that is most suitable for much of our work. Simply changing the tone and paradigm of what “evaluation” is has been important for me and my colleagues. We do not need to prove causation when evaluating impact. We do not work in a bubble, and we do not need to generate empirical evidence that x activity resulted in y output. We simply need to document what we know: our own activities, what was happening outside of our activities, and how we moved further along the path towards the goals we were trying to achieve.

Evaluation is about learning


There is a very real perception that monitoring and evaluation of a research project might be used as an evaluation of staff, and this has a chilling effect, especially when people fear resources are at stake. Getting staff buy-in is essential, and there are no easy solutions. We are currently framing our evaluation discussions to be more about “learning” and less about “impact”. Every program staff member at Human Rights Watch wants to be more effective in his or her work, so this is not a hard sell. Useful evaluation is not simply filling in boxes next to the impact objectives in a logical framework. Learning comes from taking the time to reflect on how work was done, what actions were successful and why, and whether these steps could, and should, be replicated.

These reflections are already occurring daily at HRW. Our colleagues are consistently looking for better ways to achieve increasingly ambitious goals. We have insight into what we did, how it worked, and what we might have done differently. Our challenge is to get the most crucial pieces of information out of our heads and conversations and into a format that allows us to build institutional knowledge.

Simplify

Complex and burdensome processes simply will not succeed for our organization. We are dispensing with the idea that evaluation necessitates some specific level of rigorous documentation. It doesn’t have to be perfect; capturing some reflection is better than none. For the time being, we are trying to develop a method of learning and evaluation that is simple and imposes a low resource burden, especially in terms of staff time. The idea is to document, in a light and agile way, the most important achievements and lessons. How do we turn short conversations into knowledge that is generalizable, accessible and useful? How do we do this consistently and in ways that do not distract from our substantive work?

Rather than institute a mandatory process dictated by management, we are exploring ideas with enthusiastic staff. We seek to solidify a common language of impact and the “pathway to change”. We are attempting to develop tools to document reflective conversations in ways that will allow us to share relevant lessons with others. By simplifying the process of how we evaluate, we may be able to learn much more about how to succeed in our work.