Happy New Year to everyone reading this!! For my first blog post of this year, I want to focus on the purposes for evaluation that the International Labour Organization uses in its policy material. These are actually common to many international development agencies and multilateral organizations that have some kind of evaluation function built into their structure. The United Nations Evaluation Group (UNEG) states in its mission that it aims to “advocate for the importance of evaluation for learning, decision-making, and accountability” (UNEG, 2014, p. 4). This implies that all evaluation offices within the UN system have a mandate to pursue evaluation work with these purposes in mind.
These purposes are good ones and mirror the basic evaluation purposes articulated by Scriven all those years ago when he developed the concepts of formative evaluation (for program improvement) and summative evaluation (for making go/no-go decisions about programs). They also take into account Weiss’s (1998) idea of the enlightenment use of evaluation, whereby evaluations feed into a larger knowledge base that decision makers may draw on in future contexts, even if they don’t use the evaluation results immediately.
But what I want to discuss here is a perception that often permeates institutions: that evaluations for learning and improvement are mutually exclusive with evaluations for accountability. See, evaluation for learning and improvement conjures the reassuring image of evaluation we like to invoke when talking with anyone who has evaluation anxiety. It looks like evaluators working with program managers and other stakeholders to learn more about what they are doing, helping them improve their work and ultimately serve their stakeholders better. We certainly value this concept at iEval, as we strive to make evaluation as useful as possible to our clients, which often means helping them use evaluation to improve their work.
But evaluations are also often conducted for some accountability purpose, and accountability is also important. Ensuring that public funding is being used as it was intended, while also achieving results, is part of being a responsible steward of these dollars. These seemingly divergent purposes for evaluation don’t have to be mutually exclusive. However, to make this point more salient, we may need to re-conceptualize traditional definitions of accountability.
Accountability has strong connotations that make people think of auditing, oversight, and some large entity casting a watchful but distant eye over their work. But accountability isn’t necessarily a bad thing, because don’t we all want to fulfill our obligations? What might begin to bring these different evaluation purposes closer together is a culture of accountability which:
- Fosters a commitment to learning from mistakes, instead of generating fear of admitting them.
- Promotes the identification of issues through data-driven monitoring systems, and rewards corrective action instead of reprimanding organizations for those very same issues.
- Encourages the explanation of these issues in an honest and transparent way, providing guidance for future program managers on how they might avoid similar mistakes.
One of the most important threats posed by the current perception of accountability is that it incentivizes being less truthful about program realities. People running programs may fear that if they don’t get everything right, their jobs may be at stake or a cause they believe in deeply might be jeopardized. This may make them less forthcoming about the outputs or results they are actually achieving. Perhaps if the definition of accountability were aligned more closely with learning and improvement, we could foster an environment where programs feel comfortable identifying issues as they arise and taking corrective action.
COREY’S USEFUL TIP: As evaluators, we often occupy a unique position between program staff and funders. If accountability is a stated purpose of an evaluation you are part of, ask what that looks like to the client. What kinds of accountability-related questions do they want to answer? Will those questions incentivize program staff to be truthful about what has taken place during the program’s lifecycle?