Michael Scriven once said, “bad is bad, good is good, and it is the job of evaluators to determine which is which.” Now, I’m not one to challenge Michael Scriven, and this wouldn’t necessarily be the place to do it, but the quote illustrates a way of thinking about value claims. It is the construction of these value claims that I want to focus on in this post.
The root word of evaluation is value. To me, the fundamental difference between evaluation and research is the value part of evaluation, in that we are obligated to make value statements about the programs we are tasked with evaluating. What I mean by “make value statements” is that part of our job is to eventually, after going through some systematic process, say whether something is good or bad. Involving key stakeholders in the development of these value statements can enhance the utility of evaluation for those stakeholders.
To illustrate this point, let me briefly describe an example. For a current project I am working on, our team recently spent almost a full day with a client reviewing initial findings. We presented our findings by the criteria we had used in the evaluation. After we presented the data for each criterion, we asked each of the project stakeholders in the room to indicate whether they felt the finding was positive or negative. They did this confidentially, so as not to influence one another. That evening we analyzed their responses and presented them back to the group the next morning. It sparked rich discussion about whether or not the program was working. The purpose was to enhance the utility of the findings by comparing our interpretation of the findings with the stakeholders’ interpretations.
I was also inspired to write this post by a recent conversation with a fellow student about the general lack of valuing in our profession. We got on the topic of how these value statements are constructed; more specifically, there doesn’t seem to be much discussion in evaluation of how to systematically align the values of project stakeholders with the value statements that we as evaluators construct. Involving project stakeholders in this process has the potential to enhance use, since the conclusions become relevant and meaningful to the individuals who are most likely to use the evaluation.
Now, I can’t necessarily proclaim in this post the best way to facilitate the interpretation of findings in order to promote use. However, I do believe that by engaging with stakeholders around the interpretation of findings and the construction of value statements about programs, we can enhance the utility of evaluations. The value claims become joint conclusions about a program, weighing the empirical evidence produced by the evaluation against the contextual knowledge held by the program staff.
At this point, I need to address the major criticism of doing evaluation in this way. Obviously, external evaluators are hired to provide a level of objectivity not attainable through the sole use of internal evaluators. By engaging with stakeholders, we [evaluators] risk losing some of that objectivity and independence. Despite this, I believe that we should be able to balance this possible encroachment of bias through critical thinking and self-awareness.
I think there is merit to a process which engages program stakeholders in the interpretation of findings to reach a conclusion of good or bad. That sounds simplistic, but engaging stakeholders in this process increases their level of investment in the evaluation and makes the findings more relevant. The conclusion no longer comes from “some outsider” who doesn’t know the program or the context. Instead, it is a collaborative effort between the evaluator and program stakeholders in coming to meaningful conclusions about how well the program is working.