This post wraps up my exploration of the paper by Henry and Mark (2003) which discusses the outcomes of evaluation at three different levels. In my last two posts I explored the outcomes of evaluation at the individual level and the interpersonal level. Today I want to discuss the idea of evaluation outcomes, beyond use, at the collective level.
The collective, in the context I work in, most often refers to a program or institution, since these are the entities I typically do evaluation work with. For example, the collective might be a particular non-profit, or a department within an agency that is responsible for implementing a particular program. Collective evaluation outcomes are concerned with how groups or institutions learn from evaluations and apply the knowledge or information an evaluation generates to their work.
Henry and Mark discuss four different types of these collective level outcomes, including:
- Agenda setting – evaluation information leads an issue to become a part of a larger agenda.
- Policy-oriented learning – evaluation information leads to a shift in the way an organization or collective body views a particular policy or program. This stands in contrast to changes in individual attitudes.
- Policy change – evaluation information actually leads to a direct change in the way a program operates.
- Diffusion – evaluation information leads others, outside of the original context, to adopt a program because of reported impacts or effects.
In my previous posts about this paper I discussed the importance of building research into our practice in order to develop a base of evidence for these types of outcomes. This remains critically important, and everything I said about the previous two levels of outcomes applies here as well. However, I want to explore another aspect of outcomes in this post: how evaluation is used to make programmatic changes, and how those changes are applied.
I believe that each of the four outcomes shown above is plausible and important. However, I am most interested in points 2 and 3, which are related. The most obvious connection between the two would be that policy-oriented learning leads to policy change; however, I believe that sometimes the change may precede the learning. This may not make intuitive sense, and in fact I would argue that this order of events may not be the best approach to applying evaluation findings. But evaluations, and the application of the information they present, are often the domain of managers or organizational leadership. In this scenario, policy changes based on evaluative conclusions might be made by organizational leadership, but run into problems because the learning has not been effectively diffused throughout the organization.
The scenario presented above reminds me of a current project we are working on. Much learning has taken place over the lifespan of this project, and we believe the information produced by the evaluation has contributed to it in some way. However, the learning that has taken place, which has sometimes led to drastic changes in direction, has not been properly communicated. As a result, others involved in the work have been left to muddle through the reasoning of those leading the effort. This has often led to frustration and discontent, and eventually to diminished credibility of the leadership in the eyes of other key stakeholders.
I highlight this example, and the communication process it describes, because it relates to the use of evaluation results. As a consumer of evaluation, it is important to consider how decisions based on evaluation findings, and the findings themselves, are communicated to other program stakeholders or staff. As evaluators, we must consider how the information we produce is used and disseminated. We want our work to be used, the findings applied, and organizational learning to occur. But we also don't want to heighten the anxiety among program staff that sometimes accompanies an evaluation. Managing the way evaluations are used to produce outcomes could go a long way in preventing misuse or even abuse. It may also help us spread the gospel of evaluation by showing how it can help passionate people do their work better.
COREY’S USEFUL TIP: Consider a strategy for disseminating or communicating evaluation findings to program staff and other stakeholders, particularly when they will be affected by decisions based on that information. This will enhance policy-oriented learning by building new processes or approaches into the collective memory and knowledge, rather than leaving those justifications to exist only at the management level.