A couple of weeks ago I introduced a framework developed by Gary Henry and Mel Mark (2003), two well-known evaluation scholars. The framework lays out three levels of evaluation influence: the individual level, the interpersonal level, and the organizational level.
In this post I want to explore evaluation outcomes and influences at the individual level. The framework breaks individual-level mechanisms, or what I will call outcomes, into three distinct groups: knowledge acquisition or attitude change, behavior change, and the acquisition of new skills.
Attitude change occurs when an evaluation provides new knowledge or information about a program that changes how stakeholders perceive it. Behavior change occurs when individuals change the way they implement the program or, more generally, do their work. Skill acquisition occurs when individuals involved in the evaluation learn something new over the course of the process, such as how to create better surveys or facilitate group conversations.
To me, a discussion of these outcomes brings to mind the purposes of doing evaluation, beyond the classical “formative” and “summative.” For example, is our evaluation supposed to be participatory, with the intent of building the evaluative skills of the program staff we are working with? Or do we intend to generate knowledge and new information in order to affect the attitudes and beliefs stakeholders hold about a particular program? Can an evaluation affect all of these different elements at once? Questions lead to more questions lead to more questions. What to do about it, I am not really sure.
What I do think is that each of these outcomes is important when we consider them in the context of utility. Reading through these concepts and the possible outcomes of the work that evaluators do makes me think about how I might study my own evaluation work and the work of other evaluators. This could be for purposes of self-improvement, but also to produce empirically based research on evaluation by developing measures and data collection methods for these different outcomes. It also makes me wonder whether these types of outcomes are important to consider when developing an evaluation plan, so that we may clearly define our purposes and build our approach to support them.
One word of warning before I sign off: the outcomes or mechanisms described here and in the paper cited are only a few, drawn from the literature, from a pool that might be far larger. Consider these, but also consider what you see in your own practice, and share those observations. Share them here, at conferences, or in papers. We must begin to build this base of evidence within our profession, because without some idea of what evaluation actually contributes to this world, we are basing the things we do on hopes and anecdotal evidence.
COREY’S USEFUL TIP: When developing an evaluation plan or brainstorming ways to tackle a particular project, consider what outcomes the evaluation is intended to produce. What are the purposes of the evaluation, and how might they shape the choices made along the way? By considering these questions on the front end, and collecting data around them on the back end, we might begin to build a collection of evidence for the outcomes evaluation can produce.
Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314.