Week 35: Evaluation Use and its Outcomes

I recently gave a guest lecture to a group of undergraduate students at Kalamazoo College, introducing them to the concept and field of evaluation. One of the first things I talked about was why I chose to go into this field. I was reminded that what I wanted to get out of being an evaluator was to help improve the programs and policies that are effecting social change and making my community, this country, and the world a better place.

Now, this means I operate under an assumption that when evaluation findings are used, they have some subsequent outcome or impact (just as a program would) on the evaluand. This is a logical connection to make, in my mind. However, it is not something that we as a profession know much about. We create our evaluation reports, facilitate data interpretation workshops, and give presentations with our eyes on use, but we don't yet know much about what the outcomes of that use are.

This is a concept I have been mulling over lately and having conversations about with some of my colleagues. I don't have answers, but in the next couple of posts I am going to explore this idea by unpacking some of the concepts that Gary Henry and Mel Mark discuss in their 2003 paper, "Beyond Use: Understanding Evaluation's Influence on Attitudes and Actions." The first thing I want to share with you is a framework they present in that paper, which outlines three levels of evaluation impact, or what they call "evaluation influence."

  1. Change within individuals: Changes in individuals' opinions or attitudes about something.
  2. Change in the interaction between two or more individuals: Changes in how individuals talk about the program as a result of evaluation. For example, the use of evaluation information to make a case for a program to a funder.
  3. Change in the practices or decisions of programs: Changes that occur within programs or policies regarding their implementation.

What this framework provides us with is a way to think about the different levels of evaluation impact and, for those of you who are ambitious enough, a way to start cataloging our own examples of impact. I think it would be extremely interesting to collect this kind of data, which could be used to share experiences in a more systematic way.


COREY’S USEFUL TIP: When an evaluation has been completed, it may be interesting and meaningful to you as a professional to reconnect with the client after a period of time has elapsed to investigate what types of impacts your evaluation may have had on the program. Building a record of this may not only help you adjust your professional practice, but also build a base of evidence about your past work that can be presented to potential clients.

Also, if any of you reading this have an example of a time when an evaluation you did made a notable impact, please share it in the comments section.

Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314.
Scriven, M. (1991). Evaluation thesaurus. Sage.