My post today wraps up my discussion of Cousins' (2004) evaluation use-misuse framework. For a summary of the framework, see my first post in this series. Today I want to talk about how this framework relates to clients of evaluation and why it is important for them to understand it.
As a client of evaluation, it is easy to think of your evaluator as a sage wizard of evaluative techniques and research methodologies (I know, we give off this aura). But you don't need an enormous amount of knowledge about evaluation or research methods to critically review an evaluation plan or report, get a sense of how the evaluation was conducted, and clearly understand the findings presented.
Cousins (2004) makes this same argument in the paper introducing the misuse framework. He suggests that evaluation clients must review anything they are given and demand a clear articulation of the methods and, perhaps more importantly, the limitations of those methods. It is reasonable to ask your evaluator to walk you through the methods they employed and explain their strengths and weaknesses. Understanding both will help you see what kinds of conclusions the evidence actually supports and help you avoid "mistaken use" (Cousins, 2004): an instance where you, as the user, unintentionally make claims about the program that are not justified.
Now, as a client of evaluation, you must also recognize your own responsibility for using the findings correctly. By understanding the evaluation methods and design, you will know what you can and cannot say about your program. However, you must not shy away from parts of an evaluation that show room for improvement or even reflect negatively on your program. Evaluations are intended not only to prove, but to improve. Don't "abuse" (Cousins, 2004) your evaluation by suppressing the findings or by pressuring your evaluator to change their conclusions to make the program look better.
As with many of our blog posts, I will offer a few tips to help you, the evaluation client, avoid misusing the information and hopefully move toward better and more appropriate use:
- DO review the methods employed in the evaluation and ask your evaluator to clearly spell out their strengths and weaknesses.
- DO NOT jump to causal conclusions (e.g., my program caused a five-point gain in reading scores). Rarely are evaluators able to employ evaluation designs that can support causal claims. More often, we can point to relationships between a program and an outcome. The word "cause" can get sticky.
- DO NOT take your evaluator's words out of context. Do not pull out only the parts of the evaluation that make your program look good. This can be misleading and untruthful.
- DO ask questions. As you review your report, make notes about what isn't clear or what stands out. Remember, you likely know your program better than anyone, and it is important to think logically about the findings. If something just doesn't make sense, ask.