Week 86: When NOT to Use Well Done Evaluation Results

Last week, I talked about instances where you may choose to use results from a poorly done evaluation, and there are some legitimate times to do that. On the flip side, there are also times when you will choose NOT to use the results from a well done evaluation.

What?!?! Are you crazy, Wendy?

Well, sometimes maybe a little, but I’m being completely honest here. We’ve previously referenced some of Brad Cousins’ work on use and misuse (clearer graphic here), and there are obviously times when not using well done evaluation results is a clear case of misuse. However, sometimes nonuse is exactly what needs to happen at that point in time. For example…

  • Sharing the evaluation findings and recommendations now will skew future results. We serve as the evaluators on many Mathematics and Science Partnership projects. As part of those evaluations, we often observe teachers in the classroom and test teachers’ content and pedagogical content knowledge in the subject area. If we shared the baseline evaluation results with the teachers on their knowledge and instructional strategies, we would bias the impact of the actual project. Later, when conducting follow-up observations and testing, we wouldn’t be able to tell if the changes in knowledge and practice were because of the professional development design or because teachers knew which areas they were weak on and worked on them outside of the project.
  • Sharing the evaluation findings and recommendations now will stress the client unnecessarily. That sounds like a weak reason, I agree, but I can think of one case where we waited a few weeks to share the evaluation results. The client was in the midst of applying for continued funding and was allowed to use any evaluation results from the previous years but not the current year. The midyear evaluation report, which we had just completed, had very positive continued results in it…results that would really strengthen their argument for continued funding. Instead of giving the report to the client and frustrating them because they would not be able to use the results in their grant application, we waited until after the grant application was submitted and then shared the results. It worked out well – they were exhausted from the application process and were excited to hear some positive news.
  • Program leadership is in the middle of major staff changes. Changing leadership can be traumatic for any program. When a new leader comes on board, s/he needs to take the time to understand the program, figure out what has been working and what hasn’t, and make the project their own. This can be a timing issue, too, for sharing evaluation results. The evaluation may have been done with complete fidelity, providing accurate and reliable findings with strategic recommendations for improvement. However, the new leader may have already decided that drastic changes are going to be made to the program, based on his/her prior experiences or future expectations. In this case, the evaluation findings may not be helpful. If they are positive, they may create more duress among staff who do not want the program to change. If they are negative, they are really moot since the program is changing anyway. The evaluation was conducted, and it does need to be shared at some point, but this one would warrant a conversation with the new leadership to determine the best timing for that.

DR. TACKETT’S USEFUL TIP: If you are not going to use the analyses, findings, and recommendations from an evaluation that was conducted with fidelity, reliability, and validity, make sure you have clearly defensible reasons for doing so.