As a client, have you ever gotten an evaluation report that…after reading it…you seriously question the implementation of the evaluation? Maybe…
- the sample size was too small to support generalizations, yet the report made them anyway
- data were gathered and analyzed too soon after the program intervention, before the program's actual impact could be realized
- the conclusions and recommendations had nothing to do with the data analyzed and presented in the report
- the wrong key stakeholders were interviewed, and the findings were presented as if they were representative of the entire project
You get the idea. I'm not talking about evaluation results that are poor (in any good evaluation, both positive and negative findings are reported and used to improve programs and inform key decisions). I'm talking about a poorly done evaluation. We've all seen them. At iEval, we have been hired to work on numerous projects after another evaluation team was fired, typically because of a poorly conducted evaluation.
So, evaluators and clients alike, we've seen these types of evaluations, and we typically disregard them – appropriately so. But are there times when we should actually use these evaluation results? I can think of a couple. Can you?
- You want to show your client the impact of inadequate data collection methods. Maybe the client is giving you partial data, so you can't analyze it with any validity. Or perhaps the client's staff members don't understand the value of the data they are entering into a system (e.g., student attendance, participation in training). By sharing the analyses of the poorly collected data and discussing those results with the client (which will probably produce a lot of frustration and claims that the evaluation is wrong), you can help the client realize for themselves the importance of accurate data collection and what it means for a true evaluation of program impact.
- You want to demonstrate a starting point from which the program can build. Maybe there was no opportunity to collect additional data, run analyses over a more appropriate timeframe, or conduct comprehensive interviews with all key stakeholders. Maybe you were hired at the last possible moment to conduct the evaluation, and what you had was all you could possibly have by the time the report was due. You may go ahead and put together what is, ultimately, a poorly implemented evaluation and report. In this instance, you need to clearly state the evaluation's limitations, emphasize that it is merely a starting point, and focus the findings and recommendations on the next steps necessary for a valid, reliable, comprehensive evaluation during the next fiscal period.
DR. TACKETT’S USEFUL TIP: If you are going to use the analyses, findings, and recommendations from a poorly done evaluation, be crystal clear in explaining the prescribed use of that evaluation report.