There is general agreement in the field regarding the primary uses of evaluation findings: program improvement, program judgments, program decisions, funder requirements, policies, and knowledge creation. We could work our way up to what I consider the most important use, but I'd rather start with it.
We can always do better! I don't believe there's true perfection in program design and implementation, so we need to operate in a mode of continuous improvement: take the data we have, learn from it, and make changes. If you have conducted or participated in an evaluation that did not help improve your program in some way, was the evaluation really worth it?
Sometimes evaluation findings have immediate use for program improvement. For example, suppose the evaluation determines that children with highly trained mathematics tutors achieve more academically than students with untrained mathematics tutors; you can immediately make a shift to require that all tutors participate in the training. It's fantastic when that happens: immediate positive change, and the gratification of knowing the evaluation was worth it.
However, change isn't always that immediate and traceable. For example, national studies have identified specific factors that predict high school dropout, so your after-school program starts targeting students who show those risk predictors. Student academic achievement may not improve significantly in the first or second year, but several years down the road you may see higher academic achievement, better student behavior, and higher graduation rates, all traceable back to the improvement you made by using evaluation findings to target the students most in need.