Week 36: Compromising the evaluation process

We all know the typical evaluation process. While the steps and buzzwords vary by approach, it’s basically: identify key stakeholders, develop evaluation questions, identify indicators and measures, collect data, analyze data, synthesize results to answer the evaluation questions, and create a report. In a perfect world, the evaluation questions and process jointly created with key stakeholders at the beginning of the evaluation are the plan you adhere to for the entire evaluation – it’s your map, and you don’t deviate from it. By implementing that plan, you can validly and reliably answer the evaluation questions, thereby helping key stakeholders improve programs and make critical decisions. In a perfect world, that is. But we all work in the real world, which can be pretty messy. What do you do when it doesn’t work that way?

Some projects and organizations are in such a state of flux that the evaluation plan has to be continually adapted to a changing environment, context, program design, funder requirements, organizational shifts, and so on. While this does throw off the original evaluation design, it’s something many of us expect to happen and are prepared to respond to. We modify the timeline, adapt instruments, involve new stakeholders, write more specific evaluation questions, and change analysis procedures. Because we are operating in the real world rather than implementing scientifically based research protocols, we can respond to changes while still staying true to the purpose and quality of the evaluation work.

That was the easy one. But what do we do when clients have “emergency evaluation components” that don’t fit into the overall evaluation purpose or plan? They want a quick survey because they feel they need one, even if it doesn’t have a clear purpose. They want to pre-/post-test after every treatment because they’re worried participants will drop out and they’ll lose data. They want interviews conducted immediately to get feedback on satisfaction with program implementation. Here are some questions you can ask yourself to help decide what to do:

  1. What is the purpose of this “emergency evaluation component”? How will its findings be used in a strategic way to improve the program, inform policies, or make decisions? If there is no clear answer to these questions, then it should not be done.
  2. Is this “emergency evaluation component” an excessive burden on participants? Even if the results would be meaningful, it’s still important to understand the burden on respondents (pay particular attention to the Belmont Report). Collecting excessive data from respondents can have multiple negative effects, including biased responses, respondents not taking the instrument seriously, and respondents tiring of being over-surveyed. If it’s an excessive burden, then it should not be done.
  3. Can the “emergency evaluation component” be developed quickly in a way that still adheres to the Program Evaluation Standards, particularly the accuracy standards, and is valid, reliable, sound in design, and explicit in its evaluation reasoning? If you create an instrument that doesn’t measure what it was intended to measure, asks questions that are never used, or is poorly constructed, then you forfeit the utility of the results and risk losing the future engagement of key stakeholders. If the answer to this question is no, then it should not be done.

DR. TACKETT’S USEFUL TIP: While we want to be responsive to client needs, always strive to deliver high-quality evaluation services!