Summer is always a tumultuous time for our evaluation team. We are collecting, cleaning, and analyzing myriad data types. We are creating user-friendly, useful reports based on those data. We are providing training on using the reports for program improvement. We are seeking new contracts for the upcoming school year and calendar year. It's a hectic time of year. Throw into that mix some atypical occurrences (vacations, internships, new babies, remote work, conferences) and the evaluation environment becomes even more dynamic. Reflecting on how we adapt to all of that while continuing to provide high-quality, meaningful data and reports to our clients got me thinking about how flexible we often have to be when actually implementing evaluations with clients.
A key staff member got a promotion…
A partner organization lost their funding…
Two team members are on maternity leave…
150 new children signed up to participate midway through the program…
The pre-/post-tests were lost…
A snowstorm cancelled four program sessions in a row…
We live in the real world, where good and bad things happen that dramatically affect how a program is implemented and evaluated. While we can't plan for every contingency when designing an evaluation, we need to be responsive to those changes, making adjustments where needed and holding steadfast when appropriate.
Here are a few important questions we can ask ourselves to help decide whether to modify the evaluation plan or stay true to its original design and account for the change in the data analyses portion of our work:
Does the change affect the core program model, or only how and when it is delivered?
Can we still answer the key evaluation questions with the data we will be able to collect?
Can the change be accounted for during the data analyses rather than by redesigning the evaluation?
What are some other questions you ask yourself as you decide if you’re going to change your evaluation design? Please comment below and share!
DR. TACKETT’S USEFUL TIP: Be thoughtful and intentional if you are going to change your evaluation design; oftentimes, programmatic changes can be accounted for during the data analyses.