Collective impact is a hot topic right now. Try googling it – you’ll get about 424,000 results in Google and about 12,000 in Google Scholar. There is a plethora of webinars and conferences dedicated to understanding and implementing collective impact. But while understanding how to do collective impact work is necessary when embarking on that process, it is just as critical to know how to tell whether it is working.
Developmental evaluation is often paired with collective impact models, and that methodology (when implemented appropriately) is a good fit for initiatives that operate in a continual state of emergence. Other evaluation styles can also work with collective impact initiatives, as long as the evaluators are responsive to the changing needs of the work and share meaningful feedback along the way that is actually used. Used – that is the critical word here. This entire Carpe Diem blog is dedicated to improving the use of evaluation findings in meaningful ways, and it is very easy to forget to use evaluation results when operating in the state of chaos that is collective impact work.
In my experience evaluating initiatives that use a collective impact approach, the use of evaluation findings typically falls into one of two categories: 1) react to evaluation results immediately and make changes, or 2) consider the results, set them aside, and come back to them 6-18 months later before using them. In other evaluation projects, I have seen a middle timeframe in which findings are reviewed immediately but not acted upon for a few months, allowing a little more time to thoughtfully prepare a plan for use; however, I have not seen that as the primary approach within any of the collective impact initiatives we have evaluated. I would like to propose to other collective impact initiatives that this middle ground may be the better method to employ.
The table below highlights a few pros and cons of each timeframe for evaluation use within a collective impact framework, based on my experience. Each approach has definite trade-offs, and we would love to hear about your experiences evaluating collective impact initiatives in the comments section at the end of this post.
DR. TACKETT’S USEFUL TIP: Collective impact work often requires rapid changes in program design, communications, connections between organizations, evaluation, and more. While operating in this world of fast-paced change, remember to carefully consider evaluation results and strategically plan for their use, but don’t wait too long to act on them, or you will paralyze the work.