Week 75: Collaborative Evaluations (not Collaborative Evaluation)

Recently I participated in a half-day training on collaborative evaluations given by Brad Cousins. In a previous post, I wrote about Dr. Cousins, who published a paper about the use and misuse of evaluation. He may be best known in evaluation for his development of what is called Participatory Evaluation. This workshop, though, was intended to introduce us to something he and his colleagues in Ottawa are currently working on: an effort to develop a set of principles for collaborative evaluations.

Now, there is an important distinction I want to make here. The work that Dr. Cousins presented was related to collaborative evaluation approaches, not Collaborative Evaluation. The latter is a specific evaluation approach (see Rodriguez-Campos, 2012) based upon ongoing engagement with stakeholders by the evaluator(s). The former is what Cousins and his colleagues describe as a family of approaches that have collaborative inquiry at their heart (Cousins, Whitmore & Shulha, 2013). This includes approaches premised upon the development of a collaborative relationship between evaluator and stakeholders. They differ in terms of who is engaged, how deep that engagement goes, and the level of decision making that stakeholders receive (see Figure 1). They also have different purposes (e.g., empowerment evaluation is intended to actually empower stakeholders through the evaluation process).

Figure 1. Dimensions of form in collaborative inquiry.

Cousins, Whitmore & Shulha, American Journal of Evaluation 2013;34:7-22

So, they have this basic concept of collaborative inquiry as it relates to evaluation, and I find the argument for talking about these different approaches as part of a family makes good sense. Not only do they make the case for this family to exist, but they have developed a set of principles to guide evaluators who are implementing an approach that is a member of this collaborative family. I am not going to share those here because a paper written by Dr. Cousins and his colleagues will be published in the American Journal of Evaluation soon and I don’t want to let the cat out of the bag.

But these types of evaluation are very use-friendly. Their emphasis on engaging stakeholders in various evaluation processes means stakeholders end up feeling like they own the results, making it more likely that they will use the findings to make program improvements or programmatic decisions. These types of evaluation also build evaluative capacity within the groups that are deemed “collaborators.” They offer quite a bit, and they are worth looking into if you haven’t already.

I find this type of research on evaluation fascinating, and I wanted to share a little bit of what I learned. I think that trying to refine and condense what I find to be a bloated universe of evaluation approaches is a laudable effort, and I wanted to convey some information about evaluation approaches that can be very good for fostering use. Maybe this makes you more inclined to do some reading up on these types of evaluation, maybe not. But I hope I have at least introduced you to something interesting in the world of evaluation.

COREY’S USEFUL TIP: Dr. Cousins and his colleagues are looking for evaluators who are interested in utilizing a collaborative approach to evaluation to pilot the principles they have developed, to get a better sense of how they translate into practice. A public call will be going out soon. They will offer support with designing an evaluation with these principles in mind to anyone who chooses to participate. If you are interested, let me know.

 

Cousins, J. B., & Chouinard, J. (2012). Participatory evaluation up close: A review and integration of research-based knowledge. Charlotte, NC: Information Age Press.

Cousins, J. B., Whitmore, E., & Shulha, L. (2013). Arguments for a common set of principles for collaborative inquiry in evaluation. American Journal of Evaluation, 34(1), 7-22.

Rodriguez-Campos, L. (2012). Advances in collaborative evaluation. Evaluation and Program Planning, 35, 523–528.

 

Week 74: Being Evaluated by a Client

How does a client evaluate the evaluator? Giving referrals or being hired again are certainly signs the client gave the evaluator a positive evaluation. But what are other ways?

I worked with a client that had a standing evaluation procedure for all people or organizations they contracted with. They had developed a one-page questionnaire with items about different attributes such as timeliness, professionalism, and positively representing the organization. For each item there were two options, “Meets” or “Does not meet,” and a section for comments.

We were asked to fill out this questionnaire each quarter as a self-reflection tool. Their core team would fill out the same document, and then we would hold a meeting to discuss both sets of responses.

Outcome: Prepping for the initial meeting took quite a bit of time. We had all of the evaluation team members on the project fill out the questionnaire and provide evidence for each item. We also used the original contract to support what we were doing.

The actual meeting took about an hour. Five people from the client’s team were on the phone, along with five evaluation team members. We discussed each item on the questionnaire and were graded by the client as “Meets” or “Does not meet.”

Tips if you are asked to do this…

  1. Make sure the tool is appropriate. One of the questions asked if the contractor (in this case, the evaluation team) was qualified to do the work. After the first quarterly review, it is not necessary to ask this question every three months, unless the team changes. Review the instrument and make sure it is appropriate.
  2. Conduct the review with your contract close by. It can be easy for a client to say, “You should be doing X, Y, and Z.” But if X, Y, and Z aren’t part of the contract, this shouldn’t negatively affect your review.
  3. Is this process giving you or the client information you really need? In my experience, we had 10 people on the phone at one time, and coordinating that many schedules was a difficult feat. Besides talking about the contractor, it would have been helpful to have time during this meeting to discuss the project and what needs to be done.
  4. Have the time available. Completing the questionnaire and planning and conducting the meeting took about two hours per evaluation team member. With five people on the team and quarterly meetings, that adds up to about 40 hours a year (2 hours × 5 people × 4 quarters), a full work week every year not spent on the project itself. Are there time and funds available for this type of reflection?
  5. Be clear with the client. How will this be used? Will this be shared? Who will see it? 

So what do you think? Would you find value in this type of exercise as an evaluator? As a client?

DR. EVERETT’S USEFUL TIP: Reflecting on practice is an important part of being an evaluator. Do you reflect on your practice with your client? What does it look like?

 

Week 73: Evaluation Volunteer

Last week Wendy brought up grappling with the “big” questions of career and purpose. Is evaluation the “right” fit? For me, it hasn’t been as straightforward a path or decision. However, where the rubber meets the road for me as an evaluator is in my capacity as an active community volunteer with an evaluation skill set.

In the past year, I have had an “ah-ha” moment. Rather than sticking to hands-on volunteer activities within my passion areas—mentoring at-risk youth and low-income housing—I’ve been able to help with broader and more evaluation-focused tasks, such as policy analysis, survey administration, and strategic planning.

Specifically, this month, I had the opportunity to help address some dire enrollment concerns for my daughters’ public school district. Enrollment is dropping drastically, even attracting national attention—not the kind a superintendent wants. A district with large budget concerns does not have the funds to hire a strategic researcher to collect data on why families are leaving and what it would take to get them back. I have been able to develop, administer, and analyze surveys for families that have “choiced” out. In the process, I have also been able to advise the district on some of their data collection and tracking processes.

While I don’t know if my pro-bono work will save the district, it feels good to know that I can do my part as a parent volunteer. I can give back with skills I have and am comfortable with and passionate about. Finding my “niche” as a volunteer has provided meaning in both my volunteer commitments and my professional life.

KELLEY’S USEFUL TIP: Whatever your career or passion, there is great fulfillment in finding a way to utilize your professional skills in a way that gives back to a cause that is important to you. 

Week 72: Rededicating My Work

I think we all struggle at some point in our lives with questions like…

Am I truly happy in my job?

Does what I'm doing really matter?

Should I change careers?

It’s usually something monumental that causes us to examine these questions a little more closely…a shift in priorities at work, a major family event, complete exhaustion, an amazing vacation, etc. For me, it was losing my dad, David Switzer, to cancer just over a month ago. Dad was such an inspiration for me growing up – teaching me long division years before most of my friends really understood addition and subtraction, sharing stories from work with me so I felt like a grown up, encouraging me in whatever I was doing (he so wanted me to play Flight of the Bumblebee on the marimba in a concert, but that never happened), and just being there. Mom continues to play a huge role in my life, but losing Dad has definitely left a hole.

Such a major event has led me to some internal contemplation. Should I be spending my time figuring out how to cure cancer instead of working as an evaluator in education and health? Does what I’m doing even make a difference in the world? Am I just going through the motions of earning a paycheck? So, I have spent some time reflecting on the questions above.

I know Dad was proud of me, but am I truly happy as an evaluator? Yes, I am! I really love my work. I love building new relationships with clients and their partners. I love teaching as I work, sharing what evaluation is and how it can be meaningful in all types of work. I love playing detective with the data to try to figure out patterns, causes, and outcomes. I love sharing reports with clients and seeing that A-HA moment when the understanding happens.

Okay, so I love my work, but am I doing something that really matters? There is nothing better in my work than when clients use the data and findings to make improvements in their programs that, ultimately, benefit their clients (e.g., children, parents, teachers, organizations). No, I’m not curing cancer, but one of the programs I’m working with may have a positive impact on a budding young mind of a child who someday will be the one to do something amazing that we have not even conceived of yet! And, that program may have a more positive impact on that child because of the data analyses we are doing, allowing the program staff to improve and tailor the program accordingly. I’m in it for the long haul. I may not see those benefits, but I know the future possibility of them is real.

With the answers to the first two questions being yes, I can definitively say no to changing careers! I love what I do, and I see the potential for having a positive impact on the world. What I can do, though, is continue to ensure that the evaluation processes, findings, and reports my team and I produce are grounded in validity and reliability, adhere to the highest moral standards, exceed all ethical practices, and are shared in a user-friendly way that is non-threatening and facilitates use for improvement and decision-making. Personally, I’m taking this opportunity of contemplation to rededicate myself to those high standards and continue to make my parents proud!

DR. TACKETT’S USEFUL TIP: It is important to take time to reflect on our own practices as evaluators, but also to think about the reasons why we are working as evaluators in the first place. In order to provide user-friendly, meaningful evaluation results, you must be mentally, emotionally, and physically committed to the work you are doing.

Week 71: Two Basic Principles for Reporting for USE

This past week I was asked to review an evaluation report which was in its final stage and needed one more going over. Here is how that day went:

  • 10:00am - Colleague asked me to review and provide comments by 3:00pm
  • 10:10am - Begin reading
  • 12:10pm - Nap due to boredom as a result of reading report
  • 2:10pm - Eventually arrive at conclusions and decide that I couldn’t take it anymore

Now, I don’t mean to sound facetious, but perhaps you have experienced something similar, either as a client of evaluation or as an evaluator. Too often evaluation reports are looooong, technical, poorly structured, and ineffective in presenting the data that have been collected. After reading this one, I gave the authors two simple suggestions, which I will share with you here:

  1. Present key findings as subheaders, followed by the evidence that supports each one. This report presented 25 pages (single spaced) of “findings,” organized by the criteria introduced in the introduction, but with no clear findings stated. It was left to the reader to extract said findings, which pretty much guarantees some inconsistency in interpretation across readers. The evaluator is tasked with collecting a body of evidence and extracting from it a series of relevant findings. It is not good practice to present a body of evidence, leave out any reference to findings, and expect the reader to arrive at those findings on their own. By clearly presenting each finding as a subheading and then discussing it with all of the relevant data available, you give the reader a clear picture of what was found and how you went about finding it.
  2. Use data to support claims. Seems obvious, I know. But, as evaluators, we often develop a deep understanding of a program or project that we are evaluating. We begin to get hunches and gut feelings about a program. These should NOT come out in our reports without an evidence base. Sure, they can be helpful for knowing where to look, or even what to look for, but they must not be used in a way that resembles an opinion piece in the Sunday paper. In this particular case, I noticed uneven use of evidence and too many claims with no apparent evidence base. Use the data that you have likely spent a lot of time collecting, analyzing, and trying to make sense of. Share the knowledge you have gained transparently, making it clear where each claim comes from and how you arrived at it.

These two things would have made this report far easier to read and far easier to follow. Instead, I struggled through the findings section and emerged weary and unclear about what the “story” was. What was I supposed to get out of this report, as an interested party? I think I knew, but should we operate on the basis that we let readers get out of a report what they will? I don’t think so. Instead, I would argue it is imperative that we make clear to the reader exactly what we found during the course of the evaluation and present to them why we believe it to be true.

It was clear to me that the authors of this report had done a substantial amount of data collection and work and had a deep understanding of the subject. This is how we often arrive at the report-writing stage of an evaluation. We have a story, in our minds, about the program or project we are evaluating, and we need to put this story down on paper to tell it clearly to our readers. Make it clear, make it digestible, and make sure the story comes through smoothly and not in unclear chunks. These tips are nothing fancy; they are just key principles I believe we must all keep in mind as we produce evaluations for people looking to learn from them.

COREY’S USEFUL TIP: When writing reports, make it clear to the reader what it is they are reading by chunking out findings, lessons learned, conclusions, etc. Support each of these with data. Put yourself in the shoes of the reader who won’t have the deep, evidence-driven understanding of the program that you have as the evaluator. 

Week 70: Evaluation Lessons from a New Baby

I took the summer off from blogging and evaluation work as our family welcomed our third child in June. I’m back at both, and reflecting on life with a new baby. I’m finding lessons learned from a life with a baby that can be translated into my evaluation work.

1.     Sometimes life doesn’t follow the plan you set.

My daughter was born 12 days past her due date. My two other children were born before their due dates. Following previous experience, baby #3 should have also been early. However, she took her time. As I was impatiently waiting for her arrival, people kept reminding me babies come when they are ready. My plan was not her plan.

In evaluation, it’s important to create an evaluation plan. The evaluation plan guides the evaluation, ensuring you are collecting the data you need at the correct times. But what happens when things don’t follow the plan? What happens if the program changes, if there is a snow day, staff quit, schools shut down, or any of the other things that happen during a program? Evaluation plans may have to change. The evaluator needs to be in tune with the program and make the changes needed to complete the evaluation.

2.     Use the proper tool for the job.

Babies require a lot of stuff, including diapers and wipes. Since I have two older kids, I have everything pretty much streamlined, including my favorite diapers and wipes. However, we recently had a Costco open near us, and people had been telling me how amazing Costco wipes are. I went to Costco and bought what I thought were the amazing Costco baby wipes. I brought them home and didn’t like them at all. They had a sickeningly sweet smell and were too small to do the job. As I looked closer at the package, I realized I had bought flushable wipes, not baby wipes. Although they were similar, they weren’t the right tool for the job.

This can happen in evaluation. Maybe you have a survey you really like and always use. What do you do when a new tool comes on the scene? Or maybe your client is pressuring you to do something different from what you know works. You need to review new tools carefully. Has there been reliability and validity testing done? Will the tool answer the questions you need answered? Maybe you just need to update the surveys you have. Make sure you use the proper tools for the job.

3.     Life goes on.

I have a three-year-old, a two-year-old, and a new baby. My days are filled from morning to night with changing diapers, refilling sippy cups, cleaning up spills, and figuring out what to feed people who require at least three meals a day. It gets stressful and tiring. Days run into each other. But I am always surprised at how quickly the week goes by and I find myself at another weekend. No matter what happens, we get through it.

Like life with three small kids, evaluation projects move forward, no matter what. I have worked on some stressful evaluation projects, some of which included tight deadlines, moving targets, programs changing, PIs leaving, surveys not getting administered at the right time, or clients wanting different results. Projects can be stressful, but they do, eventually, end. As I enter my second decade of working in this field, I am much better at taking things as they come and realizing that even if it seems like the program is imploding, life goes on.

DR. EVERETT’S USEFUL TIP: People always say the first year of a baby’s life flies by…because it’s true. This can also be true for a project. Make sure you schedule regular check-ins with project staff because a project can be over before you know it!


Week 69: Knowing when to fold ‘em

Sometimes, you just have to surrender. Yesterday, our lovable 2-year old, 100-pound “puppy” Mac bit (yet) another dog. After extensive interventions and months of training, we made the difficult decision to put him to sleep. My 3 year old remains hopeful that Mac will be home for his presents at Christmas.

Making difficult decisions about when to quit is hard. Really hard. I am hard-nosed and determined. I love to train for marathons. Take on too much. Push myself. And then, despite my personal efforts, I still sometimes have to give up. I have a memory of running a marathon after taking Nyquil because I had the flu. I somehow thought finishing, despite the terrible toll on my body, was better than quitting.

I think learning to examine circumstances, prior efforts, and data in order to decide when to “fold ‘em” is a crucial lesson when working with programs as well. So often, as evaluators, we work with clients investing thousands or millions of dollars and precious weeks of stakeholders’ time to push a program to be successful. Sometimes those efforts work. Sometimes staying the course makes sense. And then, sometimes, it doesn’t.

As evaluators, when a program is not achieving its intended outcomes, questions to ask clients might include:

  1. Are you continuing to push for a program’s continuation because of the personal stake you have in the game? Is this about your ego? Your career? The effort you’ve invested?
  2. What political implications are there for continuing to invest in an unsuccessful program? What implications are there for closing shop? How can you use program/community data to inform this question?
  3. Is the number of people impacted/participating in the program significantly more than the number of people invested in implementing the program? In other words, are 10 people working to keep a program that serves 30 people afloat?
  4. Has the program moved (or does it have plans to move) from measuring inputs to measuring outcomes? Examine what outcomes have resulted from the program. If a program is only counting beans, move quickly to a system that examines impact to determine viability.
  5. Are there other programs or entities that could/would fill the gap if the program no longer existed? Is it possible that a particular program is not succeeding because something else already exists, there is a lack of trust, or the timing is not right?

This week, as we mourn the loss of Mac, I’ll still put on my running shoes and train for the next race. I’ll look forward to what is next and remember that the next chapter will hold something new. There is freedom in knowing we made the right decision, even when it was hard.

KELLEY’S USEFUL TIP: While our ability to work as evaluators is dependent on programs staying in business, ethically, we need to keep our eyes open to the possibility of an organization sunsetting when it is no longer able to produce the outcomes it has worked hard to achieve.

Week 68: Color Boarding

I recently went to the D23 Expo in Anaheim, California. This was a conference for Disney fans! I got to attend panels where I learned past Disney secrets (e.g., photos from Disneyland’s early days, 20-year reunion of the group that created Toy Story) and upcoming Disney excitement (e.g., new parks & resorts experiences, upcoming animated films, upcoming Star Wars and Marvel films). I also got the opportunity to meet and share with Disney fans from all over the world. I went purely for myself, since I love Disney everything, and I never dreamed I would learn something that could be applicable to my evaluation practice.

In a session with John Lasseter, Andrew Stanton, Pete Docter, and others from Pixar, I learned about a technique created by Ralph Eggleston (who was there too!) called color scripting. Color scripting is a type of story boarding, but Ralph would change the main colors of each panel to reflect the emotion the animated film was supposed to portray at that time. It helped the Pixar team understand what was going on in the film emotionally at a quick glance, and it also made it easier to create a musical score to enhance those emotions. You can click here to see some examples of color scripting. I thought it was a fascinating idea, and I didn’t really think about it again.

Then, a few days later for work, I was sitting in a large event, observing from the back of the room. I started taking notes on the engagement and enthusiasm of the audience based on who was presenting. I created some metrics on the spot including number of people on their mobile devices, number of people leaving the event, laughter, murmuring, applause, etc. I thought I would create a simple chart with a timeline of the event, highlighting who was presenting at different times, and indicating if engagement was high/medium/low and if enthusiasm was high/medium/low. I quickly realized, after the event was done, that engagement and enthusiasm were 100% related. If engagement was high, then enthusiasm was high (although it may have lagged behind a few minutes). So, instead of charting two dimensions, I really only needed to chart one: engagement & enthusiasm combined. That’s when it hit me – color scripting! Okay, I’m no artist like Ralph Eggleston, so I wasn’t going to storyboard who was on the stage at the various times during the event, but I could use a simple color scheme. Below, I’ve put an example of what I’ve done, removing the names of who was presenting at the time. I'm not going to call it color scripting because there's no underlying script, so how about color boarding? You can get the general idea...

[Figure: example color board showing audience engagement & enthusiasm across the event timeline]

When I shared this with the clients who put on the event, they could clearly see how the audience reacted to the various elements of the event. It will be helpful in determining how to improve the event in the future. This was a quick and easy visual, made in Excel, to illustrate the overall reactions of the audience. I also included some commentary on each portion of the event, including suggestions for the future, so the color board was not meant to be used in isolation.
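If you want to experiment with color boarding outside of Excel, here is a minimal sketch in Python (using matplotlib) of how a single color-coded timeline might be drawn. The segment names, times, engagement ratings, and output file name below are made up for illustration; they are not from the actual event.

```python
# Minimal color-boarding sketch: one horizontal timeline, with each
# segment of the event colored by its combined engagement/enthusiasm
# rating (high / medium / low). All data below is hypothetical.
import matplotlib.pyplot as plt

# (segment label, start minute, end minute, engagement level)
segments = [
    ("Welcome",   0,  10, "high"),
    ("Speaker A", 10, 45, "medium"),
    ("Panel",     45, 90, "low"),
    ("Awards",    90, 110, "high"),
]

colors = {"high": "#4daf4a", "medium": "#ffbf00", "low": "#e41a1c"}

fig, ax = plt.subplots(figsize=(8, 1.8))
for label, start, end, level in segments:
    # Draw one colored bar per segment along a single row, then label it.
    ax.barh(0, end - start, left=start, color=colors[level], edgecolor="white")
    ax.text((start + end) / 2, 0, label, ha="center", va="center", fontsize=8)

ax.set_yticks([])
ax.set_xlabel("Minutes into the event")
ax.set_title("Audience engagement & enthusiasm (color board)")
plt.tight_layout()
plt.savefig("color_board.png")
```

A spreadsheet with conditional formatting gets you to the same place; the point is simply that one color-coded row can summarize the emotional arc of an event at a glance.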

DR. TACKETT’S USEFUL TIP: You never know where you’re going to learn a technique or tool that could be useful in your evaluation practice and useful to the client. Be open to learning everywhere you go!

Week 67: Different Strokes for Different Folks

I have begun to settle into my new surroundings and my new job here in France/Switzerland. This past week I attended the Swiss Evaluation Society’s annual conference, which was held in partnership with the Geneva Evaluation Network. I got an overview of a wide array of evaluation-related work being done within international organizations. During the afternoon of the second day we were split into three groups. We then spent an hour and a half rotating through three rooms (30 minutes in each room), each with a different facilitator who posed a different question to us. One of these questions was, “What factors negatively or positively influence the utility of an evaluation?”

There were some familiar themes that emerged from the three groups when it came to what enhances use. Engaging with stakeholders throughout the evaluation, including during the design, execution, and reporting phases, was something most people agreed enhances use. Additionally, follow-up by someone, whether it is the evaluator, a project manager, or a member of an organization’s leadership, seems to be an important facet of enhancing the use of an evaluation’s findings. Many of these things are elements of our own evaluation practice at iEval.

When it came to what detracts from use, there were some interesting comments. First, long reports were something everyone agreed nobody wants anymore. Yes, a full technical report has to exist, but it does not need to be the product that is shared with the intended users (unless they want it, of course). Instead, we must aim to create products that best serve the various audiences that make up that intended user group. Additionally, we need to be creative with how we do this. Executive summaries are the traditional approach to providing a brief overview of an evaluation’s findings. But I would argue that most of these still don’t get the job done. Instead, things that emulate data dashboards or infographics might be more effective at catching the eyes of readers. We have experimented with this in our work, and the feedback has been good. People appreciate having a visual, easy-on-the-eyes one-pager that they can print out and hand to funders, their leadership, or project beneficiaries to give them an overview of an evaluation’s findings.

Some of the work I have been doing recently has had me reviewing a number of evaluation reports of ILO programs and projects in various countries. The vast majority of them are not pleasing to the eye. So, to get back to the title of my blog post, I want to encourage you to put some thought into how the reports you produce look, read, and feel to each user. It makes a difference, it draws attention, and it makes people take notice. These things are hugely important when we want our evaluations to be used more by the people we produce them for.

COREY’S USEFUL TIP: Consider different forms of reports for different audiences. Think about what the various audiences are for your evaluation and what kinds of reporting needs they have. It might be worth the extra work to produce different forms of reports to enhance the utility of the work to different audiences.

Week 66: Impact of Evaluation Focused on Use, a Client’s Perspective

Continuing our temporary monthly component of asking some of our clients to serve as guest bloggers, sharing their perspectives on how specific evaluation tools are useful or how they use evaluation findings in their jobs, next up is James Hissong. James was the quality and evaluation coordinator for Communities in Schools of Kalamazoo, working closely with their 21st Century Community Learning Centers programs. He has recently moved on to a position with St. Joseph County, where we wish him the best! Here are some of James’ thoughts on how he saw evaluation being useful…

Evaluation is a vital part of Communities In Schools of Kalamazoo’s (CIS) model of continuous quality improvement. iEval’s practices reinforce evaluation as an ongoing part of the organization’s culture as opposed to treating evaluation as an event. CIS uses the evaluation reports in a variety of ways.

At the programmatic level, our after school coordinators incorporate evaluation findings into annual program improvement plans, setting goals based on the data presented in the evaluation reports and using subsequent reports to monitor progress. This process has brought staff greater awareness of such issues as the impact that increased dosage (i.e., regular program attendance) has on student outcomes and has spurred staff to set goals using program attendance benchmarks. Using this method, CIS has had positive results in increasing regular program attendance from 63% of program participants attending 30 or more days in the 2011-12 school year to 77% in the 2012-13 school year.

At an administrative level, CIS has used evaluation findings to make changes to organizational policies and practices such as student enrollment procedures and staffing structures. For example, in an effort to improve student academic outcomes and increase ties to the school day curriculum, CIS employs school day teachers within our after school programs as instructional specialists. Additionally, feedback from iEval’s evaluations of like programs across the state prompted CIS to change staff recruitment strategies, targeting staff members with a stronger background in teaching and youth development.

We believe that these evaluations have made a positive impact on our programs because of our capacity, with assistance from iEval, to actually use the reports in our decision-making processes. Events such as Camp iEval days are an added benefit, as they allow CIS program directors to interact with other program directors who have similar goals and face similar issues. These interactions allow for genuine discussion across sites and are a great way to share best practices among colleagues we do not normally interact with. iEval provides multiple mechanisms to prompt the use of data for healthy and continuous self-assessment. Among the other things we value about working with an external evaluation team are their responsiveness to questions, observations, and requests, and especially a joint commitment to making the evaluation as relevant as possible to the end users.

JAMES’ USEFUL TIP: It is important for your organization to have built-in mechanisms and expectations for using the information provided by the external evaluation team.

Week 65: Making the Most Out of Conferences

It is conference season in my world. I have attended many professional development sessions and conferences over the summer aimed at providing educators and other stakeholders worthwhile ammunition to kick off their school year. However, here is what I hear (over and over and over again)….

            Ugh, this is such a waste of time (eyes rolling back)

            It is freezing in here…or… It is a sauna in here.

            The food is terrible…or they ran out of food…or the food is cold.

            I have so much WORK to do, why am I here?

In general, the demeanor of many conference goers is negative before they even step up to the registration table. So, while this post’s topic seems extraneous to straight-up evaluation use, it is relevant because we all find ourselves at conferences. Often, as an evaluator, I attend these conferences either to provide feedback on the sessions or to support grantees that are attending. I too have a growing list of complaints about conferences I’ve attended this summer, especially when the content does not seem worth a full day’s time. However, below is a list of tips for those who find themselves at a conference struggling to find meaning or make sense of their purpose and time there.

  1. Get to know everyone at your table. Exchange cards, but also get to know each other well enough that you could call on them if there was an appropriate opportunity to do so.
  2. Be positive. Negativity is the influenza of conferences—eye rolling, murmurs of disgust, and getting on your phone are contagious. Instead, try to hear what may be worthwhile.
  3. Make it a goal to leave with three take-aways. If you have to be there anyway, try to learn three new things from a presenter or other people attending the conference. Don’t let yourself close off and disengage, despite the tremendous amount of self-discipline that may take. If you are signed up for a session that is covering content that you are not interested in, take note of who is in the room or think about how your work might intersect with this seemingly unrelated topic.
  4. If you do have a complaint about the room temperature/food, take that complaint to the front desk of the hotel/facility or write a letter/email after the fact to the facility management. There is likely nothing they can do on the spot to make you comfortable, so wasting time stewing about it or talking with your colleagues won’t make it go away.
  5. Take the time to complete the event feedback survey. Often this is passed out at the end of a conference (so stay for it). It is the best opportunity you will have to impact the conference in future years.
  6. Stay off of your phone during presentations. While it is SO hard to do, putting your phone away is a basic courtesy to the presenters and sends a message to the others at your table that you are there to engage. It might just change the tenor of your experience!

KELLEY’S USEFUL TIP: Before agreeing to attend a conference, try to verbalize or write out three rational reasons why it is worth your time to attend. Sometimes the desire to travel or “get out of the office” sounds good in theory, but being at a conference that is not tightly aligned with your professional objectives is a waste of time. If you can’t come up with three reasons, it probably isn’t worth it.

Week 64: Asking Your Evaluator Questions

A long, expensive car with darkly tinted windows pulls up…

A team of three individuals gets out, holding leather briefcases, dressed in name-brand tailored suits, and walking in unison with dark sunglasses covering their eyes…

They walk on past you into your board room where you’re not even sure you belong anymore…

One removes her sunglasses, peers down at you, and says, “Now we’re going to tell you what your data mean, what you are going to do next, and when we expect payment.”

What a nightmare! Hopefully no evaluation team you’ve ever worked with operates like that, but sometimes the clients feel that way even if that’s the furthest thing from the truth. Data can be intimidating. Evaluation connotes judgment, which is always scary. As evaluators, we need to be aware of the image we portray, and do what we can to be friendly yet professional. As clients, you need to remember that you are paying for the evaluation service, and you have the right to question anything and everything your evaluation team is telling you. Just because the evaluators have fancy degrees or long reports does not mean that your questions are any less important to the process…just the opposite, in fact! The more questions the clients ask, the better the evaluation is understood and internalized, and the more likely the findings will be applied in the appropriate manner to improve programs and make important decisions.

This evaluation is FOR YOU…for the client! You need to be prepared to ask questions of the evaluation team. You may not feel ready to do that when you first see a report, which is why we typically share a report virtually first so the client has time to read through it and process it on their own time. That way you don’t feel pressured to ask brilliant questions on the spot. After the clients have had time to review the reports, we either have a phone call or meet in person to review the reports and answer any questions. Sometimes we are asked questions that we don’t have immediate answers for and need to go back and do some more digging to uncover the answers – those are the best kinds of questions because they push everyone to go further!

If you’re unsure of what to ask, here are some general ideas to get you started:

  • If a chart or table isn’t clear – ask for a step-by-step explanation of what it says or if it can be presented in a different visual or if there could be some narrative text inserted to explain it so you don’t forget later.
  • If there is a finding or recommendation you don’t quite understand or even disagree with – ask how that finding was validated…that is, were there multiple data sources that confirmed that finding or recommendation.
  • If a finding or recommendation seems unrelated to your specific project – ask how the evaluation team came up with that one (e.g., maybe it’s based on their knowledge of other similar projects and they’re trying to apply findings more universally).
  • If something you expected to see in the evaluation report is missing – ask the evaluation team where it is. It may be embedded in another component, it may have been purposefully excluded because of problems with the data, or it may have been forgotten…but you don’t know unless you ask.
  • If a finding interests you and you want to know more – do not be afraid to ask probing questions that may need some additional data analyses to uncover the answers. Most evaluators build that expectation into their plan – do not assume that a report is final. We always present a final report more as a draft, with the understanding that there may be some additional work that needs to be done after it is shared.

DR. TACKETT’S USEFUL TIP: There are no dumb questions. In fact, some of the sillier questions have led to really useful opportunities to dig deeper into the data and develop meaningful shared understandings that could be immediately applied to programs!

Week 63: Welcome to the ILO!

In my last post I mentioned that I would soon be moving to Geneva, Switzerland to work in the Evaluation unit at the International Labour Organization (ILO). Well, I’m here!! After a whirlwind first week I have spent the weekend catching up on sleep and thinking a bit about this brand new context I am immersing myself in as an evaluator. I have only worked at the ILO for three days, but I figured why not take this opportunity to reflect a bit on that here while simultaneously producing something interesting for all of you.

The ILO evaluation office commissions, conducts, and manages a wide range of evaluations. These range from high-level evaluations of policy and strategy, in which the office is intricately involved, to country-level program evaluations, in which the office only provides technical assistance when requested and reviews the final report. However, each evaluation comes with a set of recommendations, and the evaluation office is responsible for developing capacity for using evaluation findings and for fostering an environment in which this is viewed as important for decision-making and program improvement.

There seems to be a tiered system for holding program managers accountable for developing action steps based on evaluation recommendations and following through on them. There are, for example, regional evaluation managers who hold country program managers accountable for using the evaluation. At higher levels within the organization, the evaluation office convenes an Evaluation Advisory Committee (EAC) made up of high-level administrators in the ILO who review strategic level evaluation reports and hold the leaders of particular ILO departments accountable for the utilization of recommendations. Having an engaged committee made up of organizational leaders willing to hold managers accountable for using evaluation may be a good strategy for enhancing use within organizations.

This approach to enhancing evaluation use within an organization is top-down. It is not yet clear to me how much evaluation use happens of managers’ own accord, which is arguably a more sustainable and effective driver. Considering the research done by various evaluation scholars over the years on the topic of use, the approach aligns with what some have argued for: the engagement of key decision makers in the evaluation. However, it seems that this happens not while the evaluation is being conducted, but after the evaluation has been completed, the recommendations written, and the final report submitted.

All in all, use seems to be a key component of the ILO evaluation strategy. It is included in their strategic plan, and it is an important part of the evaluation office’s work. I am sure I will continue to learn about how evaluation use is facilitated within the organization and among development agencies like it. As I do, I plan to share some of those lessons here with you. Being in this new environment brings with it that period of time where you don’t know what is going on and are trying to figure out how things operate. Writing this post has helped me unpack a few things, and hopefully it is an interesting read for all of you.

COREY’S USEFUL TIP: Since I am still in the “soak it all in” phase, my tip is to go into new environments with open ears and open eyes. Take all opportunities that are presented to you because, even if they are uncomfortable, your ability to face those situations and thrive is where real learning takes place.

Week 62: Impact of Evaluation Focused on Use, a Client’s Perspective

Continuing our temporary monthly component of asking some of our clients to serve as guest bloggers, sharing their perspectives on how specific evaluation tools are useful or how they use evaluation findings in their jobs, next up is Keri Retzloff. Keri is the project director for SPARKS, a 21st Century Community Learning Centers program that has been running for 13 years in northern Michigan. Here are some of Keri’s thoughts on how she sees evaluation being useful…

The evaluation we receive from iEval reaches far beyond data and statistics. The evaluation team takes the time to walk you through the data and helps to explain it further so that the data can be used to create real impact in your program and with students/families. Some examples of how the evaluation team encourages use are:

  • Wendy (the evaluation lead) has come to our staff meetings and walked us through some evaluation/goal pieces, helping our staff create meaningful goals that represent areas of growth needed in the program.
  • Wendy has helped us look at our individual data, review areas for improvement, and then create and work through steps to help achieve those goals.
  • The iEval team is always available to work through a problem, be a sounding board, and give an outside constructive view to solve challenges in programs.

Due to the follow-up and relationship that the evaluation team has with their clients, the evaluation they provide has a large impact. They present data in a non-threatening, easily understood, coaching-style approach. This allows the client receiving the data to feel at ease asking questions and digging deeper into the data.

By providing opportunities such as Camp iEval, asking questions, gathering perspectives of all interested parties (school day teachers, after-school staff, parents, students, etc.), and giving non-judgmental recommendations, the evaluation team has created a respectful relationship with the clients where mutual trust and respect are shared. 

KERI’S USEFUL TIP: Develop a relationship with your external evaluators; this will help improve your comfort level as you ask questions and use the data.

Week 61: The Beauty of Beginning

This week I connected with a friend who is starting a position with a United Way as their Collective Impact manager. She is a brand new employee, working to “begin” a collective impact project in her county. Oh my, what a beautiful place to be. While there are certainly relationships and history that will need navigating, there is a blank slate with this collaborative cradle-to-career effort. How will she bring stakeholders together to rally around increasing the high school graduation rate? Who will she bring into the room? How will she navigate the early stages of the project? The “whos” and “whats” are being defined.

In one year, she will reflect back on the hiccups and successes. But at this moment, she is still taking the baby steps and gets to start with a clean slate! It almost sounds magical.

However, week 1 is almost over, and Monday begins week 2, and then 3, and then a month will be under her feet. Time flies, and the opportunity to start fresh can pass by before you notice how fleeting it is. As an evaluator involved in collective impact projects that are years into the process, I recognize the significance of the opportunity before her. Below are a few things I have learned from my years involved in collective impact work…a few pointers that are applicable when you have the opportunity to start anything new.

  • Talk to others in your position around the state/country. Making a few phone calls of introduction and information gathering can be invaluable. Asking questions and taking notes about greatest challenges, lessons learned, advice, etc. can be worthwhile, as can saving these contacts and connections as you progress in your project.
  • Meet one on one with all of the key stakeholders. Taking the time to have face-to-face meetings with the stakeholders in a project is an excellent start at the beginning of a collaborative relationship. Find out what their investment and interest is in the project, level of enthusiasm, hesitations, and best mode of communication (email, phone, etc.).
  • Be proactive in identifying and solidifying data agreements. While data needs will evolve as a project unfolds, forming relationships with school districts and other data administrators will be beneficial for the duration of the project if done right.
  • Create a system for tracking decisions to be made, decisions that have been made, and the outcomes. As a project grows and continues, it is useful to have a tracking system that clearly identifies how a project evolved and what decision points were used along the way.
  • Be wed to a goal, not a way of doing things. If your project is adopting a process or model, only adopt the aspects that make sense for your work. Stakeholders will resent participating in activities that are repetitive or unclear. Have all work plans tied to clear, identifiable outcome measures and make sure activities are aligned with that.

Of course you only get to start something once, but there are always opportunities for reflection and resets. Make time for those breaths of fresh air in your work to keep it thriving.

KELLEY’S USEFUL TIP: When starting a new project, don’t be afraid to ask lots of questions and admit when you don’t know something. Be smart about gathering the information you need, ask for help when you need it, and track information for further use down the road!

Week 60: Focus on Use at Conferences!

I practically jumped off of my couch when I saw the email about the theme for the European Evaluation Society Biennial Conference in September 2016 in the Netherlands – Evaluation Futures in Europe and beyond: Connectivity, Innovation and Use! USE is in the theme!!! Specifically, in the email, it states:

The three themes of the conference (connectivity, innovation and use) strike vital chords in our community’s current state of affairs. Without weaving evaluation theory and practice through innovative methods the evaluation community will not generate findings that are trustworthy and useful. Without closer connections to other sciences it will not achieve excellence. Without partnerships linking commissioners, evaluators and the citizenry, use of evaluation will falter. Nor will evaluation utilization improve human lives without professionalism, sound ethical standards and generally agreed good practice guidelines.

So often USE is something that is vaguely talked about, but it doesn’t get the priority it deserves. We all know improving use is important…but how do we do it, what tools have been successful, what has increased use led to, how has the use of evaluation findings improved the world? EES 2016 is providing the opportunity for evaluators all over the world to talk about these issues. If you’re unable to attend EES next year, you can always check out their web site for some of the publications, presentations, and discussions related to the conference. I attended EES in 2014 in Dublin, Ireland and gave a presentation about improving evaluation use, and it was well received. I can’t wait to hear what will come in 2016!

While USE may not be the primary theme at other evaluation conferences, you can always find some sessions that will be clearly applicable to improving use. Here are two other conference opportunities that will have a focus on USE and will also happen a bit sooner than EES in September 2016:

  • American Evaluation Association - Exemplary Evaluations in A Multicultural World: Learning from Evaluation's Successes Across the Globe, November 2015 in Chicago - The Exemplary Evaluations theme is intended to inspire and energize evaluation professionals from around the world to spotlight what has gone well in evaluation. As we listen to each other’s exceptional experiences, we listen for themes of success—how were these evaluation exemplars managed? How did evaluators engage with stakeholders? How were evaluation conclusions developed, what lent them credibility, and what led to evaluation use? There is much to learn from examples of high quality, ethically defensible, culturally responsive evaluation practices that have clearly contributed to decision-making processes, program improvement, policy formulation, effective and humane organizations, and ideally to the enhancement of the public good. 
  • Michigan Association for Evaluation - 21st Annual Conference, April or May 2016 in Michigan – theme yet to be determined, but the conference has focused more on the practical application of evaluation knowledge in the real world over the last few years!

If you know of any conferences, trainings, webinars, etc. that have a focus on USE, please share here!

DR. TACKETT’S USEFUL TIP: The more we talk about USE, the more ubiquitous it will become. When focusing on the meaningful use of evaluation findings is part of the way we all do business, then I can retire!

Week 59: Independence in Evaluation

In three weeks I am moving to Geneva, Switzerland to work for 6 months at the International Labour Organization (ILO). The ILO is a specialized agency of the United Nations that focuses on working with governments, the private sector, and labor organizations to ensure fair wages, fair working conditions, and worker rights. I will continue blogging and will likely be drawing on my experiences in that context as I decide what to share with you on any given day. I am kicking that off today, and, because I will be working in an internal evaluation context, I have been thinking a lot about the idea of independence in evaluation.

The ILO has an internal evaluation unit. In fact, nearly all UN agencies have their own evaluation unit, in addition to the external evaluation consultants they employ. Working as part of one of these units, I will have an opportunity to experience what this type of evaluation environment looks and feels like. One of the primary criticisms of relying too heavily on internal evaluation is the potential for bias to leak into decision making, data analysis, or interpretation of findings. On the other hand, internal evaluators have strong contextual knowledge, relationships within an organization, and an understanding of nuances, all of which can be so important when trying to tell the story of an organization or a program through an evaluation. These various features have an important effect on the utility of evaluation findings or products.

I believe that internal evaluators or units are able to maintain independence, but it relies on the evaluators’ ability to do so. Because it is so person-dependent, I wonder whether or not it is generalizable across all internal evaluation units. But a mix of internal and external evaluation can be a powerful tool for ensuring relevant and well understood evaluations, as well as evaluations that limit the influence of internal biases. In fact, playing the part of the internal evaluator in this type of scenario is how I became involved with iEval.

I first met Wendy when I was an internal evaluator for a local non-profit. iEval was hired as the external evaluator of that organization, and so we started working together in the way I described above. I was able to focus on program improvement while also learning from and contributing to the external evaluation being done. I was also able to provide contextual understanding for some of the data in those external evaluation reports. At the same time, I had to be self-aware of my own biases and be sure that they didn’t blur my ability to be critical in my interpretation of the information I was collecting.

As I now move back into an internal evaluation setting, I will bring with me my experience and my ability to be self-aware of my own biases. In fact, this is something I believe is critical for continuous improvement within all programs, projects, businesses, etc., and it is something I encourage everyone to consider. It will also be interesting to examine the way the evaluation unit in the ILO establishes its independence as an internal unit, and the considerations on the part of the staff as they work to improve and evaluate the efforts of their colleagues.

COREY’S USEFUL TIP: Internal evaluation of any kind can be extremely effective for program improvement. This doesn’t have to be done by an internal evaluator, but rather by anyone who can think critically about data and other programmatic information. But it is important to be aware of our own biases as we examine our own work, and managing these biases can help us make better, more objective decisions.


Week 58: Impact of Evaluation Focused on Use, a Client’s Perspective

We would like to congratulate Kristin Everett, one of our blogging team, for adding a future evaluator to her family's team. Congratulations on baby Claire! While Kristin is enjoying her family this summer, we have asked some of our clients to serve as guest bloggers, sharing their perspectives on how specific evaluation tools are useful or how they use evaluation findings in their jobs.

First up is Julie Dibert, the director of Community Unlimited in Union City, Michigan, and the project director for Union City's 21st Century Community Learning Centers grant. Here are Julie's thoughts on how evaluation works for them...

The process that iEval has created with Camp iEval has made evaluation come alive for our organization. When you are able to sit in a room with others in your field and discuss the findings of your evaluation and how to improve your program, it is invaluable. Everyone is processing and throwing out ideas about improvements and things that are working well for each other. The opportunity to hear firsthand the key ideas that are trending in our field is another great piece, along with project directors sharing what is working well in their programs so others can take those ideas and implement them in some way in their own programs. It honestly has made our organization look at data, review it, and implement new strategies based on the data.

In working with an external evaluation team that is focused on use and responsive to our needs, I am able to do strategic planning and talk with my staff about data that is fresh and relevant. It helps create a system for making improvements to our program based on the data…we immediately begin to see the difference.

The impact of the evaluation comes from the relationships that are built during Camp iEval. We have had the opportunity to meet together and grow together as project directors. The reports that we receive are immediate. The documents and data are relevant to making changes in the programs. We can improve programs for the students by using the data the way it was meant to be used. You definitely walk away thinking about how to make improvements and excited about making those changes.

JULIE’S USEFUL TIP: Talk about your data often. The more you talk about it, the better you’ll understand it, and the more it is relevant to improving your program.

Week 57: Staying the course

My friend, a psychologist, says that a “midlife crisis” is inevitable for everyone. Like learning to walk and going through puberty, midlife is a developmental milestone where the human mind grapples with hitting the middle mark. For some, this is acted out in extra-marital affairs, buying flashy cars, and drinking too much alcohol. For the more ordinary individual, it is marked by a period of questioning one’s career, relationships, and purpose.

I think I may have hit this milestone a little early this week. Luckily, I think I fall in the ordinary individual grouping—nothing too flashy for me in crisis. But at the young age of 38, I’m beginning to wonder what difference I make as an evaluator. I had the pleasure of spending a day with a group of enthusiastic teachers this week. They were eager, passionate and determined to improve their practice. It was inspiring—and it made me long for the days I spent in the classroom with students.

I also had the misfortune this week of making a huge mistake in exporting data—literally, forgetting to uncheck one box in a data download. It had a negative impact on me and my colleagues. I am plagued with guilt for the time wasted because of my mistake. But more than anything, I was upset by the emotional turmoil this mistake brought. Rationally, I understand that the data is insignificant in the grand scheme of life, but man…what a big mistake at this moment, for this project.

This week, I longed for the simplicity of impacting “humans” in the classroom. It seems so much more straightforward.

However, when I take a step back and look at the work of evaluation in the bigger picture—I do remember that when used correctly, evaluation findings can have a dramatic impact on “humans”…on individuals in classrooms, organizations, and communities. As I push through this midlife hurdle, I attempt to keep the relevance of evaluation work at the center.

I would love to hear from our blog readers about how they stay focused and motivated as evaluators.

KELLEY’S USEFUL TIP: One way I’ve found meaning in my work is by using my skills in a volunteer capacity in my community. I serve on several local boards and councils where I am able to dive into issues that are especially important to me (e.g., at-risk youth, low-income housing) and bring an evaluation lens to the table.


Week 56: Keeping Evaluation Relevant in a Changing Environment

Summer is always a tumultuous time for our evaluation team. We are collecting, cleaning, and analyzing a myriad of data types. We are creating user-friendly and useful reports based on those data. We are providing training around using the reports for program improvement. We are seeking new contracts for the upcoming school year and calendar year. It’s an extremely crazy time of the year. Throw into that mix some other atypical occurrences like vacations, internships, new babies, remote working, and conferences…it creates an even more dynamic evaluation environment. Thinking about how we adapt to all of that while continuing to provide high quality, meaningful data and reports made me consider how flexible we often have to be in the actual evaluation implementation with clients.

A key staff member got a promotion…

A partner organization lost their funding…

Two team members are on maternity leave…

150 new children sign up to participate midway through the program…

The pre-/post-tests were lost…

A snowstorm cancelled four program sessions in a row…

We live in the real world where good and bad things happen that dramatically impact how a program is implemented and evaluated. While we can’t plan for every contingency when designing an evaluation plan, we need to be responsive to those changes, making adjustments where needed and holding steadfast when appropriate.

Here are a few important questions we can ask ourselves to help us decide if we should modify the evaluation plan or stay true to its original design and account for that change in the data analyses portion of our work:

What are some other questions you ask yourself as you decide if you’re going to change your evaluation design? Please comment below and share!

DR. TACKETT’S USEFUL TIP: Be thoughtful and intentional if you are going to change your evaluation design; oftentimes programmatic changes can be accounted for during the data analyses.