Week 23: Success for whom?

One thing I love about being a graduate student in evaluation is the continual opportunity to be exposed to new ideas, new ways of thinking, and interesting work being done by interesting people on a diverse range of topics. This becomes slightly overwhelming at times, but in my opinion it is always a good thing. The Evaluation Center, which is where my doctoral program is housed, holds a weekly event called the Evaluation Café. It takes place during lunch on Wednesdays, and pizza is provided. Each week, someone - or a group of people - presents on something related to evaluation. This past week’s topic was particularly relevant to me, and I wanted to explore it a little here.

The speaker was Dr. Jerry Jones, the director of an interesting organization in Grand Rapids called the Community Research Institute (CRI). This organization does evaluation, collects and analyzes public data to make it more accessible, and builds platforms to explore these data. But Dr. Jones wasn’t there to talk about the cool stuff that CRI does. He spoke to us about how evaluation fits in with concepts of social justice, truth, and, most specifically, racial equity.

How might this tie into the concept of use? Well, one point I wrote down during this presentation was on the topic of success - more specifically, “success for whom?” To me, this concept ties into use. In fact, the word “success” could be replaced with “use” and the point would still be an important one to consider. That short question is essentially the basis of certain utility-focused approaches to evaluation. Staying with the word “success,” though, there are still important considerations related to how evaluations are used.

Evaluators believe, hope, and strive to make sure that their evaluations are used for program improvement. In a recent conversation with a friend, we talked about what the outcomes of an evaluation might be. Program improvement, we decided, would certainly be one of them. Program improvement should then tie in to the success of a program. This is where the question of “…for whom?” comes into play.

As evaluators, we must be aware of inequities that exist in the contexts in which we work. These issues are often wrapped up in the programs we work with, the communities they exist in, and the people they serve. Real disparities exist in this country, and they cannot be ignored because we (the collective) want our clients to be happy, or because we don’t want to push them to address the deeper, more systemic issues that may exist in their program contexts.

Too often we (again, the collective) are so wrapped up in the intended outcomes of a program - its goals and objectives - that we forget about the idea brought to us long ago by Michael Scriven of needs-based evaluation. Shouldn’t evaluations focus on whether programs are meeting the needs of people in order to improve the public good? Shouldn’t evaluations be used to identify programs that address important social issues, to improve towns, cities, states, and countries? I believe, and some may disagree, that it is the duty of evaluators to ask hard, probing, and sometimes uncomfortable questions of our clients about the services they are providing, and about how those services could be made even better by broadening their net and being intentionally more inclusive.

I became interested in evaluation because of the opportunity it provided to work for the “public good” by helping to provide information for use in improving the way problems are approached. I sometimes lose sight of this as I get wrapped up in the challenges of my own life. I find that these are the moments when I most question my decision to be an evaluator - when I forget what it is that brought me here. This message may not resonate with everyone. However, I think it is important to think critically about how we as evaluators present our evaluations to our clients. What is our role in making sure that the information we present lends itself to use, not only instrumentally, but in a way that addresses issues of equity?

Here are a few key points that I can share:

  • Be aware of the context in which the program you are evaluating exists. This may mean exploring publicly available data to get a sense of what types of societal issues are relevant in a particular community. It may mean having conversations with key individuals who can shed light on the struggles a particular community is facing. Programs operate within a system, and an evaluation will be made more useful if it can place its findings within that system.
  • Disaggregate your data. This seems straightforward, but it is important to break down the data you collect by race, gender, and ethnicity. This can lead to important conclusions about how a program is operating, who it is serving, and what success it is having - and for whom.
  • Encourage your clients to allow for dissemination of the evaluation findings. This opens up the opportunity for an evaluation to be used more broadly by a community, and helps prevent the issue of shelving, whereby an evaluation client receives a report and promptly finds a place for it on their bookshelf.
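The disaggregation tip is easy to act on in practice. As a minimal sketch - using a small, entirely hypothetical dataset and invented column names, not data from any real program - here is what disaggregating an outcome by race looks like with pandas:

```python
import pandas as pd

# Hypothetical participant records from a program evaluation
# (all names and values invented for illustration).
data = pd.DataFrame({
    "participant_id": [1, 2, 3, 4, 5, 6],
    "gender": ["F", "M", "F", "F", "M", "M"],
    "race": ["Black", "White", "White", "Black", "Black", "White"],
    "completed_program": [True, False, True, True, True, False],
})

# The aggregate completion rate can hide differences between groups.
overall = data["completed_program"].mean()

# Disaggregating by race (the same works for gender or ethnicity)
# shows who the program is serving and for whom it is succeeding.
by_race = data.groupby("race")["completed_program"].mean()

print(f"Overall completion rate: {overall:.0%}")
print(by_race)
```

In this toy example the overall rate looks healthy, but the group-level rates differ sharply - exactly the kind of finding that reframes “is the program succeeding?” as “succeeding for whom?”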

COREY’S USEFUL TIP: Consider not only the intended outcomes in your evaluation, but also how well the program you are working with addresses real needs. Scriven argues that every evaluation should begin with a needs assessment, so that the evaluation can then assess how well the program meets those needs.