Week 35: Evaluation Use and its Outcomes

I recently gave a guest lecture to a group of undergraduate students at Kalamazoo College, introducing them to the concept and field of evaluation. One of the first things I talked about was why I chose to go into this field. I was reminded that what I wanted to get out of being an evaluator was to help improve the programs and policies that effect social change and make my community, this country, and the world a better place.

Now, this means I operate under an assumption that when evaluation findings are used, they have some subsequent outcome or impact (just as a program would) on the evaluand. This is a logical connection to make, in my mind. However, it is not something that we as a profession know much about. We create our evaluation reports, facilitate data interpretation workshops, and give presentations with our eyes on use, but we don’t yet know much about what the outcomes of that use are.

This is a concept I have been mulling over lately and having conversations about with some of my colleagues. I don’t have answers, but in the next couple of posts I am going to explore this idea by unpacking some of the concepts that Gary Henry and Mel Mark discuss in their 2003 paper titled “Beyond Use: Understanding Evaluation’s Influence on Attitudes and Actions.” The first thing I want to share with you is a framework that they present in their paper which outlines three levels of evaluation impact, or what they call “evaluation influence.”

  1. Change within individuals: Changes in an individual’s opinions or attitudes about something.
  2. Change in the interaction between two or more individuals: Changes in how individuals talk about the program as a result of evaluation. For example, the use of evaluation information to make a case for a program to a funder.
  3. Change in the practices or decisions of programs: Changes that occur within programs or policies regarding their implementation.

What this framework provides us with is a way to think about the different levels of evaluation impact and, for those of you who are ambitious enough, a way to start cataloging our own examples of impact. I think it would be extremely interesting to collect this kind of data, which could be used to share experiences in a more systematic way.

 

COREY’S USEFUL TIP: When an evaluation has been completed, it may be interesting and meaningful to you as a professional to reconnect with the client after a period of time has elapsed to investigate what types of impacts your evaluation may have had on the program. Building a record of this may not only help you adjust your professional practice but also build a base of evidence that can be presented to potential clients about work you have done in the past.

Also, if any of you reading this have an example of when an evaluation you did made a notable impact, please share in the comments section.

Scriven, M. (1991). Evaluation thesaurus. Sage.
Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314.

Week 34: Using Evaluation Data to Make Decisions

What type of data are people really interested in? What type of data will people use to make changes in their programs?

Christina Christie set out to answer these questions in a study published as “Reported Influence of Evaluation Data on Decision Makers’ Actions: An Empirical Examination” (2007). She used simulated scenarios with decision makers to better understand what types of evaluation data most influence their decisions.

To gather the data, Christie emailed electronic surveys to 297 alumni and current students in an educational leadership program. She also contacted 29 program directors from a county health department. The instrument consisted of scenarios that drew on three sources of evaluation information: large-scale evaluation study data, case study evaluation data, and anecdotes. The scenarios were based on a variety of programs, such as pedestrian safety, substance abuse, and preschool programs.

After reading each scenario, the respondents were asked to choose which type of information would most likely influence their decision-making. The data were analyzed and the results were presented.

Results:

  • In general, large-scale and case study data were more influential to decision makers than anecdotal accounts.
  • Almost all respondents said that they were influenced by some type of evaluation data.
  • Christie suggested that evaluators specifically ask decision makers about their evaluation data needs prior to conducting the evaluation, so that the evaluator can provide information that will be most useful and influential.

Have your experiences with data used for decision making been similar to these findings? Are clients asking for large-scale and case study data?

DR. EVERETT’S USEFUL TIP: Ask the decision makers about their evaluation needs, but don’t forget that reports get shared. What may be influential for one group of people may not be important to another. Knowing the other possible audiences for the report is important when planning an evaluation.

Week 33: Keep it Simple

At my house, January is a month of resolutions, goals, and getting rid of excess (weight, clutter in the closets, and toys with missing parts). In the name of trimming and focusing on what is important for 2015, I am reminded of how this task applies to evaluation work as well. 

When conducting an evaluation, program metrics and measures should be kept as simple as possible so that all stakeholders and the public at large understand the outcomes of what is being measured. 

All too often within complicated programs and projects we get caught up in the magic of measuring more, assuming that more is better…and even better than that…measuring even more with greater specificity. Sometimes as evaluators we can go too deep into the weeds and lose the big picture of what change, outcome, or finding we are trying to communicate.

Some “simple” reminders about measures and metrics are that they should be:

  • directly aligned to the overall goals or purpose of the evaluation
  • easily tracked and reported
  • summarized in a couple of words (i.e., it should not take several sentences to explain the caveats)
  • combined easily (i.e., 3 or 4 measures examined together) to paint an overall picture

As you cultivate your goals for 2015, I challenge you to consider “simplicity” as a worthwhile end-point for your evaluation and project planning.

KELLEY’S USEFUL TIP: When putting together an evaluation plan, start by writing the key finding you hope to report in your final deliverable (e.g., In 2015, X percent of students engaged in violent behavior, an X% change from previous years). From this point, you can work backwards to think about how you can best and most simply get the information you need. 

Week 32: 2015 is EvalYear

We’ve all been hearing that 2015 has been designated EvalYear – the International Year of Evaluation. It’s hard to believe that it’s been only 50 years since evaluation has truly been considered a field. The intention of EvalYear is to bring new thinking into evaluation and create new synergies across disciplines…inclusion, innovation, and strategic partnerships.

In order to ensure I can make a contribution to EvalYear, here are some specific resolutions I am making to get involved; hopefully they will help you think of ways you can participate too:

Stay connected through social media – I admit that I’m relatively new to blogging, but I’m committed to staying with it throughout 2015, hopefully sharing some ideas to make evaluation more useful and learning how to improve my own work through the blogs I read. I haven’t gotten involved in Instagram, Pinterest, Twitter, etc., but I will read at least one evaluation blog post each day and hope to hear feedback from many of you on what we are writing each week!

Explore ideas with other evaluators – I resolve to spend time not just networking with other evaluators at meetings and conferences, but also asking critical questions to push our thinking and building on each other’s ideas to potentially lead to innovative ways to design, use, and benefit from evaluation.

Teach about evaluation – I resolve to teach my clients about the process, intention, and impact of evaluation so they can take that knowledge and apply it to other areas of their work. I resolve to teach my graduate students, those who are planning to be evaluators and those who are not, how to incorporate evaluative thinking and processes into their own work.

Share about evaluation in unusual spaces – I admit, sometimes when I’m not wearing my evaluator hat, I like to just cheer at a football game, go to a musical performance, visit a museum, etc., and not think about evaluation at all. However, evaluation is all around us. If I am to live the principles of EvalYear, then I need to be thinking strategically about how to include others in evaluative innovations. While I also don’t want to be that lady who talks about evaluation all of the time, I resolve to be more intentional in posing challenging questions that may lead others to think differently.

DR. TACKETT’S USEFUL TIP: You don’t have to be part of a large-scale governmental evaluation, work at an international evaluation firm, or keynote one of the major evaluation conferences to make a contribution to the evaluation field in 2015. Make your New Year’s resolution related to EvalYear and contribute in your own way!

Week 31: Learning about evaluation while teaching

We’ve been a bit reflective in our past few posts – about our evaluation work, our teaching practices, and how students learn about evaluation. Continuing along that same theme, a few months ago I mentioned that I was teaching Program Evaluation Theory at Western Michigan University. I approached the class like I would approach a new client: teaching the basic premises necessary for understanding, incorporating hands-on activities, and integrating the practical application of new knowledge in the real world. I stressed the importance of involving key stakeholders in the evaluation to improve use, sharing results often so stakeholders think of evaluation as part of the way they typically do business and can make improvements along the way, and having fun while doing it. What I didn’t really think about…and I should have, I’ve taught enough to know this usually happens…is what I would learn in the process of teaching.

There’s an old adage, which I couldn’t find anywhere, that goes something like you learn the most when teaching. Joseph Joubert, a French moralist, said “To teach is to learn twice.” At this point in my career, when I’ve been an evaluator for 15 years, I think this class provided me with an excellent opportunity to reflect on what I have learned, how I apply what I know, and how I share evaluation with others. And, since this is the end of the year, it’s a great time to share those reflections with you!

What I have learned…or, more appropriately, what I have forgotten…is quite a bit! As an evaluator, you get comfortable with a few evaluation approaches and theories. You’re aware of others and (hopefully) read to stay current on what’s going on in the field. However, in teaching about many different evaluators, evaluation theories, and evaluation approaches, specifically guided by Alkin and Christie’s Evaluation Theory Tree, I am re-energized to incorporate some different perspectives and ideas into the work our evaluation team does. While I regularly read evaluation journals and blogs, spending hours discussing other approaches and hearing different perspectives on these approaches got me excited to try some new things!

How I apply what I know is also something I’ll think a little differently about moving forward. I always try to teach a little about evaluation as I work with clients, but hearing the questions students had, understanding what concepts they struggled with, and listening to their A-HAs…these perspectives will really help my team and me as we work to ensure our clients really understand how we are doing what we are doing, why we are doing it, and what impact our work has on their programs and organizations. Some basic concepts, like outputs and outcomes, have many different interpretations, and I need to make sure we are speaking the same language with our clients.

How I share evaluation with others was reinforced through teaching this class. Leonardo da Vinci said, “Simplicity is the ultimate sophistication.” As evaluators, it is not our role to teach the complexities of the statistical analyses used or give the historical perspective on a particular evaluation approach; it is our role to share the evaluation findings in a simple yet meaningful way so the client can make decisions or improvements based on the findings. When teaching, sometimes I would get lost in the weeds because I felt it was all important…while it may be to me, I need to remember what is most important to the client.

 

DR. TACKETT’S USEFUL TIP: Never stop learning – you never know when you’ll be able to use some of that new knowledge. Keep the ultimate outcome in mind – for evaluators, that means sharing evaluation findings in a way that enables clients to use those findings in a meaningful way.

Week 30: Lessons for New and Seasoned Evaluators

A few years ago I interviewed graduate students about their experiences taking an evaluation practicum class. During the semester the students were matched up with a local nonprofit organization. The students worked in teams of two to four to identify an evaluation-based need of the organization, create a plan to meet the need, and implement the plan. For many students, it was their first time doing “real” evaluation work.

The graduate students offered tips for others embarking on evaluation, many of which have stayed with me through the years. These are helpful for not only new evaluators, but even the more seasoned among us.

  • Know your limitations. While working within a practicum, the students had very real limitations with time and resources. This is true of any evaluator. Knowing your limitations of time, budget, human resources, and skills will shape your work.
  • Learn from the experience. The students were learning new skills and how to interact with clients. Some suggested that evaluators keep a journal of what they are doing. Take notes. How did you handle the situation? Was it handled appropriately? Would you do something differently? What other possibilities are there for handling similar situations? Reflect back after the evaluation is over and learn from your experiences.
  • Time management. It is easy to get in over your head. You may not be able to provide what you said you would provide. Build in extra time to collect data. For example, someone might not be available for an interview or someone might not show up when they said they would – this may not be because you are a student but just how things operate. Balance what time is available and how you dedicate it. Stay on top of the work; do as much as you can with the time you have.
  • Discuss what you are doing. Talk to other students about methods they are using. Ask them if they see any holes you don’t see. Reach out to experts and colleagues.
  • Clarify your boundaries. Boundaries were a very real issue for graduate students working on their first evaluation. In the future they would try to finalize, establish, and understand the boundaries of the evaluation. Clarify to yourself how far you can go. Don’t be too ambitious; you can’t do everything.

DR. EVERETT’S USEFUL TIP: Evaluation offers a continuous education. Because no two clients or programs are the same, you learn something during each project. Jotting down notes of issues you can refer back to or talking to colleagues can help you learn from your experiences.

 

Week 29: The Total Package

This past week I was honored to have the iEval team come together to celebrate my 5-year anniversary of consulting with iEval. This milestone has made me pause to reflect on my evaluation career over the past decade-plus, starting as a teacher, moving on to a big evaluation firm, and finally consulting with iEval. These experiences have given me opportunities to work on a multitude of projects with a variety of teams—some with great success and others with huge disasters. From these experiences, I understand the importance of teams being composed of the right combination of personalities, talent, and expertise.

I cannot pose as a team-building expert, but I wanted to share my “total package” list for a good evaluation team. Whether you are an evaluator building a team or a client trying to hire an evaluation team to work with you—I propose the following as non-negotiable team traits:

  • A tuned-in and responsive conductor. In some cases this may be considered a project manager, but more than that, the conductor is the person that is keeping the overall team on track with administrative functions (billing, marketing, time-keeping) and big picture development (bringing in new work, branding).
  • Content area expert. Knowledge in the subject matter being evaluated is critical; even better is someone who has practical experience in the field being evaluated (e.g., health, education, community-based organizations). Evaluators who have a larger context for and understanding of the data reach more meaningful conclusions that are more likely to be used by clients and programs.
  • Stats guru. Most evaluators can do descriptive stats and basic regression—but there should be someone that has training and experience with more complicated methods.
  • Type A personality. There has to be a detail person that catches every little grammatical mistake, organizes data files, and nit-picks reports. However….equally important….
  • A person that drives the Type A personality nuts. Having a laid-back, big-picture person on the team to counterbalance the “Type A” is necessary to keep everyone sane.
  • At least one person with a sense of humor (although it does work better if they have someone to laugh with). During stressful moments, project deadlines, bouts of boring analyses, and all of the other components of a “typical” evaluation—laughing helps. It just does.
  • Someone eager to learn and propose new ideas. Students, eager learners, and “idea” people can often push a team to conduct more interesting and meaningful evaluation work. It is helpful to have a team member that shares new and interesting research and points of view.
  • All team members that interface with clients should be approachable and down to earth. (Note: if the stats guru or another team member is not approachable, keep them at the computer.) An important part of making evaluation useful is having an evaluator who is comfortable communicating in a variety of settings—face-to-face, online, giving presentations, etc. Team members should know how to speak in terms that are understandable to their clients without being patronizing. There are often evaluation team members who do not fit this description, and that is fine, but they should only do the behind-the-scenes work.

I think our iEval team is a total package, but it has taken time and a lot of work from our “conductor” to get us to that point! Taking time to reflect as a team, whether it be through a formal system (e.g., Myers Briggs) or informal (e.g., craft brewery), is worthwhile and important to providing clients with useful evaluation. 

KELLEY’S USEFUL TIP: One terrific way to diversify an evaluation team is to tap into the graduate school programs of local universities. There are often evaluation students, or students in related fields, that are eager to gain hands-on experience. While students can sometimes take time to mentor, their contributions are worthwhile and often you’ll end up learning as much in the process as they do!

Week 28: Make evaluation FUN!

From the beginning to the end of working with a client, we try to bring high energy to the evaluation process, making it something the client looks forward to instead of dreads. I remember a client I worked with over a decade ago…on her first day at her new job, she was given my business card and told to call me because I could help her. We scheduled a day for me to come up and work together so evaluation would be part of the way they did business from the very beginning. She was so nervous that an EVALUATOR (scary, evil, mean people that they are!) was coming that she made herself sick the night before my visit. After she spent the day with me, she realized I wasn’t scary at all, she had actually learned some things, and she was looking forward to our next time together. Now that’s a successful visit in my book!

While we consciously work to make evaluation fun and exciting, here are some simple ways that we have done that…

We are not afraid to show our enthusiasm for evaluation and data! If you’ve ever been at a meeting, small or large (yes, even a room with hundreds of people in it), with me where they talk about evaluation or data, you’ll hear a loud “WOO HOO!” when evaluation or data is mentioned. It’s usually the point in the presentation that eyes have glazed over because most people are on overload by the time the evaluation conversation comes around, so I yell with excitement to help remind people that evaluation can be exciting (and, sometimes, to wake people up). You never know when that enthusiasm will result in someone (thanks, Chris Lysy) drawing a cartoon inspired by you!

We have a sense of humor in whatever we’re doing! You’ve heard a little about Camp iEval in a previous post. That’s a retreat we do several times a year with 21st Century Community Learning Centers program directors, where we come together and review data and evaluation findings, conduct professional development around hot topics that have either emerged from the data or been requested by sites, and share and learn across programs. We also have a lot of fun while doing it. We’ve given out silly awards (like “Miss Direction” and “Pressure Cooker”) and done crazy team building activities (like building bridges with toothpicks and marshmallows), all interspersed with real learning and meaningful personal growth. I like to give a small gift to the project directors each year, and one year we gave them each a blanket with iEval on it…poking fun at ourselves because the temperature in the rooms where we hold Camp iEval is often pretty chilly…you just have to roll with it.

We use resources we can find and put our own goofy twists on them! We also previously mentioned Roger Miranda’s children’s book called Eva the Evaluator. If you haven’t purchased that book, it’s wonderful! We’ve given it out to many clients. It’s a fun way to explain what an evaluator does…and my favorite explanation is SUPERHERO! We took it one step further, with Roger’s approval, and made his book into a five-minute video that we showed at several trainings we have done. By putting ourselves out there, dressed in silly costumes and trying to act, we humanized ourselves with current and potential clients, so people are not as fearful of evaluation as they may have previously been. Kelley and I even performed a live interpretation of the book, but that was a one-time only show!

DR. TACKETT’S USEFUL TIP: Evaluators are people too. You do not have to be stuffy or flaunt advanced degrees around in order for clients to take your work seriously. In fact, if you are more approachable, your clients are more likely to relate to you on a personal level and work with you to make better use of evaluation findings!

Week 27: Mapping our Way to Use

A portion of the discussion of how to increase the use of evaluation findings has been centered around the presentation of data. Evaluators around the world argue that findings should be presented in a way that is user-friendly or digestible for an audience who lives and works in a world where nobody has any time for anything outside of what they HAVE to do. My colleague, Kristin, discussed the need to focus on presentation of evaluation information in her post last week and provided some great tips on how to begin thinking about improving the graphical representation of the information we want to present to our clients.

What we also must consider are the range of data visualization techniques and tools that are available to us in this day and age of technology. One of these, that I have used a fair bit and enjoy messing around with, is Geographic Information Systems (GIS). GIS is all about the production and representation of data in the context of maps. Maps are powerful tools for understanding the context of a place, and they function as a lens through which information is passed and represented to the user.

In our work we have used GIS a handful of times, in situations where we believed that presenting data in the form of a map was the most likely way for us to enhance the utility of the information we intended to share. We have used GIS to create maps of service delivery sites in relation to where the clients of those sites lived. This led to the revelation that a whole neighborhood, where many of the families who utilized this particular service lived, did not have a service site. This subsequently led the organization to begin the process of developing a new service site in order to enhance the accessibility of their services to the households they served.

Would the information we presented in this particular scenario have been used the same way if we hadn’t presented it in map form? Maybe. But I would argue that it would likely have been a conclusion arrived at by us, as the evaluation team, whereas through the presentation of this particular map the clients themselves arrived at said conclusion. This is not insignificant since research on the utilization of evaluation findings has shown that when stakeholders feel ownership over elements of the evaluation, such as the findings, it is likely to enhance use.

So, what’s my point? Basically, I love maps, and you should too. Much of the information we collect has a spatial component. On the simplest level, the programs we work with operate within certain spaces: blocks, neighborhoods, towns, cities, counties, countries. Maps can help us understand what is going on in these places before we undertake an evaluation, they can help us fit our data into the context of where it came from, and they can help us visualize it in ways that may facilitate interpretation and use.
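If you want to experiment before committing to full GIS software, a little scripting can produce a basic map. Below is a minimal sketch of the kind of service-site and client map described above, assuming the free folium library for Python; the coordinates, file name, and column names are hypothetical placeholders rather than data from our project.

import csv

import folium

# Center the map on a hypothetical city center.
m = folium.Map(location=[42.29, -85.59], zoom_start=12)

# Hypothetical CSV with columns: label, latitude, longitude, type ("site" or "client").
with open("service_locations.csv", newline="") as f:
    for row in csv.DictReader(f):
        is_site = row["type"] == "site"
        folium.CircleMarker(
            location=[float(row["latitude"]), float(row["longitude"])],
            radius=8 if is_site else 3,           # larger markers for service sites
            color="blue" if is_site else "gray",  # clients in gray, sites in blue
            fill=True,
            popup=row["label"],
        ).add_to(m)

# Save an HTML map that can be opened in any browser or shared with clients.
m.save("service_map.html")

Opening the resulting file in a browser makes clusters of client markers with no nearby site marker easy to spot, which is exactly the kind of pattern behind the neighborhood finding in the example above.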

Before you go out and become the ultimate cartographer, let me offer one word of caution when it comes to the free online mapping tools out there. You MUST be sure to review privacy agreements. Some of these sites allow you to upload and map your own data, but the data becomes public.

I also want to point out that it can take time and dedication to figure out some of the more advanced GIS software out there. However, as Kristin mentioned in her post last week, you can likely find someone who can provide the technical expertise. One good place to start is the geography department at a local university.

Below are a few resources if you’re interested in reading more about maps:

COREY’S USEFUL TIP: To get started thinking about GIS, consider your own work. How does geography play a role in your community or the communities of the programs you evaluate? Do you think that a spatial component may exist in your work? If so, GIS may be a good option to explore. 

Week 26: Better Evaluation Reporting through Pinterest

I attended Kylie Hutchinson’s excellent half-day workshop on Better Evaluation Reporting at the American Evaluation Association conference last month. Her presentation gave me a lot to think about, especially about how reporting has changed over the past years and what clients really want to see.

Kylie shared her Pinterest board about evaluation reporting with us. Who knew Pinterest could be used for things other than recipes and crafts I’ll never do?

Her Pinterest board is called “Better Evaluation Reporting,” which you can search on Pinterest or go to the site: http://www.pinterest.com/evaluationmaven/better-evaluation-reporting/

Almost all of what she presented is also on the Pinterest board, so you could spend a few hours going through it and see much of what I saw at her presentation. She has lots of links to infographics, places to download free fonts, instructions for making free comics, sites for stock images, and much more.

One of the best clips she showed was called Life After Death by PowerPoint. If you haven’t seen it, definitely check it out! It’s a hilarious take on what to do and what not to do with PowerPoint.

https://www.youtube.com/watch?v=MjcO2ExtHso

One thing Kylie said that has stuck with me is that today evaluators are expected to be not just experts in social science research but also graphic designers. This is extremely difficult for most of us because we don’t have the training or time to learn another skill. She said it’s important not to get stuck on what you CAN’T do with graphics but instead to look at how you are graphically improving your reports. Learn how to do just one new thing for each report and apply that to future reports.

If you really need to knock someone’s socks off, a freelance graphic designer is always an option. She suggested, if you can’t find someone local, using the website Fiverr. Fiverr is a website where creative people offer their services for $5. You can find people willing to make illustrations, design logos, or create videos, all for $5. The hope is that you will love their work and continue to use them, at which point, I’m guessing, their fee increases.

No matter how snazzy and graphic-heavy your report is, if it’s not useful to the client, they still won’t use it. So remember, as you stress about colors and fonts, that in the end it’s most important to make something useful.


DR. EVERETT’S USEFUL TIP: Spend some time learning about free resources available to help improve your graphic design skills. And if you don’t have the time or inclination, try out Fiverr and let me know how it goes! 

Week 25: A Note on Positivity


As the polar vortex looms and the forecast calls for 1-2 inches of snow tomorrow, I am reminded of the absolute importance of positive thinking. It is necessary to make it through a Midwestern winter, and it is an essential component in evaluation work as well. Cheerfulness makes anything more palatable, even snow pants.

Often when working with clients I am reminded of how my attitude and frame of reference as an evaluator sets the tone for the work at hand. There are many opportunities to improve the quality and usefulness of evaluation findings with a positive approach, including: 

Reporting Findings with a Positive Delivery: For example, if three quarters of students are below an academic standard for a program you are evaluating, instead of stating, “75% of students are failing to meet the standard,” one could state, “25% of students are meeting the standard, which shows an X% increase from last year.” Focusing on who is PASSING instead of who is FAILING is a simple change that makes a huge difference in the tone of the findings.
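To make the framing point concrete, here is a tiny sketch of the same finding reported both ways; the counts and the prior-year rate are hypothetical stand-ins, not figures from any real program.

# Hypothetical counts, for illustration only.
students = 200
meeting_standard = 50     # 25% of students meet the standard this year
last_year_rate = 0.20     # hypothetical prior-year rate

rate = meeting_standard / students
change = (rate - last_year_rate) / last_year_rate * 100

print(f"Deficit framing:  {1 - rate:.0%} of students are failing to meet the standard.")
print(f"Positive framing: {rate:.0%} of students are meeting the standard, "
      f"a {change:.0f}% increase from last year.")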

Remain Upbeat and Positive Amid Project Tension: Oftentimes evaluators get in the weeds with their clients when it comes to program changes, stresses, and obligations. When clients and other stakeholders get overwhelmed and tense, it is an opportunity to redirect the group to productive brainstorming, documentation methods, or other activities that bring the project’s energy back to the original goal and intention.

Celebrate Successes with Your Clients: Clients can be so engrossed in the daily work of their programs that they forget to celebrate the gains, improvements, and successes. Be unbounded in communicating program strengths and growth. Encourage clients to share these successes with their stakeholders as well. Everyone likes to hear that they are doing something right. Successes can be anything from finally securing the data needed for analysis after four weeks of scrounging to finding statistically significant gains in a program.

KELLEY’S USEFUL TIP: Approaching evaluation with a positive frame of reference will make your clients feel more at ease and less “attacked” by the findings. At your next client meeting, try starting by highlighting something positive you have observed and see how it influences the tone of the discussion.

Week 24: Share results often and in different ways!


I absolutely hate it when evaluation findings are only shared in a formal way once a year in an annual report…and it’s a long document full of technical words and tables, with many appendices of data that no one reads, that typically fulfills a funder requirement. Where is the use in that? Well, I suppose you could say it is useful in satisfying the funder and helping to procure future funding, but that’s not the kind of use that I get passionate about in my work.

Data and evaluation findings should be shared when they are available. Evaluators shouldn’t hold on to data or evaluation findings until the end of a fiscal year. This is one area in which research and evaluation drastically differ. Researchers may not want to share findings with their subjects because it could invalidate the study by introducing bias and skewing the longitudinal analyses. Evaluators, on the other hand, are typically focused more on program improvement, and sharing evaluation findings more frequently can help with more responsive improvements and changes in program design and implementation.

Data and evaluation findings should be shared in different ways. Not everyone responds well to a lengthy, technical report. Okay, most people don’t respond well to that! It is the evaluator’s responsibility to work closely enough with the client to understand in what ways the evaluation findings should be shared to be most easily interpreted and used. Here are some examples of ways to share data and findings (in the future, we’ll talk more about how to do some of these and connect you with other blogs that do a great job of explaining them!):

  • Meeting summaries – instead of taking regular meeting notes, incorporate some on-the-spot analyses of what is going on to help the client, including points of tension, implicit decisions, emerging themes and patterns, and recommendations moving forward.
  • Dashboards – an easy, visual way to share some high-level data or findings with community groups, in quick presentations, and with boards.
  • Executive summaries – give a little more detail than dashboards, can still be visually appealing, and can include some recommendations for action.
  • Targeted table of contents – instead of starting the report with a typical table of contents, integrate some key findings and specific suggestions for use within your table of contents.
  • Bite-sized reports – we like to create our evaluation reports so they can be easily parsed apart. That way, the client can copy one page to share at a meeting, and it will tell its own story within that one page. When taken in context with other pages, it tells a richer story, but programs can interpret and use the data in more digestible sections.
  • User-friendly visuals – we always recreate graphics that are created by our statistical software packages, making them more visual and drawing the client’s attention to the important learning from each visual.

DR. TACKETT’S USEFUL TIP: Sharing data and evaluation findings with clients regularly not only helps them improve their programs and shows them the ongoing value of evaluation, it helps build stronger relationships between program staff and evaluators and between program design and evaluation use.

Week 23: Success for whom?

One thing I love about being a graduate student in evaluation is the continuous opportunities to be exposed to new ideas, new ways of thinking and interesting work being done by interesting people on a diverse range of topics. Now, this becomes slightly overwhelming at times, but it is always a good thing in my opinion. The Evaluation Center, which is where my doctoral program is housed, holds a weekly event called the Evaluation Café. It takes place during lunch on Wednesdays, and pizza is provided. Someone or a group of people present about something related to evaluation. This past week’s topic was particularly relevant to me, and I wanted to explore that a little here.

The speaker was Dr. Jerry Jones, the director of an interesting organization in Grand Rapids called the Community Research Institute (CRI). This organization does evaluation, collects and analyzes public data to make it more accessible, and builds platforms to explore these data. But Dr. Jones wasn’t there to talk about the cool stuff that CRI does. He spoke to us about how evaluation fits in with concepts of social justice, truth, and, most specifically, racial equity.

How might this tie into the concept of use? Well, one point I wrote down during this presentation was on the topic of success, more specifically, “success for whom?” To me, this concept ties into use. In fact, the word “success” could be replaced with “use” and the point would still be an important one to consider. That short question is essentially the basis of certain utility-focused approaches to evaluation. Staying with the word success, though, there are still important considerations related to how evaluations are used.

Evaluators believe, hope, and strive to make sure that their evaluations are used for program improvement. In a recent conversation with a friend, we talked about what the outcomes of an evaluation might be. Program improvement would certainly be one of them, we decided. Program improvement should then tie in to the success of a program. This is where the question of “…for whom?” comes into play.

As evaluators, we must be aware of the inequities that exist in the contexts in which we work. These issues are often wrapped up in the programs we work with, the communities they exist in, and the people they serve. Real disparities exist in this country, and they cannot be ignored because we (the collective) want our clients to be happy, or because we don’t want to push them to address deeper, more systemic issues that may exist in their program contexts.

Too often we (again, the collective) are so wrapped up in the intended outcomes of a program - its goals and objectives - that we forget about the idea brought to us long ago by Michael Scriven of needs-based evaluation. Shouldn’t evaluations be focused on studying whether or not programs are meeting the needs of people in order to improve the public good? Should evaluations be used to identify programs which address important social issues, to improve towns, cities, states, and countries? I believe, and some may disagree, that it is the duty of evaluators to ask hard, probing, and sometimes uncomfortable questions of our clients when it comes to the services they are providing, and how those services can be made even better by broadening their net and being intentionally more inclusive.

I became interested in evaluation because of the opportunity it provided to work for the “public good” by helping to provide information for use in improving the way problems are approached. I sometimes lose sight of this as I get wrapped up in the challenges of my own life. I find that these are the moments when I most question my decision to be an evaluator, when I forget what it is that brought me here. This message may not resonate with everyone. However, I think that it is important to think critically about how we as evaluators present our evaluations to our clients. What is our role in making sure that the information we present lends itself to use, not only instrumentally, but in a way that addresses issues of equity?

Here are a few key points that I can share:

  • Be aware of the context in which the program you are evaluating exists. This may mean exploring publicly available data to get a sense of what types of societal issues are relevant in a particular community. It may mean having conversations with key individuals who can shed light on the struggles a particular community is having. Programs operate within a system, and an evaluation will be made more useful if it can place its findings within that system.
  • Disaggregate your data. This seems straightforward, but it is important to show the data you collect by race, gender, and ethnicity (a minimal sketch of one way to do this follows this list). This can lead to important conclusions about how a program is operating, who it is serving, and what success it is having, for whom.
  • Encourage your clients to allow for dissemination of the evaluation findings. This opens up the opportunity for an evaluation to be used more broadly by a community, and prevents the issues of shelving, whereby an evaluation client receives a report and promptly finds a place for it on their bookshelf.
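To make the disaggregation point concrete, here is a minimal sketch of breaking an outcome out by demographic group, assuming the pandas library; the file name and column names (participant_outcomes.csv, met_outcome, race_ethnicity, gender) are hypothetical placeholders.

import pandas as pd

# Hypothetical export of program data; met_outcome is coded 0/1.
df = pd.read_csv("participant_outcomes.csv")

# Outcome rate for the program overall.
print("Overall rate:", round(df["met_outcome"].mean(), 2))

# The same outcome broken out by race/ethnicity and by gender.
for group_col in ["race_ethnicity", "gender"]:
    summary = (
        df.groupby(group_col)["met_outcome"]
        .agg(n="count", rate="mean")
        .round(2)
    )
    print(f"\nBy {group_col}:\n{summary}")

Even a simple breakdown like this can surface the “success for whom?” question before a deeper analysis begins.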

COREY’S USEFUL TIP: Consider not only the intended outcomes in your evaluation, but also how well the program you are working with addresses real needs. Scriven argues that every evaluation should begin with a needs assessment so that the evaluation may assess how well a program meets those needs. 

Week 22: Three Tips and Tricks for Conducting Site Visits

I presented at the American Evaluation Association annual conference in Denver last week to a standing-room only crowd. The topic was “Tips and Tricks for Conducting On-Site Observations.”

I geared the presentation for graduate students and new evaluators, but the topic hit a nerve with more than just the newest to our profession. One woman said she had been working in evaluation since 1999 and over half of the audience had already completed site visits.

Something about observations and site visits mystifies people. How do you do it? Here are three tips that generated the most conversation at my AEA presentation:

  1. Think about the image you want to portray and what is appropriate for the people you are observing. My experience with observations is in educational settings, mainly observing classrooms and professional development sessions. In those types of environments, the observers should wear business casual clothes. A business suit can be overkill for a kindergarten classroom. However, business casual may be too casual for the people you are observing. At my AEA session, one person said that within the cultures she works with, if you are observing and are not dressed in professional attire, they will not take you as seriously. Another person shared a story about worshipping in a mosque when a person collecting data came in without her head covered and people avoided her. I’m guessing she did not have very good response rates that day!
  2. Decide if you want to bring a computer, tablet or paper and pencil to take notes. I always stick to a paper and pencil when I conduct site visits. I find computers to be too intrusive, typing can be loud, and power sources are not always convenient. Again, know your audience. Maybe the people you are observing expect a computer. I used to be a classroom teacher and I put myself back in the teacher’s shoes when I conduct observations. If someone I didn’t know was sitting in the back of my classroom typing on a computer I would be even more nervous about the whole situation. A paper and pencil seem more kind. In addition to the paper and pencil, I bring copies of the protocols I am using and also take notes. Try to limit the amount of stuff you bring – don’t move in.
  3. Know your observer role. A lot of the conversation at AEA was around what is the appropriate role for an observer. I don’t think there is an easy answer. This is something to think about and discuss with your colleagues before you begin your observations. Gold’s Typology of Participant Observer Roles can help you identify your observer role.

Observer vs. Participant - Gold's (1958) Typology of the Participant Observer Roles

  • The complete participant - takes an insider role, is fully part of the setting and often observes covertly.
  • The participant as observer - the researcher gains access to a setting by virtue of having a natural and non-research reason for being part of the setting. As observers, they are part of the group being studied. This approach may be common in health care settings where members of the health care team are interested in observing operations in order to understand and improve care processes.
  • The observer as participant - in this role, the researcher or observer has only minimal involvement in the social setting being studied. There is some connection to the setting but the observer is not naturally and normally part of the social setting.
  • The complete observer - the researcher does not take part in the social setting at all. An example of complete observation might be watching children play from behind a two-way mirror.

Gold, R. (1958). Roles in sociological field observation. Social Forces, 36, 217-223.


DR. EVERETT’S USEFUL TIP: Know your audience when you conduct a site visit. Will they expect you to wear a suit and bring a computer? Figure out what your participant observer role will be for the project before you begin your site visits. Site visits can be difficult but some pre-planning can help them run more smoothly.


Week 21: Balancing Qualitative and Quantitative Data

At the risk of losing all our readers, let me drag you for a minute into my personal life. I have never been a “math” girl. I didn’t take AP math classes in high school, I took the minimum requirements in undergrad, and it wasn’t until I was consciously aware that I needed to pad my resume to be hirable that I enrolled in more challenging stats and econ courses in grad school. I never in a million years would have predicted that I—an English major—would know how to navigate statistical software, much less understand what a p value is. And here I am, contributing to an evaluation blog.

Why am I babbling about this? It seems like a good way to open up the conversation about quantitative data. In general I have found that our clients love numbers—not always because they understand what statistical significance or power means—but because numbers sound official. They look good to funders. And in some ways, “numbers” are what evaluators are often hired to provide—something that seems too complicated for programs to produce on their own. And yes, collecting, cleaning, organizing, and presenting data in meaningful ways takes skill. But it should not be the sole component of an evaluation plan.

Fast-forward a decade in my life past grad school, when I was a campaign manager for a statewide afterschool funding campaign. It was in this role that I really began to understand the importance of context—something that the numbers cannot provide alone. I remember a VERY uncomfortable conversation with policy makers about afterschool programming. I had beautiful charts-graphs-tables-you-name-it to show how many students were engaged in the programs, projections for the next five years, and how the programs impacted student outcomes. However, immediately following the presentation of the data came the questions…really hard questions…the sort of questions that make you sweat through a blazer. It became abundantly clear that the data did not tell the whole story. I didn’t understand the caveats and intricacies of who was and was not included in the data set and why—information that was only available through deep conversations with community members and program leaders. Interviews and focus groups were essential to getting a clear picture, and I was remiss not to include them in the plan from the beginning. Qualitative data is necessary to appropriately position the quantitative data within the context of the work.

This is one small example, but I could provide many more where quantitative data alone has not provided a clear picture. Qualitative data is a powerful tool to shape and provide insight into evaluation. As an English major turned evaluator, I find comfort in knowing the whole story.

KELLEY’S USEFUL TIP: Muscle weighs more than fat—I tell myself this often when I weigh myself each morning. Understanding that my body composition is more complicated than what the scale indicates is a good illustration for understanding numbers within context. Remember that there is always more to a story than what the numbers tell. As evaluators, it is our job to provide meaning to the quantitative data through qualitative inquiry.

Week 20: I was at the European Evaluation Society Biennial Conference!

I was extremely fortunate to attend and present at the European Evaluation Society Biennial Conference in Dublin, Ireland last week. Not only did I get to hear noted evaluators like Michael Scriven, Bradley Cousins, Jennifer Greene, and Michael Bamberger, I had the opportunity to co-present with an international group of new colleagues including Sara Vaca (Spain), Gillian McIlwain (Australia), Emmanuel Sangra (Switzerland), and Graham Smith (Australia).

Today I want to talk about something Michael Scriven said that resonated with me. In a follow-up session after his conference keynote, Michael said (and I wrote this down word for word), “P-values and statistical significance are bulls***. We need to focus more on unintended consequences of the work and the ethics behind the work.” He went on to say that if evaluators are only focusing on whether a program did what it was supposed to do and to what extent, then the evaluation field hasn’t advanced much in the last 50 years. He feels we have a moral responsibility to try to understand the unintended outcomes as well and how the program influenced those unintended outcomes.

His comments made sense to me on so many levels.

When we work with our clients, they rarely ever care about p-values or any statistical notation, so we rarely ever share them in reports or presentations. The statistical work is there, in the background, as it’s how we arrive at some of our conclusions and recommendations, but we present it in a way that is more meaningful to the client…using charts, pictures, vignettes, etc. that paint the story of what the data is saying.

Uncovering the unintended consequences of a program or intervention is what we call DSI – Data Scene Investigation (yes, totally stealing that from CSI). Personally, I love digging into the data to uncover what other stories can be told, outside of the stories we were initially looking for (which are critically important, too, of course). Sometimes patterns emerge that pique our interest, so we dig deeper. Other times we just throw out hypotheses and test them with the data to see if they can be supported or refuted. We triangulate our data sources (quantitative and qualitative) when we develop all of our conclusions and recommendations, but it’s those unintended ones that can be so revealing and compelling. Sharing those unintended consequences can sometimes be scary, but I loved that Michael said it was our moral responsibility. I agree! This speaks to my deontologist soul, as I truly believe that I need to do what’s right no matter what – it is my duty to do so. And, if working with a client uncovers some unintended information that may upset them, it is still my duty to share that information and help them use it in a practical way.

We also try to be proactive in evolving with our evaluation design, implementation, and data analyses. Sure, the work we do is based in theories, some that may have been developed 50 years ago, but we need to constantly push our work forward to make it more meaningful and useful for the clients. That may involve new ways of designing, analyzing, triangulating, interpreting, or reporting data and findings.

I always love Michael’s down-to-earth, no-holds-barred attitude when sharing his latest words of wisdom. I will share more insights from the conference, including some about my own presentation, in future posts.

DR. TACKETT’S USEFUL TIP: The usefulness of understanding the unintended consequences of an intervention often outweighs the value of uncovering progress towards outcomes, but working on both concurrently helps build trust in and creates value for the evaluation process.

Week 19: Involving Stakeholders in Valuing

Michael Scriven once said, “Bad is bad, good is good, and it is the job of evaluators to determine which is which.” Now, I’m not one to challenge Michael Scriven, and this wouldn’t necessarily be the place to do it, but the quote illustrates a way of thinking about value claims. It is the construction of these value claims that I want to focus on in this post.

The root word of evaluation is value. To me, the fundamental difference between evaluation and research is the value part of evaluation, in that we are obligated to make value statements about the programs we are tasked with evaluating. What I mean by “make value statements” is that part of our job is to eventually, after going through some systematic process, say whether something is good or bad. Involving key stakeholders in the development of these value statements can enhance the utility of evaluation for those stakeholders.

To illustrate this point, let me briefly describe an example. For a current project I am working on, our team recently spent almost a full day with a client reviewing initial findings. We presented our findings by the criteria we had used in the evaluation. After we presented the data for each of the criteria, we asked each of the project stakeholders in the room to indicate whether they felt the finding was positive or negative. They did this confidentially, so as not to influence one another. That evening we analyzed their responses and presented them back to the group in the morning. It sparked a rich discussion about whether or not the program was working. The purpose was to enhance the utility of the findings by comparing our interpretation of the findings with the stakeholders’ interpretations.
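For anyone curious what that overnight tallying might look like, here is a rough sketch (not the actual workflow from the project described above) of comparing stakeholders’ confidential positive/negative ratings with the evaluation team’s ratings; the criteria and ratings are invented for illustration.

from collections import Counter

# Hypothetical confidential ratings, keyed by evaluation criterion.
stakeholder_ratings = {
    "Attendance": ["positive", "positive", "negative", "positive"],
    "Engagement": ["negative", "negative", "positive", "negative"],
    "Partnerships": ["positive", "positive", "positive", "positive"],
}

# The evaluation team's own interpretation of the same findings (also hypothetical).
evaluator_rating = {
    "Attendance": "positive",
    "Engagement": "positive",
    "Partnerships": "positive",
}

for criterion, ratings in stakeholder_ratings.items():
    counts = Counter(ratings)
    majority = counts.most_common(1)[0][0]  # the stakeholders' majority view
    flag = "" if majority == evaluator_rating[criterion] else "  <-- discuss this discrepancy"
    print(f"{criterion}: stakeholders {dict(counts)}, evaluators {evaluator_rating[criterion]}{flag}")

The flagged rows are where the richest conversation tends to happen, because they show exactly where the evaluators’ judgments and the stakeholders’ diverge.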

I was also inspired to write this post by a recent conversation with a fellow student. What we discussed was the general lack of valuing in our profession. We got on the topic of how these value statements are constructed, more specifically, that there doesn’t seem to be much discussion in evaluation of how to systematically align the values of project stakeholders with the value statements that we as evaluators construct. If we involve project stakeholders in this process, it has the potential to enhance use since the conclusions are relevant and meaningful to those individuals who are most likely to use the evaluation.

Now, I can’t necessarily proclaim in this post the best way to go about facilitating the interpretation of findings in order to best promote use. However, I do believe that by engaging with stakeholders around the interpretation of findings and the construction of value statements about programs, we can enhance the utility of evaluations. The value claims become joint conclusions about a program, weighing the empirical evidence produced by the evaluation against the contextual information held by the program staff.

At this point, I need to address the major criticism of doing evaluation in this way. Obviously, external evaluators are hired to provide a level of objectivity not attainable through the sole use of internal evaluators. By engaging with stakeholders, we [evaluators] risk losing some of that objectivity and independence. Despite this, I believe that we should be able to balance this possible encroachment of bias through critical thinking and self-awareness.

I think there is merit to a process which engages program stakeholders in the interpretation of findings to reach a conclusion of good or bad. That sounds simplistic, but if you engage stakeholders in this process, it increases their level of investment in the evaluation and makes the findings more relevant. It isn’t just “some outsider” who doesn’t know the program or the context. Instead, it is a collaborative effort between the evaluator and program stakeholders in coming to meaningful conclusions about how well or poorly the program is working.

COREY’S USEFUL TIP: Spend time with your key stakeholders to facilitate the interpretation of the findings you present to them. Ask them what their value statements are regarding the program. Weigh these against your own and discuss discrepancies. In instances where there are differences, explain your thinking.

Week 18: Miss America: The Nation’s Largest Scholarship Program for Women

Let’s talk about pop culture this week. John Oliver, the host of the late-night HBO talk show “Last Week Tonight with John Oliver,” roasted the Miss America organization for its claim that it provides $45 million in scholarships annually to women. The 15-minute clip is a great springboard for talking about evaluation (warning: inappropriate language in the clip).

To start, he discussed the logic behind such a contest. If the long-term outcome is to provide scholarships to women, do the inputs and activities work toward that goal?

What would a logic model of this program look like? Here’s my quick take on it:

However, I’m an outsider to the program. I’m guessing the program stakeholders would create a different logic model, which reminds us that it’s important to consider all stakeholders when creating one.

John Oliver had a research question and collected data to answer it. He wanted to know whether the Miss America organization actually offers $45 million worth of scholarships to women each year. His staff combed through 990s and websites and called state-level Miss America organizations to collect data.

What he found was that, to reach the $45 million figure it claims, the Miss America organization counts all of the potential scholarships. For example, the Miss Alabama organization reported that it provided $2,592,000 in scholarships to one school, Troy University. The show contacted Troy University and learned the organization got to that number by taking the amount of a single scholarship ($54,000) and multiplying it by the number of competitors who could have used it (48). $54,000 x 48 = $2,592,000. In actuality, zero people used the scholarship, so zero dollars were awarded. This, of course, raises interesting questions related to reporting.
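To make the reporting math concrete, here is a quick sketch of that calculation. Only the Troy University figures above come from the segment; the variable names are mine, and the snippet is just an illustration of how counting potential dollars rather than awarded dollars inflates a total.

```python
# Sketch of the reporting logic described above. The figures are the Troy
# University numbers from the segment; the variable names are hypothetical.
single_scholarship = 54_000     # value of one scholarship offered
eligible_competitors = 48       # competitors who could have used it
actual_recipients = 0           # competitors who actually used it

claimed = single_scholarship * eligible_competitors   # 54,000 * 48 = 2,592,000
awarded = single_scholarship * actual_recipients      # 0

print(f"Claimed in scholarships: ${claimed:,}")
print(f"Actually awarded:        ${awarded:,}")
```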

John Oliver also discovered a surprising finding in his research. Although the Miss America pageant does not give out $45 million in scholarships each year, it is still the single largest scholarship program for women. He said, “even their (the Miss America organization’s) lowest number is more than other women-only scholarships that we could find. More than the Society of Women Engineers, more than the Patsy Mink Foundation and more than the Jeanette Rankin Women’s Scholarship Fund.” While he was speaking, the websites of the other three organizations popped up on the screen, and he told viewers they can donate to those scholarship funds.

So, let’s review. John Oliver questioned the logic model, asked a research question, and collected data. He analyzed and reported the data. Although most evaluators do this through a written report, John Oliver chose a different format: he presented his evaluation findings in a 15-minute segment on his talk show, and the findings have been shared over the internet with millions of people.

Now we get to evaluation use. How will the Miss America organization and the public use this evaluation data?

First off, the Miss America organization responded to the segment with a very general statement (which you can read here: http://www.nj.com/entertainment/tv/index.ssf/2014/09/miss_america_john_oliver_scholarships_1.html).

How will the organization use those evaluation findings? Will the organization change their tagline about providing scholarships? Will they change the dollar amount they say they provide?

The other stakeholders for these findings are the viewers. John Oliver gave three examples of women-only scholarship funds, displayed their web addresses, and told viewers they can donate to those funds. Did the results of the evaluation change someone’s behavior? Will viewers donate to the funds?

Although we may never know the impact of this segment on these organizations, it brings up interesting questions and shows us that evaluation is all around us.

*Thanks to Carolyn Jayne for the idea of looking at John Oliver’s clip through an evaluation lens.

DR. EVERETT’S USEFUL TIP: Evaluation is all around us; we just have to look. We may not have our own late-night cable show to share our evaluation results, but with the internet, we can easily share findings.

Note: Dr. Tackett is presenting on evaluation use at the European Evaluation Society conference this week. Stop in and say hi to her if you're there! You'll be hearing more about her experience in an upcoming post.

Week 17: How to write a good executive summary

There are frustrating aspects of any job: waitresses don’t get tipped, nurses get yelled at, teachers have large class sizes, and so on. For evaluators, having a report sit on the shelf unread is gut-wrenching. It is painful to think that all of the carefully crafted sentences and tables will go unread, but the deeper frustration is knowing that the evaluation findings are not going to impact programs.

Program directors and leaders are often short on time, and one way to give a report a fighting chance at having an impact is through an executive summary. An executive summary is a one-page document that summarizes the report; it is not a random collection of findings or an introductory piece. Instead, it should serve as a mini-report, organized to highlight the same areas as the full report.

 Below are a few of the fundamentals a good executive summary should contain:

  • An organized structure that parallels the main report, including:
    • Summary of the subject matter. What is the topic being addressed? What was the evaluation question?
    • Brief explanation of methodology. Emphasis on brief: in most cases, 1-2 sentences are enough.
    • Summary of the key findings and the supporting evidence. What are the key pieces of information that should be highlighted? What information tells the story best?
    • Summary of the recommendations and next steps with supporting evidence. Based on the findings, what are the next plausible steps and why?
  • Clear language that is appropriate for the audience.
  • Positive language and emphasis. Executive summaries are often used for boards and wider distribution. Frame the suggestions and findings with constructive language.
  • The information should be written in a new and abbreviated way—not simply cut and pasted from the original report.

KELLEY’S USEFUL TIP: Sometimes even an executive summary contains too many words for particular stakeholders. A dashboard or graphic representation of the key findings may make a more appealing summary document. By using basic shapes, colors, and text boxes, data can be presented in interesting and useful ways.

Week 16: Avoiding misuse: The evaluation client’s perspective

My post today wraps up my discussion of Cousins’s (2004) evaluation use-misuse framework. For a summary of the framework, see this blog post (the first one). Today I want to talk about how this framework relates to clients of evaluation and why it is important for them to understand it.

As a client of evaluation, you may find it easy to think of your evaluator as a sage wizard of evaluative techniques and research methodologies (I know, we give off this aura). But you don’t need to know an enormous amount about evaluation or research methods to critically review an evaluation plan or report, get a sense of how the evaluation was conducted, or clearly understand the findings presented.

This same argument is made by Cousins (2004) in his paper introducing the misuse framework. He offers that evaluation clients must review anything they are given, as well as demand a clear articulation of the methods and, perhaps more importantly, the limitations of those methods. It is reasonable to ask your evaluator to walk you through the methods they employed for your evaluation so you understand their strengths and weaknesses. Understanding both will help you see what kinds of conclusions you can draw from the evidence presented and help you avoid “mistaken use” (Cousins, 2004). Mistaken use could be an instance where you, as the user, unintentionally make claims about the program which are not justified.

Now, as a client of evaluation, you must also recognize your own responsibility for using the findings correctly. By understanding the evaluation methods and design, you will understand what you can and cannot say about your program. However, you must not shy away from parts of an evaluation which show room for improvement or even reflect negatively on your program. Evaluations are intended not only to prove, but to improve. Don’t “abuse” (Cousins, 2004) your evaluation by suppressing the findings or pressuring your evaluator to change their conclusions to make the program look better.

As with many of our blog posts, I will offer a few tips to help you, the evaluation client, avoid misusing the information and hopefully arrive at better and more appropriate use:

  • DO review the methods employed in the evaluation and ask your evaluator to clearly spell out their strengths and weaknesses.
  • DO NOT jump to causal conclusions (e.g., my program caused a five-point gain in reading scores). Rarely are evaluators able to employ evaluation designs which can support causal claims. More often, we are able to point to relationships between a program and an outcome. The word “cause” can get sticky.
  • DO NOT take your evaluator’s words out of context. Do not just pull the parts out of the evaluation which make your program look good. This can be misleading and untruthful.
  • DO ask questions. As you review your report, make notes about what isn’t clear or what stands out. Remember, you likely know your program better than anyone, and it is important to think logically about the findings. If something just doesn’t make sense, ask.

COREY’S USEFUL TIP: Spend time with your evaluator when the report is delivered to understand what the results and evidence presented to you mean. Ask about what the strengths and weaknesses are of the evaluation. Ask questions!