Color Scripting Update

Last September, I wrote a post about color scripting (you can read the full post here), a technique I learned about from the Disney-Pixar team at the D23 Expo. In a nutshell, color scripting is a type of storyboarding where you change the main colors of each panel to reflect the emotion the animated film is meant to convey at that point in the story. It helped the Pixar team understand what was going on in the film emotionally at a quick glance, and it also made it easier to create a musical score to enhance those emotions.

I tried out color scripting for a large event (over 1,000 people) in an auditorium, where I was observing at the back of the room. I was taking notes on the engagement and enthusiasm of the audience based on who was presenting. I created some metrics on the spot, including the number of people on their mobile devices, the number of people leaving the event, laughter, murmuring, applause, etc. I used color scripting to create a timeline of the event, highlighting who was presenting at different times and indicating whether engagement and enthusiasm were high, medium, or low. The client felt it was a useful overview of how the audience related to the event.

In March 2016, I shared an update on color scripting through the ATE Blog (you can read the full post here). I applied color scripting in a slightly different way to a STEM project, mapping how the teachers in a two-week professional development workshop felt at the end of each day based on one word they shared as they exited. Mapping participant feelings across the different cohorts and comparing what and how things were taught each day led to thoughtful conversations with the trainers about how they want the participants to feel and what they need to change to match reality with intention.

I have recently used color scripting in another context, asking students what they learned at the end of each nutrition lesson, then analyzing responses by lesson topic, grade level, and topic order (different sites delivered the lessons in different orders). These graphs led to thoughtful conversations with the nutrition facilitators about how they want the students to feel and what they need to change to match reality with intention, including directed conversations about the impact of specific lessons and tastings as well as the progression of lessons at each site/grade level. The color scripting graphs visually indicated the percentage of students expressing changes in knowledge (blues; the darker the blue, the more substantial the knowledge) or changes in behavior (greens; the darker the green, the more substantial the behavior).
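For readers who want to try something similar, here is a minimal sketch of the tallying behind such a graph. The response categories, color codes, and sample responses below are all hypothetical stand-ins, not the actual categories or data from the nutrition study:

```python
# Minimal sketch of the color scripting tally: map each student response to a
# category, count them, and compute the percentage per category for one lesson.
# Categories, hex colors, and sample responses are hypothetical.

from collections import Counter

# Ordered lighter to darker: darker shades mark more substantial change.
COLOR_MAP = {
    "recalled fact":     "#c6dbef",  # light blue: basic knowledge change
    "explained concept": "#2171b5",  # dark blue: substantial knowledge change
    "tried new food":    "#c7e9c0",  # light green: emerging behavior change
    "cooked at home":    "#238b45",  # dark green: substantial behavior change
}

def color_script(responses):
    """Return (category, percent, color) rows for one lesson's responses."""
    counts = Counter(responses)
    total = len(responses)
    return [
        (cat, round(100 * counts.get(cat, 0) / total, 1), color)
        for cat, color in COLOR_MAP.items()
    ]

rows = color_script(
    ["recalled fact", "tried new food", "cooked at home",
     "recalled fact", "explained concept", "tried new food"]
)
for cat, pct, color in rows:
    print(f"{cat:18s} {pct:5.1f}%  {color}")
```

Stacking those per-lesson rows side by side, colored by category, gives the color scripting timeline described above.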

The color scripting of student responses by food tasting clearly illustrates that students are processing information and learning more about broccoli/cauliflower, pineapple, and tomatoes/grapes (blue sections), while they are moving toward changed eating and cooking behaviors with chickpeas/hummus and smoothies (green sections).

The color scripting of student responses by grade level shows that a larger percentage of older students expressed changes in knowledge, while a consistent percentage of students across grades had positive tasting experiences. This graph also illustrates that students in K-2nd grades seem most likely to move toward changing behaviors.

The analyses by site also helped nutrition facilitators understand differences in lesson order, expertise of facilitators, etc.

DR. TACKETT’S USEFUL TIP: When you learn (or create!) a new technique, try applying it in different contexts to 1) practice using it, 2) identify where it’s most meaningful in analyzing data, and 3) determine various ways clients will be able to use it.

Exercising for Intellectual Health

These past eight months, I have been on a journey to become a healthier me…at all levels. I have focused on physical health (e.g., clean eating, more time working at my walking desk, regular barefoot running or kangooing, massages), mental health (e.g., reading books, watching feel-good shows like the Andy Griffith Show), social-emotional health (e.g., taking grandnieces to the zoo, cheering my husband and a friend on while they took glider rides in memory of my father, sharing our favorite Jefferson places with friends, dreaming about our next Disney trip in 2017), and intellectual health. Now, that’s the one that’s often overlooked when people embark on a path to better health…intellectual health.

I tried googling a definition of intellectual health, and this one makes sense. The University of Arkansas’s College of Medicine puts it this way: “The intellectual dimension of wellness encourages creative, stimulating mental activities. An intellectually well person uses the resources available to expand one's knowledge, improve one's skills, and create potential for sharing with others.” I think there are some more typical ways people expand their intellectual health – reading, watching the news, discussing current events or issues with others, playing mind-challenging games, etc. But I also think it’s important to exercise my evaluation skills – to improve my intellectual health related to work.

Here are several ways I’ve focused on my intellectual evaluation health these past eight months:

  • Took on evaluation projects that were not within my team’s typical scope of work. This forced us to reach outside of our comfort zone and learn quickly and deeply about a different field, while applying evaluation strategies and analyses we had experience in to new content areas.
  • Further developed an evaluation tool I designed (see Color Boarding post) and implemented it with different types of clients and situations to help understand its potential application and usefulness (particularly around program improvement).
  • Tried several new data analysis procedures that have led to exciting results and more energized and meaningful discussions with the clients.
  • Intentionally thought about how evaluation ideas or processes apply in settings where evaluation might not be the focus (see Thomas Jefferson – Founding Evaluator post). I’ve found that talking and sharing about evaluation in atypical settings (e.g., fundraisers, pontoon boat rides, museum visits) can help expand understanding of the evaluation field and sometimes results in connections you never expected.
  • Regularly read 5-10 posts each week from various blogs, including AEA 365, Evaluation is an Everyday Activity, BetterEvaluation, Stephanie Evergreen, and many in the feed through Eval Central.
  • Registered to check out an evaluation conference I’ve never been to before to hear different perspectives on evaluation and research. I’m looking forward to the Virginia Educational Research Association conference in September!
Everyone should enjoy some bouncy time in kangoo jumps!

DR. TACKETT’S USEFUL TIP: I challenge YOU to come up with your own intellectual exercise program and share what has worked for you with others!

Triangulate! Triangulate! Triangulate!

The most exciting moments in evaluation work, for me, are when clients internalize the data and have that A-HA moment when they either truly understand the data or are able to apply the data/recommendations to make some meaningful changes. Finishing a dashboard or report may give me a sense of accomplishment, but it’s not until the client makes use of those things that I genuinely feel like I’ve made a difference. It really energizes me to see the light bulbs go on, facilitate the conversations around reports/data/findings/visualizations, and assist in the planning for program improvement.

Recently, I had a different type of A-HA moment, which was completely unexpected and still gives me goosebumps to talk about! My team was charged with determining the ideal staffing ratio to provide mandated services within domestic violence shelters in a state.

This was a pretty high-stakes evaluation, as any staffing ratio recommendation could result in the loss or addition of jobs across the state. It was a very complicated project, including implementation and analyses of program observations, interviews, surveys, document reviews, financial data, and service delivery data. After months of deep digging into the data, we were able to determine an ideal staffing ratio from two completely different perspectives (and sets of data)…and they matched! We found a staffing ratio that could be validated two completely different ways. We were so jazzed about this and couldn’t wait to share it with the client! The client was extremely excited by the findings, as well, and felt confident in our processes.

I always encourage the triangulation of multiple data sources to support findings. This, of course, encourages the use of the data, since the findings and recommendations are more robust. This recent experience, though, with the ideal staffing ratio, has reminded me just how critical the triangulation of data can be. How many evaluation reports have you read that make vast generalizations, only to find out that their n was 3 people? We, as evaluators, need to make sure we maintain high quality in our evaluation plans, implementation, analyses, and reporting…not only to encourage use, but also to improve client confidence in the work.
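At its core, triangulation asks whether independently derived estimates converge. The sketch below is a toy illustration of that check; the ratios, data-source labels, and tolerance are all made up for the example and are not the actual figures from the staffing study:

```python
# Toy triangulation check: do two estimates of the same quantity, derived from
# independent data sources, agree within a chosen relative tolerance?
# All numbers here are hypothetical.

def ratios_agree(ratio_a, ratio_b, tolerance=0.05):
    """True if the two estimates differ by no more than `tolerance` (relative)."""
    return abs(ratio_a - ratio_b) / max(ratio_a, ratio_b) <= tolerance

# Hypothetical staff-per-client ratios from two independent analyses.
service_data_ratio = 1 / 8.0    # e.g., derived from service-delivery records
financial_data_ratio = 1 / 8.2  # e.g., implied by financial/budget data

print(ratios_agree(service_data_ratio, financial_data_ratio))
```

When the check passes, the finding is supported from two directions; when it fails, that disagreement is itself a finding worth digging into before reporting either number.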

DR. TACKETT’S USEFUL TIP: Incorporate the triangulation of data in your key findings and recommendations, building confidence and encouraging use.

Always Learning - Visualizations

It’s not true! You can teach an old dog new tricks!

For years, I have referred to the Periodic Table of Visualization Methods. As a total nerd, I love the periodic table format, and I’ve found it very useful to peruse various ways of visualizing the data as I’m working on reports. I’d get ideas, adapt them to my own style, and strive to make the data more accessible and clearer to my clients. I’m not saying the periodic table is bad – it’s still a great reference and it’s been a wonderful resource to me for years, but I’ve now discovered my next visualization go-to web site!

Last week, at the Michigan Evaluation Association’s 21st Annual Conference, I participated in high-quality sessions the entire day. One of the presentations, created by Lyssa Wilson and Emma Perks, focused on data visualization. I tend to like simple visualizations. It may be somewhat because I’m not a graphic designer and can’t do some of the more complex ones, but it may also be because I feel the simplest way to get a message across to clients is the best way to encourage their use of the data and recommendations.

One of the resources Lyssa shared was the Data Visualisation Catalogue. Oh where have you been all my life? Not only can you look at data viz options in a much cleaner, crisper format, you can also click into them to find out what type of data they’re best used for, how to create them, and what software could be used to create them more efficiently. The Data Visualisation Catalogue is something that is useful to me as an evaluator, and it will help me create more useful representations of data for my clients. I’m in love!

I’ve been in the evaluation field for almost 20 years, and sometimes people get stodgy when they’ve worked in the same field for that long. They believe the tried and true ways of their youth are the only ways to do something. I love to learn. I love to apply new learning. I love to use cutting edge technology. It’s great when we’re able to learn a new twist on something that’s been done for years and apply it in a meaningful way to our work.

DR. TACKETT’S USEFUL TIP: Force yourself to participate in a learning opportunity at least once a year…a webinar, a workshop, a conference. Of course the professional relationships and sharing are always valuable, but you may just learn something that could help improve your practice.

Special Note: Thank you to the MAE board and membership for honoring me with the John A. Seeley Friend of Evaluation Award this year. It is given annually to an individual or organization that has made a consistent and continuing leadership contribution to the field of evaluation by actively embracing and promoting its use. I’m particularly honored to have received the same award my mentor, Dr. James Sanders, was given in 2000, and to receive the award at the same time that my friend and colleague, Dr. Laurie Van Egeren, was given the MAE Service Award. Congratulations, Laurie!

To Bid or Not To Bid

Many evaluators, especially external evaluators, have to bid on evaluation requests for proposals (e.g., RFP, RFA) to find work. Others of you, particularly grant-funded organizations and programs, have to bid on grants to get funding to do your work. The advice below can apply to both groups, but it’s written from the evaluator perspective.

First of all, there’s a plethora of opportunities out there to find evaluation RFPs – with just a few listed below:

Of course my favorite way to secure new work is by word-of-mouth, but that’s a whole other topic!

How do you know if you should bid on an RFP or not? Sometimes it can be a very difficult decision, and the key questions below may help you think a bit more strategically through whether or not to bid…

  • Does it look like the bid has been released because the organization was forced to and may have an evaluator already in mind?

That happens often. Many organizations must go through a bid process for any projects costing more than $25,000, yet they have pre-established relationships with evaluators they have worked with in the past. I’ve been on both sides of that equation…I’ve bid on projects that took a lot of time only to find out there was already a planned evaluator, and I’ve been that planned evaluator. Over the years, I’ve learned some details to look for that can help you figure this out. Ideally, you could ask the organization directly, but many bids do not allow unsolicited communication prior to submitting the bid (if they do, ask!). Look for very detailed, and sometimes unusual, requirements in the bid such as residency in the state, XX years of experience, a specific educational background, or prior experience conducting particular types of evaluation. Those are typically signals that they have an evaluator/team in mind and are writing the bid so that evaluator/team will be the only one who meets the requirements.

  • Do you have enough time to commit to giving serious thought to designing the evaluation and writing the narrative & budget for the bid?

Creating a response to an RFP takes a significant amount of real work and time. Some evaluators use more of a cookie-cutter approach when submitting an evaluation bid, but we create each bid from scratch. We want to make sure we are seriously considering what the potential new client wants and carefully explaining how we would achieve that with our own special added value. There may be elements from other bids we’ll use, but they are always reviewed and rewritten to target the potential new client. That being said, writing bids from scratch can take a long time. I typically plan 2-3 hours just to think about the project, no writing involved. Then, for every page of narrative the RFP requires, I estimate 1-2 hours of developing/writing time. I estimate the budget to take another 1-2 hours. So, for a 10-page bid, you’re looking at 13-25 hours of time!
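Those rules of thumb (2-3 hours of thinking, 1-2 hours per narrative page, 1-2 hours for the budget) are easy to turn into a quick range estimator, sketched here for anyone who wants to sanity-check their own bid planning:

```python
# Quick bid-time estimator based on the rules of thumb above:
# thinking 2-3 hrs + 1-2 hrs per narrative page + 1-2 hrs for the budget.

def estimate_bid_hours(narrative_pages):
    """Return a (low, high) range of hours for a bid of the given page count."""
    low = 2 + narrative_pages * 1 + 1    # best case: fast thinking and writing
    high = 3 + narrative_pages * 2 + 2   # worst case: everything takes longer
    return low, high

print(estimate_bid_hours(10))  # → (13, 25), matching the 10-page example
```

Swap in your own per-page and thinking-time figures; the point is simply that page count drives the estimate, so a long RFP deserves a deliberate go/no-go decision before you start writing.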

  • Do you have the capacity to implement the evaluation if you are chosen?

This sounds simplistic. If you get funded, you’ll make it work, right? That’s a naïve approach. Thinking more strategically, I first consider whether my evaluation team has the knowledge, skills, and time to implement the evaluation. If there’s a knowledge or skills deficit, then I need to start thinking immediately about what training would help us be better prepared to implement the evaluation…which may involve investing some time/money on our own. If there’s a time deficit, then I need to start looking for potential new team members to bring on board and get up to speed with our evaluation work ethic. I never want to be put into the situation where I have to bring someone on board who is unprepared and then misrepresents my evaluation firm. I like to do some “speed dating” with potential new evaluators – have them work on minor elements of projects to learn more about their skills and deliverables – so I’ll be more prepared when I need to increase team membership.

  • Are you comfortable with sharing your personal ideas about evaluation approaches with an organization or individual who could potentially abuse that information?

I’m normally a Pollyanna type of person – I like to think positively about things and people for the most part. However, there have been several times when we’ve bid on a project and the organization took our bid, shared it with a close friend or relative, and hired that person to implement the evaluation. The fact that neither of those evaluations was implemented well, because the people hired didn’t have the necessary skills, does not make me feel better. I felt violated and taken advantage of. Unfortunately, many RFPs state that the bid becomes the property of the organization. This is a risk we take. I often add a statement at the end of the bid that says something like “This evaluation plan is being provided by iEval for the sole purposes of review through the RFP process and should not be shared for use by any other organization.” While I have no legal control over it, it makes me feel a little better. It’s something you need to seriously think about when you’re writing and submitting a bid. We also tend to leave out some of the very specific information about evaluation implementation or data analysis, which makes it more difficult for someone to copy our plan without complete knowledge. The hope – and the request we write in the bid – is that the reviewers will be so impressed with what they read that they will follow up with additional questions via phone or interview where we can go into more detail.

Now, there are always situations that come up where you just have a gut feeling that it was meant to be. A few years ago we bid on a project that was really outside of our content area of expertise, we were swamped with work at the time the RFP came out so time was at a premium, and receiving the project would put a big strain on our available staff time. However, something in my gut said it was meant to be. I talked a while with the team about it, and we decided to go for it. Amazingly, we got it. It resulted in our first international project, expanded our portfolio, helped us create new processes that we’ve continued to use in other evaluation work, and produced very satisfied clients!

This is definitely how I feel sometimes about evaluation bids! May the Force be with you!

DR. TACKETT’S USEFUL TIP: As external evaluators, bids are a part of life. While it’s important to be thoughtful when deciding which projects to bid on, sometimes you do have to just go for it! You never know where it will lead you…

On a personal note, I have to give a HAPPY BIRTHDAY shout-out to my husband, Paul, on his birthday today! Paul is the one who introduced me to Thomas Jefferson years ago (see prior posts on Jefferson as our founding evaluator, weeks 76 & 81), and – for those fellow math geeks out there – Paul’s birthday is actually on the average of Thomas Jefferson’s two birthdays. Yes, President TJ had two birth dates – google it!

Three Ways to Win an Evaluator's Heart

We’re back with a special Valentine’s Day edition of the Carpe Diem blog. Or, more appropriately, an EVAL-ENTINE’S Day edition!

Love is in the air, so today we’re talking about relationships. How would you describe your relationship with your favorite evaluator? Friendly? Collegial? Non-existent?

Does it even matter?

Evaluators are great resources. Not only can they help you think through data collection procedures and instruments, data analysis, reporting, and impact, but they’ve also worked on a lot of different projects over the years. That experience means they know what works and what doesn’t. And they can help you tease it out in your project.

So if your relationship with your evaluator is lacking and needs a boost, here are Three Ways to Win an Evaluator’s Heart:

  1. Be yourself. Just like any teen magazine will tell you, the best way to have a healthy relationship is to be yourself. Be clear about your organization’s goals and future plans, and share the challenges your organization faces. It’s important your evaluator knows who you are and what your organization is – the real story.
  2. Talk data to me. Evaluators love data. If you have data, share it. We’d rather have too much data than not enough. We can sift through it and make sense of what should be used to help uncover what is going on, what possible impact the program is having, and what recommendations can be derived from the data.
  3. Open and honest communication. If you’re happy and you know it, clap your hands. If you’re not, let us know that too. It’s important that evaluators hear and see what is really going on with your organization, how you feel the evaluation process is going, and what is working or not working for you. On the flip side, we need to be honest with you too, which may result in some findings that don’t make you happy…but they will help you improve your programs for the future.

DR. EVERETT’S USEFUL TIP: If you do these things and your relationship doesn’t improve, then it’s probably not the right evaluator for you. As they say, there are more fish in the sea…and evaluators will probably be counting them and putting them into categories. You have to keep looking for the right one for you.


Week 87: Thank you for reading!

We’ve thoroughly enjoyed working on this blog the past 20 months, but all good things must come to an end. We will no longer be offering posts on a weekly basis, but we’re not signing off for good. If something really meaningful happens or crosses our minds related to evaluation use, we will definitely share it with you! So, you may see more sporadic posts from our team, just not the weekly posts. Thank you so much for taking the time to read and share our posts, and please feel free to go back and read past ones – there are some really interesting gems in there!

We’d each like the opportunity to share a few thoughts about the blog with you…

Dr. Wendy Tackett

When we started this blog, we wanted to share tips and ideas for improving the usefulness of the evaluation process, the potential utility of the findings, and the use of the recommendations. At iEval, making evaluation useful is one of our primary tenets, and we hoped to share some of what we’ve learned over the years with you. While the impetus for starting the blog was completely altruistic, I have to admit that it was pretty amazing going to EES in Dublin in 2014 and AEA in Chicago in 2015 where some people said, “I love that Carpe Diem blog.” It felt really good to know that we were part of the larger evaluation blogosphere and that fellow evaluators read and appreciated our tips!

Kelley Pasatta

Outside of evaluation conferences, it is hard to remember that “evaluation” is a profession. I am so used to no one understanding what I do. This blog reminded me that there are others out there who grapple and deal with the same challenges and successes in their work. The blog has provided me with an opportunity to reflect on tools and practices that have been successful in my work, and it continues to remind me to reach out and lean on others in the field when I’m stuck. Thank you for being part of my professional community!

Dr. Kristin Everett

I started reading blogs almost a decade ago, but I never thought I would write one. Writing this blog has been a great adventure. It’s been a great way to dig deeper and reflect on evaluation practice. I’ve enjoyed reading about my colleagues’ experiences and hearing feedback from readers. I’ve enjoyed being part of the evaluation blogging community and look forward to being a reader (and poster) for years to come!

Corey Smith

As someone who is still in the midst of a doctoral program, it is easy to get wrapped up in the idea of sharing knowledge through the more traditional channels of academia. Our evaluation journals are no doubt a great source of information. But sometimes we don’t want to wait that long to get our thoughts and ideas out into the world, and blogs are a great way to accelerate the process of reflection and sharing. It has been a pleasure to share my experiences, my random thoughts, and the things that have caught my attention through blogging. As Wendy says in the introduction, this isn’t goodbye. I find real value in having the opportunity to put things out to an audience quickly and without the need for editors or submissions. I have learned a great deal from reading others’ blogs, and I hope we have provided you some morsels for thought through the Carpe Diem blog. The evaluation community is a diverse and fast-growing one, and it is an honor to be a part of it.

iEval TEAM’S USEFUL TIP: Keep reading blogs! This is where you can find up-to-date ideas and thoughts in the world of evaluation, and they can be read during brain breaks from work, while traveling, or over a cup of tea. If you have any questions or suggestions for future (more randomly timed) posts about evaluation use, don’t ever hesitate to reach out to us! Thank you for reading. CARPE DIEM!

Week 86: When NOT to Use Well Done Evaluation Results

Last week, I talked about instances where you may choose to use results from a poorly done evaluation, and there are some legitimate times to do that. On the flip side, there are also times when you will choose NOT to use the results from an evaluation done well.

What?!?! Are you crazy, Wendy?

Well, sometimes maybe a little, but I’m being completely honest here. We’ve previously referenced some of Brad Cousins’ work on use and misuse (clearer graphic here), and there are obviously times when not using well done evaluation results is a clear case of misuse. However, sometimes nonuse is what needs to happen at that point in time. For example…

  • Sharing the evaluation findings and recommendations now will skew future results. We serve as the evaluators on many Mathematics and Science Partnership projects. As part of those evaluations, we often observe teachers in the classroom and test teachers’ content and pedagogical content knowledge in the subject area. If we shared the baseline evaluation results with the teachers on their knowledge and instructional strategies, we would bias the impact of the actual project. Later, when conducting follow-up observations and testing, we wouldn’t be able to tell if the changes in knowledge and practice were because of the professional development design or because teachers knew which areas they were weak on and worked on them outside of the project.
  • Sharing the evaluation findings and recommendations now will stress the client unnecessarily.  That sounds like a weak reason, I agree, but I can think of one case where we waited a few weeks to share the evaluation results. The client was in the midst of applying for continued funding and was allowed to use any evaluation results from previous years but not the current year. The midyear evaluation report, which we had just completed, had very positive continued results in it…results that would really strengthen their argument for continued funding. Instead of giving the report to the client and frustrating them because they would not be able to use the results in their grant application, we waited until after the grant application was submitted and then shared the results. It worked out well – they were exhausted from the application process and were excited to hear some positive news.
  • Program leadership is in the middle of major staff changes. Changing leadership can be traumatic for any program. When a new leader comes on board, s/he needs to take the time to understand the program, figure out what has been working and not working, and make the project their own. This can be a timing issue, too, for sharing evaluation results. The evaluation may have been done with complete fidelity, providing accurate and reliable findings with strategic recommendations for improvement. However, the new leader may have already decided drastic changes are going to be made to the program, based on his/her prior experiences or future expectations. In this case, the evaluation findings may not be helpful. If they are positive, they may create more distress among staff who do not want the program to change. If they are negative, they are really moot since the program is changing anyway. The evaluation was conducted and does need to be shared at some point, but this situation warrants a conversation with the new leadership to determine the best timing.

DR. TACKETT’S USEFUL TIP: If you are not going to use the analyses, findings, and recommendations from an evaluation that was conducted with fidelity, reliability, and validity, make sure you have clearly defensible reasons for doing so.

Week 85: When to Use Poorly Done Evaluation Results

As a client, have you ever gotten an evaluation report that…after reading it…you seriously question the implementation of the evaluation? Maybe…

  • the sample size was too small to create generalizations, yet there they were in the report
  • the timeline for gathering and analyzing data was too soon after the program intervention, not allowing time for the actual impact of the program to be realized
  • the conclusions and recommendations made have nothing to do with the data analyzed and presented in the report
  • the wrong key stakeholders were interviewed and the findings presented as if they were representative of the entire project

You get the idea. I’m not talking about evaluation results that are poor (because, in any good evaluation, both the positive and negative findings will be reported and used to help improve programs and make key decisions). I’m talking about a poorly done evaluation. We’ve all seen them. As evaluators, iEval has been hired to work on numerous projects after another evaluation team has been fired, typically because of poorly conducted evaluations.

So, evaluators and clients alike, we’ve seen these types of evaluations and we typically disregard them – and appropriately so. Are there times when we should actually use these evaluation results? I can think of a couple. Can you?

  • You want to show your client the impact of inadequate data collection methods. Maybe the client you are working with is giving you partial data so you can’t fully analyze the data with any validity. Or possibly the client’s staff members don’t understand the value of the data they are entering into a system (e.g., student attendance, participation in training). By sharing the analyses of the poorly collected data and having a discussion around those results with the client (which will probably result in a lot of frustration and claims that the evaluation is wrong), you can help the client realize for themselves the importance of accurate data collection and what it means for the true evaluation of program impact.
  • You want to demonstrate a starting point from which the program can build. Maybe there were no opportunities for additional data to be collected, a more appropriate timeframe for analyses, or comprehensive interviews with all key stakeholders. Maybe you were hired at the last possible moment to conduct the evaluation. What you had was all you could possibly have by the time the evaluation report was due. You may go ahead and put together what is, ultimately, a poorly implemented evaluation and report. In this instance, you need to stress the limitations of the evaluation, emphasize that this is merely a starting point, and focus the evaluation findings and recommendations on the next steps necessary for a valid, reliable, and comprehensive evaluation during the next fiscal period.

DR. TACKETT’S USEFUL TIP: If you are going to use the analyses, findings, and recommendations from a poorly done evaluation, make sure you are crystal clear in explaining the prescribed use of that evaluation report.

Week 84: Learning and Improvement vs. Accountability

Happy New Year to everyone reading this!! For my first blog post of this year I want to focus on the purposes for evaluation that the International Labour Organization uses in its policy material. These are common to many international development agencies and multilateral organizations that have some kind of evaluation function built into their structure. The United Nations Evaluation Group (UNEG) states in its mission that it aims to “advocate for the importance of evaluation for learning, decision-making, and accountability” (UNEG, 2014, p. 4). This implies that all evaluation offices within the UN system have the mandate to pursue evaluation work with these purposes in mind.

These purposes are good ones and mirror the basic evaluation purposes articulated by Scriven all those years ago when he developed the concepts of formative (for program improvement) and summative (for making go/no-go decisions about programs). They also take into account Weiss’s (1998) ideas of enlightenment use of evaluation, whereby evaluations feed into a larger knowledge base for decision makers who might reflect on that body of knowledge in future decision making contexts, but don’t necessarily use the evaluation results immediately.

But what I want to discuss here is the perception that often permeates institutions: that evaluations for learning and improvement are mutually exclusive from evaluations for accountability. Evaluation for learning and improvement conjures up the image of evaluation we like to present to anyone with evaluation anxiety. It looks like evaluators working with program managers and other stakeholders to learn more about what they are doing, helping them improve their work and ultimately serve their stakeholders better. We certainly value this concept at iEval, as we strive to make evaluation as useful as possible to our clients, which often means helping them use evaluation to improve their work.

But evaluations are also often conducted for some accountability purpose, and accountability is also important. Ensuring that public funding is being used as it was intended, while also achieving results, is part of being a responsible steward of these dollars. These seemingly divergent purposes for evaluation don’t have to be mutually exclusive. However, to make this point more salient, we may need to re-conceptualize traditional definitions of accountability.

Accountability has strong connotations that make people think of auditing, oversight, and some large entity casting a watchful but distant eye over their work. But accountability isn’t necessarily a bad thing; don’t we all want to fulfill our obligations? What might begin to bring these different evaluation purposes closer together is a culture of accountability which:

  • Fosters a commitment to learning from mistakes instead of generating fear of admitting them.
  • Promotes the identification of issues through data-driven monitoring systems and rewards corrective action, instead of reprimanding organizations for those very same issues.
  • Encourages honest, transparent explanation of these issues, providing guidance for future program managers on how they might avoid similar mistakes.

One of the most important threats that the current perception of accountability poses is that it incentivizes being less truthful about program realities. People running programs may fear that if they don’t get everything right, their jobs may be at stake or a cause they believe in deeply might be jeopardized. This may make them less forthcoming about the outputs or results they are achieving. Perhaps if the definition of accountability were aligned more closely with learning and improvement, we could foster an environment where programs feel more comfortable identifying issues that arise and taking subsequent corrective action.

COREY’S USEFUL TIP: As evaluators, we often occupy a unique position between program staff and funders. If accountability is a stated purpose of an evaluation you are a part of, ask what that looks like to the client. What kinds of accountability-related questions do they want to answer? Are these types of questions going to incentivize program staff to be truthful about what has taken place during the program’s lifecycle?

Week 83: Observation Standards and Theory

When you conduct program site visits or observations, do you always see the whole picture?

Here is an interesting exercise about paying attention, and, more importantly, looking for what you THINK should be there. Click on the link and try it out.

How did you do? Were you able to observe what was there, as well as what was not there?

How would you use this in thinking about evaluation and site visits?

It is apparent that not all evaluators see the whole picture when they conduct site visits and observations. The December 2015 issue of the American Journal of Evaluation has an article by Michael Quinn Patton called “Evaluation in the Field: The Need for Site Visit Standards.” It is an interesting read, especially for anyone who conducts site visits or is “site visited.” He has many anecdotes of site visits gone wrong. Although they are all shocking in their unprofessionalism and lack of standards, one of the most jaw-dropping is a story relayed to Patton by an evaluation colleague about watching a graduate student conduct an observation:

“I remember the student sitting in one of the classrooms for quite a long time, more than a couple of hours. I circled back to this room and asked her how it was going. I was especially interested in what she was learning about the culture of the school, which was the focus of her observations. I asked, so what are you learning? She yawned, looked bored, and said, “Not much,” and added that she would probably leave soon. I noticed that she only had a half-page of notes. The next time I looked for her, she was gone. Imagine my surprise when I found out that she had written up her observations as a chapter in a book on school innovation. The methods description in that chapter did not correspond to the degree and nature of the field work I knew had actually occurred.” (p. 452)

How do we remedy this? Some of the standards Patton recommends to keep site visitors from seeing only part of the picture include (p. 458):

  • Preparation – Site visitors should know about the site they are visiting.
  • Credible fieldwork – The activities and events of the site visit should not be determined solely by the people at the site. The evaluator should be allowed to decide which activities are observed, arrange interviews, etc.
  • Neutrality – The evaluator should not have a preformed opinion about the program or its activities.
  • Site review – The site should be able to see reports prepared based on the site visit and be given an opportunity to correct errors.

DR. EVERETT’S USEFUL TIP: Using Patton’s proposed site visit standards of preparation, credible fieldwork, neutrality, and site review can help ensure that more than just one side of the story is told from a site visit.

Week 82: School of Choice

A while ago I mentioned a project I’ve been working on in my local school district around “white flight,” schools of choice, and the impact it is having on the district and community. I want to take an opportunity to talk a little bit more about that project, what it has entailed, the roadblocks, and the opportunities it creates.

My daughters attend the local public school in a traditional elementary building. As a concerned parent, I do what I can to help the teachers and building. In this case, watching the district transform over the three years we have been there, and continuing to lose many friends to charter and neighboring districts, I decided to use my role as an evaluator to survey parents about why they are leaving.

This project entailed sending emails to all the parents who provided email addresses on their school of choice applications, and also sending the survey out “word of mouth style” via email and social media to anyone who lives within the district boundary but chooses another school (private, public, charter, or homeschool). One of the biggest hurdles I face as an evaluator is collecting parent data. Parents are hard to track down, and this project was no different.

How do I reach parents at charter schools who aren’t required to fill out an application with the local district? How do I ensure that more than just “my friends” are filling out the survey? How do I get schools to cooperate without feeling like we are poaching parents?

I called in the guard. I enlisted friends, school personnel, board members, and Facebook groups to spread the word. At the end of the day, 145 parents of the 600+ families responded. I know there are still large pockets of the parent population missing from the dataset, but a grassroots effort yielded results I wouldn’t have been able to get on my own.

The survey focused on asking parents about why they left—what are the primary reasons you go to the school you do? Would you recommend the public districts to a neighbor or friend? Where do your neighbors go to school? Are you aware of the special programs at the public district?

As I export and clean the data and start to prepare the results for analysis, I realize what an important tool this will be for the school board in future planning. Next month, I will be presenting the findings to the Enrollment Committee and they will be able to use these data to look at changes that are necessary to keep students from “choicing” out.
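For readers curious about the mechanics, the clean-and-tabulate step can be sketched in a few lines of Python. Everything below is invented for illustration (the response records, the reason categories, the family count); it is not the actual survey data, just a minimal sketch of how a response rate and a frequency table of primary reasons might be produced.

```python
from collections import Counter

# Hypothetical cleaned survey records: (primary_reason, would_recommend).
# All values here are invented for illustration, not actual survey results.
responses = [
    ("academics", "yes"), ("class size", "no"), ("location", "yes"),
    ("academics", "no"), ("special programs", "yes"), ("academics", "yes"),
]

TOTAL_FAMILIES = 600          # approximate families in the district boundary
n_responses = len(responses)  # stand-in for the 145 actual respondents

# Response rate, plus a frequency table of primary reasons for leaving.
response_rate = n_responses / TOTAL_FAMILIES
reason_counts = Counter(reason for reason, _ in responses)

print(f"Response rate: {response_rate:.1%}")
for reason, count in reason_counts.most_common():
    print(f"{reason}: {count} ({count / n_responses:.0%})")
```

Even a simple table like this gives a board committee something concrete to react to, which is often more productive than debating anecdotes.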

I wanted to share this example for a couple of reasons:

  1. There is power in knowing evaluation. Pro-bono projects can make lasting contributions to your community.
  2. When there is a problem that seems overwhelming, such as declining district enrollment, tackle it with data first. Find out what the problem is and why before crafting solutions.
  3. Community-based projects provide opportunities for collaboration with new stakeholders, e.g., parents, district officials, board members. It is always a good idea to make connections with people, especially over areas of shared passion.

As we finish 2015 and forge ahead into 2016, I encourage everyone to take a larger view of their work and gifts and see how they might tackle a new project or problem.

KELLEY’S USEFUL TIP: When doing community-based work, one of the best skills to have is the ability to listen. What are the key problems? Themes? Possible solutions? What are the variations between the solutions or problems of the various stakeholders? How might data serve as a bridge between groups?

Week 81: Founding Evaluator Thomas Jefferson, Part II

A month ago, I talked about Thomas Jefferson as one of our founding evaluators because he recorded data, monitored changes, analyzed data, made decisions and improvements based on the data, engaged others in his thought processes, and saw data analyses as something extremely valuable to his way of life. I recently went to an amazing Thomas Jefferson exhibit at the American Philosophical Society in Philadelphia called Jefferson, Science, and Exploration. The exhibit description on the APS web site states:

Big data counted for Jefferson—he rebutted European stereotypes of American nature as degenerate and weak by gathering scientific data on the size and variety of American plants, animals, and humans—even calculating the number of geniuses America had produced per capita. Jefferson also promoted collecting weather data to counter the belief that America’s “swampy” climate contributed to degenerate flora and fauna. 

At the exhibit, I saw several examples of how Jefferson used data to make decisions and improve his (and others’) work. Jefferson had APS form a committee to gather and analyze data on the Hessian fly, which was causing problems for wheat farmers across America. In a letter from Jefferson to Jonathan Havens and Sylvester Dering on December 22, 1791, he thanks these gentlemen for discovering that the Hessian fly has two generations of offspring per year – this information will help them identify a way to fight against the infestation.

Another example can be seen in the letter below, from Jefferson on January 17, 1800 to the United States House and Senate. While the first American census gathered only the number of individuals, free and enslaved, in each household in 1790, Jefferson proposed that additional data should be collected in the 1800 census. He wanted to help determine the effect of the soil and climate on our country and citizens. Some of his recommendations also included recording the number of native citizens and foreign-born and the professions of males. These recommendations were not adopted, but it’s more evidence of Jefferson’s forward thinking related to data and evaluation and how he clearly had plans for the meaningful use of the additional data components.

Thank you to the American Philosophical Society for permission to take pictures of these original letters from Thomas Jefferson and share them with you through our blog. And, if any of you are in the Philadelphia area, I’d highly recommend stopping by APS to see the Jefferson exhibit, which ends December 30, 2015. The third part of this amazing exhibit will open in April 2016.

DR. TACKETT’S USEFUL TIP: I always advise clients not to collect data just for the sake of having more data (e.g., asking extra questions on a survey, interviewing parents just because they have access) – they need to have a clear plan for using the data if they are to collect it. Thomas Jefferson’s systematic plans for data collection and analyses can serve as models for us in how to clearly explain the intended use of new data collection.

Week 80: What would be useful to YOU?

We have given you 80 weeks of useful ideas, processes, and tips that you can (hopefully) incorporate into your evaluation practices, use to understand evaluation better, apply to pick a more qualified and appropriate evaluator, and improve the utility of the work you do! We have gotten some feedback from you identifying which blog posts were particularly applicable, asking further questions about specific ideas, and letting us know when we hit your funny bone. As all good evaluators should, we would like to ask our key stakeholders for more input into what we’re doing. Please look at the following questions and either write a comment on this blog post or email me (wendy@ieval.net) with your suggestions:

  1. Is there a specific idea you learned about in one of our previous posts that you’d like us to expand on in a future post?
  2. What stumbling blocks have you hit in your evaluation work that you’d like us to think about? We’ll share our thoughts on how to overcome them and turn them into something more useful to your evaluation design.
  3. What question(s) about designing an evaluation with use in mind, engaging stakeholders to improve use, writing reports that are user-friendly, sharing evaluation findings in an easily digestible manner that will result in the practical application of those findings, etc. do you have that we might be able to shed some light on?
  4. Look at our web site (www.ieval.net); is there anything there you’d like us to expand upon?
  5. Have you read a specific article, blog post (not ours – someone else’s), evaluation report, etc. that generated some questions about utility that you’d like us to comment on?

DR. TACKETT’S USEFUL TIP: Practice what you preach. We encourage other evaluators to rely on stakeholders to know if the evaluation is useful, we try to engage stakeholders in different ways to encourage understanding and application of the evaluation process, and we need to apply that same idea to our blog! We NEED your input!

Week 79: One Word to Describe Evaluation

Someone on the Graduate Student and New Evaluator TIG Facebook page posed the following question, “What ONE WORD describes evaluation for you?”

How would you answer that question?

The answer to this question depends on your point of view and training. Asking this question of a group of nonprofit executive directors, funders, staff at an understaffed organization, new evaluators, seasoned evaluators, or the general public would elicit entirely different responses. Not surprisingly, most of the responses from the TIG members were positive. The responses included:

  • IMPROVEMENT
  • QUALITY
  • USE
  • PROGRESS
  • ACCOUNTABILITY
  • COMPLEX
  • VALUABLE
  • MESSY
  • GROW
  • EXCEL
  • OPPORTUNITY

So how would you answer this question? Put your response in the comment section. We would especially love to hear from readers who aren’t evaluators.

DR. EVERETT’S USEFUL TIP: Knowing how you define evaluation will help you explain to others about the usefulness and necessity of the evaluation process…kind of like an “elevator speech” about evaluation.

Week 78: Thanksgiving

I would be remiss not to build on the spirit of Thanksgiving for today’s post. This is a week to focus on what is good rather than the imperfections, and in that vein, I want to share my top five sentiments of thanksgiving about being an evaluator.

#1: Talking to everyone: I LOVE being able to meet with a range of stakeholders in projects – on-the-ground worker bees all the way to executive directors and policy makers. Good evaluation work involves gathering data from a variety of voices in a project/program/organization – and it is that piece of the work that keeps my people-loving, social self ticking.

#2: Seeing people doing good work: Nothing helps you see the wonderful things our great country offers more than visiting schools, non-profits, and foundations. It gives me hope about humanity, the goodness of people, and pride in where I live.

#3: Helping clients see things in new ways: One of the best parts of being an evaluator is helping clients understand data and information in new ways that ultimately impact their work and practice. It is powerful to teach and learn with clients to make programs and organizations better.

#4: Learning new skills: I am thankful to work with colleagues and clients who continue to push me as an individual and professional to learn new skills and processes. Whether it be learning a new way to do an analysis with the help of a colleague or digging deeper into a particular type of evaluation work at the request of a client – it is amazing to be in a career that allows and encourages me to be a continuous learner.

#5: No two days are ever the same: I am not a person who could sit in a cubicle doing rote tasks, and being an evaluator allows me flexibility – no day looks like another. I am always balancing tasks: conducting interviews and focus groups, working face-to-face with clients in meetings, and doing analyses at my computer, among a variety of others. The variety keeps all the tasks interesting, fun, and flexible. I love it!

KELLEY’S USEFUL TIP: Remember to reflect on the positive aspects of your job. No job is perfect, so make the most out of what you do and find ways to make it worthwhile for your personal mission statement. 

Week 77: Experiences at AEA 2015 Chicago!

(we had some technical glitches with this post earlier in the week, so we're trying to post it again...hopefully the gremlins are worked out this time...thank you for your patience!)

It was a bit intimidating to have both of my presentations scheduled back-to-back on the first night of AEA 2015, but – once that was over – it was refreshing to be done with the presentations and on to enjoying, networking, and learning!

Kristin and I shared at the Crime and Justice TIG meeting about our experiences serving as the research partner on a Byrne Criminal Justice Innovation grant – our research process, how we planned for future evaluation efforts while conducting the research, working with residents, etc. It has been a very different project from anything we’ve done before, and we learned not just to listen to the residents but to empower them to drive the work. Several useful tips from Kristin, Corey, and me based on that project included:

  • Do not overwhelm residents with too much data – when you do share data, present it in easily interpretable formats within context and truly listen and respond to resident interpretation of those data.
  • Make sure the data drives the work – continually refocus interpretation and strategic planning on decisions derived from the data.
  • Conduct formative evaluation – even when implementing research, it’s important to review what’s going on behind the scenes to improve the research process as you go along.

Kelley and I presented about Camp iEval – a low stakes, casual setting where we share data, highlight best practices, discuss struggles and barriers, and plan for future evaluation with a variety of clients. We would love to hear from anyone who is able to take the plans and ideas we shared and implement their own evaluation camp! We had a few requests to make Camp iEval a full day pre-conference PD session at a future AEA…we’ll give some thought to that. In the meantime, here are a few useful tips from Kelley, Corey, and me regarding implementing your own evaluation camp:

  • Know and respect your audience – you need to understand their comfort zone in sharing, respect their boundaries, and engage them at the level they individually want to be engaged.
  • Be prepared – even though the atmosphere of camp is casual, you need to plan professionally by ensuring all technology works, you have adequate materials, the facilities are appropriate, and the content is timely and meaningful.
  • Keep the energy high – plan for brain breaks during the day, incorporate project-based learning, rotate presentation responsibilities to include participants, and throw in a few downright silly moments to break up the day.

At the end of our presentation on Camp iEval, we shared a video adaptation we made several years ago of Roger Miranda's book, Eva the Evaluator. It's a fantastic children's book that explains what an evaluator is in a very creative, yet accurate, way (personally, I prefer the SUPERHERO description of evaluation work!). Because so many of you have asked to be able to use our video over the years with your own clients and in presentations, and with special permission from Roger, we are able to share the video for your use. You may click HERE to show the video (or find it under the Information tab on our web site). We hope it is useful for you in teaching about evaluation, demystifying what evaluators do, making evaluation a little less scary, and infusing fun in your work! 

We attended some good sessions on data placemats (might have to use that idea!), social network analysis, visual reporting, and others. A particularly fun moment came when Deborah Rugg of the United Nations Evaluation Group said during Thursday’s plenary something to the effect of “Carpe diem! Seize evaluation, particularly during this EvalYear!” Of course, that brought a smile!

TEAM iEVAL’S USEFUL TIP: Be open to learning new strategies to integrate into your work. Ask questions of the presenters during or after their presentations; they’re all willing to share – that’s why they’re there presenting!

Week 76: Thomas Jefferson – Founding Evaluator?

As someone who is very interested in Thomas Jefferson (thanks to my husband, Paul, for introducing his love of the colonial period and Jefferson to me 23 years ago), I am very aware of his contributions to our lives today. There are the pivotal historical milestones that we are all aware of: primary author of the Declaration of Independence, minister to France, 3rd President of the United States, and founder of the University of Virginia. But what about the more unusual innovations that we can credit Jefferson with adapting and/or bringing to the United States? There’s vanilla bean ice cream, the Jefferson cipher wheel, European grapes spliced with his own grapes to create a sturdier stock, a seven-day dual-facing clock (made for his home in Philadelphia but then transferred to Monticello, where they cut a hole in the floor to make it fit!), an entire octagonal house (Poplar Forest) designed to allow better air flow and more light, a letter duplicating device (polygraph), the revolving bookstand, a partially automatic door, etc.

Thomas Jefferson may not have invented all of those things, but he listened, observed, took notes, and enhanced recreations of those items to improve his own life. He kept a weather journal, writing down the daily conditions in a systematic way. He kept a transactional journal, noting what purchase was made, from whom, for how much, when delivered, etc. He kept copies of all letters he wrote and received. He monitored how his gardens took shape, including how seeds were planted, growth rates, rain and sun, and cultivation techniques. Thomas Jefferson monitored everything, and he made decisions for improving his systems based on the data he collected. Doesn’t that sound like an evaluator?

There are differing perspectives on what core competencies or skills are required of a high quality evaluator. Several skills that appear to transcend these arguments and span thought leaders and publications include conceptual frameworks, data management, quantitative and qualitative data analysis, planning with data, and teaching with data. Jefferson clearly used conceptual frameworks as he planned improvements and recreations of what he’d seen overseas (e.g., swiveling chair, automatic door); employed various data management systems (albeit all in handwritten journals); analyzed his data to make decisions about planting, building, and politics; and engaged others in his thought processes so they could learn from what he was doing (e.g., John Hemings, Ellen Wayles Randolph Coolidge). Maybe we should be referring to Thomas Jefferson as a Founding Evaluator as well as a Founding Father!

Now, why am I talking about this in a blog dedicated to evaluation use? The blog is called Carpe Diem – seize the day! I was recently at a Tom Talk at Monticello where the presenter, Diane Ehrenpreis, was talking about Jefferson’s attention to detail as he observed and took notes about mechanical inventions abroad then appropriated them for his own comfort back home. It hit me, as she was talking, that the tools and processes Jefferson used were very strategic and resulted in his use of data for improvements and decision-making. It's just another example of evaluation in the wild (see post 68 from the D23 Expo)! I decided to seize the day, share a little bit of history about Thomas Jefferson, and highlight several of his evaluative tendencies that resulted in the meaningful use of data! While I’ve never studied Thomas Jefferson in any evaluation course or seminar, I can definitely learn from his thorough attention to collecting data, systematic approach to analyzing it, and innovative applications of the data for improving his own situation at home.

DR. TACKETT’S USEFUL TIP: Keep your eyes and ears open for opportunities to connect evaluation theories to new (or old) practical applications!

Week 75: Collaborative Evaluations (not Collaborative Evaluation)

Recently I participated in a half-day training on collaborative evaluations given by Brad Cousins. In a previous post, I wrote about Dr. Cousins, who published a paper about the use and misuse of evaluation. He may be best known in evaluation for his development of what is called Participatory Evaluation. This workshop, however, was intended to introduce us to some work he and his colleagues in Ottawa are doing: an effort to develop a set of principles for collaborative evaluations.

Now, there is an important distinction I want to make here. The work that Dr. Cousins presented was related to collaborative evaluation approaches, not Collaborative Evaluation. The latter is a specific evaluation approach (see Rodriguez-Campos, 2012) based upon ongoing engagement with stakeholders by the evaluator(s). The former is what Cousins and his colleagues describe as a family of approaches that have collaborative inquiry at their heart (Cousins, Whitmore, & Shulha, 2013). This includes approaches premised upon the development of a collaborative relationship between evaluator and stakeholders. They differ in terms of who is engaged, how deep that engagement goes, and the level of decision making that stakeholders receive (see Figure 1). They also have different purposes (e.g., empowerment evaluation is intended to actually empower stakeholders through the evaluation process).

Figure 1. Dimensions of form in collaborative inquiry.

Cousins, Whitmore & Shulha, American Journal of Evaluation 2013;34:7-22

So, they have this basic concept of collaborative inquiry as it relates to evaluation, and I find the argument for talking about these different approaches as part of a family makes good sense. Not only do they make the case for this family to exist, but they have developed a set of principles to guide evaluators who are implementing an approach that is a member of this collaborative family. I am not going to share those here because a paper written by Dr. Cousins and his colleagues will be published in the American Journal of Evaluation soon and I don’t want to let the cat out of the bag.

But these types of evaluation are very use-friendly. Their emphasis on engaging stakeholders in various evaluation processes means stakeholders end up feeling like they own the results, making it more likely that they use the findings to make program improvements or programmatic decisions. These types of evaluation also build evaluative capacity within the groups deemed “collaborators.” They offer quite a bit, and they are worth looking into if you haven’t already.

I find this type of research on evaluation fascinating, and I wanted to share a little of what I learned. Trying to refine and condense what I find to be a bloated universe of evaluation approaches is a laudable effort, and these approaches can be very good for fostering use. Maybe this makes you more inclined to do some reading on these types of evaluation, maybe not. But I hope I have at least introduced you to something interesting in the world of evaluation.

COREY’S USEFUL TIP: Dr. Cousins and his colleagues are looking for evaluators who are interested in utilizing a collaborative approach to evaluation to pilot the principles they have developed to get a better sense of how they translate into practice. A public call will be going out soon. They will be offering support to anyone who chooses to participate related to designing an evaluation with these principles in mind. If you are interested, let me know.

 

Cousins, J. B., & Chouinard, J. (2012). Participatory evaluation up close: A review and integration of research-based knowledge. Charlotte, NC: Information Age Press.

Cousins, J. B., Whitmore, E., & Shulha, L. (2013). Arguments for a common set of principles for collaborative inquiry in evaluation. American Journal of Evaluation, 34(1), 7-22.

Rodriguez-Campos, L. (2012). Advances in collaborative evaluation. Evaluation and Program Planning, 35, 523-528.