Week 55: There are Standards for that…

I recently took my comprehensive exams. Passing them signals that you have completed your coursework and are qualified to move on to the dissertation phase of the Ph.D. I passed, WOOHOO! In preparation for that exam, I studied and read a lot.

One of the things I spent time looking back at was the Joint Committee on Standards for Educational Evaluation (JCSEE) Program Evaluation Standards. The standards are in their third edition and are approved by the American National Standards Institute (ANSI). Basically, this means they have gone through an extensive development process; standards approved by ANSI become American National Standards. They are intended for program evaluation in the United States, but professional associations and organizations around the world have used them as a starting point for creating their own national standards for evaluation.

The Joint Committee is made up of representatives from a wide range of professional associations. A couple of the people most influential in developing these standards, and then maintaining and reviewing them, are from Western Michigan University (WMU). So these standards are, in a sense, built into the collective DNA of us WMU-trained evaluators. The standards have five main categories:

  • Utility: the extent to which program stakeholders find evaluation processes and products valuable in meeting their needs.
  • Feasibility: the extent to which evaluations can take place with an adequate degree of effectiveness and efficiency.
  • Propriety: what is proper, fair, legal, right, acceptable, and just in evaluations.
  • Accuracy: the truthfulness of evaluation representations, propositions, and findings, especially those that support judgments about the quality of the program or program components.
  • Accountability: the responsible use of resources to produce value.

This is a blog about use, so I wanted to highlight the first main category, utility. Clearly, use is important to evaluators and evaluation users, as reflected by utility's place at the top of the list. Within each main category there is a set of sub-standards.

With regard to the utility standards, there are a few I think are particularly important. The first is actually the first standard on the list, Evaluator Credibility. This refers to the evaluator being qualified to undertake a particular evaluation. It comes at a time when there is much discussion about evaluator credentialing, and when the Canadian Evaluation Society has already implemented a credentialing process for evaluators. I'm not going to get into that here; you can read about that discussion on your own if you are interested. But without a qualified individual to take on your evaluation, the whole process may as well stop. So do your research on evaluator qualifications (the standards are a place to start, but also see Stevahn, King, Ghere, & Minnema, 2005), consider your own needs, and be sure to obtain the appropriate evidence from your evaluator so you can assess whether they are in fact able to take on the task you need them to.

The other utility standard I believe is particularly important is Negotiated Purposes. This basically describes the need to spend time on the front end having a clear and honest discussion about the purpose for conducting the evaluation. This helps ensure the evaluator understands what needs to be done and how to design an evaluation that will best serve the agreed-upon purpose(s). It can help avoid situations where an evaluation is conducted but ends up being something completely different from what the client expected, a scenario that is clearly not useful to anyone. An evaluation designed around the appropriate purpose is the evaluation that will be most useful once all is said and done.

COREY’S USEFUL TIP: Take a look at the JCSEE Program Evaluation Standards. This goes for evaluation users and evaluators (who should already have done this!). They are the most detailed and clear set of criteria we have for what makes evaluations good or bad. Knowing a little bit about them can help evaluation users procure better evaluators and, on the flip side, help evaluators do better evaluation.

Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59.

Week 54: Conducting a Data Scavenger Hunt

A few months ago I talked about ways to help a client think about and organize their existing data. I used a set of charts as a tool for this, creating three tables with the following headings: current data collection efforts, past data collection efforts, and possible data collection efforts. Within each chart there were columns to list information about each source of data, such as:

  • Type of data
  • Location (folder/file names)
  • How the data is collected (pencil and paper, online survey tool, interviews, etc.)
  • How often the data is collected
  • What questions are answered by the data? This could be based on evaluation questions or other “big” questions the organization has, such as “Who is being served by our programs?”
  • Stakeholders for the data

An example of a chart follows below:
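If you want to build a similar inventory yourself, one minimal way to sketch it out is as a short script that writes the chart to a spreadsheet-friendly CSV file. The column headings below mirror the list above; the example rows are purely hypothetical and not drawn from any actual client.

```python
import csv

# Columns for the "current data collection efforts" chart (mirroring the list above).
columns = [
    "Type of data",
    "Location (folder/file names)",
    "How the data is collected",
    "How often the data is collected",
    "What questions are answered by the data",
    "Stakeholders for the data",
]

# Hypothetical example rows, for illustration only.
rows = [
    ["Parent satisfaction survey", "Shared drive > Surveys > parents.xlsx",
     "Online survey tool", "Annually",
     "How satisfied are families with our programs?", "Board, program staff"],
    ["Attendance logs", "Front office binder",
     "Pencil and paper", "Daily",
     "Who is being served by our programs?", "Program director, funders"],
]

# Write the chart out so it can be opened and extended in Excel or Google Sheets.
with open("current_data_collection_efforts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerows(rows)
```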

I used these charts with a client’s Executive Committee, and they found them very useful for seeing the data they had collected.

Since that post, I have continued to work with the client around their data collection. They were interested in ensuring that all of the data they were collecting was useful and relevant. How much of what they were collecting was simply because they had always collected it before?

To answer this question, the Executive Committee wanted the entire Board of Directors to review the charts and think more deeply about them. To facilitate this, I developed a “Data Scavenger Hunt” activity for them to complete at their board meeting. The client was very invested in the outcome of this work and saw the importance of data collection and reporting. Because of this buy-in, they dedicated an hour and a half of their board meeting to the Data Scavenger Hunt and the follow-up Data Dig activity.

The Data Scavenger Hunt started with brief homework for each person. They received a copy of the completed tables (which I explained above) and a link to a brief online survey. The purpose of the online survey was to get people looking through the tables. The survey had questions like:

  1. Approximately how many current data collection efforts is the organization undertaking?
  2. Approximately how many recent past data collection efforts did the organization complete?
  3. If you were given 10 minutes for a presentation to {specific stakeholder group} to promote your organization, which sources of data would you use? – This question was repeated with different stakeholder groups (parents, administrators, legislators, etc.) to get people thinking about how different data are used depending on who they may be talking to.
  4. What other data is the organization collecting that is not represented on these charts?
  5. Identify two current sources of data that you feel are important for the Board of Directors to review quarterly.
  6. Identify one current source of data that you have never used.

To answer questions 3, 5, and 6, respondents were given the full list of data the organization collects and just had to click on their answers. No writing necessary.

After everyone took the survey, we tallied up the responses and reported them back to the Board of Directors. Some of the findings were surprising but many of them reaffirmed the importance of the data that was collected.
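For those curious about the mechanics, the tallying itself can be as simple as counting how often each data source was picked. Here is a minimal sketch in Python; the response format and the data source names are hypothetical stand-ins, not the client’s actual survey export.

```python
from collections import Counter

# Each inner list holds the data sources one respondent clicked for a
# multiple-choice question (e.g., question 5: sources the Board should review quarterly).
# These names are hypothetical placeholders.
responses = [
    ["Attendance logs", "Parent satisfaction survey"],
    ["Attendance logs", "Budget reports"],
    ["Parent satisfaction survey", "Attendance logs"],
]

# Tally how many times each data source was selected.
tally = Counter(source for picks in responses for source in picks)

# Report the results back, most frequently selected first.
for source, count in tally.most_common():
    print(f"{source}: {count}")
```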

DR. EVERETT’S USEFUL TIP: Make the data scavenger hunt fun! Some ideas: give prizes to people who got the most (and least) correct answers or use a treasure map theme.

Week 53: Out in the Field

I’ve spent the day with my nine-year-old twins, who are home sick, watching a marathon stretch of Man vs. Wild on the Discovery Channel. The show dramatically follows Bear Grylls, an expert outdoor adventurer, surviving difficult elements by sleeping in a parachute sack in the Arizona desert, creating a wetsuit out of seal skin in Scotland, and roasting a snake for dinner in the jungle. It seems that each day in the field for Bear is uniquely different. Dramatic. Breath-taking. Marvelous. Terrifying.

Now, skip to a day in the field for me, an evaluator on site at schools collecting data. Each day of visits to schools or program sites is hardly as unique as Bear’s regimen. While one school may have distinguishable characteristics compared to another, my visits are not usually dramatic. Never life-threatening. Sometimes marvelous.

In my work cycle as an education evaluator, spring marks “site visit” season, when I am often in schools several times a week to observe lessons or after-school programs. When I am at a site, the experiences are fresh, but as soon as I set foot in another school, the experiences start to blur together. The smell of rotten bananas, hand sanitizer, and crayons could be from any school. It is hard to keep track of what I observed where. Over the years I have developed a few strategies to help me recall what I observed in the field when I am back at my computer in my office translating my findings into a report.

  1. Complete your notes for each day within 24 hours. Sometimes it is late when I return home from a day in the field, but I know that I will be sorry if I don't take a few minutes to finalize my notes. While it seems that the information and details will still be fresh later, the reality is, they won’t be.
  2. Jot down general themes you are observing to explore later. Often site visits and qualitative data collection will be analyzed in conjunction with quantitative data. Being in the field provides an opportunity to see work being implemented on the ground and to gain insight into why particular strategies may or may not be working. For example, as I go between several classrooms at school, I can make general observations about school culture, teacher morale, and student expectations that may be supported through other data points as well.
  3. Provide visual cues in your notes to help recall. Oftentimes in my notes I’ll include information about the physical space of a classroom (e.g., reading nook with upstairs tree fort) or a person (e.g., red hair, black dress) to distinguish one observation from another. This helps me “go back” to the classroom in my mind as I write up my notes.
  4. When collecting data using a set protocol, it is still important to make notes about positive elements observed and areas needing particular attention. When observing classrooms we use an observation tool with set fields. However, by explaining some context on my note sheet, I have a better understanding of what was going on in a particular classroom when going back to conduct the analysis.

KELLEY’S USEFUL TIP: Having an evaluator visit can be intimidating and inconvenient for people. Leaving a few pieces of candy or some other treat behind can show appreciation for their cooperation. In particular, I have received the best feedback from Hershey mini-bars!

Week 52: Year 1 Reflections

My apologies! I accidentally put the internal links to all of the blog posts below in yesterday's posting. Here's the corrected update. Thanks, Jason Forney, for letting me know, and thank you everyone else for being patient with me :) 

I know this is cliché, but I’m a bit flummoxed that 1) our blog has only been around for one year and 2) we’ve really been doing this blog for a whole year! Writing these posts on how to make evaluation more useful to evaluators and clients has so quickly become integrated into our work that it feels like we’ve been doing this for much longer than a year. On the flip side, I clearly remember brainstorming with my colleagues just over a year ago about filling a void we saw in the evaluation blogosphere.

Writing these posts has caused each of us to be more reflective about our individual and team practices and even to act more intentionally to make evaluation useful. On a personal level, the fact that we are doing a better job evaluating our own work…I deem that a success of this blog.

I’m happy to see that people are referring back to our posts weeks after they’ve been put online. While some posts are particularly relevant to a timely event, most are more evergreen and can apply at any time you choose to read them. Please take some time to review some of our more popular posts (listed chronologically):

On a more global level, traffic to the Carpe Diem blog has increased fairly steadily since its inception, which makes us think that more people are reading our tips. Of course, taking that line of logic to its conclusion, we hope more evaluators are integrating the tips into their practice, which would lead to increased use of evaluation findings by clients and more user-friendly evaluations in general. We would LOVE to hear from you on how you’re using what you’ve learned through this blog. Please post comments below!

We have all had a blast sharing with you this past year, and we look forward to continuing that work moving forward!

DR. TACKETT’S USEFUL TIP: Keeping current on thoughts in the world of evaluation through blogs, journals, conferences, trainings, etc. is crucial to continuing the work of improving our field. We all need to own some of the responsibility for doing that!

Week 51: Evaluator Qualifications and Use – The Doctoral Student

This week is the final post of a four-part series where each of us took a turn talking about our personal evaluation qualifications and how we believe those qualifications have helped us provide a focus on use for our clients. This is a particularly poignant time for me to be writing this post and contemplating the question of what makes me a qualified evaluator. In a little over two weeks I will be sitting for my comprehensive exams for my Ph.D., passage of which takes me to the final phase…the dissertation. As part of the preparation for these exams, I must complete an evaluator self-assessment, going through a list of competencies and providing evidence that I have attained each one. So, to say the least, it is a reflective time for me, and this post slots perfectly into that space.

Since I am last in this series, all the good words/qualifications have been taken, and since I am a student who has been burying my nose in evaluation literature, I found a quote from Jane Davidson’s book Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation that I want to use to frame my post. In it she describes what it takes to become a good evaluator, writing that it “involves developing the pattern spotting skills of a methodological and insightful detective, the critical thinking instincts of a top-notch political reporter, and the bedside manner and holistic perspective of an excellent doctor, among many other skills” (p. 28).

When I read this, the skill set she describes resonated with me. I want to talk a bit about each role and how it has helped me become a good evaluator, one who is able to make evaluation useful.

The Detective. The other day I was talking to a peer in our comps study group about that moment in an evaluation when everything sort of comes together, a lightbulb goes off, and I am able to put all the pieces of evidence into a cohesive thought or story. It’s like when a detective cracks a case, or at the very least finds a lead. When I am able to speak holistically about a program in the context of the evaluation, it says to the client that I understand what is going on. As evaluators we gain a unique perspective on a program: we are able to look at it with some of its context, in the light of all of the evidence we have collected.

However, on the topic of evidence, the other part of being a detective that Jane Davidson’s quote points out is the methodological aspect. Good evidence is the foundation of any evaluation. As evaluators we are obliged to do our best to collect, analyze, and use the best evidence we can. My path through my Ph.D. program has had a heavy emphasis on methodology. Gathering and using good evidence gives evaluation consumers faith in the results and, I believe, makes them more likely to use the evaluation.

The Reporter. Sometimes I think that in another life I would love to be a reporter: collecting stories, breaking important news, shedding light on important issues. Some of the individuals I look up to and admire most in this world are reporters, often because of the questions they ask and the balance they attempt to bring to a story (see Diane Rehm). Good questions are a key part of a good evaluation. Questions have been a part of my life since I was a kid. Perhaps my dad was unknowingly developing the evaluator within me: each day when I came home from school, he would challenge me by asking what questions I had asked that day. I have always been taught to ask questions, challenge existing knowledge, and think critically about what is put before me. I think this serves me well in evaluation.

I also think that being a reporter means maintaining a level of objectivity and distance from the object of evaluation so as not to have one’s judgment clouded. To make evaluation useful, it must be credible. This is similar to what Kelley talked about in her post, but I think I am conceptualizing credibility in a different way. If the evaluation is going to be used, the users have to believe in its [the evaluation’s] credibility. This demands a level of objectivity and distance on the part of the evaluator.

The Doctor. I like the doctor metaphor a lot. Put simply, evaluation is about diagnosing a program and saying whether it is sick or healthy (good or bad). It is also about communicating that to the patient (the program stakeholders), and that communication is when the bedside manner comes into play. I think that having the ability to engage, respect, and empathize with people helps me facilitate evaluation use. This is a part of my bedside manner that has served me well and will continue to do so in the future. Part of being a doctor is also communicating the diagnosis and then describing a treatment plan to address the disease. As an evaluator, it is important to make value claims about the program (e.g., good or bad, better or worse), but it is also important to help clients consider how to improve what they are doing. By putting forth the treatment plan, we also facilitate use.

COREY’S USEFUL TIP: Evaluators are superheroes, who need to be able to do many jobs. Become one today. Channel your inner detective, reporter and doctor all in one. With our powers combined….you know what comes next. 

Week 50: Evaluator Qualifications and Use – New PhD

This week is the third of a four-part series where each of us takes a turn talking about our own specific evaluation qualifications and how we believe those qualifications have helped us provide a focus on use for our clients. I have worked as an elementary classroom teacher and received a PhD in evaluation in 2013. I work in a university setting, conducting educational program evaluation for clients across the country.

Education. My background is in education, first as an elementary school teacher, then working for a university-based evaluation center. I eventually earned a Master’s degree and a PhD in evaluation. My plan was never to become an evaluator; I came to it in a roundabout way, as I think is the case for most. I decided to pursue my PhD in evaluation in part because my graduate school advisor told me that, since there is no formal evaluator certification process in the United States, the best way to call myself an “evaluator” is to have a PhD in evaluation. Obviously, it’s not a magic bullet, as there are many good evaluators without PhDs and vice versa. But the knowledge I gained through years of classes, presentations, and writing a dissertation about evaluation gave me confidence, content knowledge, and invaluable professional and academic contacts.

Real Life Experience. I started my working life as an elementary school teacher and moved into evaluation from there. Being trained as a classroom teacher has given me so much insight as I work in educational program evaluation. It also gives me some street credibility when I work with other teachers and educators. Real life experience is also useful as an evaluator. I’ve been working in the evaluation field since 2002. Those years of experience have helped me understand the bigger picture and ask the right questions. They help me head off possible problems and understand what use looks like.

Continued Learning. Evaluation is a growing field and there is always something new on the horizon. Keeping up with the field is important. Luckily, it’s possible to learn more about evaluation through multiple channels: conferences, talks, professional development, classes, blogs, podcasts and colleagues. This information helps me become a better, more informed evaluator.

DR. EVERETT’S USEFUL TIP: Education, real life experience, and continued learning - three important ingredients for any successful career.


Week 49: Evaluator Qualifications and Use – Credibility

This week continues a four-part series where each of us takes a turn talking about our own specific evaluation qualifications and how we believe those qualifications have helped us provide a focus on use for our clients. We all bring various backgrounds and strengths to the table. My background is in education, first as a high school English teacher and administrator for gifted education programs, and then, after graduate school, in a large policy-focused research and evaluation firm in Washington, DC. I also led a team in Chicago to develop and implement a legislative campaign for afterschool program funding and reform. My path to working as an evaluation consultant has not been straightforward—a little more bumper car than race car.

Through my work as an evaluator I interact with stakeholders, in particular teachers, who often ask me…so how DID you go from being a teacher to doing what you do? To a teacher (and I understand this) there seems to be something appealing about a job with flexibility and travel. What they don’t see is me waking up at 4 am to make it across the state for an 8 am meeting, or me eating carrots at 5 am for breakfast because that was the quickest thing to grab. So, when people ask me, “What qualifications do you need to be an evaluator?” I would say the number one thing is credibility. In my experience, credibility is gained through 1) critical thinking, 2) context knowledge and understanding, and 3) interpersonal skills.

Critical thinking. As an evaluator, there are hard skills needed to conduct the day-to-day work, including quantitative and qualitative data analysis, report writing, meeting facilitation, instrument creation, etc. Above all else, however, is the ability to look critically at the big picture while working smartly in the weeds. It is critical, for example, as you discover a finding, to understand its implications for the broader project goal, the community’s challenges, and the soldiers on the ground implementing the work. There is a constant need to step back from the “doing” part of analysis and ask yourself: What is really going on here? Does this finding make sense given the various components? Is it the right time to share this finding, or does more information need to be examined first?

Context. I remember sitting in my first graduate school course (a policy course) and being so grateful that I had just been in the classroom as a teacher, so I could put meat on the bones of the information I was learning. It was essential that I understood the education field from a teacher’s perspective as we discussed proposed changes to education policies that would ultimately impact teachers and students. As I walk into classrooms regularly to collect data, it also puts teachers at ease to know that I was a teacher. I am not just a scary evaluator who won’t understand why students A, B, and C won’t listen or why the lesson plan had to be adjusted mid-course to accommodate students who don’t understand. Experience as a teacher gives me “street cred” when I go into schools and context as I conduct analyses.

Interpersonal skills. I’ve written about this in previous blogs: being personable is one of the most important skills for an evaluator who is interfacing with clients and other stakeholders. If you are uncomfortable in social situations, that is fine, but then stay behind the computer and let other associates do the meetings and data gathering. When I worked as a campaign manager in Chicago, I had to go to many elite social gatherings that made me sweat. I was uncomfortable and felt out of my league. However, those challenging situations have made me strong as steel when talking to others—there are few people I feel the need to shy away from. Pushing myself to be confident when talking to everyone has been invaluable to me professionally.

KELLEY’S USEFUL TIP: If you are looking to make a professional shift, think about what skills you have developed and how they might lend themselves to other fields. Don’t be “stuck” doing something that doesn’t feel right—take your gifts and use them in a different way. 

Week 48: Evaluator Qualifications and Use – Veteran

This week begins a four-part series where each of us is going to take a turn to talk about our own specific evaluation qualifications and how we believe those qualifications have helped us provide a focus on use for our clients. We have very different, yet sometimes overlapping, backgrounds and qualifications. Kelley has evaluation experience working as an independent consultant, within a large policy-driven organization, and in K-12 education. Kristin has a recent doctorate in evaluation with experience working in a university evaluation department and as a K-12 educator. Corey is a novice evaluator currently working on his doctorate in evaluation. And then there’s me…I guess I’m the veteran!

While I could list many words that would describe the qualifications I believe I have as an evaluator (thinking of AEA’s Guiding Principles, the Joint Committee’s Program Evaluation Standards, coursework from Western Michigan University, etc.), I’m going to aggregate them into: Knowledge, Proficiency, Integrity, and Enthusiasm.

Knowledge. I have the education necessary to claim to be an evaluator – a doctorate in evaluation. I have read many books, articles, and blogs over the years to stay updated on current theories, trends, and ideas. I have presented at conferences, created and led workshops, and taught college courses on evaluation, proving I have the knowledge that I can share with others. While all of those pieces are important, the most critical element is the ability to apply that knowledge in the real world. Students need opportunities to use the theories and models they are learning about evaluation and apply them directly to a real life situation through case studies, field experiences, or internships. Organizations need to have evaluators they work with who are willing to try out new ideas, with the possibility of failure, in order to potentially learn something amazing that will improve processes and programs beyond their own. Being able to translate valid knowledge into practical application in an evaluation setting is a critical qualification influencing use.

Proficiency. I have been working in the evaluation field for 15+ years, and I have led iEval since founding it in 2002. Understanding how to use quantitative and qualitative tools and analysis methods, then using them often, gave me experience. Applying the tools in different settings – educational, health, nonprofit, community, law enforcement, etc. – also helped to build my experience. However, experience does not always mean proficiency. Working to modify analyses to meet client needs then sharing what I’ve learned with other clients is part of using my experience proficiently. Being able to ensure evaluation procedures are done with fidelity then translating them into meaningful findings and recommendations makes someone proficient in those methods.

Integrity. Integrity is usually at the top of the requirements list for any type of job. My favorite definition of integrity continues to be “doing what’s right even when nobody is looking.” I believe integrity is even more imperative for evaluators. Evaluators often work with sensitive data, potentially make life-changing recommendations based on that data, and never truly know how their written words and presentations will affect individuals. If someone less than completely honest had that type of power, the results could be disastrous. Unfortunately, integrity isn’t something you can teach or learn; it’s just part of who you are…I choose to live with integrity. Being transparent about evaluation processes, monitoring for validity and reliability in findings, recognizing and accounting for biases, and openly sharing with clients and other evaluators all lead to evaluations conducted with integrity.

Enthusiasm. Anyone who knows me will say that I’m a little high energy…okay, maybe more than a little. I have fun with what I’m doing…from planning trips for friends to Walt Disney World to taking pictures of flowers to playing tour guide at Thomas Jefferson’s Monticello and Poplar Forest to spending time with nieces and nephews to reading while snuggled up with my cats to planning for the future with my husband to conducting evaluations! Yes, evaluation IS fun! I love to dig deep into statistical analyses and find something I didn’t expect. I take pride in creating a report that is accurate, digestible, and beautiful. I feel fulfilled when I see a client experience an A-HA because of the data we shared. I get recharged when I give presentations about evaluation…I do tend to bounce around quite a bit! I am passionate about the work I do – both how it is done and how it is used. Being enthusiastic about evaluation is probably the most critical evaluator qualification that leads to and improves use.

DR. TACKETT’S USEFUL TIP: Learn and apply it. Practice and share. Live honestly. Carpe diem!

Week 47: Reflections on our Responsibility for Facilitating Use

In some of our recent work, we have been engaged in gathering, synthesizing, and presenting an array of data and information to a large, diverse group of stakeholders. Information has been presented to the group at large meetings, where smaller groups discuss the information specific to a shared interest. The groups we have worked with have gone through a process whereby they interpret relationships in the data, which raises new questions. These questions often lead to yet more questions, and then to new and different conclusions. By the end of this, we had provided the group with a vast array of information, and it got me thinking about what constitutes too much.

I believe that as evaluators we have a responsibility to provide data, but also to help ensure that information is used correctly. In this case, I started to consider what constituted too much information: we had provided so much data that 1) we had overwhelmed the group with information and 2) the group had come to conclusions which they believed to be true, but which were not fully supported by the data we shared.

Here are a few conclusions and thoughts I have about these types of situations:

  • Be clear about the difference between correlation and causation: Groups that are unfamiliar with data and research methods may be quick to interpret information that looks correlational as evidence of a causal relationship. However, it is important to understand that a correlation only suggests some relationship, not one that can be interpreted as X causes Y. For example, program attendance and test scores may rise together because motivated students tend to do both, not because one causes the other.
  • Clarify limitations: Be sure that any limitations associated with specific data are made clear, so these are taken into account by the group as they dive into the sense making process.
  • Consider the purpose: Not all data may be relevant to a particular issue, and presenting it may only confuse people or muddy the waters. Consider what the most important pieces of information are, how they connect to your overarching questions, and rely on prior research if available to guide you in the decision making process of what should or should not be used.
  • Facilitate inclusion: As you facilitate the interpretation process, take steps to engage all members of the group. Even in smaller groups, people interpret things differently based on their experience with the issues at hand. It is critical to engage all people involved to get an inclusive array of conclusions.

COREY’S USEFUL TIP: Be prepared! We have spent many hours preparing for each of these meetings, including how we would present the information to the group. We developed guiding questions to keep the group focused, and made sure to limit the amount of time we spent on each question. By having a process, we were able to make sure that time was spent efficiently and effectively.

Week 46: Thinking through Existing Data: Alignment and Purpose of the Data

In my last post (week 42) I talked about how an evaluator can help an organization use the data they are already collecting. The tool I described has the evaluator create tables outlining current, past, and possible data collection efforts. These tables help an evaluator and organization understand what data are available, how the data are collected, what questions are answered by the data, and who the stakeholders for the data are. In this post we’ll start to break down those tables and investigate other ways to think through already existing data.

One way to think about it is through the alignment and purpose of the data. What were the purpose, objectives, and intended use of the data that have been collected? Answering the following questions will help the evaluator and organization better understand the data that exist.

If you’re faced with existing data, some questions to ask the organization you’re working with around alignment and purpose:

  1. What was the original purpose for collecting the data (why)?
  2. What was the original context for the collection of the data (who, how)?
  3. What was the context for the intended use of the existing data?
  4. After reviewing the data, does the evaluator have any measurement concerns? Consider the methodology for the previous data collection, including sampling.
  5. Is the data relevant for the current project? Does the data provide the appropriate information to answer the present evaluation questions?
  6. Is there consistent use of language? Is the language understandable?

DR. EVERETT’S USEFUL TIP: If you’re faced with a new project or client, or just want to make sure you are clear on why data are being collected, dig deeper into the purpose and alignment of existing data. 

Week 45: Simple questions to consider when data is overwhelming

This week iEval worked with a group that was considering a dataset that was actually quite simple but complicated by the fact that the numbers varied slightly between stakeholders. For example, while the researcher thought there were X participants, the program staff thought they had X + 2 participants. These nuances were hanging the group up–and as an evaluation firm that stands behind its analysis–they tripped us up too. How can we measure change when we don’t even really know where we are starting?

And then I had to remember the KISS principle–Keep it Simple, Stupid.

So in our case, the nuances of numbers being off by a few participants here and there really didn’t impact the overall picture of what the data were saying. So when reviewing the data with stakeholders, we decided to frame the data digging around two questions:

1. What are the data telling us?

What are the trends over the last year? What patterns emerge? How have the data changed since last year’s reporting? What surprises are there? What don’t the data account for (e.g., forces of nature)? Where are the bright spots–things that are going well despite difficult odds?

2. How can/should the data inform practice?

What strategies are working according to the data? What strategies aren’t working? How might we replicate successful strategies? What data do we need to tell a complete story that we aren’t yet tracking? What steps need to be in place to gather that data?

These two simple questions helped us get out of the weeds of sorting out the particulars when they didn’t really impact the overall picture of data trends, successes, and areas needing additional focus. Sometimes it is hard to step back, take a deep breath, and look holistically at what we do know instead of being bogged down in what isn’t quite right.

KELLEY’S USEFUL TIP: When working with a complicated data set or when conducting analysis, take a few minutes to step back and reflect on the overall project goals or research questions. Sometimes a group’s questions and energy redirect in a way that is counter-productive to the larger goal. Don’t lose sight of the larger goal and questions as a project evolves.

Week 44: Using Evaluation Results in Collective Impact Work

Collective impact is a hot topic right now. Try googling it – you’ll get about 424,000 results in Google and about 12,000 results in Google Scholar. There is a plethora of webinars and conferences dedicated to understanding and implementing collective impact. While understanding how to do collective impact work is necessary when embarking on that process, it is also critical to understand how to know when it is or is not working.

Developmental evaluation is often paired with collective impact models, and that methodology fits well (when implemented appropriately) with initiatives that operate in a continual state of emergence. Other evaluation styles can also work with collective impact initiatives, as long as the evaluators are responsive to the changing needs of the work and share meaningful feedback along the way that is used. Used – that is the critical word here. This entire Carpe Diem blog is dedicated to improving the use of evaluation findings in meaningful ways, and it is very easy to forget to use evaluation results when operating in the state of chaos that is collective impact work.

In my experience with evaluating initiatives that are using a collective impact approach, the use of evaluation findings typically falls into two categories: 1) react to evaluation results immediately and make changes or 2) consider evaluation results, put them aside, come back to them 6-18 months later, and then use them. In other evaluation projects, I’ve seen a use timeframe in between where the findings are immediately reviewed but not acted upon for a few months (after a little more time to thoughtfully prepare a plan for use), yet I have not seen that as the primary approach within any of the collective impact initiatives we have evaluated. I’d like to propose to other collective impact initiatives that the middle ground may be the better method to employ.

The table below highlights just a few pros and cons, based on my experience, of each type of evaluation use within a collective impact framework. There are definite pros and cons with each timeframe for evaluation use, and we’d love to hear some of your thoughts on your experiences evaluating collective impact frameworks within the comments section at the end of this post.

DR. TACKETT’S USEFUL TIP: Collective impact work often requires rapid changes in program design, communications, connections between organizations, evaluation, etc. While operating in this world of fast-paced change, it is important to remember to carefully consider evaluation results and strategically plan for use of those findings, but don’t wait too long to use them; otherwise you’re paralyzing the work.

Week 43: Collective outcomes and thoughts on managing them

This post wraps up my exploration of the paper by Henry and Mark (2003) which discusses the outcomes of evaluation at three different levels. In my last two posts I explored the outcomes of evaluation at the individual level and the interpersonal level. Today I want to discuss the idea of evaluation outcomes, beyond use, at the collective level.

The collective, in the context I work in, refers most often to a program or institution. These are the entities which I most often do evaluation work with. For example, a particular non-profit might represent the collective or it may be a department within an agency that is responsible for implementing a particular program. Collective evaluation outcomes are concerned with how groups or institutions learn from evaluations and apply knowledge or information generated by an evaluation to their work.

Henry and Mark discuss four different types of these collective level outcomes, including:

  1. Agenda setting – evaluation information leads an issue to become a part of a larger agenda.
  2. Policy oriented learning – evaluation information leads to a shift in the way an organization or collective body views a particular policy or program. This lies in contrast to individual changes in attitudes.
  3. Policy change – evaluation information actually leads to a direct change in the way a program operates.
  4. Diffusion – evaluation information leads others, outside of the original context, to adopt a program because of reported impacts or effects.

In my previous posts about the content of this paper I have discussed the importance of building research into our practice to try to build a base of evidence for these types of outcomes. This is still critically important, and all that I have said for the previous two levels of outcomes still applies here. However, I want to explore another aspect of outcomes in this post: how evaluation is used to make programmatic changes and how these changes are applied.

I believe that each of the four outcomes shown above is plausible and important. However, I am most interested in points 2 and 3, which are related. The most obvious connection between the two would be that policy-oriented learning leads to policy change; however, I believe that sometimes the change may precede the learning. This may not make intuitive sense, and in fact I would argue that this order of events may not be the best approach to applying evaluation findings. However, evaluations, and the application of the information they present, are often the domain of managers or organizational leadership. In this scenario, policy changes based on evaluative conclusions might be made by organizational leadership but run into problems because the learning has not been effectively diffused throughout the organization.

The scenario presented above reminds me of a current project we are working on. Much learning has taken place over the lifespan of this project, and we believe that the information produced by the evaluation has contributed in some way to this. However, the learning which has taken place, and which has sometimes led to drastic changes in direction, has not been properly communicated. What this has meant is that others who are involved in the work have been left to try and muddle through the reasoning of those leading the effort, which has often led to frustration and discontent, and eventually a diminished credibility of the leadership in the eyes of other key stakeholders.

I highlight this example, and the process of communication it describes, because it relates to the use of evaluation results. As a consumer of evaluation, it is important to consider how decisions made on the basis of evaluation findings, and the findings themselves, are communicated to other program stakeholders or staff. As evaluators, we must consider how the information we produce is used and disseminated. We, of course, want our work to be used, the findings applied, and organizational learning to occur. But we also don’t want to heighten the feelings of anxiety among program staff that sometimes accompany an evaluation. Managing the way evaluations are used to produce outcomes could go a long way toward preventing misuse or even abuse. It may also help us spread the gospel of evaluation by showing how it can help passionate people do their work better.

COREY’S USEFUL TIP: Consider a strategy for disseminating or communicating evaluation findings to program staff or other stakeholders, particularly when they are going to be affected by decisions made based on that information. This will enhance policy-oriented learning by building new processes or approaches into the collective memory and knowledge, rather than those justifications existing only at the management level.

Week 42: Data, data and more data! Why an organization isn’t using data and how to help.

We live in a data-driven world. Schools, businesses, and nonprofit organizations look at data to make decisions. However, with increased pressure to look at data, some organizations I work with are struggling to make sure they are collecting the right data. An evaluator can help answer that question: are you collecting the right data? Over my next few posts, I’ll discuss ways to think about data that an organization has already collected.

As evaluators strive to make evaluation useful, we also strive to make data collection useful and financially responsible. Collecting data just because isn’t helpful. Collecting data is expensive. It uses financial and human resources, and having a clear reason why the data are being collected will help reduce those expenses.

Usually, the problem isn’t that an organization doesn’t have data; the problem is that they aren’t using it. There are typically two main reasons why:

  1. They may not know it exists. The data may have been collected previously by a different department, staff member, or consultant. Maybe the organization has a new executive director who doesn’t know what has already been collected.
  2. They forgot. An organization may get so busy with day-to-day operations that they just forget about the six months of data they collected and paid someone to analyze. It happens.

To help alleviate these issues, I’ve created three tables for clients, outlining their current, past, and possible data collection efforts. The tables include the following information:

  • Current data collection efforts – What are they currently collecting? This chart includes columns for:
    • Type of data
    • Location (folder/file names)
    • How the data is collected (pencil and paper, online survey tool, interviews, etc.)
    • How often the data is collected
    • What questions are answered by the data? This could be based on evaluation questions or other “big” questions the organization has, such as “Who is being served by our programs?”
    • Stakeholders for the data
  • Past data collection efforts – What data has been collected in the recent past? (Be sure to put a timeframe on this table.)
    • Include the previous columns plus a link to the report that was generated by the data
  • Possible future data collection efforts – This is a place to list questions the organization wants answered that aren’t currently covered by the data being collected. I have also included possible stakeholders and ways the data could be collected (focus groups, surveys, etc.).

Creating tables like this has started some great conversations, both between me, as the evaluator, and the organization, as well as within the organization. For instance, after looking over the tables I made with this information, one organization I was working with decided to hold off on collecting any additional data and spent time reviewing what was already being collected and how to best utilize it.

DR. EVERETT’S USEFUL TIP: Many organizations are already collecting data. When first starting work with a client, one question an evaluator should ask an organization is “What data are you currently collecting?”


Week 41: Brick building

On my drive to a meeting recently, I was listening to an academic researcher on NPR discuss global warming research and how it intersects with public and political opinion. As part of the discussion, the researcher said that one concern with academic research is that everyone is out there building individual bricks of knowledge on a subject, but no one is working together to build a wall. In other words, academic institutions expect their faculty to publish research, but there is little incentive to work together to pull all of those pieces into a more comprehensive argument or view.

This concept resonates with me as an evaluator. As Dr. Tackett mentioned in her post last week, it is easy to keep providing a standard and expected deliverable for clients. This often means producing a report with descriptive statistics, analysis, and recommendations. However, the magic happens when we can help our clients see how their work fits into the bigger picture and push those findings toward larger outcome goals.

An example I think of in our work at iEval is when Johns Hopkins University and Chapin Hall at the University of Chicago came out with groundbreaking research on high school dropouts. The studies outlined which academic and social behaviors were indicators that put students at risk of failing to graduate, and they have had a monumental impact on programs that aim to improve outcomes for students. While it would have been easy to coast along with our current projects and only keep these studies in the back of our minds, iEval determined that they should drive the reports we provided to many of our school-based programs.

For example, how many of the students served by our clients have one or more of the risk predictors? What impact do programs have on the number of risk predictors a student has? How can school day, after-school, and community members work together to best address student risk predictors?

Addressing risk predictors and high school dropouts is one way we were able to take current research and apply it to our own “brick building” practice. iEval was able to use national research and apply it to local communities to move their programs beyond a traditional bookshelf report to one that has moving parts, cutting-edge ideas, and applicable findings.

KELLEY’S USEFUL TIP: One way to stay attuned to new research in your field is to sign up for e-blasts from top research magazines and institutions. For me, Ed Week, the Harvard Graduate School of Education, and the Harvard Family Research Project are essential resources. By skimming these emails as they come in, I can stay abreast of what new research is out there and brainstorm ways to integrate it into our work.

Week 40: Helping Too Much

As an evaluator, I feel successful in my work if the client is using the data I’ve collected and analyzed to make informed decisions – that makes me feel like my work is valuable. However, I think there’s a fine line between facilitating the client’s understanding of the data in order to create plans and telling the client exactly what the data means and what to do with it.

Thinking back to when I first started out in the evaluation field, I would often analyze the multiple data sources and create a report that included the complete analysis, recommendations for improvements and decision-making, and next steps for future evaluation work. It was succinct, but it was also very prescriptive. Clients appreciated it because it took the guesswork out of what they were doing. There were clear steps they could take based on my analyses, but I hadn’t been intimately involved with their programs over time as they had. While my recommendations and next steps may have grown out of both data analyses and conversations with key stakeholders, they really still just came from the external evaluator.

Over the years, I’ve realized that it’s more than just a theory – clients who own the data and decision-making really do use the evaluation to a much higher degree. That’s a win-win! It’s been a gradual change, but over the last seven years or so, here’s how our reports have come to look:

  • Our recommendations and next steps sections of reports typically contain focused questions to help the client think through the data with some guidance.
  • We embed “digging deeper” questions throughout the report so the client is an active reader, thinking and wanting more as they read through it.
  • We try to give the client the opportunity to ask follow-up questions based on the conclusions they came to when thinking about our focused questions; then we do additional analyses to respond to the new needs.

While this work is more time consuming and requires greater involvement from the client, we have seen it lead to more systemic changes that will last beyond the duration of our evaluation contract.

DR. TACKETT’S USEFUL TIP: Remember to let the client speak! Working with the client in a focused way, asking questions that dig deeper into the data and understandings, will result in greater client ownership of the data and decisions.

 

Week 39: Evaluation Use and its Outcomes on the Individual Level

A couple of weeks ago I introduced a framework developed by Gary Henry and Mel Mark (2003), two well-known evaluation scholars. The basic concept was that it laid out three levels of evaluation influence: the individual level, the interpersonal level, and the collective level.

In this post I want to explore evaluation outcomes and influences at the individual level. The framework previously mentioned breaks out individual mechanisms–or what I will call outcomes–into three distinct groups: overall knowledge acquisition or attitude change, behavior change, and the acquisition of new skills.

Attitude change would relate to an evaluation providing new knowledge or information about a program that changes how stakeholders perceive that program. Behavior change would be when individuals change the way they implement the program or, more generally, do their work. Skill acquisition is when individuals involved in the evaluation learn something new over the course of the process, such as how to create better surveys or facilitate group conversations.

To me a discussion of these outcomes brings to mind the purposes of doing evaluation, beyond the classical “formative” and “summative.” For example, is our evaluation supposed to be participatory with the intent of building the evaluative skills of the program staff we are working with? Or, do we intend to generate knowledge and new information about a program in order to affect the attitudes and beliefs stakeholders hold about a particular program? Can an evaluation affect all of these different elements at once? Questions lead to more questions lead to more questions. What to do about it, I am not really sure.

What I do think is that each of these outcomes is important when we consider them in the context of utility. As I read through these concepts and the possible outcomes of the work that evaluators do, it makes me think about how I might study my own evaluation work and the work of other evaluators. This could be for purposes of self-improvement, but also to produce empirically based research on evaluation by developing measures and data collection methods for these different outcomes. It also makes me wonder whether these types of outcomes are important to consider when developing an evaluation plan, so that we may clearly define our purposes and build our approach to support them.

One word of warning before I sign off: the outcomes or mechanisms described here and in the paper cited are only a few, drawn from the literature, in a pool which might be far larger. Consider these, but also consider what you see in your own practice and share them. Share them here, at conferences, or in papers. We must begin to build this base of evidence within our profession, because without some idea of what evaluation actually contributes to this world we are basing the things we do on hopes and anecdotal evidence.

COREY’S USEFUL TIP: When developing an evaluation plan or brainstorming ways to tackle a particular project, consider what outcomes the evaluation is intended to produce. What are the purposes of the evaluation, and how might they shape the choices made along the way? By considering these questions on the front end, and collecting data around them on the back end, we might begin to build a collection of evidence for the outcomes evaluation can produce.

Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314.

Week 38: Site Visits and Observations

I’ve been reading the 2015 edition of Michael Quinn Patton’s text Qualitative Designs and Data Collection. He includes “ruminations” following each chapter: issues that have “persistently engaged, sometimes annoyed, occasionally haunted, and amused” him over his last 40 years of work as a researcher and evaluator. His ruminations on site visits are spot-on and should be reviewed by anyone conducting fieldwork.

A summary of Patton’s site visit ruminations:

  • Too often, inexperienced or ineffective evaluators are the ones sent to do site visits. For example, international projects often hire external site visitors as contract workers, and the timelines on job postings are often so short that only the most inexperienced evaluators are available, and they are the ones who end up doing the work.
  • Philanthropic foundations and governmental agencies often send program officers and administrative staff into the field to conduct site visits without proper training, with an unclear scope of work, and without giving sites enough advance notice about the visit.
  • Don’t rely on site visit data alone. The program staff know you are visiting and are doing their best to impress you, so what you see might not be a normal day. Patton says to triangulate the data: look at multiple sources and examine the situation from multiple perspectives.
  • Sometimes sites create a “show and tell” event the day of the site visit. They want to show off their program to the evaluators and celebrate with the program staff. This causes problems though, because the evaluator doesn’t get to see the program as it really exists.
  • People doing site visits are not always given background knowledge of the program. They should know something about the site they are visiting, its context, and its purpose.
  • Patton advocates for rethinking site visits. He sees too many problems with inexperienced, uninformed evaluators not given enough time to collect useful data. Instead, he echoes Leslie Goodyear et al. (2014) in their book Qualitative Inquiry in the Practice of Evaluation: “The responsibility of a qualitative evaluator is to leave the setting enriched for having conducted the evaluation.”

Dr. Patton has identified multiple issues around site visits. Have you experienced the same issues? How have you worked to rectify them?


DR. EVERETT’S USEFUL TIP: Site visits are a common component of evaluations. Being prepared to conduct an observation includes learning about the site through background materials, briefings and prior experience. How do you prepare to visit a site? 

Week 37: Rallying the Troops

Anyone who has been an evaluator for more than a day knows how complicated it is to answer the question “So, what do you do?” In other words, what is your job? I’ve had one adolescent say, “So…you go into schools and sort of tell them what is good and bad? You’re like Child Protective Services for schools?” Well, not exactly. Others say, “So you get to travel all over and tell people what they do wrong? Awesome.” No, not that either. And my friends who are also mothers of young children say, “So you have a job as a consultant, so you really only need to work when you want to. Can I work with you?” Oh, that is not even close to right.

Instead of writing an entire post clarifying what an evaluator is, I’d like to discuss one word that I think goes a long way toward being a good evaluator: coach. (Definition: 1. a person who teaches and trains an athlete or performer; 2. a person who teaches and trains the members of a sports team and makes decisions about how the team plays during games.)

There are periods with my clients when I do very little data gathering and analysis, and instead spend more time helping them think strategically about reaching their end results. I think that, as coaches, evaluators are instrumental in:

  1. Timing evaluation activities in a way that will garner the best response rates and interest from stakeholders. For example, if there is momentum around a particular topic or event, tapping into that energy to get participation in gathering documents, conducting interviews, or completing surveys is key.
  2. Working with clients to select purposeful and strategic evaluation activities. For example, rather than administering surveys haphazardly to every program, data collection should be strategically mapped to align with program goals.
  3. Rallying the troops when spirits are low and clients are overwhelmed. Clients often live in the weeds of the work and it can be difficult for them to get a treetop view of what they are trying to accomplish and how evaluation can assist with that.
  4. Providing perspective, strategy and guidance when personal frustration with the work is clouding professional judgment. Listening to stories of gloom from clients is an important aspect of being an evaluator. It builds trust and relationships, but also adds context to evaluation design and provides an opportunity to steer the work back in a positive direction.

 

KELLEY’S USEFUL TIP: Being a coach to clients is a very important part of the job. However, it is also important to set boundaries on communication with clients. Just because a client has your cell phone number and knows you are responsive to email does not mean you should get in the habit of answering emails at midnight or picking up the phone when they call after 6 or 7 pm. Of course there are always exceptions, but overall, maintaining healthy boundaries is better.

Week 36: Compromising the evaluation process

We all know what the typical evaluation process is. While the steps and buzzwords may vary based on the approach you’re using, it’s basically: identify key stakeholders, develop evaluation questions, identify indicators and measures, collect data, analyze data, synthesize results to answer the evaluation questions, and create a report. In a perfect world, the evaluation questions and process jointly created with key stakeholders at the beginning are the plan you adhere to during the entire evaluation: it’s your map and you don’t deviate from it. By implementing that plan, you’re able to validly and reliably answer the evaluation questions, thereby helping the key stakeholders improve programs and make critical decisions. In a perfect world, that is. But we all work in the real world, which can be pretty messy. What do you do when it doesn’t work that way?

Some projects and organizations are in such a state of flux that the evaluation plan has to be continually adapted to meet the changing environment, context, program design, funder requirements, organizational shifts, etc. While this does throw off the original evaluation design, I think it is something many of us expect to happen and are ready to react to. We modify the timeline, adapt instruments, involve new stakeholders, create more specific evaluation questions, and change analysis procedures. Because we are operating in the real world and not implementing scientifically based research processes, we can be more responsive to changes while still staying true to the purpose and quality of the evaluation work.

That was the easy one, but what do we do when clients have “emergency evaluation components” that don’t fit into the overall evaluation purpose or plan? They want a quick survey because they feel they need to do one, even if it doesn’t have a clear purpose. They want to pre- and post-test after every treatment because they’re worried the participants will drop out and they’ll lose data. They want interviews conducted immediately to get feedback on satisfaction with program implementation. Here are some questions you can ask yourself to help decide what to do:

  1. What is the purpose of this “emergency evaluation component?” How will the findings from it be used in a strategic way to improve the program, inform policies, or make decisions? If there is not a clear answer to these questions, then it should not be done.
  2. Is this “emergency evaluation component” an excessive burden on participants? Even if the results would be meaningful, it’s still important to understand the burden on respondents (pay particular attention to the Belmont Report). Collecting excessive data from respondents can have multiple negative effects, including bias in responses, respondents not taking the instrument seriously, and respondents tiring of being over-surveyed. If it’s an excessive burden, then it should not be done.
  3. Can the “emergency evaluation component” be developed quickly in a way that still adheres to the Program Evaluation Standards, particularly the accuracy standards, and is valid, reliable, sound in design, and explicit in evaluation reasoning? If you create an instrument that ends up not measuring what it was intended to measure or asks questions that aren’t used or is poorly constructed, then you forfeit the utility of the results and risk the future engagement of key stakeholders. If the answer is no to this question, then it should not be done.

DR. TACKETT’S USEFUL TIP: While we want to be responsive to client needs, it is important to always strive to implement high quality evaluation services!