At the risk of losing all our readers, let me drag you for a minute into my personal life. I have never been a “math” girl. I didn’t take AP math classes in high school, I took the minimum requirements in undergrad, and it wasn’t until I became consciously aware that I needed to pad my resume to be hirable that I enrolled in more challenging stats and econ courses in grad school. I never in a million years would have predicted that I—an English major—would know how to navigate statistical software, much less understand what a p-value is. And here I am, contributing to an evaluation blog.
Why am I babbling about this? It seems like a good way to open up the conversation about quantitative data. In general, I have found that our clients love numbers—not always because they understand what statistical significance or power means, but because numbers sound official. They look good to funders. And in some ways, “numbers” are what evaluators are often hired to provide—something that seems too complicated for programs to produce on their own. And yes, collecting, cleaning, organizing, and presenting data in meaningful ways takes skill. But it should not be the sole focus of an evaluation plan.
Fast-forward a decade past grad school, to when I was the campaign manager for a statewide afterschool funding campaign. It was in this role that I really began to understand the importance of context—something the numbers alone cannot provide. I remember a VERY uncomfortable conversation with policymakers about afterschool programming. I had beautiful charts-graphs-tables-you-name-it showing how many students were engaged in the programs, projections for the next five years, and how the programs impacted student outcomes. But immediately after I presented the data came the questions…really hard questions…the sort of questions that make you sweat through a blazer. It became abundantly clear that the data did not tell the whole story. I didn’t understand the caveats and intricacies of who was and was not included in the data set and why—information that was only available through deep conversations with community members and program leaders. Interviews and focus groups were essential to getting a clear picture, and I was remiss not to include them in the plan from the beginning. Qualitative data is necessary to appropriately position quantitative data within the context of the work.
This is one small example, but I could provide many more where quantitative data alone did not provide a clear picture. Qualitative data is a powerful tool for shaping an evaluation and generating insight. As an English major turned evaluator, I find comfort in knowing the whole story.