
Part 2: How We Think About Evaluation Needs to Change

by Kelly Hannum
This post was originally published on the Luminaire Group’s blog.

Recently, the American Evaluation Association released an updated version of its Guiding Principles.[1] This is just one example of how the profession of evaluation is continually evolving.

This got me thinking about what it takes for change to happen. How long will it take for the guiding principles to take root? I came across one study estimating that it takes about 17 years for scientific discoveries to translate into patient benefit.[2] While a new scientific discovery is not the same thing as introducing guiding principles, there is a similarity: in both cases there is a lag between “knowing” something and that something “showing up” in practice. There are many proposed reasons for this generation-long lag. Some of it may be the time needed to verify a discovery, but much of it seems due to the time it takes to translate research into practical terms, to disseminate the information, and to shift practice. I also wonder whether the lag has to do with reaching a critical mass of people in a profession, organization, or sector who were trained in, or “grew up” with, the newer ways.

Maya Angelou offered the following practical wisdom: “I did then what I knew how to do. Now that I know better, I do better.”

If “knowing better” is not linked to “doing better,” I’m not convinced we’re doing either. I firmly believe that evaluation can be the heart of both knowing better and doing better, but in order to take up that call, we have to evolve how we think about (and conduct) evaluation. Here are some of the shifts that are top of mind for me.

1) Put it in context.

Validity is primarily about accuracy and use. Notions of validity have historically been based on technical definitions pulled from scientific research. Evaluators often seek to simplify, codify and de-contextualize the work in an effort to create easy-to-understand, logical, “provable” models that can be replicated. One consequence of dominant evaluation approaches is that nonprofits and communities have had negative (even traumatic) experiences of being judged in ways that don’t seem accurate or fair. Another consequence is that we are likely only getting part of the story.

Because of how we think about validity (and its bedfellow, objectivity), our inquiry lacks the nuance necessary to truly understand the complex endeavors in which many people are engaged and about which they hold different perspectives. Validity requires technical as well as contextual (including cultural) expertise. Values, value, and valuing are the heart of evaluation, but we (evaluators) have paid woefully inadequate attention to developing theories and practices that center values, value, and valuing.

2) Less extraction, more interaction.

What good does a report sitting on a desk do? When evaluative thinking becomes a way of doing business, stakeholders gain clarity about, and influence over, the relationship between impact and strategy; they better understand what information they need to make decisions, rather than making decisions driven by intuition or by external models, both of which can miss critical information.

For example, Brandon & Fukunaga (2014) examined studies of program stakeholders’ participation in evaluation and found that only 12 of the 41 studies reviewed involved program beneficiaries. Our hope is that when nonprofit leaders think more evaluatively, the need to include representatives of all stakeholder groups (i.e., those funding, designing, implementing, and participating in initiatives, as well as those impacted by them) will become clear, and those leaders will advocate for that inclusion. This change would reflect the kind of “nothing about us without us” shift in perspective and action needed for real change to occur.

3) Reframe accountability.

In philanthropy, evaluation is often (but not always) used to hold organizations accountable; it’s a gatepost for continued funding. I would never (ever) argue for a lack of accountability, but I might shift what people are accountable for, and I would seek to make accountability less of a power play (wherein one party holds another accountable) and more a matter of setting mutual expectations and communicating progress on shared work, so we can know better, and therefore do better, together.

These are the shifts I’m starting to see and hope to foster in evaluation. While I am excited about the growth and evolution of evaluation as a profession, I also deeply believe that evaluation as a practice is not and should not be the domain of evaluators alone. We need stakeholder groups to think and engage in evaluative ways if we are to know better and to do better.

Resources and associations I use to help broaden my thinking when it comes to evaluation include:

The American Evaluation Association’s AEA365 blog (https://aea365.org/blog/)

Better Evaluation (https://www.betterevaluation.org/)

The Center for Culturally Responsive Evaluation and Assessment (https://crea.education.illinois.edu/)

The Equitable Evaluation Initiative (https://www.equitableeval.org/)

_______________

[1] American Evaluation Association Guiding Principles for Evaluators: https://www.eval.org/p/cm/ld/fid=51

[2] Morris, Z. S., Wooding, S., & Grant, J. (2011). The answer is 17 years, what is the question: Understanding time lags in translational research. Journal of the Royal Society of Medicine, 104(12), 510–520. http://doi.org/10.1258/jrsm.2011.110180

Kelly Hannum, Ph.D.
Founder, Aligned Impact
Kelly has more than 20 years of experience blending evaluation theory and practice to help stakeholders make the world a measurably better place by clarifying purposes, processes and progress.