No time for learning? Take a cue from Ice Cube and “Check yo self before you wreck yo self.”
by Jara Dean-Coffey and Kelly Hannum
When the folks at the Johnson Center asked us to write a blog for the Learning for Good series, we jumped at the chance. Learning — particularly systematic and shared learning — is essential for advancing philanthropic missions, not to mention our individual development.
Many philanthropic organizations are encouraging greater application of data-informed decision making in both foundation grantmaking and nonprofit services and programs — including the Johnson Center. But we’re still in the early stages of understanding how best to gather, interpret, and use data. Data, as we define the term, are raw bits of information that are often decontextualized and need additional interpretation (or meaning making) to be useful.
A report from EveryAction and Nonprofit Hub (2016) revealed that while 90% of nonprofit respondents said they were collecting data, only 6% said they were using that data effectively. Many funders are in a similar position, lacking data expertise and effective evaluation tools, according to a report from GuideStar and Exponent Partners (2016).
This suggests we are leaving a lot of potential learning on the table. But it also means we’re in a unique position now to adopt effective learning strategies that can help us become more strategic and equitable in our everyday practice. Learning is how we get through this ever-changing world. A lot of our learning happens automatically, and disrupting that process is the key to learning better and to building fair, equitable learning into our philanthropy.
We are flooded with stimuli and data. There’s much to pay attention to — too much, argues Platsis (2017) in “The Human Factor: Technology Changes Faster Than Humans.” We are swimming — and sometimes drowning — in a sea of data. We have formal and informal systems for gathering data, and we have shortcuts for making sense of it. There’s nothing wrong with that — we can’t pay attention to everything. But as individuals — and collectively within our organizations — we need to take a step back from time to time and recalibrate how we gather data and turn it into useful information.
In our consulting practice, we frequently work with nonprofit and foundation clients who gather data they don’t use, use data that doesn’t really get at what matters, and/or miss important information (because they are focused on the data they already have). And while it’s critical to figure out what information you need, it’s also important to step back even further and think about the bigger context of how we, as individuals and organizations, decide what — and who — to pay attention to and what we deem credible. We know it sounds esoteric, like something you might do when you have a bit of spare time (a.k.a. “never”). But our individual and collective sense-making autopilots are not dependable, and we’re not going to stay on course — or perhaps switch to a better course — without intentionally recalibrating. As we grow more aware of our world, and as our world changes, our autopilots become less and less accurate.
One of our favorite examples from history is the (now) seemingly ridiculous and yet once widespread medical practice of bloodletting. The practice was used to treat a wide variety of ailments for over 2,000 years. The only problem was that it didn’t work for the majority of ailments to which it was applied, and it often made the patient worse. While bloodletting can be useful for a limited number of conditions, it was applied universally, regardless of its efficacy for the patient in that context. Even when evidence suggested it was ineffective, the practice remained popular for another two hundred years. Why? Because it was a long-standing medical practice that fit with the prevailing thinking of the day. How could something so prevalent, something so embraced by doctors, be wrong? But it was.
In 1989, five teenagers, known collectively as the Central Park Five, were arrested and convicted of — and eventually exonerated of — a crime in a case that highlights multiple systems failures. A story developed that fit stereotypes, made sense to those with power, and solved a problem. The hitch was that it wasn’t true; it not only harmed the lives of those falsely accused, it also allowed the real perpetrator to continue committing crimes. Actor Marquis Rodriguez, who plays one of the teenagers in When They See Us, a Netflix series directed by Ava DuVernay (2019), stated, “There’s nothing more terrifying than telling your truth and telling it over and over and over again, but having people refuse to honor it as the truth.”
Val McDermid’s book Forensics (2014) provides a fascinating account of the history, practice, and future of forensic science — a form of legal evidence (hot tip — it’s a great audiobook too). Much of what we take for granted as solid evidence actually leaves more room for assumption and error than we realize. While the Central Park Five case might seem like an extreme example, it highlights the fact that we are prone to accepting as truth that which matches our preconceived notions. We all have preconceived notions, as individuals and as collectives.
Daniel Kahneman (2011) explains why this happens. The human brain operates on two systems. System 1 is fast, intuitive, and emotional. It is based on our history and experiences. It is a shortcut that allows us to judge quickly (and sometimes inaccurately). System 2 is slower, more deliberative, and more logical. It takes A LOT for us to move from System 1 to System 2.
The human brain is lazy. Given the overabundance of data flying at us from all sorts of sources and devices, it is easy for us to default to System 1 and fail to engage System 2 as critical consumers of context and data. In short, if a story fits our concept of people and the world, it seems true, so we don’t second-guess it. Conversely, information that doesn’t fit our preconceived notions seems false, and even in the face of evidence, we discount it. This plays out at both the individual and collective levels. The good news is that with effort we can engage System 2. The bad news is that engaging System 2 takes deliberate, consistent practice.
Part of learning is recognizing the limits of both our understanding and the ways in which we understand. We have to engage System 2 to get ourselves off of autopilot. The following questions can unlock System 2 and support our learning:
- What assumptions are we making?
- What perspectives are we missing or devaluing?
- What perspectives are we deeming most credible?
- To what and whom are we paying attention?
- What are we missing?
- How do our methods for gathering, making sense of, and using information fit (or not) with our mission and values?
Fair warning: the process can be tedious, and it can feel downright awkward at times. Resources like the recently released guidebook Why Am I Always Being Researched? (2018) raise awareness about the ways in which we gather, interpret, and use information, and how those methods are influenced in ways of which we may not be aware. They also offer practical frameworks and ideas for doing things differently. There is even a section that focuses on funders. The roots of many research and evaluation practices used today hark back to different times and no longer serve us well. It’s time for a reset.
The Center for Evaluation Innovation has explored and written about the cognitive traps foundations should be aware of and the deliberate practices they can adopt to support better learning. An article based on Associate Director Tanya Beer’s short talk at this year’s Grantmakers for Effective Organizations Learning Conference (2019) gives a quick dose of CEI’s thinking. It is written for foundation trustees but has broader relevance.
If we fail to disrupt and reflect on our learning, we will continue to gather and make sense of data in ways that reflect assumptions that may never have been true, or are no longer true, and reinforce practices that conceal truth rather than reveal it.
References and Further Reading
Beer, T. (2019, June). Realigning foundation trustees to incentivize learning. Center for Evaluation Innovation. Retrieved from https://www.evaluationinnovation.org/publication/realigning-foundation-trustees-to-incentivize-learning
Chicago Beyond. (2018). Why am I always being researched? A guidebook for community organizations, researchers, and funders to help us get from insufficient understanding to more authentic truth. Retrieved from https://chicagobeyond.org/researchequity
Coffman, J. (2018). 5-a-day: Learning by force of habit. Center for Evaluation Innovation. Retrieved from https://www.evaluationinnovation.org/insight/5-a-day-learning-by-force-of-habit
DuVernay, A. (Director). (2019). When they see us [Television series]. Netflix.
EveryAction and Nonprofit Hub. (2016). The state of data in the nonprofit sector. Retrieved from http://cdn2.hubspot.net/hubfs/433841/The_State_of_Data_in_The_Nonprofit_Sector.pdf
GuideStar and Exponent Partners. (2016). Data-driven funders: In search of insights. Retrieved from https://learn.guidestar.org/hubfs/docs/Data-Driven-Funders2016-04-11.pdf
Harris, A. (2019, May 30). The Central Park Five: ‘We were just baby boys’. The New York Times. Retrieved from https://www.nytimes.com/2019/05/30/arts/television/when-they-see-us.html (print version: June 2, 2019, p. AR12).
Kahneman, D. (2011). Thinking, fast and slow. New York, NY: Farrar, Straus and Giroux.
McDermid, V. (2014). Forensics: What bugs, burns, prints, DNA, and more tell us about crime. London: Profile Books.
Platsis, G. (2017, April 24). The human factor: Technology changes faster than humans. Tripwire. Retrieved from https://www.tripwire.com/state-of-security/off-topic/human-factor-technology-changes-faster-humans
Jara Dean-Coffey is Founder and Director of the Equitable Evaluation Initiative. For the past twenty-five years, she has partnered with clients and colleagues to elevate their collective understanding of the relationship between values, context, strategy, and evaluation, and to shift practices so that they are more fully in service of equity. Her consulting practice, jdcPartnerships (now Luminare Group), has consistently sought to push practice and incubated the initial explorations of equitable evaluation (EE) as a next step for the field. Learn more about Jara.
Kelly Hannum, Ph.D., has over 20 years of experience blending evaluation theory and practice to help stakeholders make the world a measurably better place by clarifying purposes, processes, and progress. To learn more about Kelly, please visit her LinkedIn page.