By Dr. Teri Behrens, director of special projects and editor in chief of The Foundation Review
Phil Buchanan, executive director of the Center for Effective Philanthropy, recently stirred up a lot of commentary with his article in the Chronicle of Philanthropy on the poor quality of research on the nonprofit sector. He listed five questions to ask about a research project:
  1. What was the methodology?
  2. Is the conclusion warranted?
  3. Is this really research at all?
  4. Has other relevant research been done on this topic?
  5. Who paid for it?

A commentator added “Who did it?” as a sixth important question. I agree completely with the importance of these questions. I also agree with Phil’s point that, as a sector, we need to be more demanding as consumers of research, and more transparent about our methods and humble about our conclusions as researchers. However, building a body of knowledge about an area of practice differs from building knowledge in a science, and there are other legitimate ways to do it. Other ways to build practice knowledge include:

  1. We can learn from the experiences of others. Knowledge built this way carries many caveats and limits on generalizability, but it is perhaps the first way most of us learn everything from how to feed ourselves to how to behave in a social situation. We observe and learn, and some people codify that knowledge in parenting books or etiquette columns. The Reflective Practice section of The Foundation Review, the peer-reviewed journal of philanthropy that I edit, offers many examples of how thoughtful reflection on experience can contribute to knowledge.
  2. Applied community-based research (CBR) doesn’t give us the controls of a randomized sample, but it also contributes to practice knowledge. It still requires well-designed data collection methods, appropriate analysis, and reasoned and reasonable conclusions, but understanding what worked and what didn’t in a given context can be valuable. Sometimes that is the best evidence we have as we build our strategies.

Research in many areas of inquiry into human behavior has perhaps too eagerly adopted the Stage 3 model of randomized controlled trials, forgetting the foundation that needs to be laid in earlier stages. Documenting safety and effectiveness with a given target population (a purposely selected, non-randomized test group) is usually the first step. That is the stage we are at with some of the research on the sector.

As commentators on Phil’s post pointed out, no research method is universally appropriate. I would add that the selected method needs to fit the research or evaluation question being posed. Practice knowledge in the nonprofit sector is very much at an early stage, where we are asking basic questions about whether and how interventions work. We know, for example, that there is an increasing emphasis on collaboration, based in part on theory about how to change systems and in part on the practical experiences of many in the sector. It is reasonable to ask what we know about how to collaborate more effectively, and to answer that question based in part on the experiences of those who have done it. This does NOT answer the question of the most effective method of fostering collaboration or creating collective impact, or which of two approaches is better, but it does give us a place to start.

All this said, we DO need to step up our game. There is much room for increased rigor, more transparency, and more circumscribed conclusions in research on the sector. I think The Foundation Review is beginning to raise the standards for philanthropy. The driving force behind the start of the journal was to get beyond “Here’s what we do and we like it,” which was how most foundations reported on their work. The editorial feedback and peer review processes do lead to increased rigor and transparency in reporting. However, from my vantage point as editor, I recognize the value of multiple ways of contributing to practice knowledge.
Phil pointed to the Johns Hopkins Center for Civil Society Studies as an example of solid nonprofit research. What other resources — centers, research reports, websites, journals, etc. — can you point to that do a good job of sharing research on the nonprofit sector rigorously and transparently? What are the barriers to using this research to inform practice?

Established in 1992 with support from the W.K. Kellogg Foundation, the Dorothy A. Johnson Center for Philanthropy promotes effective philanthropy, community improvement, and excellence in nonprofit leadership through teaching, research, and service. The Johnson Center is recognized for its applied research and professional development benefiting practitioners and nonprofits through its Community Research Institute, Frey Foundation Chair for Family Foundations and Philanthropy, The Foundation Review, The Grantmaking School, Johnson Center Philanthropy Archives and Library, and Nonprofit Services.

Grand Valley State University is a four-year public university. It attracts more than 24,500 students with high quality programs and state-of-the-art facilities. Grand Valley is a comprehensive university serving students from all 83 Michigan counties and dozens of other states and foreign countries. Grand Valley offers 81 undergraduate and 29 graduate degree programs from campuses in Allendale, Grand Rapids and Holland, and from regional centers in Muskegon and Traverse City. The university is dedicated to individual student achievement, going beyond the traditional classroom experience, with research opportunities and business partnerships. Grand Valley employs more than 1,900 people and is committed to providing a fair and equitable environment for the continued success of all.

The Johnson Center receives ongoing support from the Doug & Maria DeVos Foundation, Dyer-Ives Foundation, Frey Foundation, Grand Rapids Community Foundation, and W.K. Kellogg Foundation. For more information, contact Robert Shalett, communications director for the Johnson Center, at 616-331-7585.


edwarjam says:
Part of the reason we have varying levels of rigor is the lack of a national database that is easily accessible to researchers and that contains programmatic, organizational, and historical information. Such a database would allow researchers to conduct surveys with adequate representation and to conduct secondary analysis of sector trends and outcomes. Fields such as medicine and education benefit greatly from having large national databases as an additional way of knowing.
