- What was the methodology?
- Is the conclusion warranted?
- Is this really research at all?
- Has other relevant research been done on this topic?
- Who paid for it?
A commentator added “Who did it?” as a sixth important question. I agree completely that these questions are important, and I also agree with Phil’s point that, as a sector, we need to be more demanding as consumers of research, and more transparent about our methods and humble about our conclusions as researchers. However, there are different ways to build the body of knowledge about any area of practice — as opposed to building knowledge in a science. Other ways to build practice knowledge include:
- We can learn from the experiences of others. Knowledge built this way carries many caveats and limits to generalizability, but it is perhaps the first way most of us learn everything from how to feed ourselves to how to behave in a social situation. We observe and learn — and some people codify that knowledge in parenting books or etiquette columns. The Reflective Practice section of The Foundation Review, the peer-reviewed journal of philanthropy that I edit, offers many examples of how thoughtful reflection on experience can contribute to knowledge.
- Applied community-based research (CBR) doesn’t give us the controls of a randomized sample, but it also contributes to practice knowledge. We still need well-designed data collection methods, appropriate analysis, and reasoned and reasonable conclusions, but understanding what did and didn’t work in a given context can be valuable. Sometimes that is the best evidence we have to work with as we build our strategies.
Research in many areas of inquiry into human behavior has perhaps too eagerly adopted the model of Stage 3, randomized controlled trials, forgetting about the foundation that needs to be laid in earlier stages. Documenting safety and effectiveness with a given target population — a purposely selected, non-randomized test group — is usually the first step. That is the stage we are at with some of the research on the sector.

As commentators on Phil’s post pointed out, no research method is universally appropriate. I would add that the selected method needs to fit the research or evaluation question being posed. Practice knowledge in the nonprofit sector is very much at an early stage, where we are asking basic questions about whether and how interventions work. We know, for example, that there is an increasing emphasis on collaboration, based partly on theory about how to change systems and partly on the practical experiences of many in the sector. It is reasonable to ask what we know about how to collaborate more effectively, and to answer that question based in part on the experiences of those who have done it. This does NOT answer the question of the most effective method of fostering collaboration or creating collective impact, or which of two approaches is better — but it does give us a place to start.

All this said, we DO need to step up our game. There is much room for increased rigor, more transparency, and more circumscribed conclusions in research on the sector. I think The Foundation Review is beginning to raise the standards for philanthropy. The driving force behind starting the journal was to get beyond “Here’s what we do and we like it,” which was how most foundations reported on their work. The editorial feedback and peer review processes do lead to increased rigor and transparency in reporting. Still, from my vantage point as editor, I recognize the value of multiple ways of contributing to practice knowledge.
Phil pointed to the Johns Hopkins Center for Civil Society Studies as an example of solid nonprofit research. What other resources — centers, research reports, websites, journals, etc. — can you point to that do a good job of sharing research on the nonprofit sector rigorously and transparently? And what are the barriers to using this research to inform practice?