We have an evaluation problem in the social sector: we want evaluations to be easy more than we want them to be right. Designing good surveys and collecting client data is hard. Rating how we feel about a particular program on a scale from one to five is easy. As a sector, we need to guide funding toward programs that work and abandon ones that don’t. If we are to reliably move resources toward the highest-achieving organizations, we have to define what “high achieving” means.
To me, the answer to what makes an organization high achieving is clear. The social service sector exists to reduce social ills like poverty, homelessness, and food insecurity. Organizations that have a greater impact on improving the lives of their clients are better than those with a lesser one. Any evaluative framework that is not centered on measuring changes in client indicators is irrelevant. Despite this obvious point, I am dismayed by how celebrated efforts like the Alliance for Social Investing and Greatnonprofits.org fail to base their evaluative criteria on client outcomes.
There is a lot at stake in getting a rating system right (or wrong). The potential harm a poor rating system can cause was illustrated last week in a partnership between Greatnonprofits.org and Guidestar.org. These two rating organizations teamed up to compile a list of the “Top Ten Relief Organizations Working In Haiti.” The list was compiled based on a handful of donor reviews, and as non-profit consultant Gayle Gifford pointed out, those organizations “that were listed in the Top 10, had ONLY 1 or 2 Reviews. That’s it.”
Greatnonprofits.org and Guidestar.org responded to Ms. Gifford’s criticism by dropping the top ten list altogether. While these rating organizations certainly did the right thing by retracting their list, it is remarkable that two supposed evaluation leaders in our industry could have compiled such a hasty, pointless agency ranking in the first place. There is much that is problematic here, and the paltry number of reviews the list was based on is the least of it.
If we are ever to develop a meaningful top ten list of the most effective social programs, we have to embrace the social-scientific complexities of evaluating clients’ social outcomes. This means taking the collection and analysis of client data, in both its quantitative and qualitative forms, seriously. Simplistic rating systems that ask donors how they feel about a particular organization may seem seductive, but they could not be more beside the point in determining which organizations are best able to improve the lives of hurting people. So long as we fail to move toward an evaluative framework centered on sound outcomes-measurement practices, the only top ten list we can reliably compile is the “Top Ten Worst Ways to Rank Non-Profits.”
Originally posted on inforumusa.org