I’ve written extensively on how the social service sector needs to be more data-driven, arguing that data and outcomes analysis should drive what we do and how we do it.  This argument is not unique; pretty much everyone makes it.  The real question is how we determine what is working and what is not.  This is a question of metrics in social services: establishing universal guidelines so we can compare organizations to one another and direct resources accordingly.

About a year ago I wrote about an organization, the Alliance for Effective Social Investing, which aims

To drive more funds to high performing nonprofit organizations by helping donors adopt sound social investing practices.

They plan to do this by creating an evaluation standard by which organizations can be compared to one another.  I recently wrote a post for Inforum providing an update on what the Alliance has accomplished (nothing).  Last night I had the displeasure of reading through the group’s most recent paper, Social Investment Risk Assessment Protocol, 11th Version.  The document provides a questionnaire and framework for non-profit evaluators.  The idea is that if all evaluators use this assessment tool, then we will have common metrics.  There are two problems with this approach.

  1. Inherently not scalable – it is a fantasy to think every organization can be independently evaluated in any meaningful way, with any regularity.  If we can’t do this for every organization, or even a reasonable fraction of them, there will be no common metrics because the number of evaluated organizations won’t be significant.
  2. Subjectivity – the evaluation methodology proposed by the group depends on the subjectivity of the evaluator, who rates organizations on a scale of one through five on issues like whether or not an organization holds staff accountable through performance reviews.

On Wall Street, companies do not attract investment based on whether or not they have performance reviews.  Companies have performance reviews because reviews keep productivity and innovation up, and higher productivity and innovation mean greater profits.  The presence of performance reviews, in and of itself, is not meaningful.  Companies are evaluated, in large part, on their profits.

So what is the common currency by which social service and non-profit organizations should be evaluated?  That is the central question, and the one the Alliance completely fails to address.  The real point should be to evaluate what gets done, not how we do it.  The inadequacy of the Alliance’s approach lies in its focus on the how.  In evaluation speak, we refer to this as focusing on outputs (what we do) rather than outcomes (what results we get for the people we serve).

Better common metrics are client outcomes such as changes in poverty status, housing status, food insecurity, and educational outcomes.  It’s funny how trendy evaluation is to discuss right now, yet nothing is really being done to move the sector any closer to meaningful evaluation metrics.  So far, this is largely the case in both the domestic and international spaces.  While the Alliance, to date, is a non-factor in seriously providing evaluation frameworks, I’m interested to see what the Acumen Fund does to move this issue forward with its highly anticipated Pulse evaluation system.