The social sector is a complex industry to work in. While what we all do can be broadly described as philanthropic, that is where the similarities for many of us end. We are unified insofar as we want to do good in the world, but what that good is, and how we achieve it, varies wildly.
On Friday I wrote about the limit of objectivity. There has been a groundswell toward basing social investments on objective realities rather than subjective emotions. Our sector has benefited tremendously from this new wave of thought.
One of the agencies pioneering the march toward greater accountability and evaluative objectivity is GiveWell. However, GiveWell’s recent move to drop its three-star ranking illustrates the pitfalls of trying to force the appearance of objectivity onto subjective issues. Describing GiveWell’s decision to drop the three-star ranking, Holden Karnofsky writes:
In the end we don’t feel comfortable rating Nurse-Family Partnership higher than Small Enterprise Foundation … or lower … or the same. They’re too different; your decision on which to give to is going to come down to judgment calls and personal values.
While there is an objective, fact-based reality of what an individual organization does or does not accomplish, how we value the change an agency creates in the world has more to do with personal values than with an elusive universal truth of social value. GiveWell is right to recognize this fact and abandon the pointless pursuit of standardized cross-sector evaluative metrics.
Indeed, the discussion about standardization of outcomes metrics in the social sector has been taken to unrealistic extremes. The Alliance for Effective Social Investing, for example, allegedly aims to create a way of evaluating organizations that do unlike things, such as the comparison of an after-school program to a disaster relief agency.
This is insane.
The standardization that we need is within focus areas, like a way of evaluating poverty-focused organizations against one another. This is very difficult to do; a gang intervention program and a low-income utility subsidy both target poverty, but they do so in different ways. Figuring out how to compare these dissimilar programs with common social objectives is relevant and valuable in a way that comparing Greenpeace to the ballet is not.
The rush to create true standardization has driven some to focus on organizational health metrics instead of social outcomes. If the goal is to equate organizations that cannot meaningfully be equated, then the focus on managerial indicators makes sense. It is the only logical way of comparing dissimilar agencies that do not share a social interest.
The problem is that organizational metrics do not say anything about outcomes. We don’t evaluate for-profit organizations on their managerial structures; we evaluate them based on profit. An organization’s managerial abilities certainly affect the bottom line, but they are not themselves the bottom line.
There is no point in evaluating agencies based on proxy metrics like managerial indicators when we can, and should, simply measure their real outcomes: changes in social indicators. Those social indicators are comparable within focus areas, but not across them, which is fine.
The question we need to answer is not what philanthropists should focus on; rather, we need a way of better identifying the highest-impact agencies within a funder’s focus area. Not only is this an important question, it is an answerable one.
Therefore, we need to drop our quest to measure the unmeasurable. Instead, we should shift our focus to analyzing that which is empirical. Not only can we actually measure social impact within focus areas, it is one of the only questions worth answering.