Firing bad donors

Like any other industry, money drives the social sector. Sure, it matters that our hearts are in the right place, and that we are willing to make less doing what we do. But we still work for a paycheck, and whether we can earn that paycheck creating the social value we seek determines, to some extent, what we focus on.

The importance of money is the reason that the ongoing debate about how to rate charitable organizations matters. We do not simply want to know what works and what does not, but which agencies are deserving of charitable dollars. In short, what is at stake is not only which agencies get funded, but which causes do as well.

How we rate agencies will alter how agencies do business, which will in turn impact their bottom lines. That is why we have to be careful about how we rate organizations. Charity Navigator realized that its focus on administrative overhead not only misled donors about which agencies were most impactful, but also incentivized organizations to focus more on financial gymnastics than on creating social value.

It’s great that Charity Navigator is making changes to the way it evaluates organizations, and I’ll be following those changes closely. But if there has been such widespread agreement that the focus on overhead ratios is a poor indicator, why has the sector abided by this fruitless metric for so long? Essentially, the Charity Navigator effect was to flood agencies with donors who contributed to those organizations for bad reasons.

I don’t run a non-profit. My company builds database systems and provides data analysis services for poverty-focused social sector organizations, so instead of donors I have customers. But the two really are not all that different. Just like donors, I have some agencies that hire my firm for good reasons, and others that hire us for stupid ones.

And just like non-profits, which have long accepted donations from individuals and foundations whose values do not align with their own, I too have been (and am) guilty of working with organizations whose missions are a poor fit with my company’s values.

I have rationalized working with bad customers in the past by thinking I would use the money from those contracts to supplement the work my company really cares about. What I failed to acknowledge is that money not only enables you to follow your agenda, but it also has a way of setting it.

Basically, these bad customers led me and my team to focus on building features in our products that did not align with our core mission. We ended up spending staff time on issues that were at best tangentially related to the core poverty issues our company was built to address.

Non-profits do the same thing. It is a rare occasion when an agency turns down funds. I have seen my own customers accept grants for the sake of expansion, not thinking how that funding would impact the overall trajectory of the organization. A lot of funding comes with so many strings that it can pull an agency in multiple directions, resulting in devastating mission creep.

Rating social sector organizations well is vitally important. How we evaluate organizations determines which ones we fund, which in turn changes what agencies do. While I am all for better evaluation of non-profit organizations, perhaps the organizations would be wise to take the time to evaluate those who evaluate them in turn.

Focus on scaling evaluations is premature

Our sector is enamored with the idea of reaching scale, no matter what the issue. We want to bring education reform to scale, poverty alleviation to scale, and now even evaluation to scale.

Lamenting the lack of funding available for social sector evaluations, Dan Pallotta writes in a post on Our Ineffectiveness at Measuring Effectiveness that

The watchdogs never achieved scale. The largest among them, Charity Navigator, evaluates about 8,000 of the active charities in the country, or 1.1%. And none of the new effectiveness organizations are on a trajectory to get much larger, even using relatively inexpensive and simple evaluation methods. For example, GiveWell, one of the best of the bunch, measures only 413 organizations, or .059% of the active charities in the country. Why is this so? None has yet discovered a revenue model that can achieve dramatic growth.

While Dan is spot on to highlight the importance of evaluation in the social sector, his focus on scale is premature. The fact is that none of the evaluation methodologies available to us today are worth scaling up.

Evaluation is complex, and Dan’s call for one evaluation agency to rule them all smacks of oversimplification by those who wrongly believe evaluating non-profits is a specialty in itself. Part of the reason organizations like Charity Navigator have failed to evaluate organizations effectively is that they have tried to evaluate agencies that are too dissimilar from one another. Comparing the incomparable necessarily forces evaluators to rely on pointless metrics like overhead ratios.

Perhaps more damaging, the insistence on evaluating all types of agencies distracts from the more important work of developing sound methodologies in any one of the numerous philanthropic sub-sectors, such as environmental issues or poverty. Our ratings suck not just because of a lack of money or imagination, as Dan suggests, but because of a lack of focus and specialization.

An evaluation specialist is not necessarily someone who understands the managerial side of non-profits. Ideally, an evaluator would be an expert in the social or environmental issue a non-profit aims to influence.

If I want to evaluate an environmental organization, forget the non-profit consultant; give me an environmental expert. Yet today’s lauded evaluation agencies by and large fail to take an issue-specific approach to outcomes evaluation.

We have a lot of work to do before bringing evaluations to scale. The first step is abandoning the pointless pursuit of an evaluation framework that compares all organizations, no matter what they do, with one another. If we can agree on that, we can begin the serious work of evaluating social and environmental outcomes. Eventually, we might even have something worthy of scale.

Abandoning across-sector evaluative metrics

The social sector is a complex industry to work in. While everything we do can broadly be described as philanthropic, that is where the similarities end for many of us. We are unified insofar as we want to do good in the world, but what that good is, and how we achieve it, varies wildly.

On Friday I wrote about the limit of objectivity. There has been a groundswell toward basing social investments on objective realities rather than subjective emotions. Our sector has benefited tremendously from this new wave of thought.

One of the agencies pioneering the march toward greater accountability and evaluative objectivity is GiveWell. However, GiveWell’s recent move to drop its three-star ranking illustrates the pitfalls of trying to force the appearance of objectivity onto subjective issues. Describing GiveWell’s decision to drop the three-star ranking, Holden Karnofsky writes:

In the end we don’t feel comfortable rating Nurse-Family Partnership higher than Small Enterprise Foundation … or lower … or the same. They’re too different; your decision on which to give to is going to come down to judgment calls and personal values.

While there is an objective, fact-based reality of what an individual organization does or does not accomplish, how we value the change an agency creates in the world has more to do with personal values than with an elusive universal truth of social value. GiveWell is right to recognize this fact and abandon the pointless pursuit of purportedly standardized across-sector evaluative metrics.

Indeed, the discussion about standardization of outcomes metrics in the social sector has been taken to unrealistic extremes. The Alliance for Effective Social Investing, for example, allegedly aims to create a way of evaluating organizations that do unlike things, such as the comparison of an after-school program to a disaster relief agency.

This is insane.

The standardization that we need is within focus areas, like a way of evaluating poverty-focused organizations against one another. This is very difficult to do; a gang-intervention program and a low-income utility supplement both target poverty, but they do so in different ways. Figuring out a way to compare these dissimilar programs with common social objectives is relevant and valuable in a way that comparing Greenpeace to the ballet is not.

The rush to create true standardization has driven some to focus on organizational health metrics instead of social outcomes. If the goal is to equate unequatable organizations, then the focus on managerial indicators makes sense. It is the only logical way of comparing dissimilar agencies that do not have a shared social interest.

The problem is that organizational metrics do not say anything about outcomes. We don’t evaluate for-profit organizations on their managerial structures; we evaluate them based on profit. Their organizational abilities certainly impact the bottom line, but they are not in themselves the bottom line.

There is no point in evaluating agencies based on proxy metrics like managerial indicators when we can, and should, simply measure their real outcomes: changes in social indicators. Those social indicators are comparable within focus areas, but not across them, which is fine.

The question we need to answer is not what philanthropists should focus on; instead, we need a way of better identifying the highest-impact agencies that address a funder’s focus area. Not only is this an important question, it is an answerable one.

Therefore, we need to drop our quest to measure the unmeasurable. Instead, we should shift our focus to analyzing that which is empirical. Not only can we actually measure social impact within focus areas, but doing so answers one of the only questions worth asking.