Please stop developing websites that list nonprofits

Yet another website that catalogues non-profits was released into the wild earlier this week, as the Laura and John Arnold Foundation launched the Giving Library. For a sector that disdains duplication of effort, the market for online directories of non-profit organizations is surprisingly crowded. What all of these efforts to create proprietary listings of non-profit organizations have in common is an imminent threat of extinction by Google, which has a pretty serious competitive advantage in the indexing game.

Developing listings of nonprofits is not necessary with modern search tools. While it is fairly trivial to find an organization that wants to take a donor's money, it is far more difficult to identify the organizations one believes will make the most of one's charitable dollars. To be fair, organizations like Great Nonprofits and Charity Navigator are attempting to solve this bigger problem of helping donors identify effective agencies to invest in, although neither provides analysis compelling enough to move functionally beyond simple cataloguing (yet).

Indeed, the growing influence of GiveWell underscores the value of analysis over indexing. The problem, of course, is that the deeper analytic approach is research intensive and does not scale. Therefore, we are instead bombarded with superficial efforts that simply create nonprofit listings or develop laughably linear four-star rating systems.

And where is the evidence that donors need a website dedicated to listing non-profit organizations? Results of the 2012 Millennial Impact Report suggest that, at least among web-savvy donors between the ages of 20 and 35, people are perfectly capable of learning about non-profits through organizations' websites, newsletters, and social media channels without the assistance of intermediaries.

Which brings us back to the analysis problem. Donors do not need help finding organizations, they need help selecting organizations based on their own evaluative criteria. GiveWell simplifies the process for a certain set of donors by articulating its own criteria and providing investment advice to donors who are inclined to adopt GiveWell's utility framework.

The more difficult issue, then, is to develop ways of matching donors to effective organizations that address issues consistent with the donor's own values. This is a matter of substantive impact evaluation and donor utility elicitation, neither of which has anything to do with hiring a web design firm to throw up yet another nonprofit digital dumpster.


How to select measurable outcomes

At the end of last month I wrote a post advising organizations to set up their evaluation frameworks prior to advertising their greatness to the world, as credible evidence to back up claims of effectiveness is infinitely more persuasive than the alternative. In that post I gave an example of a consultant who helped a former customer of mine create a logic model with poorly defined outcomes, positioning my customer for advertisement rather than measurement. From the post:

One such consultant, who I was later hired to replace to clean up his mess, outlined a data collection strategy that included an outcome of “a healthy, vibrant community free of crime where all people are treated with respect and dignity.”

What an operationalization nightmare. How the heck do you measure that? You don’t. And that’s the point. The logic model was not a functional document used for optimizing results. Instead, it was an advertising document to lull unsavvy donors into believing an organization is effective in the absence of evidence.

Fair criticism, I think (obviously). Well, yesterday Jennifer Banks-Doll asked an even fairer question:

David, I really enjoy your blog and love the example you have given above about the unmeasurable outcome.  So true and so common!  But I feel like you’ve left us hanging…What kind of outcomes would you recommend instead?  Can you give us an example or two?

This is an excellent question, and while I did my best to give an example of the right type of thought process an organization should go through to operationalize outcomes, unfortunately the best answer I can really give is the unsatisfying “it depends”.

While there are those who tend to think through their logic models from left to right (that is, "if I do this, I expect that"), I tend to think the other way around, starting with the goal an organization aims to achieve and moving backwards. There is nothing wrong with beginning the ideation and goal-setting process with seemingly unmeasurable ideals. However, what begins as pie-in-the-sky should not be left there.

In my response to Jennifer's question, I gave the example of an organization trying to create a "safe" neighborhood. Safety is an inherently abstract notion. Part of safety might be the actual incidence of crime, but safety might also have to do with perceptions. If someone feels unsafe in an area free of crime, is that area "safe"? Well, it depends on how an organization defines its goals.

In the process of creating more exacting definitions of the change you want to see in your target population, you get closer to identifying measurable indicators. The goal is not to create one measure for "safety", or whatever your outcome of interest is. Instead, what we want to do is come up with a few measures that collectively approximate our otherwise abstract outcome.

One other note on this point. People tend to think that “measurable” necessarily means inherently enumerable. Working again with our safety example, one measure of safety might be crime rates, but another measure might be an opinion survey of residents in an area, asking them if they feel safe. If I had to choose one, my preference would be for the perception survey over the crime data.

Crime statistics, while numeric by nature, are not necessarily pure measures of "safety". If arrests increase in a neighborhood, this might be due to an increase in criminal activity, but it might also mean there are more police patrolling the area. Indeed, in some of the community development work I have done, I have found that perceptions of safety actually increase with the number of reported crimes.
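To make the idea of "a few measures that collectively approximate safety" concrete, here is a minimal sketch of how such indicators might be combined. Everything in it is a hypothetical assumption: the indicator names, the baseline figures used for standardization, and the equal weighting are placeholders an organization would replace with its own definitions.

```python
import statistics

# Hypothetical neighborhood indicators; all names and numbers are invented for
# illustration, as are the "baseline" figures used for standardization.
indicators = {
    "reported_crimes_per_1000": {
        "value": 42.0, "baseline_mean": 55.0, "baseline_sd": 12.0, "higher_is_better": False},
    "pct_feel_safe_at_night": {
        "value": 0.61, "baseline_mean": 0.52, "baseline_sd": 0.10, "higher_is_better": True},
    "pct_know_three_neighbors": {
        "value": 0.48, "baseline_mean": 0.45, "baseline_sd": 0.08, "higher_is_better": True},
}

def safety_z(value, mean, sd, higher_is_better):
    """Standardize an indicator so that higher always means 'safer'."""
    z = (value - mean) / sd
    return z if higher_is_better else -z

# No single number *is* safety; averaging the standardized indicators
# gives a composite that approximates the abstract outcome.
composite = statistics.mean(
    safety_z(v["value"], v["baseline_mean"], v["baseline_sd"], v["higher_is_better"])
    for v in indicators.values()
)
print(f"Composite safety index (baseline SD units): {composite:+.2f}")
```

The point of the sketch is not the arithmetic, which is trivial, but the structure: each indicator captures a different facet of the abstract outcome, and the composite only means something because the organization has already done the work of defining what "safe" means to it.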

The point I am trying to illustrate is that organizations should not feel pressure to use public data or seemingly more quantitative metrics if those are not good measures of their intended outcomes. The best indicators are those that most closely approximate the change you want to see in the world. If the best way to get that data is asking people how they feel about a particular issue, then by all means, hit the streets.

Snake oil nonprofit consultants sell outcomes as impact

Google’s advertising algorithm knows me too well. Pretty much the only advertisements I see now are for non-profit services. I tend to click through to these advertisements as a way of checking the pulse of social sector offerings outside the typical circles I operate in.

Yesterday I clicked on this advertisement for Apricot non-profit software by CTK. The advertisement includes a product demonstration video for their outcomes management system in which the narrator conflates outcomes with impact ad nauseam.

Sigh.

"Outcomes" is not another word for "impact". An outcome refers to a target population's condition; impact is the change in that condition attributable to an intervention.
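A toy numerical example, with entirely made-up figures, may help keep the two terms straight: the outcome is the level you observe in the population you serve, while the impact is the portion of that level you can credibly attribute to the program, which requires some estimate of what would have happened without it.

```python
# Purely illustrative numbers for a hypothetical employment program.
served_employed_rate = 0.60      # outcome: 60% of participants employed at follow-up
comparison_employed_rate = 0.55  # estimated counterfactual: employment rate absent the program

# The outcome describes the target population's condition...
outcome = served_employed_rate

# ...while impact is the change in that condition attributable to the intervention.
# A credible comparison group (randomized or carefully matched) is what makes this
# subtraction defensible; without one, the 60% by itself says nothing about impact.
impact = served_employed_rate - comparison_employed_rate

print(f"Outcome: {outcome:.0%} of participants employed")
print(f"Estimated impact: {impact * 100:.0f} percentage points attributable to the program")
```

A dashboard full of outcome distributions, however polished, never performs that subtraction, which is exactly why it cannot speak to impact.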

While we all tend to be saying the same buzzwords ("manage to outcomes", "collective impact", etc.), we lack uniform agreement on what these terms mean. In the case of outcomes and impact, these are terms that come from the evaluation literature, and are (hopefully) not open to the manipulation of social sector consultancies with more depth in marketing than social science.

There are some who believe helping an organization at least understand its outcomes is a step in the right direction. I count myself as one of them. But telling an organization it can infer something about impact and causality by simply looking at a distribution of outcomes is not only irresponsible, it is downright dishonest.

The promise of metrics is to help the social sector move toward truer insight, not to use data to mislead funders. Whether the persistent misuse of outcomes metrics is intentional or the result of ignorance, it has no place in our work, and only stands to derail the opportunity we all have to raise the standards of evidence in our sector.

Funder-mandated outcomes requirements create perverse incentives for implementing organizations

On its face, funding agencies providing grants contingent on implementing organizations meeting outcomes objectives seems sensible. After all, the sector is moving toward an era of smarter philanthropy and impact-oriented giving, so shouldn't funders demand outcomes for their money?

Kind of.

I have been aware of this problem for some time, but was recently faced with an ethical dilemma when a customer asked me to help them adjust their program offerings to better achieve funder required outcomes.

The organization I was working with provides employment services to low-income individuals. Their grant stipulated that a certain number of program participants had to get placed in jobs in a given period of time. At first glance this seems to be a simple optimization problem where the employment program wants to maximize the number of people placed into employment.

Given this simplistic directive (place as many people as possible into employment), the optimization is actually quite trivial: serve the people most likely to get employment and ignore the hard to serve.

In this case, the hard to serve also tend to be those who need employment assistance the most. People with children might be harder to place into employment, given childcare needs, yet these individuals arguably need employment more than the otherwise equivalent person without dependent children.

Similarly, better-educated people are easier to place into jobs than those with less education. But better-educated people are also more likely to find employment irrespective of the program intervention. This, of course, highlights the difference between outcomes and impact. The grant required improvements in outcomes, that is, the number of people placed into employment. When the focus is on outcomes, simple rationality dictates finding the most well-off persons who still qualify for services, and serving those folks first, to the detriment of those who are harder to serve but in greater need.
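A toy simulation makes the perverse incentive concrete. Everything here is an invented assumption, especially the idea that the program's "lift" is largest for the hardest-to-serve applicants, but it shows how optimizing the grant's placement count and optimizing impact point toward serving different people.

```python
import random

random.seed(1)

# An invented applicant pool. "baseline" is each person's chance of finding a job
# without the program; "lift" is the extra probability the program adds. The lift
# formula encodes the (assumed) pattern that hard-to-serve applicants gain the most.
applicants = []
for i in range(200):
    baseline = random.uniform(0.1, 0.9)
    lift = 0.4 * (1 - baseline)
    applicants.append({"id": i, "baseline": baseline, "lift": lift})

slots = 50  # the program can only serve 50 people

def expected_placements(served):
    """What the grant counts: expected job placements among those served."""
    return sum(a["baseline"] + a["lift"] for a in served)

def expected_impact(served):
    """Placements that would not have happened without the program."""
    return sum(a["lift"] for a in served)

# Strategy A: chase the outcome threshold -- serve the most job-ready applicants.
by_outcome = sorted(applicants, key=lambda a: a["baseline"], reverse=True)[:slots]

# Strategy B: chase impact -- serve the applicants the program helps the most.
by_impact = sorted(applicants, key=lambda a: a["lift"], reverse=True)[:slots]

for label, served in [("Outcome-optimized", by_outcome), ("Impact-optimized", by_impact)]:
    print(f"{label}: placements={expected_placements(served):.1f}, "
          f"impact={expected_impact(served):.1f}")
```

Under these invented assumptions, the outcome-chasing strategy reports more placements to the funder while generating the least additional employment, which is precisely the trade-off described above.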

While serving those who are better off first is rational given the way outcomes thresholds are typically written in grants, it probably does not lead to the desired outcome of the implementing organization (nor the intended outcome of the funder!).

Hence my firm's ethical dilemma. In asking for our assistance in optimizing outcomes according to their funder's guidelines, our client was unwittingly asking us to help them identify and serve only those who didn't need their help that much to begin with. Our mission is to help organizations use metrics to help people better, and in this case, a data-oriented approach to a misguided objective would likely lead to under-serving a hurting demographic.

Of course, this mess is nothing new, and it is outside the control of implementing organizations. Funders requiring meaningless metrics of their grantees is not news. However, as funders press their grantees for more results, I am concerned that funders with a poor grasp of the difference between outcomes and impact, and insufficient knowledge to properly operationalize social indicators, will force implementing agencies to act in financially rational ways that end up hurting their target populations.

The answer to this problem is better data literacy in the social sector. My practice to date has focused on the data literacy of implementing agencies, but I'm worried that the zeal for more proof of social impact has underscored an open secret: both our front-line and grant-making institutions have a limited capacity to use data effectively.

And to those who say that some data is better than no data, I would argue that data is more like fire than we tend to realize. Fire has done incredible things for humanity, but those who do not know how to use it are likely to burn themselves.