Money and metrics – Google Hangout on using performance data to raise funds

Idealistics is not a marketing or fundraising firm. I try to make this point as clear as possible to all of my potential customers before entering into any engagement.

That said, I wholeheartedly believe that improving social outcomes should lead to better fundraising prospects. But do funders and donors really invest in social outcomes?

Next Tuesday, April 30th at 2pm Eastern, I will be discussing “How to Use Real Performance Data to Raise More Money” on a Google Hangout with management consultant Nell Edgington of Social Velocity.

Nell, a notable exception to my rants about underwhelming nonprofit consultants, is well known in philanthropy circles for encouraging organizations to focus on financing instead of fundraising. In the Google Hangout, we will identify cases where organizations we have worked with have successfully translated their outcomes metrics into more funding.

As funders and donors transition toward a social investment mindset, organizations that can report credible outcomes and demonstrate an ability to learn from their data will prove most competitive for charitable dollars. Among other things, we will discuss what makes outcomes more or less credible, and how organizations can signal institutional learning to funders.

If you have questions you would like Nell and me to discuss in the Hangout, leave a comment below, email me at dhenderson@idealistics.org, message me on Twitter at @david_henderson, or visit the Google Plus event page.

Update April 30, 2013: Since this event has passed, here is the video of the hour-long discussion.

Why comparing your outcomes to community averages might be misleading

I followed a Chronicle of Philanthropy chat titled How to Show Donors Your Programs Are Working earlier this week. While it is encouraging that the social sector is trying to incorporate metrics into its work, data’s rise to prominence has also brought with it some fairly dubious advice.

One piece of advice from this “expert chat” was that organizations should couch their outcomes in terms of community averages. For example, a tutoring program might compare the graduation rate of students in its program against the graduation rate of the school district at large in order to show that its students perform better on average.

I’ve heard this suggestion a lot, and I have seen organizations proudly declare that their outcomes are some percentage better than the community’s as a whole.

The problem with this approach, and with pretty much every mainstream conversation about evaluation, is that there is no serious discussion of what separates a good comparison group from a bad one.

The missing counterfactual

In the evaluation literature, a counterfactual is a hypothetical in which we try to estimate what would have happened to someone in our program had that very person not received our services.

This is pretty tough to do, and the reason evaluation experts prefer randomization is that it gives a good approximation of the missing counterfactual. That is, randomization allows us to take two people we have no reason to believe are different, provide the program to one but not the other, and then interpret the difference in their outcomes as the program’s impact.

The suggestion to use the community as a whole as a comparison group assumes that the people in your program are the same as the people in the community at large with the exception of your services. This is a pretty bold claim.

Let’s go back to our tutoring example. A skeptic like me might argue that people who choose to participate in a tutoring program are more motivated to graduate high school than the average student. In that case, it is hard to tell whether your program actually made students more likely to graduate or whether the students in your program were so highly motivated that they would have graduated anyway.

When we compare the kids in our tutoring program to kids at large, we might be comparing a highly motivated student to a particularly unmotivated student. This is not a fair comparison to make.
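To see how much self-selection can distort a naive comparison, here is a minimal simulation sketch. All the numbers are made up: motivation drives both whether a student enrolls in tutoring and whether they graduate, while the program itself adds a fixed five-point boost to the graduation probability.

```python
import random

random.seed(42)

# Toy simulation (all rates are made-up numbers): motivation drives both
# enrollment in tutoring and graduation, while the program itself adds a
# fixed five-point boost to the probability of graduating.
TRUE_PROGRAM_EFFECT = 0.05
N = 100_000

def estimated_effect(randomized):
    treated, control = [], []
    for _ in range(N):
        motivated = random.random() < 0.3
        base_rate = 0.90 if motivated else 0.60  # motivated students graduate more often
        if randomized:
            enrolls = random.random() < 0.5      # coin-flip assignment
        else:
            # Self-selection: motivated students are far more likely to enroll.
            enrolls = random.random() < (0.7 if motivated else 0.1)
        graduates = random.random() < base_rate + (TRUE_PROGRAM_EFFECT if enrolls else 0)
        (treated if enrolls else control).append(graduates)
    return sum(treated) / len(treated) - sum(control) / len(control)

print(f"Self-selected comparison: {estimated_effect(randomized=False):+.3f}")
print(f"Randomized assignment:    {estimated_effect(randomized=True):+.3f}")
```

In this toy setup, the self-selected comparison reports an effect several times larger than the five points the program actually adds, while random assignment recovers something close to the true effect.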

Yet we make these comparisons all the time when we blindly compare our outcomes to community averages.

Comparing our outcomes to community averages might be effective from a fundraising standpoint, which was the premise of the Chronicle of Philanthropy chat. But I would argue this particular approach has less to do with “showing your donors your programs are working” and more to do with identifying favorable comparisons that make your outcomes look good.

Insofar as evaluation is more about truth than treasure, simple comparisons of outcomes to the community average can be highly misleading.

Data does not make decisions

I participated in my first Google Hangout last Friday on the topic of using data in homeless services. The discussion was organized by Mark Horvath, a homeless advocate and founder of Invisible People. The call included Mark, myself, and three other practitioners with experience applying metrics in homeless services. You can check out the recording of the conversation here.

I was clearly the outlier on the panel. I work with a range of organizations on a variety of issues, so while homeless services is a part of my work, the other folks on the hangout are fully focused on working with those experiencing homelessness, and it shows in their depth of knowledge.

There were a lot of interesting takeaways from the conversation, so I’ll likely be referring back to this discussion in future blog posts. But one thing that stood out to me was that the conversation reflected on both the opportunities and the risks of using data in homeless services, a point that applies to all uses of outcomes data in the social sector.

At one point in the discussion, our attention turned to the possibility that homeless service providers could use predictive analytics to exclude people from receiving services. Before I go any further, let me describe what I mean by predictive analytics.

Predictive analytics

Predictive analytics is the process of using historical data to try to predict what will happen in the future. There are various statistical techniques for doing this, but the basic idea is that you fit a model explaining what happened to a set of observations in an existing data set. You then use that model to predict what will happen to a new set of people for whom you do not yet have outcomes.

For example, your model might suggest that people who have a criminal background are more likely to get evicted from housing.
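To make that concrete, here is a minimal sketch of the workflow. The data, column names, and participants are all hypothetical, and it uses scikit-learn’s logistic regression as one common technique; the same idea applies to many other models.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: each row is a past housing program participant.
history = pd.DataFrame({
    "criminal_background": [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],
    "months_homeless":     [24, 3, 36, 6, 18, 2, 12, 30, 9, 4],
    "evicted":             [1, 0, 1, 0, 1, 0, 0, 1, 0, 0],  # outcome we observed
})

# Fit a model relating participant characteristics to the observed outcome.
model = LogisticRegression()
model.fit(history[["criminal_background", "months_homeless"]], history["evicted"])

# Use the fitted model to estimate eviction risk for a new applicant
# whose outcome we do not yet know.
new_applicant = pd.DataFrame({"criminal_background": [1], "months_homeless": [15]})
risk = model.predict_proba(new_applicant)[0, 1]
print(f"Predicted probability of eviction: {risk:.2f}")
```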

On the one hand, you might use this finding to provide additional supportive services that keep people with criminal backgrounds in housing. On the other hand, you might choose to exclude people with criminal backgrounds from your housing program, which brings us back to a worry my co-panelists raised on the Google Hangout.

Same data, different decisions

Limited resources are a fact of life, which makes it particularly important that organizations develop intelligent ways of rationing their services that maximize the social value they aim to create. So does that necessarily mean we should use data to weed out those who are hardest to serve?

Not necessarily.

People make decisions, not data. Two people can look at the same data, the same sound analysis, and make two different decisions. One organization might decide not to serve a certain target demographic because it believes those individuals would not fare well in its program, while another organization might decide the exact opposite, reasoning that because those individuals have poorer prospects and risk worse outcomes, they should be a higher priority.

Indeed, that is exactly what the vulnerability index does. The vulnerability index is essentially a triage tool for prioritizing chronically homeless people for housing. The more vulnerable someone is, the higher priority they are to house.
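In code, that triage logic amounts to little more than a sort. This is a minimal sketch with made-up names, scores, and slot counts, not the actual index’s scoring rules:

```python
# Hypothetical triage: sort clients by vulnerability score, highest first,
# and fill the available housing slots in that order.
clients = [
    {"name": "A", "vulnerability_score": 7},
    {"name": "B", "vulnerability_score": 12},
    {"name": "C", "vulnerability_score": 4},
]
HOUSING_SLOTS = 2

prioritized = sorted(clients, key=lambda c: c["vulnerability_score"], reverse=True)
housed = prioritized[:HOUSING_SLOTS]
print([c["name"] for c in housed])  # the most vulnerable are housed first
```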

My point is not to argue that it is better to serve the most or the least vulnerable, but rather to illustrate that data is a tool that helps us make decisions consistent with our own values.

While data can help us better assess what has happened and what might happen in the future, it does not tell us what to do. The decisions we make should be informed by data, but data does not make decisions for us.