Possibilities and probabilities of Social Impact Bonds

Last Friday the White House Office of Social Innovation and Civic Participation hosted a one-day seminar on the pay-for-success (also known as social impact bond) approach to social sector financing. I’ve been critical of social impact bonds (SIBs) in the past, so I was interested to watch the White House’s live stream of the event.

The promise of social impact bonds is to provide a new way of financing some social interventions. Panelists at the White House summit were quick to acknowledge that SIBs are not a cure-all, and that not all interventions are well suited for this type of funding.

My criticism of SIBs in the past has been that the investment approach assumes evaluations can be performed reliably. Obviously, I think evaluation is a pretty thorny issue on which we have a long way to go. I was happy to see that proponents of SIBs at this event were aware of this problem, and while more optimistic than I am about which programs are easily measured, recognized that there were limitations.

The example that has been most used in peddling SIBs, and the only actual implementation of the concept I’m aware of, involves an anti-recidivism program in a UK prison. Prisons provide a nice opportunity for evaluators because you can easily collect longitudinal data (prisoners and parolees are already monitored) and you have a logical control group (neighboring prisons).

Other social interventions don’t provide such fertile ground for experimentation. Yet ascertaining the effectiveness of a program is at the heart of the agreement between an investor and the government, which backs the bond.

Indeed, as was discussed at the seminar, the critical negotiation in a SIB is when the investor and the government establish the ground rules for when the government pays out, and when it doesn’t. The investor has an incentive to set a weaker evidentiary bar for success, while the government prefers a stronger one.

While the SIB discussion centers on thresholds like the number of positive outcomes, the discussion got me thinking about another metric of success: the probability that an intervention actually caused a desired outcome.

For example, an employment program might place a hundred people in jobs in a month, but would those people have found jobs anyway? This is the question a control group helps answer. But even a control group does not answer it definitively. Instead, the best evidence one can get is that:

  1. A program exceeds its outcomes threshold
  2. There is a certain probability that the program deserves credit for this success

Therefore, the question is not just whether an outcome objective was achieved, but how sure we are that the relationship between the intervention and the outcome is not due to random chance. Requiring a high significance threshold further limits the possibilities of SIBs: to ensure fairness to both the bondholders and the government backers, ample data collected in a controlled environment is a necessity.
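To make this concrete, here is a minimal sketch of the kind of test an evaluator might run to decide whether a program beat its control group beyond chance. The numbers are entirely made up for illustration, and a real SIB evaluation would be far more involved; this just shows a standard two-proportion z-test on placement rates.

```python
from math import sqrt, erf

def two_proportion_z(success_t, n_t, success_c, n_c):
    """Two-sided two-proportion z-test: does the treatment group's
    success rate differ from the control group's beyond chance?"""
    p_t = success_t / n_t          # treatment success rate
    p_c = success_c / n_c          # control success rate
    # Pooled rate under the null hypothesis of no difference
    p_pool = (success_t + success_c) / (n_t + n_c)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_t + 1 / n_c))
    z = (p_t - p_c) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 100 of 250 program participants placed in jobs,
# versus 70 of 250 in a comparable control group.
z, p = two_proportion_z(100, 250, 70, 250)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The smaller the p-value, the more confident the bondholders and the government can be that the program, and not chance, drove the difference; and the smaller the groups or the effect, the harder it becomes to reach any agreed threshold at all.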

The White House seminar, as well as discussions I’ve had since my last post on this topic, softened my stance on social impact bonds. As Tracy Palandjian, CEO of Social Finance, smartly put it, social impact bonds are about managing “execution risk”. Where there are opportunities for well-controlled experiments, like prisons, SIBs might very well help governments manage that risk.

What SIBs won’t do is drive innovation, nor will they replace all existing forms of social funding. But they don’t need to in order to be a success. If SIBs can help governments improve the performance and financing of a subset of their social investment portfolios, that’d be a significant accomplishment anyone would have a hard time questioning.

Focus on big data misses the big picture

We are living in an era of data deluge. Social sector talking heads are preaching the promise of “big data”, yet few arguments have been made about how all this data will facilitate impact.

Data is useful only insofar as it helps inform decision making. And while social sector organizations aim to solve big problems, their solutions are decidedly smaller in scale and scope.

Indeed, an organization developing a job training program might look at macroeconomic conditions to determine the total potential market size for an intervention. But unless that intervention covers a large enough geography to significantly influence overall employment (which is rarely the case), the more useful information is the micro, “small data” that an agency collects itself.

I think about so-called big data, like Census indicators, as providing the context for an intervention. It’s the market data that develops the case for action. But once that decision to act is in place, there’s little left for big contextual data to inform.

The more pressing questions, like whether a program is working and how to increase its impact, cannot be answered by large public databases. Instead, the focus needs to be on developing analytical capacity and program-specific data collection feedback loops that capture relevant indicators on an iterative basis.

Yet our current obsession with big data and trivial infographics trades the real promise of an analytically oriented social sector for soundbites and graph porn.

If we want to tackle the big problems, we need organizations to be able to collect and analyze small data sets relevant to their own work. Our wrongheaded focus on large scale data for the sake of seeming analytical obfuscates the real opportunities data affords.

The reasonable consequences of our unreasonable expectations

Organizations are often tasked with making bold predictions about future achievement. But as the 10 Year Plan to End Homelessness illustrates, most of the predictions we make in the social sector are based on hope and little else.

Good predictions are based on sound models and historical data. And before you think the concept of modeling is too technical or outside the realm of the social sector, I would argue that every organization has a model. It’s called a theory of change.

Publicly traded companies make revenue predictions that investors analyze to determine whether to buy, hold, or dump stock. Social sector agencies are similarly asked to make predictions about what changes they are going to create in the world and how they are going to do it.

Companies have a lot to lose when they fail to meet projected targets. Analysts rip companies apart and investors lose confidence not only in companies’ predictive abilities, but in management and future viability as well.

But in the social sector we are rarely evaluated based on our predictions. As a result we shoot for the moon, hit the dirt, and call it a success. And while all this unreasonable ambition sounds good, and makes for catchy campaign titles, does it help us advance public welfare?

Our outlandishly high expectations and horribly unfounded predictions undermine donors’ faith in our ability to create real social change. Indeed, it seems an unspoken truth of the social sector is that if your theory of change doesn’t draw a straight line between your local intervention and world peace, then you’re just not doing it right.

What’s wrong with setting reasonable objectives? Why not have a theory of change that relates what you do to what you can actually accomplish, instead of relating what you do to some fantastical notion of what you wish might happen?

Instead of spinning ever more ludicrous stories, we’d be better served aiming lower and actually hitting our targets. That way we could accurately account for what we do, measure our impact, and iterate on our interventions.

The byproduct of reasoned predictions based on realistic theories of change would be something far better than stories that make donors’ hearts swell. We’d have stories people could actually believe.

Interview on Social Velocity

Non-profit consultant Nell Edgington was kind enough to interview me for her Social Velocity blog yesterday.

She asked some interesting questions about what comprises good evaluation, how to make evaluation accessible and affordable for organizations of all sizes, and what role government plays in social sector innovation and combating poverty.

The common thread throughout the interview is that the focus of evaluation needs to transition from a grading system used by would-be donors to a tool that helps organizations increase social outcomes. You can check out the interview on Social Velocity here.