Yesterday Sean Stannard-Stockton put up an excellent post titled Getting Results: Outputs, Outcomes, and Impact explaining the difference between outputs, outcomes, and impact. Looking at each type of indicator separately, Sean writes:
Outputs: These are the activities done by the nonprofit. The meals served by a soup kitchen are outputs.
Outcomes: These are the observed effects of the outputs on the beneficiaries of the nonprofit. The degree to which the meals served by the soup kitchen reduce hunger in the population served by the soup kitchen.
Impact: This is the degree to which the outcomes observed by a nonprofit are attributable to its activities. The impact of the soup kitchen is the degree to which a reduction of hunger in the population they serve is attributable to its efforts. While a soup kitchen might serve a lot of meals and correctly observe that hunger is subsequently less prevalent in the population it serves, the reduction in hunger might simply be attributable to an improving economy, or a new school lunch program or some other activities that are not part of the soup kitchen’s efforts.
Sean’s post made me realize that when I write about outcomes on this blog I am actually referring to impact. I hope that by conflating outcomes with impact I have not been misleading in the past, and I appreciate Sean’s clarification of the terminology. In his piece, Sean went on to argue that outputs are inferior to outcomes as units of measurement, with impact being the gold standard. I certainly agree with this final point: impact, the change created as a result of a particular intervention, is the sole truly meaningful metric.
However, I was more iffy on the assertion that outcomes, as defined by Sean, are clearly superior to outputs. In the comment section on the Tactical Philanthropy blog, I wrote:
Outputs and impact are purer metrics in that they both isolate an effect of the intervention. The output obviously tells us very little (such as, in your example, a client receiving food aid) and the impact tells us a lot (the degree to which hunger decreased, nutrition increased, etc., as a result of that aid). While it might seem that outcomes are superior to outputs, there is serious risk that outcomes are misleading, as you touch on a bit.
Most damaging, the outcomes metric can mask harms from an output. You allude to a scenario where multiple factors are at play in an intervention’s positive end, but an intervention can actually have a negative effect that is offset by other environmental factors, not only masking the harm of the intervention but wrapping it in a cloak of success.
Sean responded to my comment by saying:
I would argue that Outcomes are objectively more important (better) than Outputs, but admit freely that they are harder to measure. I guess like any powerful tool, they can become dangerous if used incorrectly (miscalculated).
In hindsight, I believe I overstated my point when I suggested that outcomes are not more telling than outputs. Sean was right to hold his ground. However, I am still troubled by a hierarchical mental framework that holds outcomes over outputs, when outcomes are inherently riddled with extraneous factors outside the control of a particular intervention.
An output does not say anything about a change in a service recipient’s life, making it an indicator of little use for assessing the social value of an intervention. However, an output is a pure metric in that it is clearly the result of the organization administering the intervention. In this way, although an output says nothing about what happened in an individual’s life, it accurately indicates what a particular agency did.
This may seem like a non-point, but consider the clarity of the output metric versus an outcome metric. An outcome is shaped by any number of variables that are not controlled for. For example, we can look at the change in hunger levels amongst a population that accesses a food pantry. One might find that incidences of hunger have gone down amongst that population over a period of time. In this sense, there is a positive outcome indicator (lower incidence of hunger) which might correlate with a higher level of food distribution by the food pantry.
The problem, of course, is that the lower incidence of hunger could be the result of any number of factors, like lower food prices or better wages. Worse yet, what if the food pantry’s activities actually had a negative impact on food insecurity, or produced some other harm? The outcomes metric risks both masking the negative effect of an intervention and crediting an intervention with changes caused by factors outside its control. This effect works the other way too: a population might do worse over time on a particular indicator due to forces outside the control of an intervention, even though the intervention has a hidden, positive causal effect. In this way, a focus on outcomes might create incentives for service providers to target populations that will do better on certain indicators regardless of their interventions, thus achieving an outcomes windfall.
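The masking risk can be made concrete with a toy calculation. All numbers below are hypothetical and chosen only to illustrate the logic: an intervention that slightly worsens hunger can still coincide with an improved outcome measure when a larger environmental trend pushes the other way.

```python
# Purely illustrative numbers: how an environmental trend can mask a
# harmful intervention when we measure outcomes rather than impact.

baseline_hunger_rate = 30.0   # % of population experiencing hunger, before
intervention_effect = +2.0    # assumed: the intervention *worsens* hunger
environment_effect = -5.0     # assumed: an improving economy reduces hunger

# What the population actually experiences afterward
observed_rate = baseline_hunger_rate + intervention_effect + environment_effect

outcome = observed_rate - baseline_hunger_rate  # what an outcomes metric sees
impact = intervention_effect                    # what only impact analysis isolates

print(f"Observed change in hunger: {outcome:+.1f} points")   # -3.0: looks like success
print(f"True impact of intervention: {impact:+.1f} points")  # +2.0: actual harm
```

The outcome indicator (hunger down 3 points) and the impact (hunger up 2 points attributable to the intervention) point in opposite directions; only the latter tells us what the intervention itself did.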
My point here is not to continue a debate about whether outcomes or outputs are better metrics. I agree with Sean and the others who blasted my comment: a focus on outcomes is more meaningful than a focus on outputs. My objection is rooted in the fear that one might believe that if one has outcomes data, then output data is not relevant. In the absence of impact data, both metrics are important: outputs tell us what a service provider did, while outcomes tell us what happened to service recipients over a period of time.
Of course, this debate would be rendered moot if we were better able to assess impact. As many have pointed out, though, the social sector struggles to reliably collect output or outcome data, let alone make any serious attempt at impact analysis. While I concede that outcomes are a more important metric than outputs, I believe the path forward is through fostering an appreciation and deep understanding of what specific indicators can, and cannot, tell us about the work we do. Ultimately, we need to collect a number of indicators. More importantly, we need to know what those indicators are actually telling us, and what our indicators, especially composite indicators that are influenced by a large number of variables, actually mean.