In my last post I pledged to give up the cowardly comfort of chanting the mantra of why we should evaluate and instead move toward discussions of how.

If evaluation is to be a genuinely useful tool for organizations implementing social interventions, it is essential that evaluations be embedded in a feedback loop rather than conducted on a one-and-done basis.

Indeed, much of the tension between evaluation proponents and front-line organizations has been that organizations are forward looking, yet evaluations are historical by nature.

[caption id="attachment_471" align="aligncenter" width="480" caption="Figure 1: Evaluations look back, organizations look forward"][/caption]

Evaluations necessarily rely on historical indicators, with a general rule of thumb being the longer the time series the better. This focus on the past is at considerable odds with organizations that face not only the needs of their constituents today, but the always pressing concerns of how to meet their needs in the future.

Given this conflict, it is no wonder organizations are reluctant to dive into evaluations, especially those with more limited budgets. If today and tomorrow are your real concerns, why focus on the past?

You wouldn’t, not unless looking at the past could better inform tomorrow. That is where the feedback loop comes in.

A feedback loop is essentially a way of enabling an organization to learn from outcome metrics on an iterative basis, collecting and synthesizing social indicators with both rigor and regularity. The following diagram shows one approach to a feedback loop, in which an evaluation begins with an agency’s Theory of Change, followed by question design, data collection, and then analysis.

[caption id="attachment_472" align="aligncenter" width="520" caption="Figure 2: An implementation of a feedback loop"][/caption]

After reviewing the analysis of information collected in the previous term, the loop returns to the Theory of Change. At this point an organization uses its feedback to reassess its approach: What is working? What is not? What could we do differently? The answers are then reflected in revised program design and in updated data collection questions and processes.

The idea here is to turn evaluation into a tool instead of a judgment (“you suck”). Good evaluation should not simply conclude with an up or down on a program’s effectiveness. Instead, evaluation should lead to options, several of them.

[caption id="attachment_474" align="aligncenter" width="346" caption="Figure 3: Evaluations should lead to several options"][/caption]

In this way, I see the role of the evaluation consultant to be that of a medical doctor, who not only diagnoses problems but recommends possible treatments. It is then up to the organization to decide what course to take, as they know their capacity and client base best.

Feedback loops are critically important, but they are difficult to establish, and I cannot claim we necessarily get it right. My company’s approach is built in part around our database systems. Since we build the data collection systems for our customers, we can help facilitate the entire question design and collection process.

Of course there are difficulties here. First, not all the data an evaluator wants actually gets collected, or at least not in the ways an evaluator would prefer. Moreover, getting social sector managers to agree that evaluation matters to their organization is one thing; getting them into the habit of regularly re-evaluating their efforts is another.

Therefore, establishing a feedback loop, and by extension good evaluation, is not only about what to measure and how to analyze it, but about how to set up a managerial system and an organizational culture receptive to, and capable of incorporating, feedback.

(Photo by esti)