Feeding the feedback loop

In my last post I pledged to give up the cowardly comfort of chanting the mantra of why we should evaluate and instead move toward discussions of how.

If evaluation is to be a genuinely useful tool for organizations implementing social interventions, it must be embedded in a feedback loop rather than conducted on a one-and-done basis.

Indeed, much of the tension between evaluation proponents and front-line organizations stems from the fact that organizations are forward-looking, yet evaluations are historical by nature.

Figure 1: Evaluations look back, organizations look forward

Evaluations necessarily rely on historical indicators, with a general rule of thumb being the longer the time series, the better. This focus on the past is at considerable odds with organizations that face not only the needs of their constituents today, but also the ever-pressing question of how to meet those needs in the future.

Given this conflict, it is no wonder organizations, especially those with more limited budgets, are reluctant to dive into evaluations. If today and tomorrow are your real concerns, why focus on the past?

You wouldn’t, not unless looking at the past could better inform tomorrow. That is where the feedback loop comes in.

A feedback loop is essentially a way of enabling an organization to learn from outcome metrics on an iterative basis, collecting and synthesizing social indicators with both rigor and regularity. The following diagram shows one approach to a feedback loop, whereby an evaluation begins with an agency’s Theory of Change, followed by question design, data collection, and then analysis.

Figure 2: An implementation of a feedback loop

After reviewing the analysis of information collected in the previous term, the loop then returns to the Theory of Change. At this point an organization uses its feedback to reassess its approach: What is working? What is not? What can we do differently? These changes are then reflected in altered program design and in updated data collection questions and processes.
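To make the loop concrete, here is a minimal sketch in Python of what one pass through such a cycle might look like. Everything here, the function names, the data structures, the dummy scores, is a hypothetical illustration for this post, not our actual system.

```python
# A minimal sketch of the feedback loop described above. All names,
# structures, and scores are hypothetical illustrations, not an
# actual evaluation system.
import random

def design_questions(theory_of_change):
    """Derive data collection questions from each assumed outcome."""
    return [f"Did clients show change in: {outcome}?"
            for outcome in theory_of_change["outcomes"]]

def collect_data(questions):
    """Stand-in for a real collection process (surveys, case notes, etc.)."""
    return {q: [random.randint(1, 5) for _ in range(20)] for q in questions}

def analyze(data):
    """Summarize each indicator so managers can see what moved and what did not."""
    return {q: sum(scores) / len(scores) for q, scores in data.items()}

def revise(theory_of_change, results, threshold=3.5):
    """Close the loop: flag lagging indicators for program redesign."""
    theory_of_change["needs_redesign"] = [q for q, avg in results.items()
                                          if avg < threshold]
    return theory_of_change

theory = {"outcomes": ["stable housing", "job placement"], "needs_redesign": []}
for term in range(4):  # e.g., four quarterly iterations of the loop
    results = analyze(collect_data(design_questions(theory)))
    theory = revise(theory, results)
    print(f"Term {term + 1}: flagged for redesign -> {theory['needs_redesign']}")
```

The point of the sketch is the shape, not the code: each term ends not with a verdict but with a short list of things to reconsider, which feeds the next term’s design.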

The idea here is to turn evaluation into a tool instead of a judgment (“you suck”). Good evaluation should not simply conclude with an up-or-down verdict on a program’s effectiveness. Instead, evaluation should lead to options, several of them.

Figure 3: Evaluations should lead to several options

In this way, I see the role of the evaluation consultant to be that of a medical doctor, who not only diagnoses problems but recommends possible treatments. It is then up to the organization to decide what course to take, as they know their capacity and client base best.

Feedback loops are critically important, but they are difficult to establish, and I cannot claim to always get them right. My company’s approach is based in part on our database systems. Since we build the data collection systems for our customers, we can facilitate the entire question design and collection process.

Of course there are difficulties here. First, not all the data an evaluator wants actually gets collected, or at least not in the form an evaluator would prefer. Second, getting social sector managers to agree that evaluation is important to their organization is one thing; getting them into the habit of regularly re-evaluating their efforts is another.

Therefore, establishing a feedback loop, and by extension good evaluation, is not only about what to measure and how to analyze it, but about how to set up a managerial system and an organizational culture receptive to, and capable of, incorporating feedback.

Shutting the door on the evaluation debate: time to focus on the how

There seems to be reasonable consensus in several philanthropy circles that evaluation is important.

Impact investors want to know what programs work to better guide their charitable investments. Organizations designing and implementing social interventions want to know what works so they can better help people in need and earn donated dollars.

I am pleased to see our sector moving toward agreement on the importance of evaluation. At this point, though, the conversation needs to shift from whether or not we should evaluate to how the heck we are going to do it correctly. Therefore, I am no longer going to write about why we should evaluate, and will instead focus on how.

This second phase of the conversation is not only the most important, it is by far the trickiest. When it comes to evaluation, two people can say the same thing (“measure impact!”) and mean two totally different things (for example, “conduct a randomized controlled trial” versus “write a narrative about one of my superstar clients”).

Complicating matters further, there are multiple intervention types, focus areas, and evaluation techniques. Perhaps the bigger problem, though, is the general lack of evaluative insight and expertise, at least amongst philanthropy talking heads.

Indeed, the reason the evaluation debate has lingered so long on “should we or shouldn’t we” is that, by and large, that is the beginning and the end of what a lot of proponents can say on the subject.

Before anyone argues my perceived evaluative insights are ballooning beyond my ability, let me be the first to take a needle to my own inflated head. I suck at this.

In fact, we all suck at this. I would suggest there are really two types of evaluation proponents:

  1. Those who have the skills and opportunity to put their theories into practice
  2. Those who have a WordPress blog and a desire to dump on implementing organizations in the name of philanthropic purity

I count myself fortunate to have the opportunity not only to write about evaluation, but to work every day with some amazing organizations that let me try evaluation strategies, both the ones that work and the ones that fail, in an attempt to improve the lives of their clients.

If we are going to get evaluation right, we have to get it wrong first. And just as there is pressure on organizations to discuss their failures, evaluators should discuss their failures as well. I plan to.

My interest in evaluation has less to do with impact investing and more to do with crafting effective social interventions. Good evaluation provides regular feedback, not simply on whether an intervention works (an unrealistically simplistic binary verdict on complicated social interventions), but on which parts of an intervention worked and why.

Regular feedback can make failing and succeeding interventions better. That is the real power of good evaluation and a feedback loop. In my next post I’ll further discuss the importance of feedback loops, the strategies I have used, and the struggles I have faced setting them up.

What I won’t discuss, though, is whether or not evaluation is important. I think we all get the point on that.

Good evaluation habits start early

It is common knowledge that bad habits are hard to break. The longer one lives with a bad habit, the more difficult it is to shake.

The same is true of social sector organizations. Good evaluation requires good habits. But good habits are difficult to form, and even more difficult to introduce into an organizational culture that has grown accustomed to evaluative misbehavior.

So first, what are good evaluation habits?

Broadly speaking, good habits involve:

  • Intentional data collection designed to capture client indicators stemming from an organization’s theory of change
  • Regular, incremental program changes based on feedback from client indicators
  • Identifying comparison groups against which to benchmark client progress (a rough sketch follows this list)
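To illustrate that third habit, here is a minimal sketch of a difference-in-differences calculation: the change among program clients minus the change in a comparison group isolates movement beyond the background trend. All numbers and group labels below are made up for illustration.

```python
# Hypothetical illustration of benchmarking client progress against a
# comparison group via a simple difference-in-differences calculation.
# All indicator scores are invented for this sketch.

def mean(xs):
    return sum(xs) / len(xs)

# Indicator scores before and after a program term.
program_pre, program_post = [2.1, 2.4, 1.9, 2.6], [3.4, 3.1, 3.6, 3.2]
comparison_pre, comparison_post = [2.2, 2.0, 2.5, 2.3], [2.6, 2.4, 2.9, 2.5]

program_change = mean(program_post) - mean(program_pre)          # change among clients
comparison_change = mean(comparison_post) - mean(comparison_pre)  # background trend

# Difference-in-differences: the change attributable beyond the trend.
did = program_change - comparison_change
print(f"Program change: {program_change:.2f}, "
      f"comparison change: {comparison_change:.2f}, DiD: {did:.2f}")
```

Even a rough comparison like this guards against the most common mistake: reporting a positive trend as success when the comparison group improved just as much.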

Many organizations follow anything but good evaluation practices, and it is especially difficult to get established organizations to change course, like turning around a cargo ship drifting downstream.

I have been hired time and again under the premise of helping established organizations with their evaluation strategy. However, I have learned over time that while the request for evaluative assistance might be earnest, inevitably the conversation turns from “help us assess our impact” to “show us why we are awesome.”

I certainly understand why organizations devolve from the former to the latter. These established organizations have existing funders who want to see that their efforts have been paying social dividends. Ultimately, evaluators are brought in to prove to funders that money has been well spent, rather than to help assess how future monies should be spent.

This makes sense, but it reinforces bad habits. With funders breathing down their necks, agencies are incentivized to look for any positive trend in their data and report it as success, instead of looking for feedback that informs intervention design.

I’m not arguing that good evaluation habits are unachievable in larger organizations. But instilling good habits in startup organizations is not only easier, it is also the low-hanging fruit in our collective shift toward a more outcomes-oriented social sector.

The problem, though, has been that evaluation consultants (I stand amongst the guilty) have focused their efforts on snagging contracts with established organizations rather than working with emerging agencies.

To do our part in helping instill good habits in startup social sector organizations, my company designed an evaluation consulting package that

  • Offers access to evaluation guidance year-round, as good evaluation strategies develop over time rather than in a single Word document
  • Fits in the budget of emerging organizations

Under the new package, we will work with organizations under three years old, helping them design and implement evaluation strategies. Because good strategies take time, we have structured our contracts to run annually, with quarterly meetings and deliverables, instead of using an hourly rate model.

Each contract is a flat rate of $3,500, so startup agencies can afford the service and are not at risk of ballooning costs.

I love working in the social sector. Having the opportunity to help organizations develop effective evaluation strategies that inform program design is by far the most rewarding experience my team and I can have.

If we want to foster a sector that values the importance of good evaluation, we have to instill those principles in the fabric of the next generation of non-profit organizations, before they get stuck in their ways.

Logistics and liquidity: what the crisis in Japan teaches us about philanthropy

The tragic crisis in Japan has been followed by an outpouring of sympathy around the world and frustratingly predictable financial opportunism by the non-profit sector.

Givewell and Good Intentions Are Not Enough have posted excellent pieces on why giving to aid organizations in Japan right now is not a good idea, despite our natural desire to do so.

In short, Japan doesn’t need it. Lack of money is not the issue in the rescue efforts. Nor should it be; Japan is one of the richest countries in the world.

However, as with Katrina, getting the logistics right has proven exceedingly difficult. Indeed, there are legitimate risks of starvation, dehydration, and death from airborne illnesses, not to mention the radiation risks. But these risks are not the result of financial deprivation.

Delivering services to people in need, like delivering any intervention, is tremendously complicated. There is a lot more to providing effective interventions than money. Given this fact, if improving people’s lives is really what the social sector is about, why so much attention on giving rather than doing?

Simple: the focus on giving is easier, more rewarding, and, for those who facilitate it, lucrative as well.

But the hard work of the social sector is in the doing: figuring out what works and what does not. Sure, raising funds is important, but the crisis in Japan makes it clear that good intentions and ample financial resources together are not enough.

Yet heralded social enterprises like Causes are pushing people to give to a crisis that does not necessarily need the money. Why? Because that is all they can do.

And therein lies the lesson. I don’t mean to disparage innovations that facilitate giving. The problem instead is that the sector’s best talent focuses so much on donations.

Look at the technical prowess of Causes or the insightfulness of Givewell. The sector is lucky to have these organizations. But our most pressing problems are what to do, followed by how to finance it, not the other way around.

Raising money does not solve logistical issues, and increasing endowments does not necessarily beget higher-impact social interventions. Is there a correlation between money and effectiveness? Absolutely. But money is hardly the sole independent variable.

I get why so much of the talent in the sector focuses on donations. If there is money to be made in the poverty sector, by and large it’s on the donor end. But if we want that money to be put to actual use, we’ll have to do more than help organizations raise money and criticize interventions that don’t work. Instead, we’ll have to put our money, or at least the money of others, where our hearts are, and put our best talent on the front lines instead of in the back channels of donor portals.
