There seems to be a reasonable consensus in philanthropy circles that evaluation is important.

Impact investors want to know what programs work to better guide their charitable investments. Organizations designing and implementing social interventions want to know what works so they can better help people in need and earn donated dollars.

I am pleased to see the social sector moving toward agreement on the importance of evaluation. At this point, though, the conversation needs to shift from "should we or shouldn't we evaluate?" to "how the heck are we going to do this correctly?" Therefore, I am no longer going to write about why we should evaluate, and will instead focus on how.

This second phase of the conversation is not only the more important of the two, it is by far the trickier. When it comes to evaluation, two people can say the same thing ("measure impact!") and mean two totally different things (for example, "conduct a randomized controlled trial" versus "write a narrative about one of my superstar clients").

Complicating matters further, there are multiple intervention types, focus areas, and evaluation techniques. Perhaps the bigger problem, though, is the general lack of evaluative insight and expertise, at least among philanthropy's talking heads.

Indeed, the reason the evaluation debate has lingered so long on "should we or shouldn't we" is that, by and large, that is the beginning and the end of what many proponents can say on the subject.

Before anyone argues that my perceived evaluative insight has ballooned beyond my ability, let me be the first to take a needle to my own inflated head. I suck at this.

In fact, we all suck at this. I would suggest there are really two types of evaluation proponents:

  1. Those who have the skills and opportunity to put their theories into practice
  2. Those who have a WordPress blog and a desire to dump on implementing organizations in the name of philanthropic purity

I count myself fortunate to have the opportunity not only to write about evaluation, but to work every day with some amazing organizations that allow me to try out evaluation strategies, both those that work and those that fail, in an attempt to improve the lives of their clients.

If we are going to get evaluation right, we have to get it wrong first. And just like there is pressure on organizations to discuss their failures, evaluators should discuss their failures as well. I plan to.

My interest in evaluation has less to do with impact investing and more to do with crafting effective social interventions. Good evaluation provides regular feedback, not simply on whether an intervention works (an unrealistically simplistic binary verdict on complicated social interventions), but on which parts of an intervention worked and why.

Regular feedback can make failing and succeeding interventions better. That is the real power of good evaluation and a feedback loop. In my next post I’ll further discuss the importance of feedback loops and the strategies and struggles I have faced setting them up.

What I won't discuss, though, is whether or not evaluation is important. I think we all get the point on that.

(Photo by Jeremy Piehler)