At the end of last month I wrote a post advising organizations to set up their evaluation frameworks prior to advertising their greatness to the world, as credible evidence to back up claims of effectiveness is infinitely more persuasive than the alternative. In the post I gave an example of a consultant who helped a former customer of mine create a logic model with poorly defined outcomes, positioning my customer for advertisement rather than measurement. From the post:
One such consultant, who I was later hired to replace to clean up his mess, outlined a data collection strategy that included an outcome of “a healthy, vibrant community free of crime where all people are treated with respect and dignity.”
What an operationalization nightmare. How the heck do you measure that? You don’t. And that’s the point. The logic model was not a functional document used for optimizing results. Instead, it was an advertising document to lull unsavvy donors into believing an organization is effective in the absence of evidence.
Fair criticism, I think (obviously). Well, yesterday Jennifer Banks-Doll asked an even fairer question:
David, I really enjoy your blog and love the example you have given above about the unmeasurable outcome. So true and so common! But I feel like you’ve left us hanging…What kind of outcomes would you recommend instead? Can you give us an example or two?
This is an excellent question. While I did my best to give an example of the type of thought process an organization should go through to operationalize outcomes, unfortunately the best answer I can give is an unsatisfying "it depends".
Many people think through their logic models from left to right: if I do this, I expect that. I tend to work the other way around, starting with the goal an organization aims to achieve and moving backwards. There is nothing wrong with beginning the ideation and goal-setting process with seemingly unmeasurable ideals. However, what begins as pie-in-the-sky should not be left there.
In my response to Jennifer's question, I gave the example of an organization trying to create a "safe" neighborhood. Safety is an inherently abstract notion. Part of safety might be the actual incidence of crime, but safety might also have to do with perceptions. If someone feels unsafe in an area free of crime, is that area "safe"? Well, it depends on how an organization defines its goals.
In the process of creating more exacting definitions of the change you want to see in your target population, you get closer to identifying measurable indicators. The goal is not to create one measure for "safety", or whatever your outcome of interest is. Instead, you want to come up with a few measures that collectively approximate the otherwise abstract outcome.
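To make the idea of "a few measures that collectively approximate an abstract outcome" concrete, here is a minimal sketch of a composite score. Everything in it is an illustrative assumption, not a recommendation: the indicator names, the "worst"/"best" anchor values, and the equal weighting are all choices an organization would have to make for itself.

```python
def normalize(value, worst, best):
    """Rescale a raw indicator to 0-1, where 1 is the desired direction."""
    return (value - worst) / (best - worst)

def composite_score(indicators):
    """Average several normalized indicators into one rough composite."""
    scores = [normalize(v, worst, best) for v, worst, best in indicators]
    return sum(scores) / len(scores)

# Each tuple: (observed value, worst plausible value, best plausible value).
# Hypothetical indicators for a "safe neighborhood" outcome:
neighborhood = [
    (12, 50, 0),      # reported violent crimes per 10,000 residents (lower is better)
    (0.68, 0.0, 1.0), # share of surveyed residents who report feeling safe at night
    (0.80, 0.0, 1.0), # share who say they would let their children play outside
]

print(round(composite_score(neighborhood), 2))
```

The point of the exercise is not the arithmetic. It is that each line of the list forces a definitional decision: what counts, which direction is "better", and how much each piece matters. No single number is "safety", but together the indicators bracket it.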
One other note on this point. People tend to think that “measurable” necessarily means inherently enumerable. Working again with our safety example, one measure of safety might be crime rates, but another measure might be an opinion survey of residents in an area, asking them if they feel safe. If I had to choose one, my preference would be for the perception survey over the crime data.
Crime statistics, while numeric by nature, are not necessarily pure measures of “safety”. If arrests increase in a neighborhood, this might be due to an increase in criminal activity, but it also might mean there are more police patrolling the area. Indeed, in some of the community development work I have done, I have found (in some cases) that perceptions of safety actually increase with the number of reported crimes.
The point I am trying to illustrate is that organizations should not feel pressure to use public data or seemingly more quantitative metrics if those are not good measures of their intended outcomes. The best indicators are those which most closely approximate measures of the change you want to see in the world. If the best way to get that data is asking people how they feel about a particular issue, then by all means, hit the streets.