In the for-profit sector, profits are the bottom line. Yet companies spend considerable amounts of money trying to figure out whether their products work as advertised, whether their customers are happy, and what they can do to improve quality and customer satisfaction. While advertising drives sales of an effective product or service, smart organizations are careful to first figure out how to measure the effectiveness of their offerings with their target consumers (operationalization), optimize based on customer feedback, and then advertise.
In the social sector, we make up an intervention and then skip straight to advertising the hell out of it. No measurement plan, no operationalization of desired outcomes, and certainly no optimization (an impossibility if we aren’t measuring our effectiveness to begin with).
While we are loath to operationalize, optimize, or do anything that rhymes with “evaluation”, we love it when it rains advertisers. Our advertisers come in the form of grant writers, social-media-for-social-good consultants, and pretty much anyone willing to work on a retainer to tell an organization it’s fantastic.
But how fantastic can we be without sensible data collection strategies? And how much can we improve if we continue to offer the same intervention, unchanged, year after year? The assumption is that an intervention's continued existence is sufficient proof that the service works and is valuable. Of course, this is the simple fallacy that always leads to the poor getting screwed, especially when their choice is between bad services and nothing.
The emphasis on advertising stems from agencies’ survival instincts. Indeed, the primary function of any organization is to continue to exist. I get that. But the irony is that funders and donors are begging for any organization to step up with reliable metrics and believable outcomes.
As funders and donors have started to demand more evidence of impact from organizations, the usual suspects of advertising consultants have shifted their rhetoric (but not offerings) to appear more in line with the shifting philanthropic landscape. All of a sudden, non-profit marketing consultants with backgrounds in Russian literature and interpretive dance are qualified to help organizations craft logic models and develop rigorous data collection strategies.
One such consultant, whose mess I was later hired to clean up, outlined a data collection strategy that included an outcome of "a healthy, vibrant community free of crime where all people are treated with respect and dignity."
What an operationalization nightmare. How the heck do you measure that? You don't. And that's the point. The logic model was not a functional document used for optimizing results. Instead, it was an advertising document meant to lull unsavvy donors into believing an organization is effective in the absence of evidence.
The good news is that donors and funders are starting to get wise to the backward "advertise first" mentality. The social sector is shifting, for the better, to reward organizations that take their data collection plans seriously and look to improve their impact rather than simply advertise it to anyone willing to listen.
Organizations hoping to enjoy fundraising success in the future would be wise to invert their funding strategy to a model that emphasizes operationalization and optimization of outcomes first. In this new era of philanthropy, without evidence of impact, your advertising partners won’t have anything to sell.