Operationalize, optimize, then advertise your outcomes

In the for-profit sector, profits are the bottom line. Yet companies spend considerable amounts of money trying to figure out whether their products work as advertised, whether their customers are happy, and what they can do to improve quality and customer satisfaction. While advertising drives sales of an effective product or service, smart organizations are careful to first figure out how to measure the effectiveness of their offerings with their target consumers (operationalization), optimize based on customer feedback, and then advertise.

In the social sector, we make up an intervention and then skip straight to advertising the hell out of it. No measurement plan, no operationalization of desired outcomes, and certainly no optimization (an impossibility if we aren’t measuring our effectiveness to begin with).

While we are loath to operationalize, optimize, or do anything that rhymes with “evaluation”, we love it when it rains advertisers. Our advertisers come in the form of grant writers, social-media-for-social-good consultants, and pretty much anyone willing to work on a retainer to tell an organization it’s fantastic.

But how fantastic can we be without sensible data collection strategies? And how much can we improve if we continue to offer the same intervention year after year without improvements? The assumption is that the continued existence of an intervention is sufficient proof that a service is working and is valuable. Of course, this is the simple fallacy that always leads to the poor getting screwed, especially when their choice is between bad services and nothing.

The emphasis on advertising stems from agencies’ survival instincts. Indeed, the primary function of any organization is to continue to exist. I get that. But the irony is that funders and donors are begging for any organization to step up with reliable metrics and believable outcomes.

As funders and donors have started to demand more evidence of impact from organizations, the usual suspects of advertising consultants have shifted their rhetoric (but not offerings) to appear more in line with the shifting philanthropic landscape. All of a sudden, non-profit marketing consultants with backgrounds in Russian literature and interpretive dance are qualified to help organizations craft logic models and develop rigorous data collection strategies.

One such consultant, who I was later hired to replace to clean up his mess, outlined a data collection strategy that included an outcome of “a healthy, vibrant community free of crime where all people are treated with respect and dignity.”

!%#*

What an operationalization nightmare. How the heck do you measure that? You don’t. And that’s the point. The logic model was not a functional document used for optimizing results. Instead, it was an advertising document to lull unsavvy donors into believing an organization is effective in the absence of evidence.

The good news is that donors and funders are starting to get wise to the backward thinking “advertise first” mentality. The social sector is shifting, for the better, to reward organizations that take their data collection plans seriously, and who look to improve on their impact rather than simply advertise it to anyone willing to listen.

Organizations hoping to enjoy fundraising success in the future would be wise to invert their funding strategy to a model that emphasizes operationalization and optimization of outcomes first. In this new era of philanthropy, without evidence of impact, your advertising partners won’t have anything to sell.

Service rationing and social welfare maximization

It’s fairly typical for direct service organizations to lament that demand for services outpaces supply. In the free market, when demand exceeds supply, sellers raise prices, since the seller’s focus is maximizing profits.

In the social sector, our focus is maximizing social benefit, yet outside of the medical world (where triage dictates that the most severely injured patients who can still survive are treated first) there is no comparable rationale guiding service rationing.

Through my work, I recently visited two programs: one that supplies affordable housing vouchers to homeless individuals, and another that places low-income youth in summer jobs with stipends. Both organizations face more demand for services than they can meet, meaning each has to turn a significant number of people away.

I asked both program directors how they decide who receives services, and in both cases they simply serve people on a first-come, first-served basis.

The problem with first-come, first-served is that those who show up for services first might not be the best fit to maximize social welfare. Going back to the example of the for-profit world, if you have ten tickets to a concert to sell, and the first five people in line are willing to pay a dollar while the next twenty are willing to pay thirty dollars each, you would pass over the first five people and sell your ten tickets to buyers from the next twenty.
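The ticket arithmetic can be sketched in a few lines of Python (a hypothetical illustration of the same logic, with the bids made up to match the example):

```python
# Hypothetical illustration: allocating ten tickets to maximize revenue.
# Five buyers in line offer $1 each; the next twenty offer $30 each.
bids = [1] * 5 + [30] * 20

tickets = 10
# A profit-maximizing seller serves the highest bids first, regardless
# of the order in which buyers arrived.
allocation = sorted(bids, reverse=True)[:tickets]
revenue = sum(allocation)
print(revenue)  # 300: all ten tickets go to $30 buyers
```

First-come, first-served on the same queue would instead earn $1 from each of the first five buyers plus $30 from five more, or $155, roughly half the revenue.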

But in the social sector, when we select people on a first-come, first-served basis, we end up serving those who may need the service less, or be less likely to succeed, than someone else we could have selected.

The difficulty is that we think about service rationing in terms of eligibility instead of social welfare maximization. If someone qualifies for our program, they are in, even if other indicators suggest that the selected individual is likely to drop out of the program soon after enrollment, or that his or her need for the service is far more modest than that of the person behind them in line.

In order to maximize social welfare, we have to define what social welfare means to us. This definition stems from the outcomes identified in your logic model’s impact theory. In the case of the affordable housing program, the desired outcome might be to minimize the number of years of life lost due to homelessness. Under this framework, we would prefer to serve individuals at greater risk of reduced life expectancy due to their homelessness, rather than simply taking every person who is homeless until vouchers run out.
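The contrast between the two rationing rules can be sketched as follows. This is a minimal illustration, assuming each applicant comes with a (hypothetical) estimate of expected years of life lost if they remain homeless; the names and scores are invented:

```python
# Hypothetical applicants for a fixed supply of housing vouchers.
# "years_of_life_at_risk" is an assumed risk estimate, not a real measure.
applicants = [
    {"name": "A", "arrival_order": 1, "years_of_life_at_risk": 0.5},
    {"name": "B", "arrival_order": 2, "years_of_life_at_risk": 6.0},
    {"name": "C", "arrival_order": 3, "years_of_life_at_risk": 2.5},
    {"name": "D", "arrival_order": 4, "years_of_life_at_risk": 9.0},
]
vouchers = 2

# First-come, first-served: order of arrival decides who is served.
fifo = sorted(applicants, key=lambda a: a["arrival_order"])[:vouchers]

# Welfare maximization: serve those with the most expected years of
# life lost to homelessness.
welfare = sorted(applicants, key=lambda a: -a["years_of_life_at_risk"])[:vouchers]

print([a["name"] for a in fifo])     # ['A', 'B']
print([a["name"] for a in welfare])  # ['D', 'B']
```

Both rules serve the same number of people; only the welfare-maximizing rule serves the two applicants whose risk the impact theory says we care most about.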

The distinction I’m making here is between eligibility (being homeless makes you eligible) and the social welfare maximization framework, which gives us a way of prioritizing service delivery between two individuals who otherwise both qualify for services.

In the case of the youth workforce development program, while all low-income youth would qualify for services, we might have a preference for placing people into the program who are likely to complete the internship. In this case, one could use historical data to fit a predictive model that provides some insight into what characteristics made individuals more or less likely to complete the program in the past. Under this framework, social welfare maximization would involve not only placing people into the program, but maximizing the number of participants who complete the internship.
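A toy version of that idea can be sketched in a few lines. The records, the single `prior_job` characteristic, and the applicant names are all hypothetical; a real model would use many more features and a proper fitting procedure (e.g., logistic regression) rather than raw group rates:

```python
# Hypothetical historical records: did past participants with/without
# prior job experience complete the internship?
history = [
    {"prior_job": True,  "completed": True},
    {"prior_job": True,  "completed": True},
    {"prior_job": True,  "completed": False},
    {"prior_job": False, "completed": False},
    {"prior_job": False, "completed": True},
    {"prior_job": False, "completed": False},
]

def completion_rate(prior_job):
    """Historical completion rate for applicants sharing this characteristic."""
    group = [r for r in history if r["prior_job"] == prior_job]
    return sum(r["completed"] for r in group) / len(group)

# Score new applicants by their group's historical completion rate,
# then fill scarce slots with the highest-scoring applicants first.
applicants = [{"name": "X", "prior_job": False}, {"name": "Y", "prior_job": True}]
ranked = sorted(applicants, key=lambda a: -completion_rate(a["prior_job"]))
print([a["name"] for a in ranked])  # ['Y', 'X']
```

The point is not the particular model but the change in selection rule: slots go to the applicants the historical data suggests are most likely to finish, rather than to whoever arrived first.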

Supply and demand issues have long plagued the social sector, in both economic booms and busts. Therefore, we need to be smarter about how we allocate our scarce resources. The first step in better allocation of resources is a well-defined impact theory that clearly identifies an organization’s intended goals. From there, one can develop a utility-maximizing framework that learns from historical data to better optimize allocations over time.

Impact transparency and audited outcomes statements

Transparency is generally thought to be one of the core values of the social sector. Executives and non-profit consultants go to great lengths to ensure agencies remain transparent.

Despite our obsession with transparency (some may argue because of it), nonprofit fraud is not widespread. One would of course hope that the social sector exists for more than simply not committing fraud. Are our standards really so low that a day free of governance malfeasance is a day well spent?

Of course not. The social sector exists to create social value. That’s what donors expect of us, and that is what most of us expect from ourselves.

Charity watchdogs like Guidestar have begun moving beyond financial and governance issues, encouraging organizations to post evidence of impact on their websites. While the for-profit technology startup scene advocates the importance of “failing fast” in order to test ideas and iterate quickly to achieve effective offerings, the social sector seems petrified by the prospect of evaluation, lest an evaluative inquiry discover that an organization has failed to single-handedly eradicate global poverty.

And yet without feedback, no program, for-profit or non-profit, can improve. Savvy donors understand that. In the new era of impact investing, donors are more persuaded by an organization they believe takes an honest approach than by an agency with a fantastical story.

My work centers on helping organizations improve program impact by learning from their evaluative metrics. I got into this business to help organizations help people better, but a natural consequence of better outcomes is stronger fundraising as well.

As the donor mindset shifts to that of the impact investor, I have begun to encourage the organizations I work with to learn (and fail) out loud. While many organizations have “impact” pages on their websites, the content of those pages tends to be a story of one amazing case or an enumeration of a select subset of program outputs. These pages tend to be neither terribly informative nor believable to the savvy donor.

But despite the plethora of shadowy, unsubstantiated claims of impact on organizations’ websites, these same organizations consider it a sign of organizational integrity to publicly post their third-party audited financial statements.

So, we consider it a value to publicly post our financials but not our impact? Frankly, I’m more interested in the results an organization produces than how it spends its money. Most donors are.

To help our clients better communicate their results (and more importantly their institutional learning) we developed a system that allows organizations to publicly share the reports we produce on their websites.

Just as there is value in hiring a third-party auditor to examine an organization’s financial data, we believe the same is true for evaluative metrics. The public has grown to expect audited financial statements from non-profits. Audited impact statements are the logical next step if transparency is truly valuable to the social sector. Indeed, from a donor perspective, impact transparency might be the only kind of transparency that matters at all.

Evaluative metrics bring truth in advertising to the social sector

My primary focus is helping organizations improve their programs based on outcomes metrics, but there is no doubt that fundraising and development are on the mind of every agency executive I work with. I used to shy away from this fact, but I now embrace the desire of agency executives to use evaluative data for fundraising purposes as an opportunity to bring truth in advertising, a particularly exciting prospect as donors become increasingly savvy investors.

It is undeniable that there are plenty of organizations (and people) that overplay their contributions to the public good. But there are also organizations that unwittingly mask their outcomes in poorly defined metrics, framing their social value to donors in ways that are at odds with their own internal organizational decision calculus.

A large affordable housing developer and homeless services organization I work with is a great illustration of how our choice of outcomes metrics can obscure the real value an organization aims to optimize.

As is typical in homeless services, this organization reports, among other things, the number of people housed annually. The problem with this metric is that it values housing a mentally ill chronically homeless person who has been on the streets for 19 years the same as housing someone who slipped into homelessness for one month due to momentary economic shocks, like job loss.

Assuming the number of people housed is actually what this organization wanted to maximize, the rational thing to do would be to move away from serving the chronically homeless and house only those whose homeless spells are likely to be very short. But this is not how the organization actually thinks, or how it makes program decisions.

This agency, again like many homeless service providers, cares deeply about those experiencing the continuum of homelessness, especially those who are chronically homeless. In economic terms, the organization derives more utility, or value, from housing a more difficult-to-house individual than from housing a less difficult-to-house person.

The trick, then, is first to formalize the organization’s utility framework internally, and then to identify the outcomes metrics the organization is actually optimizing. For example, instead of the number of people housed, a more meaningful metric might be the number of years of homelessness prevented, or the number of years of life preserved that likely would have otherwise been lost. Not only are these metrics more representative of how the organization plans its interventions, they also paint a more complete picture for potential donors.
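The difference between the two metrics is easy to see in a toy calculation. The placements and the years-averted estimates below are hypothetical, chosen to mirror the chronic-versus-brief homelessness contrast above:

```python
# Hypothetical placements made by a housing program in one year.
# "years_averted" is an assumed estimate of homelessness prevented.
placements = [
    {"client": "chronically homeless for 19 years", "years_averted": 10.0},
    {"client": "one-month economic shock",          "years_averted": 0.1},
]

# The headline metric treats both placements identically...
people_housed = len(placements)

# ...while a utility-weighted metric reflects where the value actually lies.
years_prevented = sum(p["years_averted"] for p in placements)

print(people_housed)    # 2
print(years_prevented)  # 10.1
```

Under the headcount metric, the two placements are interchangeable; under the years-prevented metric, nearly all of the year’s social value comes from housing the chronically homeless client, which matches how the organization actually makes decisions.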

In cases like this, I embrace the fundraising and development aspirations of the organizations I work with, to the extent that helping them better understand their data will actually move them closer to (rather than further from) articulating a truer story of the social benefit their agency contributes.