How social proof subjugates program evaluation

About a year and a half ago, The Verge wrote an incredible exposé on the seedy underworld of get-rich-quick fake business gurus, who prey on hapless victims down on their luck and in need of cash.

The basic scam is to sell a wide range of “products” to people aspiring to start one-person businesses. Each of these products is basically a PDF document full of shallow advice that recommends further products in the series to achieve success.

To the discerning eye, it’s not terribly difficult to spot business self-help website nonsense. They all basically look like some variation of this.

At the heart of the online marketing underworld is the concept of “social proof”. These business guru scammers collude to make it appear as though they are experts in business, and insanely wealthy. They do so by linking to each others’ websites to manipulate search engine rankings, and by quoting one another on their respective websites, giving the illusion that each of these individuals is endorsed by other experts.

I’ve been sitting on this topic for quite some time, always thinking back to the concept of social proof when any new social sector “breakthrough” initiative is touted loudly in the media without a shred of evidence that said intervention actually works.

Who needs evaluation when you have publicity?

Good stories trump good data in the media, and questionable ideas that sound plausible are shrouded in social proof and promoted as though they were ideas worth spreading.

For those of us in the social sector, it’s (generally) easy to spot initiatives with exaggerated claims of success. But for the casual donor with an untrained eye, the source of enormous amounts of support for philanthropic causes, the difference between real outcomes and social proof can be elusive.

I’m not sure how one might go about tackling this problem. There are plenty of nonprofits that try to be honest about their results for internal improvement, and to a lesser extent are transparent with their donors about their findings.

But the incentive is always there to manufacture positive publicity by promoting misleading claims of impact, and, more importantly, to get other nonprofits, coalitions, businesses, politicians, and media outlets to repeat those claims, thus creating “truth” through repetition.

The social proof versus program evaluation conundrum is a non-trivial puzzle. Donor education programs are more likely to appeal to already-savvy donors in the first place, so donor education is at best a difficult path, if not a non-starter.

For nonprofits, favoring actual proof over social proof is a poison pill, as high-flying headlines and endorsements by public figures in major publications will always trump more down-to-earth claims of impact.

It’s an interesting question without a clear answer. The cost of not figuring it out is donor capital flowing to compelling-sounding claims, rather than actual results.