I’ve never been terribly comfortable with the social sector’s obsession with storytelling. It’s not that I don’t understand that stories can be powerful. Stories resonate with people in a way that numbers can’t. Indeed, evidence suggests that while storytelling can help drive donors to give, quantitative data can actually risk turning donors off.

My problem with storytelling is not the storytelling itself per se, but that stories can be misleading. Perhaps more importantly, because story selection is driven by nonprofit fundraising and public relations people rather than those focused on data integrity, the stories told are invariably positive outliers.

I’m not the only one concerned about how stories can be misleading. GiveDirectly, a nonprofit that provides unconditional cash transfers to those living in extreme poverty, wrote an important post on how best to balance donor demand for stories with the organization’s core tenet of presenting its findings in unbiased ways.

In a blog post last week, GiveDirectly outlined a set of standards it will hold itself to when sharing stories and, more importantly, deciding which stories to share. The rules are worth reading, and are included in full below.

To keep ourselves honest when doing so, we’ve decided to stick to three rules:

  • Share everything, as in this blog post on interesting spending choices;
  • Select recipients randomly so that every recipient’s story has an equal chance of being shared, as we do weekly on Facebook. Or, explicitly state if the recipient was not chosen randomly and why, as in this post on a recipient who experienced an adverse event; and/or
  • Provide contextualizing data so the reader can determine how representative of the average the story is. For example, if we relay a case of a woman who used her transfer to pay for a surgery, we’ll also share any data we have on average spending on medical expenses.

Finding the average story

GiveDirectly’s strategy of selecting stories at random is compelling. A randomly selected story holds some probability of being positive, with the complementary probability that the story is negative. When was the last time you saw a nonprofit share a negative story?

But more interesting than sharing randomly selected stories is systematically telling average stories. Finding average stories is no simple task, especially since one can be average on one metric (income, for example) but far from average on another (like health).

One possible approach to identifying average stories is to use a machine learning clustering algorithm, such as k-means. Roughly, the k-means algorithm takes a dataset of individuals, each described by several attributes, and places individuals into groups with others that possess similar attributes. This type of clustering is regularly used for things like customer segmentation, but can work equally well for grouping targets of program interventions.
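To make the idea concrete, here is a minimal k-means sketch. The recipient records, the two metrics (monthly income and a health index), and the two-cluster choice are all illustrative assumptions, not GiveDirectly’s actual data:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: group individuals with similar attributes.

    For simplicity the first k points seed the centroids; real
    implementations use random or k-means++ initialization.
    """
    centroids = [points[i] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(sum(vals) / len(members) for vals in zip(*members))
    return centroids, clusters

# Hypothetical recipient records: (monthly income, health index).
recipients = [(30, 0.40), (32, 0.50), (31, 0.45),   # lower income
              (90, 0.90), (95, 0.85), (88, 0.92)]   # higher income
centroids, clusters = kmeans(recipients, k=2)
```

On this toy data the algorithm separates the lower-income and higher-income recipients into two groups, and each group’s centroid is simply the average recipient profile for that cluster.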

One improvement on GiveDirectly’s approach: instead of telling random stories drawn from the entire population, pull stories from within clusters, providing the average demographics and outcomes of each group as context for the stories told.
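Within a cluster, one way to pick the story to tell is to select the recipient closest to the cluster’s mean profile, and report that mean as the accompanying context. The `most_average` helper below, the field names, and the range-scaling choice (so income doesn’t swamp a 0–1 health index) are all my own illustrative assumptions:

```python
def most_average(recipients, keys):
    """Return the recipient closest to the group mean across the given
    metrics, plus the means themselves as contextualizing data."""
    n = len(recipients)
    means = {k: sum(r[k] for r in recipients) / n for k in keys}
    # Scale each metric by its observed range so no single metric dominates.
    spans = {
        k: (max(r[k] for r in recipients) - min(r[k] for r in recipients)) or 1
        for k in keys
    }

    def dist(r):
        return sum(((r[k] - means[k]) / spans[k]) ** 2 for k in keys)

    return min(recipients, key=dist), means

# A hypothetical cluster of recipients.
cluster = [
    {"name": "A", "income": 30, "health": 0.40},
    {"name": "B", "income": 32, "health": 0.50},
    {"name": "C", "income": 31, "health": 0.45},
    {"name": "D", "income": 36, "health": 0.44},
]
story, context = most_average(cluster, ["income", "health"])
```

Here `story` is the recipient whose profile sits nearest the cluster’s averages, and `context` holds the average income and health figures to publish alongside that recipient’s narrative.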

Storytelling versus truth-telling

I’m not against stories, defined as a qualitative accounting of an individual’s lived experience. There is always more richness in a narrative than in a quantitative dataset. However, I am opposed to storytelling when it’s really just a euphemism for bullshit.

Good storytelling does not just elicit a reaction from donors; it communicates the truth in a way that quantitative data never can. Even if sharing quantitative data isn’t part of an organization’s strategy for engaging donors, data should help guide which stories are shared.