How social proof subjugates program evaluation

About a year and a half ago, The Verge wrote an incredible exposé on the seedy underworld of get-rich-quick fake business gurus, who prey on hapless victims down on their luck and in need of cash.

The basic scam is to sell a wide range of “products” to people aspiring to start one-person businesses. Each of these products is basically a PDF document full of shallow advice that recommends further products in the series to achieve success.

To the discerning eye, it’s not terribly difficult to spot business self-help website nonsense. They all basically look like some variation of this.

At the heart of the online marketing underworld is the concept of “social proof”. These business guru scammers collude to make it appear as though they are experts in business, and insanely wealthy. They do so by linking to each other’s websites to manipulate search engine rankings, and quoting one another on their respective websites, giving the illusion that each of these individuals is endorsed by other experts.

I’ve been sitting on this topic for quite some time, always thinking back to the concept of social proof when any new social sector “breakthrough” initiative is touted loudly in the media without a shred of evidence that said intervention actually works.

Who needs evaluation when you have publicity?

Good stories trump good data in the media, and questionable ideas that sound plausible are shrouded in social proof and promoted as though they were ideas worth spreading.

For those of us in the social sector, it’s (generally) easy to spot initiatives with exaggerated claims of success. But for the casual donor with an untrained eye, the source of enormous amounts of support to philanthropic causes, the difference between real outcomes and social proof can be elusive.

I’m not sure how one might go about tackling this problem. There are plenty of nonprofits that try to be honest about their results for internal improvement, and to a lesser extent are transparent with their donors about their findings.

But the incentive is always there to manufacture positive publicity by promoting misleading claims of impact. And more importantly, getting other nonprofits, coalitions, businesses, politicians, and media outlets to repeat those claims, thus creating truth.

The social proof versus program evaluation conundrum is a non-trivial puzzle. Donor education programs are more likely to appeal to savvy donors in the first place, so donor education is at least a difficult path, if not a non-starter.

For nonprofits, favoring actual proof over social proof is a poison pill, as high flying headlines and endorsements by public figures in major publications will always trump more down-to-earth claims of impact.

It’s an interesting question without a clear answer. The cost of not figuring it out is donor capital flowing to compelling sounding claims, rather than actual results.

Philanthropic taking – who should decide what’s good for us?

The social sector is rife with strange power dynamics. On the one hand the social sector is about giving, both the act of giving and the perceived selflessness of being rich enough to have money to spare.

So obsessed are we with giving, that it seems a quarter of social sector organizations have some form of “give” in their name.

The corollary to giving, of course, is taking.

A focus on “philanthropic giving” sounds noble, and sexy. So sexy and admirable is philanthropic giving that the Knight Foundation recently released a playbook for organizing giving days, and countless technology companies are springing up to make everyday donors feel like heroes, while taking 3% donation processing fees.

Noble indeed.

But heroes need victims to save. In the social sector, the counterpart to philanthropic giving is philanthropic taking. Folks like me are hired by nonprofits and foundations to help them model their theories of change, a fancy way of defining one group’s (the “philanthropic giver”) vision for another (the “philanthropic taker”).

Nonprofits like the Family Independence Initiative (FII) have called out this traditional take on philanthropy, opting instead for their yet unproven strategy of having low-income families organize themselves out of poverty. The core argument here is that the top down model of philanthropic providers setting objectives for target populations is not only troublingly paternalistic, but also ineffective.

The fundamental premise of one person setting objectives for another is that the latter person doesn’t know what he or she needs. Proponents of this approach in anti-poverty interventions point to what they perceive to be, effectively, the economic irrationality of the poor.

Yet there is evidence to suggest the poor, even the extreme poor, are quite economically rational, and that the poor maximize their happiness as any other economic actor does, as argued in the excellent book Poor Economics.

While I might make a different decision than you, that doesn’t necessarily make either of us irrational. We simply assign different values to different outcomes. The same is true of the poor.

The trouble is that foundations and nonprofits are in the business of assigning their values to other people’s problems. And why shouldn’t they? It’s their time, and their money, shouldn’t it be spent to maximize their social ambitions?

Sure, why not. But what if the best way to see a given change in the world is to listen for solutions rather than preach them?

That’s the simple premise behind the budding beneficiary feedback movement, which argues we need to listen more to the voices of so-called program recipients (and perhaps less to folks like myself).

This line of thinking is what led me to ultimately consider the Cash Transfer Equivalency (CTE) metric, which I introduced in a post last week. While the CTE is admittedly an imperfect evaluative metric, as a planning tool it is a simple way to quantify the value a program participant expects to receive from a social intervention.

If philanthropic giving is about more than self-aggrandizement, we would be wise to reconceive program participants as more than philanthropic takers.

Technology’s role in the social sector

I’m no fan of Silicon Valley’s offerings for the social sector, but Causes’ relaunch yesterday sparked an interesting conversation in my Twitter feed on the role of technology in our line of work.

Causes is repositioning itself as a platform for civic engagement. Like Change.org before it, Causes works with organizations from varying political ideologies, a politically neutral business practice that has drawn the ire of activists in the past.

In the following exchange, Twitter user @mcbyrne argued with Causes CEO Matthew Mahan that by supporting non-liberal initiatives like the NRA, Causes could not justly claim to be a platform for social good:

Matthew responded by suggesting that Causes is simply a platform for social actions, and that technology is politically neutral:

Which led to the following counter-argument:

Well, that’s just not true at all. Democracy is letting people voice their opinions. Indeed, what is more democratic than empowering idiots to speak their mind?

The fundamental issue in this debate is what the proper role of technology in the social sector ought to be.

I chose a career in the social sector because I have strongly held beliefs on what change I want to see in the world. Working in the social sector allows me to spend my time working on issues that I care about.

But in my work, like anyone else’s work, I use a lot of tools. And those tools don’t have political agendas. People do, I do, but not my tools. If Causes is indeed a tool, an amplification vessel for political action, then why should the platform take a stand on what people promote on the network?

Technology has done wonders for the world, and even the social sector. But the commercial technology that has aided the social sector has not had stated social agendas. Obviously computers and office productivity software make much of the work we do in the social sector possible. But we don’t think of it as “social sector software”.

In so far as Causes is just a platform for social actions, simply a tool, I find nothing offensive about the company working with organizations from varying ideologies.

Technology’s role in the social sector is no different than technology’s role anywhere. Technology should be useful. It should make it easier to accomplish what we want to see happen in the world. I’m not sure the new Causes is going to set the social sector on fire, but I have no desire to burn it down either.

Silicon Valley’s depressing vision for the social sector

Technology is supposed to be taking over the world and eating everyone’s lunch. Every industry, from medicine to law to retail has experienced impressive gains (or joblessness) at the hands of nimble, innovative technology companies upending every sector.

As the technology revolution has moved from industry to industry, Silicon Valley has even set its sights on the social sector, with VCs pumping money into a handful of startups aimed our way.

And oh man, does Silicon Valley ever have a depressing vision for the nonprofit sector.

With few exceptions, venture funded startup technology companies have focused on fundraising or building hollow online followings for non-profit causes.

But wait, don’t nonprofits need more money and more support? Well, two answers:

  1. I’m not sure. If we have the right interventions, then sure. But if we’re using these tools to litter the African continent with more Toms Shoes, then no.
  2. It’s well documented that charitable giving is a function of GDP. More accurately, giving is consistently 2% of GDP. Online giving platforms are not growing the pie, they are simply redirecting giving to online channels.

Don’t get me wrong, there are technology initiatives moving the social sector forward. But they aren’t venture backed Silicon Valley startups.

Companies like Social Solutions have tackled the less sexy, but essential problem of helping nonprofits track and report their outcomes. Outside of the case management market, the real social sector technology innovations are coming from the nonprofit sector itself, with Ushahidi pioneering crisis mapping and Volunteer Match long serving as the leader in connecting volunteers to nonprofits, despite ample for-profit competition.

The social sector’s challenges are vast and well documented. It’s a shame that by and large the technology sector has focused on the least of these issues, catering to a caricatured vision of the social sector content to earn five dollar donations for their do-gooder pastimes.

The social sector has greater ambitions and faces starker realities. Indeed, even in so far as the income side of the equation is worth tackling for nonprofits (which it is), more than online fundraisers, nonprofits need to move toward serious financial planning.

There is no doubt that the technology revolution has touched much of the world. But Silicon Valley to date has been a modest presence in the nonprofit sector.

Perhaps there isn’t much of a role for Silicon Valley to play. As Phil Buchanan, CEO of the Center for Effective Philanthropy, has argued time and again, the social sector exists to correct market inefficiencies. Technology’s genius has been in making already effective market functions work better, faster.

By contrast, the social sector is not as clearly monetizable, nor are we awash in effective solutions waiting for automation. Instead, there is a lot of ambiguity around what works, and we exist in a market that is largely underserved by for-profit ventures for good reason.

Therefore, technology companies are left to focus on the least compelling, yet monetizable, problems the social sector faces – shifting donations online and winning likes and retweets for your cause.

Hardly earth shattering stuff.

Is your program better than cash?

With $1.59 trillion in revenue, and over $300 billion in charitable and government contributions in the US alone, there is no doubt that charity is big business. So vast is the charitable sector that it has drawn the ire of a famous son, who has labeled our noble pursuits the charitable-industrial complex.

Whether a career in the social sector is noble or not, the bigger question is whether one’s efforts result in the intended change.

As one who makes his living as part of the charitable-industrial complex, I can’t help but wonder if we wouldn’t be better off skipping the checks to folks like me and instead giving money directly to those who need it most. That’s the thinking of Give Directly, a nonprofit that simply gives money away to people in the developing world.

Cash transfers are not new; the US government has long transferred cash through various types of welfare programs to those living in poverty. The difference between traditional cash transfers and Give Directly’s approach is that Give Directly provides money unconditionally, whereas traditional approaches have provided money on the condition that a recipient completes certain tasks, like keeping their kids in school.

Cash Transfer Equivalency

I first started thinking critically about how much money it costs to implement social programs while completing a fellowship with a financial intermediary in graduate school. We were giving out large amounts of money, and there was a lot of doubt around what social return on investment we were getting.

One particular grant request was from a nonprofit that planned to hold a musical event for low-income children. The event (I guess) sounded like an okay idea, but some simple math revealed an outrageous price per expected participant. It would have been cheaper to have sent these kids to a Bieber concert.

This realization led me to a fairly simple evaluation standard. If the purpose of a social intervention is to improve a program recipient’s life, shouldn’t that recipient value the intervention at least as much as its cash equivalent?

More formally, I devised a Cash Transfer Equivalency (CTE) metric, which is a simple social investment standard whereby the cost per person of the social intervention should be less than the value of transferring the same amount of cash to a program target.

Using the CTE, one would simply ask a program’s intended recipient how much they would be willing to pay to receive a social program. You then get a simple ratio, the amount the recipient is willing to pay over the program cost per person. If the ratio exceeds one, the intervention produces a surplus for the beneficiary, as they value the program more than the program costs.
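As a minimal sketch of this calculation: the function name is mine and the survey figures are invented for illustration, not part of any formal CTE specification.

```python
def cte_ratio(willingness_to_pay: float, cost_per_person: float) -> float:
    """Cash Transfer Equivalency ratio.

    A ratio above 1 means the recipient values the program more than
    it costs per person; below 1, handing the recipient the cash
    directly would be worth more to them than the program itself.
    """
    return willingness_to_pay / cost_per_person

# Hypothetical example: a program costs $500 per participant, and a
# surveyed participant says they would pay $200 to receive it.
print(cte_ratio(willingness_to_pay=200, cost_per_person=500))  # 0.4
```

A ratio of 0.4 is exactly the kind of beneficiary feedback that would argue for a cash transfer over the program.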

Given the momentum around beneficiary feedback, it seems logical that we would get feedback not just on how much a program recipient likes a program, but how much (if anything) they would be willing to pay for it if they had the means.

My guess is the CTE is beneficiary feedback that would scare the hell out of a lot of organizations.

My failed Gates Foundation proposal

Since deciding to shut down my company Idealistics, I’ve been kicking around various ideas of what to do next. One thing I won’t be doing is implementing my rejected Gates Foundation Grand Challenges data interoperability proposal.

Contests are a popular, yet controversial, approach to soliciting new ideas for solving social problems. Personally I’ve never been much of a fan of contests, preferring to fund my ideas the old fashioned way, by selling.

However, I’ve never had much luck with contests either, which likely suggests more sour grapes than thoughtful critique of the contest approach. The most compelling argument against contests is that they waste the time of would-be entrepreneurs and nonprofits, making them spend considerable time filling out losing applications.

The common counter argument is that contests are beneficial for the losers in that they get to flesh out their ideas.

In my experience, I neither feel like my time was wasted nor terribly well spent. My time wasn’t wasted in that my opportunity cost was Netflix. On the other hand, while I know what I likely won’t be pursuing, I’m not sure I’m much closer to an answer as to what I ought to do either.

Results of course will vary, but in my experience, the process was neither terribly offensive nor rewarding.

The failed concept

Contests tend to be pretty good about publishing winning ideas. Reviewing a winning entry can be helpful as applicants try to gear their concepts to past winners. Losing projects, though far more abundant, are almost nowhere to be seen. In a moment of extreme transparency and boredom, I decided to share my losing idea. You can download it here.

The Gates Foundation Grand Challenges data interoperability challenge was to propose a product or initiative to increase data interoperability between social sector organizations or individuals. My proposal attempted to tackle the problem of non-profits sharing data with foundations.

The gist of my proposal was to build an open-source, web-based middleware solution that would allow non-profits to submit their outcomes data in raw format, in one place, and would then translate and summarize that data for multiple funders.

Essentially, the idea would be to allow non-profits to submit their data once, but have their data transmitted to multiple funders, the way those funders need their data sliced.

By submitting raw data, foundations could bypass the summary statistics shenanigans that earn some development officers six figure salaries, while easing the burden of filling out multiple applications to any number of funding entities.

Admittedly, this idea is totally not sexy and pretty darn boring (obviously the Gates reviewers agreed!). But I do believe it’s a necessary piece of plumbing. Not only are non-profits’ applications to funders arduous to put together, they are full of tons of nonsense. Such a solution would address both issues.

At the time I submitted my proposal, I had not yet attended the excellent retreat on data and philanthropy put on by the Heron Foundation, where I had the opportunity to learn about Coop Metrics.

Coop Metrics is a technology company that aggregates various types of industry data. They got their start focusing on aggregating food coop data nationally, and are now moving into multiple areas, including philanthropy. I’m excited about Coop’s entry into the social sector because they already have a proven, scaled solution, that can adapt nicely into the social sector. I’m hopeful that Coop, or a company like them, will solve this important problem.

I sure won’t be 🙂

The power of utility frameworks

In my last post on next steps I mentioned that I had the opportunity to present at the F.B. Heron Foundation’s inaugural Power of Information conference in July. Last week Heron posted all the presentations from the conference to their YouTube channel. If you are interested in getting more out of data in philanthropy, I strongly recommend giving these talks a look.

For my talk, I focused on how grant makers can make better social investments by developing what I refer to as a “utility framework”. A utility framework is a way for grant makers to explicitly assign subjective values to the outcomes of investments.

I think where people have struggled with the idea of assigning values to social outcomes in the past is that one cannot objectively assign value to one outcome versus another, especially across social sector issue areas. How does one compare preserving rain forest to teaching poor kids to read?

There is no objective way of measuring unlike outcomes. However, we make these subjective value judgments all the time, when we decide to invest in one cause over another. The purpose of a utility framework is to make those values explicit, so foundations can make consistent decisions across investment portfolios.

If you make it to the question and answer portion of my talk you’ll notice that I get a lot of pushback from the audience. Indeed, there seems to be a lot of skepticism that you can translate personal values into a mathematical model.

But there is precedent for this modeling approach. Large companies model the risk preferences of their executives, and use these models to apply executives’ values to investment decisions without those executives present.

In my talk, I don’t propose anything new. Rather, I recommend applying this already proven technique for modeling subjective values to social investing. The only difference is that instead of modeling financial risk, we are simply weighing social outcome risks instead.

I have embedded the video of the talk above, as well as a SlideShare embed of the slides I used at the end of this post. If you would like to download the PowerPoint slides directly, you can do so here. Please note that the PowerPoint slides in the video got a bit garbled (particularly the mathematical notation), so it might be worth thumbing through the PowerPoint.

Next Steps

Since announcing that I would be moving on from Idealistics I’ve had the opportunity to have a lot of interesting conversations with great folks. The more I learn about what is out there and as I kick around a slew of ideas, the less sure I am of what my next steps should be.

Last week I was fortunate to have the opportunity to attend a retreat on data and philanthropy in New York hosted by the Heron Foundation. The conference included representatives from across the foundation and nonprofit world as well as for-profit executives ranging from finance to predictive analytic startups.

The conversations I had there challenged and expanded my thinking on the social sector. Particularly eye-opening was the vision of Clara Miller, CEO of the Heron Foundation, for Heron to invest 100% of its capital for social impact.

Clara argues that all investments (nonprofit and for-profit alike) have a social impact, whether that impact is positive or negative. Moving forward the Heron Foundation plans to make all of its investments with an eye toward social returns, breaking the traditional confines of expecting only financial return from for-profit investments and looking solely to the non-profit sector for social gains.

As I think about my own possible next steps, this simple (and in hindsight obvious) idea that all investments have a social impact will prove to be formative in my own philanthropic intellectual maturation.

At this point I’m not sure which direction I’m going to go, but Clara certainly opened me up to thinking beyond the gated gardens of the not-for-profit sphere.

In the meantime I’m doing some interesting consulting work that affords me the luxury to wait for the right opportunity, or idea, to grab me. As Idealistics winds down I’ll move back to writing more here on Full Contact Philanthropy, starting with a write-up on a talk I gave at last week’s data and philanthropy conference, followed by some posts publicly hashing out some new concepts I’m mulling over.

Goodbye Idealistics

I co-founded Idealistics in 2005. Eight years is a long time. This is not easy to write, but I have decided to move on from Idealistics.

My interests and skills have evolved over the last eight years, as has the social sector. Data is the belle of the social sector ball, but as a philanthropic community we are at something of a crossroads.

Through Idealistics I have had the opportunity to see firsthand how non-profit and government social programs use (and misuse) data. These frontline realities are significantly messier than the rosy promises poetically popularized by those well paid to be far removed from the day to day difficulties of executing data informed social programs.

The debate about how to use data in the social sector has a lifespan of an unknown length. At some point a new standard of data “best practices” will be established, and the sector will move on to another issue for the foreseeable future. The decisions the sector makes now will have consequences for years to come.

The upside potential is well documented – money flowing to the most effective programs, programs designed to meet social (not just funder) needs. But the consequences of getting this wrong are significant as well, and there are well intentioned forces moving us in the wrong direction.

Like the decisions that led us to the obsession with overhead, a costly mistake that has been with our sector for decades, wrongheaded efforts that try to supplant analytical capacity with jargon and shallow infographics threaten the sector’s future ability to truly learn from evaluative metrics.

Indeed, we ended up with the simplistically misleading overhead-ratio metric in order to avoid the real complexity of comparing the seemingly incomparable. Shortcuts are costly, and if the sector is serious about maturing into a more data driven industry, we have to invest in our workforce’s analytic capacity.

Which brings me back to my decision to close Idealistics. I’ve enjoyed my time with the company immensely, and am proud of its accomplishments. Through this blog and others I have been fortunate to play a small role in the vast discussion about how the social sector uses data.

While blogging has been cathartic, taking pot-shots while hiding behind a WordPress install is hardly heroic. And although Idealistics has allowed me to be a part of making big differences for a relatively small group of organizations, no one could argue Idealistics is moving the needle in how the sector as a whole engages data.

It’s not. I’m not.

My career, and technical training, has prepared me well for the intersection of social issues, data, and technology. I’m grateful that this intersection is the talk of the sector, and I’m afraid I’m squandering this opportunity by trying to be a one-man band.

These issues are too important to me to ignore the fact that the right thing for me to do is to join a team. What that team is, I’m not sure.

I’m at the early stages of reaching out to folks. Hopefully wherever I land, I’ll be a part of an organization (whether a consulting firm, non-profit, technology company, whatever) that is building a smart team to tackle thorny data issues in the social sector head on. If you have any ideas, please shoot me an email at david.henderson82@gmail.com.

As for this blog, it will be moving back to its old home at fullcontactphilanthropy.com in the coming weeks. If you subscribed via RSS, you don’t need to do anything as the RSS feed will stay the same.

Anyone who has ever been a business owner knows how emotional running a business is. Although I know this is the right decision, it’s still very difficult. I love Idealistics, and am grateful for the opportunities I have had through it to work with great people and organizations.

Goodbye Idealistics.

Automated grant making

Top executives in large corporations tend to be busy, and don’t have time to make every decision themselves. A technique used by management consultants to help organizations make decisions consistent with those of their top executives without necessarily having to involve those executives in every decision is to model an executive’s values and risk tolerances.

In profit seeking businesses, generally these decisions revolve around how much money (measured in time and assets) a company is willing to risk, and at what level of risk, to receive a certain monetary pay out.

This same concept can be applied to philanthropic giving. By modeling a grant making entity’s social values and its assessment of each applicant’s ability to deliver results, one could quickly and consistently evaluate an arbitrary number of grant applications.

Valuing outcomes in terms of other outcomes

To illustrate this point let’s take the example of two funders, Funder A and Funder B, who each receive two applications from Applicant X and Applicant Y. Applicant X runs a food program that plans to feed 150 people for a month. Applicant Y runs a housing program that plans to house 20 people for a month. Applicant X is requesting a grant of $26,100 and Applicant Y is requesting $18,000. The following table summarizes this setup.

Table 1: Grant application requests

|             | Persons housed for a month | Persons fed for a month | Grant request |
|-------------|----------------------------|-------------------------|---------------|
| Applicant X | 0                          | 150                     | $26,100       |
| Applicant Y | 20                         | 0                       | $18,000       |

So, which is better? The answer depends on how we value the intended outcomes. The following table demonstrates how Funder A and Funder B each value feeding a person for a month in terms of a person housed for a month.

The idea of measuring one outcome in terms of another is an essential concept to this modeling process. Reading the below table, Funder A values feeding 5 people in a month the same as housing one person for a month. This ratio of 5 people fed to one person housed is Funder A’s point of indifference. That is, according to Funder A, feeding 5 people for a month has the same social value as housing one person for a month.

Table 2: Funder values in terms of persons housed

|          | Persons housed for a month | Persons fed for a month |
|----------|----------------------------|-------------------------|
| Funder A | 1                          | 5                       |
| Funder B | 1                          | 15                      |

Funder B puts considerably more value on housing relative to food than Funder A. For Funder B, you have to feed 15 people for a month to equal housing one person for a month. We can therefore not only see that Funder B values housing more than Funder A, but that Funder B values housing 3 times as much, giving us a way to quantify the subjective value differences between these funders.

Using the relative values of persons fed and persons housed for each of the funders, and the number of people each applicant plans to feed and house, we can evaluate each proposal using the value system of each funder. Since we are using “Persons housed for a month” as the base outcome that we compare food outcomes to, Funder A and Funder B both assign the housing program, Applicant Y, a value of 20, which is simply 1 times the number of people Applicant Y plans to house, 20.

In order to evaluate the outcomes of Applicant X, which is a food program, in terms of the base housing outcome, we divide the number of people Applicant X plans to feed, 150, by the number of people each funder believes must be fed to equal one person housed. For example, because Funder A values 5 people being fed the same as 1 person being housed, we divide 150 by 5, which leads Funder A to assign Applicant X a value of 30. Using the same logic, Funder B assigns Applicant X a value of 10, reflecting Funder B’s stronger preference for housing over food relative to Funder A.
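This conversion is mechanical enough to sketch in a few lines of Python. The figures come straight from Tables 1 and 2; the function is just one way to organize the arithmetic.

```python
# Planned outcomes per applicant (Table 1).
applicants = {
    "X": {"fed": 150, "housed": 0},
    "Y": {"fed": 0, "housed": 20},
}

# Each funder's indifference point: people fed per person housed (Table 2).
fed_per_housed = {"A": 5, "B": 15}

def social_value(applicant: str, funder: str) -> float:
    """An applicant's planned outcomes expressed in the funder's base
    unit, persons housed for a month."""
    outcomes = applicants[applicant]
    return outcomes["housed"] + outcomes["fed"] / fed_per_housed[funder]

for funder in ("A", "B"):
    for applicant in ("X", "Y"):
        print(funder, applicant, social_value(applicant, funder))
# Reproduces the values below: A/X 30, A/Y 20, B/X 10, B/Y 20
```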

Table 3: Funders’ assessment of social value in terms of persons housed

|          | Applicant X | Applicant Y |
|----------|-------------|-------------|
| Funder A | 30          | 20          |
| Funder B | 10          | 20          |

Expected social value calculations

In the above table, we see that Funder A prefers Applicant X’s application and Funder B prefers Applicant Y. But what if we don’t necessarily believe Applicant X and Applicant Y will be able to help as many people as they plan to? We can adjust our model to account for each funder’s confidence in each applicant’s ability to achieve their intended results.

Table 4: Funder confidence in applicant’s ability to deliver

|          | Applicant X | Applicant Y |
|----------|-------------|-------------|
| Funder A | 40%         | 70%         |
| Funder B | 75%         | 75%         |

The above matrix shows each funder’s confidence in each applicant’s ability to deliver its intended outcomes. For example, Funder A is only 40% sure Applicant X will deliver its intended outcome of feeding 150 people for a month.

Using these subjective funder probabilities we can calculate the expected value of the number of people each applicant will help, according to the funders’ assessments of the applicants’ capacities.

Table 5: Expected social value by funder in terms of persons housed

|          | Applicant X | Applicant Y |
|----------|-------------|-------------|
| Funder A | 12          | 14          |
| Funder B | 7.5         | 15          |

By accounting for funders’ confidence in each applicant, we now see that both funders prefer Applicant Y, whereas Funder A preferred Applicant X before applying the probabilities in Table 4, as shown by the preferences depicted in Table 3.
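A minimal sketch of this step, with the values lifted from Tables 3 and 4:

```python
# Social value in persons housed (Table 3) and funder confidence
# (Table 4), keyed by (funder, applicant).
value = {("A", "X"): 30, ("A", "Y"): 20, ("B", "X"): 10, ("B", "Y"): 20}
confidence = {("A", "X"): 0.40, ("A", "Y"): 0.70,
              ("B", "X"): 0.75, ("B", "Y"): 0.75}

# Expected social value = value if delivered times probability of delivery.
expected = {key: value[key] * confidence[key] for key in value}

for key, ev in expected.items():
    print(key, round(ev, 1))
# Reproduces Table 5: A/X 12, A/Y 14, B/X 7.5, B/Y 15
```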

Finally, we can factor in the cost of each applicant’s grant request by dividing the grant amount by the expected social value in terms of people housed.

Table 6: Financial cost over expected social value

|          | Applicant X | Applicant Y |
|----------|-------------|-------------|
| Funder A | $2,175      | $1,286      |
| Funder B | $3,480      | $1,200      |

Accounting for cost, the decisions made by Funder A and Funder B do not change from Table 5, with both funders preferring applicant Y’s application. However, whereas Funder A has a modest preference for Applicant Y in Table 5, accounting for the dollar amount per social value metric makes funding Applicant Y a clear decision for both funders A and B.
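The cost division is equally simple to sketch, using the grant requests from Table 1 and the expected social values from Table 5:

```python
# Grant requests (Table 1) and expected social values in persons housed
# (Table 5), keyed by (funder, applicant).
request = {"X": 26_100, "Y": 18_000}
expected = {("A", "X"): 12, ("A", "Y"): 14, ("B", "X"): 7.5, ("B", "Y"): 15}

# Dollars per expected person-housed equivalent; lower is better.
cost_per_value = {
    (funder, applicant): request[applicant] / ev
    for (funder, applicant), ev in expected.items()
}

for key, cost in cost_per_value.items():
    print(key, round(cost))
# Reproduces Table 6: A/X $2,175, A/Y $1,286, B/X $3,480, B/Y $1,200
```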

Applying this approach

All the calculations in this example are simple, and this approach scales to any number of data points. The real trick is picking a base indicator to value other indicators against. In this case, we used the number of people housed in a month, but really any indicator can be used.

While this example focuses on grant making, the same idea can be used for any type of social investment decision. By modeling an organization’s or grant making institution’s values, decisions can not only be made more quickly, but the decision making criteria are made transparent, which helps drive intelligent discussions about whether investments are being made consistently, and whether those decisions are designed to maximize social impact.

As the social sector continues to debate how to better incorporate metrics in our work, we have to move away from simple summary statistics and output enumerations to more sophisticated uses of data that directly aid decision making. Automating some aspects of grant making is a logical application of data driven methodologies in the philanthropic sector.