What I learned about the state of analytics in the social sector at the Do Good Data conference

Last week I attended the third annual Do Good Data conference in Chicago. The turnout was impressive, with about six hundred attendees from across the country and a smattering of international visitors.

I attended the conference with high hopes. Andrew Means, one of the conveners of the conference, suggested in his opening remarks that being a data analyst in the social sector can be a lonely job. He added that at the Do Good Data conference you are surrounded by people like yourself. For better or for worse, I did not find this to be true.

While the conference title suggests it is targeted at analysts, my experience was that attendees were more enthusiasts than analysts. Probably rightly so, sessions were aimed at those with very little technical skill, which made the content less than compelling for anyone grappling with higher order problems.

As I said, I don’t fault the conference for its emphasis on beginners. My experience at the conference suggests the conveners got it right for the vast majority of folks there, which reveals an unfortunate truth about the state of analytics in the social sector.

The rhetoric that analytics is changing the social sector is largely untrue in practice. Those of us in attendance did not sum to a collection of “data wizards” and “analysis ninjas”. These are nonsense terms designed to sell tickets by making us so-called social sector analysts feel like something we are not. Ironically, these sorts of monikers are yet another example of what conference speaker and Chief Program Officer of the Robin Hood Foundation argued nonprofits do too much of: taking credit where none is due.

I don’t claim to be a gifted analyst. I possess the basic tools of a data scientist, but would hesitate to declare myself one in public. The fact that I felt the sessions at Do Good Data were too rudimentary is less a “good job, me” than a “bad job, social sector.”

Indeed, the only real kudos belong to the conveners of Do Good Data, who rightly recognized where the sector is and developed a conference to try to bring folks along a little bit.

But little by little leaves a long way to go. I sincerely hope I’m completely wrong about the general experience level of conference attendees (less likely) or that conference attendees are not representative of the social sector’s analytical capabilities (more likely).

For my part, I would be inclined to return to next year’s conference, if for no other reason than that I support the aspiration of a data driven social sector, even if the reality falls short. If you are a data analyst in the social sector, I’d encourage you to attend as well. Every sector contains a mix of individuals at all skill levels, and Do Good Data 2015 met the needs of beginners well. Sign up for Do Good Data 2016, help prove my assessment of social sector analytics wrong, and maybe help raise the level of sessions at next year’s conference as well.

Open grant making - The Council on Foundations' almost good idea

The Council on Foundations caused a stir last week with its proposal to hold a Shark Tank style competition in which nonprofits would pitch for a $40,000 grand prize before a live audience. The idea was widely derided by the social sector, so much so that the Council on Foundations ended up scrapping the event altogether.

Copying a show called Shark Tank is never a good idea. Unpopular as it may be to say so, I do think there are elements of the open style grant competition the Council on Foundations proposed that could be valuable.

Funder transparency

For a sector that seems borderline obsessed with transparency, I almost spit my soup at the argument that the open style grant competition was problematic because it failed to preserve grant seekers’ anonymity. Public charities’ financials are largely a matter of public record through Form 990s; it’s not hard to tell who has money and who does not. Indeed, I think there could be plenty of value in knowing not only who gets funded, but who does not and why.

For my part I hang my failures up a flagpole like underwear, hoping others can learn from my mistakes and succeed where I fell short. Grant seekers would be better off knowing not only what does get funded, but what doesn’t get funded as well. Instead of preserving the feelings of applicants, we should be building public knowledge.

More important, publishing rejected grant proposals would better hold funders accountable, providing the public a peek behind the well varnished oak doors of the philanthro-elite. This style of open grant making would create a more equitable power distribution, with funders accountable to the crowd and their decision making in plain view. Preserving grantee anonymity paradoxically preserves this power imbalance, to the detriment of the very applicants that anonymity purportedly protects.

Marketplace for grants

Contestants go on Shark Tank not just to secure investment from one of the investors on the panel, but to gain exposure to Shark Tank’s sizable audience. Grant applications are labored over by dedicated staff and entrepreneurs, presented to a handful of program officers at select foundations, and otherwise never really see the light of day.

If grant applications were public, one’s rejection from [insert big name foundation] might serendipitously get picked up by another foundation or donor. Taken a step further, if all these applications were not only public but in machine readable formats, one could truly build a fundee/funder marketplace where investors and social innovators could find one another far more easily.

Game over

Shark Tank is a stupid show. The Council on Foundations really should have anticipated this backlash. But the underlying idea of open grant making has potential, and deserves a better champion and a more intelligent debate. I’m hoping the possible end-game of a more frictionless way for great social innovations to get funded doesn’t get bogged down by a most unfortunate game-show analogy.

Charity Navigator's naughty and nice list gets a lump of coal

Last week Charity Navigator released a “naughty and nice” list of the highest and lowest rated charities by category (civil rights, animal rights, etc.). In the spirit of the holidays, shaming the “lowest” rated charities seemed a rather naughty thing for Charity Navigator to do, so I was intrigued to learn more about what makes a charity naughty or nice according to the charity rater.

I wrote a program to scrape data on the 68 charities listed in the 34 categories from Charity Navigator’s site (two charities per category, one highly rated and one low rated). I used this data to discover a few interesting points about the naughty and nice list, detailed in the rest of this post.
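For the technically curious, here is a minimal sketch of what that kind of scraper looks like in R using the rvest package. The URL and CSS selectors are placeholders rather than Charity Navigator’s actual markup; my original script isn’t reproduced here, but the shape is similar.

```r
# Minimal scraping sketch. The URL and CSS selectors below are placeholders,
# not Charity Navigator's real page structure.
library(rvest)

scrape_charity <- function(url) {
  page <- read_html(url)
  data.frame(
    name    = page %>% html_node("h1") %>% html_text(trim = TRUE),
    revenue = page %>% html_node(".total-revenue") %>% html_text(trim = TRUE),
    stringsAsFactors = FALSE
  )
}

# Apply to each of the 68 charity pages (urls = character vector of page URLs)
# charities <- do.call(rbind, lapply(urls, scrape_charity))
```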

Highly rated charities have significantly greater revenues than low rated charities

The highest rated charities earn quite a bit more in revenue than the lowest rated charities: highly rated charities have median revenues of $6,386,300, versus $1,418,200 for low rated charities. In this way, Charity Navigator is highlighting less who is naughty and nice than who is well funded and who is not.
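For what it’s worth, the comparison itself is only a couple of lines of R once the scraped data is in a data frame; a sketch, assuming a hypothetical charities data frame with rating_group and revenue columns:

```r
# Median revenue by rating group, plus a box plot like the one below
# (assumes hypothetical `rating_group` and `revenue` columns).
library(dplyr)
library(ggplot2)

charities %>%
  group_by(rating_group) %>%
  summarise(median_revenue = median(revenue, na.rm = TRUE))

ggplot(charities, aes(rating_group, revenue)) +
  geom_boxplot() +
  scale_y_log10()  # log scale tames the revenue skew
```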

[Figure: box plot of revenue for highly rated versus low rated charities]

Highly rated charities spend more on programs

Not surprisingly, program expense ratio (the percent of a charity’s budget spent on the programs and services it delivers) is a strong predictor of whether a charity is highly rated. In fact, no charity with a program expense ratio below 79.1% (the blue line in the following chart) was highly rated, and just four low rated charities had program expense ratios above this threshold. I guess we’ll have to wait until next Christmas for the end of the overhead myth.
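That threshold claim is easy to check with a quick cross-tabulation; a sketch, again assuming the hypothetical charities data frame, this time with a program_expense_pct column:

```r
# Cross-tabulate rating group against the 79.1% program expense threshold
# (assumes hypothetical `rating_group` and `program_expense_pct` columns).
library(dplyr)

charities %>%
  mutate(above_threshold = program_expense_pct >= 79.1) %>%
  count(rating_group, above_threshold)
```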

[Figure: program expense ratio versus Charity Navigator score, with the 79.1% threshold marked in blue]

Highly rated charities tend to be clustered on the East Coast

With only 34 highly rated charities, there’s no way Charity Navigator could have one in each state. But I was surprised to see the East Coast bias in where the highly rated charities are located. In the map below, the green lines outline the density of highly rated charities, which appear as green dots; the red dots mark the low rated charities. My poor Southern California is especially naughty it seems, home to four low rated charities and not a single highly rated one.
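A map like this is straightforward to put together with ggplot2 once each charity is geocoded; a sketch, assuming hypothetical lon, lat, and rating_group columns:

```r
# Green/red points with a density contour over the highly rated charities
# (assumes hypothetical `lon`, `lat`, and `rating_group` columns; the `maps`
# package supplies the state outlines).
library(ggplot2)

us <- map_data("state")

ggplot() +
  geom_polygon(data = us, aes(long, lat, group = group),
               fill = "grey95", colour = "grey70") +
  stat_density_2d(data = subset(charities, rating_group == "high"),
                  aes(lon, lat), colour = "darkgreen") +
  geom_point(data = charities, aes(lon, lat, colour = rating_group)) +
  scale_colour_manual(values = c(high = "darkgreen", low = "red")) +
  coord_quickmap()
```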

[Figure: map of highly rated (green) and low rated (red) charities across the U.S.]

Concluding remarks

I’d like to be clear that I have no problem with Charity Navigator. I do think this list was a fairly silly thing for them to put together, and I believe their own data backs up that claim. I don’t think there’s much to be gained by shaming “naughty” nonprofits, especially when so much of that shaming is driven by how money is spent rather than what outcomes are achieved. Moreover, the lack of geographic diversity likely makes these “top picks” not super useful to a large swath of the giving public.

As an aside, I often struggle with how technical (or not) this blog should be. Beyond the short sketches above, I intentionally shied away from the more technical pieces of this mini-project, such as detailing how I actually retrieved and explored the data. If there is any interest in walking through the process or sharing the data, let me know and I’ll be happy to do a follow-up piece.

Giving Tuesday picks

Giving Tuesday is this Tuesday, December 2nd. I’ve compiled a list of a few of my picks for Giving Tuesday, separated into two groups: safe picks and speculative gifts. As the group names suggest, “safe picks” are gifts to organizations that I feel confident deliver effective interventions. The second group, “speculative gifts”, covers organizations I’m supporting but whose interventions or execution I’m not as confident in.

Safe picks

  • GiveDirectly - GiveDirectly provides unconditional cash transfers to those living in extreme poverty. I frequently write about GiveDirectly because I believe both in the intervention and their data driven philosophy. If you care about extreme poverty and believe people should be empowered with capital to lift themselves out of poverty, this is by far the best way you can spend your donated dollars.
  • Family Independence Initiative - I work at the Family Independence Initiative (FII), so I’m obviously biased. Of course, I joined FII because I believe in their approach of investing in the poor directly. If you care about U.S. domestic poverty, and are especially interested in the empowerment aspect of democratizing access to capital, then FII is a solid choice.
  • Housing first - Housing first is not an organization; rather, it’s an approach to homelessness that argues it is cheaper to place people into housing first, and then provide supportive services, than to try to “treat” people living on the streets. There are a lot of organizations taking a housing first approach throughout the U.S. (and in other countries as well). If you care about chronic homelessness, find an organization committed to housing first in an area you care about. Here in Los Angeles, I recommend People Assisting the Homeless.

Speculative gifts

  • Team Tassy - Team Tassy helps prepare and place people in Haiti into jobs. This is an admittedly biased recommendation, as the executive director there is a graduate school friend of mine and I’ve worked with Team Tassy on their outcomes measurement framework. The organization is in its relative infancy, being only a few years old, but they’ve gained the trust of the families they work with in Menelas and have embraced a data driven approach that I believe is the bedrock of effective interventions. If you care about Haiti, a country that has lost its fundraising luster in the years since the 2010 earthquake, and you are looking for a speculative gift to a small nonprofit with unrealized potential, I definitely recommend Team Tassy.
  • LivelyHoods - LivelyHoods provides products to youth living in impoverished communities in Kenya and trains the youth to sell them. LivelyHoods is better than most organizations at consistently reporting key performance indicators on their blog, which wins them a lot of points in my book. I also like the concept of the model, as the purpose of the intervention is to spark economic activity rather than to take a purely charitable approach. I think there are legitimate questions about the efficacy of both the model and of LivelyHoods itself, and I’m also interested in how this approach compares to an unconditional cash transfer. That said, if you care about impoverished Kenyan youth, buy in to the model, and are looking to add a speculative gift to your portfolio, I think LivelyHoods is a worthwhile bet.

Digging into the Foundation Center's Glasspockets grants data

As you may know, the Foundation Center has been collecting detailed grants data from some of the largest U.S. foundations for the last few years through its Glasspockets initiative. What you may not know is that Glasspockets has developed a simple programmatic way to access its data through an open API (a way for programmers to easily access information).

For those technically inclined, I developed and published on Github an R wrapper for accessing and loading Glasspockets queries. For those less technical, R is a popular open source statistical software package that I use for data analysis.

The Glasspockets API plus the R library allows me to easily search Glasspockets grants, which I plan to mine for future blog posts. For my initial pass at the data I looked at the Gates Foundation’s giving, specifically the seasonality of its grant making.
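Purely as an illustration of the workflow (the endpoint and parameters below are placeholders, not the actual Glasspockets API; see the wrapper on Github for the real interface), a raw query from R looks something like this:

```r
# Illustrative only: querying a JSON API from R with httr and jsonlite.
# The URL and parameters are placeholders, not real Glasspockets endpoints;
# the Github wrapper handles the actual requests.
library(httr)
library(jsonlite)

resp <- GET("https://api.example.org/glasspockets/grants",
            query = list(funder = "Bill & Melinda Gates Foundation",
                         year = 2014))
grants <- fromJSON(content(resp, as = "text"), flatten = TRUE)
```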

Gates giving by month

I’ve long lamented how our tax policy drives seasonal giving, creating peaks and valleys in donations. Next week’s Giving Tuesday is a clever way to try to capitalize on this unfortunate reality.

While the typical donor might not think about giving until year end, I would think that large foundations would be less seasonal in their giving. In the case of the Gates Foundation, I would also be wrong.

Using data from Glasspockets, I constructed the following chart, which shows the sum of giving by month from 2011–2014 (up to November 2014, that is). A quick glance shows that November, the 11th month, dwarfs grants made in any other month. Indeed, the Gates Foundation made grants totaling $3,204,224,816 in the Novembers from 2011–2014.
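The aggregation behind the chart is only a few lines of R; a sketch, assuming a grants data frame with hypothetical grant_date and amount columns:

```r
# Sum giving by calendar month, 2011-2014
# (assumes hypothetical `grant_date` and `amount` columns).
library(dplyr)
library(lubridate)

grants %>%
  filter(year(grant_date) %in% 2011:2014) %>%
  group_by(month = month(grant_date, label = TRUE)) %>%
  summarise(total = sum(amount, na.rm = TRUE)) %>%
  arrange(desc(total))
```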

[Figure: Gates Foundation giving summed by month, 2011–2014]

I’m not necessarily arguing that it’s a bad thing that Gates giving isn’t more spread out. I did however assume that it would not so closely match the regular donor public’s patterns of giving.

I’m interested to explore whether this same seasonality, especially dominance of giving in November, holds true for other foundations or not. More importantly, I’m looking forward to digging deeper into the Glasspockets data. If you are a fellow R user, feel free to grab the library and jump in as well.