Telling average stories

I’ve never been terribly comfortable with the social sector’s obsession with storytelling. It’s not that I don’t understand that stories can be powerful. Stories resonate with people in a way that numbers can’t. Indeed, evidence suggests that while storytelling can help drive donors to give, quantitative data can actually risk turning donors off.

My problem is not with storytelling per se, but that stories can be misleading. Perhaps more important, because story selection is driven by nonprofit fundraising and public relations staff rather than by those focused on data integrity, the stories told are invariably positive outliers.

I’m not the only one concerned about how stories can be misleading. GiveDirectly, a nonprofit that provides unconditional cash transfers to those living in extreme poverty, wrote an important post on how best to balance donor demand for stories with the organization’s core tenet of presenting its findings in unbiased ways.

In a blog post last week, GiveDirectly outlined a set of standards it will hold itself to when sharing stories, and, more importantly, when deciding which stories to share. The rules are worth reading, and are included in full below.

To keep ourselves honest when doing so, we’ve decided to stick to three rules:

  • Share everything, as in this blog post on interesting spending choices;
  • Select recipients randomly so that every recipient’s story has an equal chance of being shared, as we do weekly on Facebook. Or, explicitly state if the recipient was not chosen randomly and why, as in this post on a recipient who experienced an adverse event; and/or
  • Provide contextualizing data so the reader can determine how representative of the average the story is. For example, if we relay a case of a woman who used her transfer to pay for a surgery, we’ll also share any data we have on average spending on medical expenses.

Finding the average story

GiveDirectly’s strategy of selecting stories at random is compelling. A randomly selected story holds some probability of being positive, with the complementary probability that the story is negative. When was the last time you saw a nonprofit share a negative story?

But more interesting than sharing randomly selected stories is to systematically tell average stories. Finding average stories is no simple task, especially since one can be average on one metric (income, for example) but far from average on another (like health).

One possible approach to identifying average stories is to use a machine learning clustering algorithm, such as k-means. Roughly, the k-means algorithm takes a dataset of individuals with various data points and places each individual into a group with others who possess similar attributes. This type of clustering is regularly used for things like customer segmentation, but it can work equally well for grouping targets of program interventions.

Improving on GiveDirectly’s approach, instead of telling random stories from the entire population, you could pull stories from within clusters, providing the average demographics and outcomes from each group as context for the stories told.
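As a rough illustration, here is a minimal sketch of what that could look like in Python with scikit-learn. The data file and column names (income and a health score) are hypothetical stand-ins for whatever attributes a program actually tracks; the idea is simply to cluster recipients and surface the person nearest each cluster’s center as a candidate “average story.”

```python
# A minimal sketch, not production code: cluster program recipients and
# find the most "average" person in each cluster. The file and column
# names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("recipients.csv")      # hypothetical recipient data
X = StandardScaler().fit_transform(
    df[["income", "health_score"]]      # scale so dollars don't swamp scores
)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["cluster"] = kmeans.labels_

# The recipient closest to each centroid is a candidate "average story"
for k, center in enumerate(kmeans.cluster_centers_):
    members = np.where(kmeans.labels_ == k)[0]
    closest = members[np.linalg.norm(X[members] - center, axis=1).argmin()]
    print(f"Cluster {k}: candidate story is recipient {df.index[closest]}")
```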

Storytelling versus truth telling

I’m not against stories, defined as a qualitative accounting of an individual’s lived experience. There is always more richness in a narrative than in a quantitative dataset. However, I am opposed to storytelling when it’s really just a euphemism for bullshit.

Good storytelling does not just elicit a reaction from donors; it communicates the truth in a way that quantitative data never can. Even if sharing quantitative data isn’t part of an organization’s strategy for engaging donors, data should help guide which stories are shared.

The Red Cross’ obvious disaster

At the end of October, ProPublica and NPR released a joint investigation titled “The Red Cross’ Secret Disaster,” looking into the gulf between the American Red Cross’ fundraising prowess in the aftermath of Hurricane Sandy and the realities of its numerous stumbles in providing the relief the organization so publicly raised funds to deliver. Indeed, to many the Red Cross seemed far better prepared to raise funds in the wake of Sandy than to deploy them effectively toward disaster relief.

The most shocking thing to me in all of these allegations against the Red Cross is that the general donor public is actually surprised that the Red Cross (or pretty much any nonprofit, for that matter) prioritizes how it’s perceived over all else. A central driving tenet of every entity, be it a nonprofit, for-profit, or bunny rabbit, is to do what you can today to survive until tomorrow.

Although ProPublica and NPR have positioned their piece as an exposé of the American Red Cross’ failures during Hurricane Sandy, I read the piece more as a statement on how the market realities of running a nonprofit create adverse incentives, driving organizations to raise funds at the expense of their stated core missions.

Funders of change

Most for-profit organizations create a product that they sell directly to consumers. In the nonprofit sector, the funders of a program’s interventions are typically not the recipients of those services. Since the recipients of aid are not the funders, they can’t logically be the focal points of self-sustaining organizations.

ProPublica and NPR rake the American Red Cross over the coals for diverting disaster equipment toward a photo-op with model Heidi Klum, an example used in the article to demonstrate executives’ backwards priorities. While those suffering in a disaster probably have no interest in Heidi Klum slowly strolling down a street handing out bottled water as cameras roll, the reality is that those images help the Red Cross raise money.

And however bad some might argue the Red Cross is at providing disaster relief, it’s obviously damn good at raising funds. Like any well-run, self-serving organization (and what organization isn’t at least somewhat self-serving?), the Red Cross finely tunes its fundraising strategy. If the monetization opportunity were in providing top-notch disaster relief, I can assure you Sandy outcomes would have been different.

But the reality is that most nonprofits’ monetization strategies have very little to do with their programs’ missions. Instead, nonprofits by and large raise funds on their ability to get donors to exchange money for the warm glow of giving.

Smarter giving

For the donors who are outraged at the Red Cross’ alleged ineptitude and emphasis on media exposure over outcomes, I’m hopeful they become aware of their unwitting complicity in this so-called secret disaster.

For all the arguments about how we need more money in the social sector, I’m more persuaded by those who call for smarter giving. To me, smart giving is giving that is driven by a donor’s best guess of the value created by an organization, optimally influenced by evidence instead of celebrity.

Organizations like GiveWell have carved out narrow niches to better inform donors with specific preferences, although I suspect donors seeking advice from the likes of GiveWell are likely to be unimpressed by Heidi Klum in a disaster response vehicle in the first place.

The big money, and the big challenge, is in the general donor public. The fact is that the Red Cross knows the donor public very, very well.

So long as donors show a preference for media hype over results, shrewdly optimized organizations like the Red Cross will deliver the product their (paying) customers demand.

Why I joined the Family Independence Initiative

On September 1st, 2014 I joined the Family Independence Initiative (FII) as the Director of Analytics. When I decided to shut down Idealistics in the summer of 2013 I figured the next step for me would be joining a team focusing on domestic and/or international poverty. In the time between shutting down Idealistics and joining FII I consulted with a range of nonprofits, foundations, and businesses, none of which felt quite like something I wanted to dedicate several years of my life to.

I decided to close Idealistics in part because I had lost faith that the company’s technologies were really helping the nonprofits I worked with to create social impact. Indeed, I had lost faith that anti-poverty interventions had much effect at all.

I had come to realize that so much of my work, like the social sector itself, had been based on a misguided paradigm of a nonprofit sector providing solutions to distressed “clients” in “need” of answers. This paradigm doesn’t just oversimplify the poor; it plainly gets them wrong.

As I was losing faith in the efficacy of the program driven nonprofit model, my interest in cash transfers was growing considerably. In my personal giving, I support GiveDirectly, a nonprofit that experiments with giving unconditional cash transfers to those living in extreme poverty in the developing world. While the evidence around conditional cash transfers is pretty compelling, and the evidence base for unconditional cash transfers is growing, what I find most compelling about the unconditional cash transfer model has less to do with the transfer of money and more to do with the underlying notion of trusting people living in poverty.

Positive deviance

Positive deviance is a phenomenon whereby certain individuals, given the same circumstances and access to raw materials, are able to achieve better outcomes than their peers. The term was first coined by nutritionists and applied by those studying how certain families in rural Vietnam in the 1990s were able to provide better nutrition for their children than most families in the same areas.

I had not come across the term positive deviance until joining FII, but it instantly resolved much of what I’d been uncomfortable with about the social sector for so long, and explained why I was attracted to models like GiveDirectly and FII.

I have never lived in any type of poverty, from extreme poverty in the developing world to domestic poverty as defined by the U.S. federal poverty line. It is asinine to believe I should see a pathway out of poverty, given my complete ignorance of any of its realities. Yet asinine I’ve been for the last decade of my career.

In FII I found an organization that is less interested in solving the problems of the poor, and instead more interested in learning about how the poor improve themselves, their families, and their own communities.

Data driven nonprofit

There’s a lot of talk in the social sector about data driven nonprofits. I spent eight years at Idealistics and an additional year as an independent consultant working with nonprofits to try to help them improve their data infrastructures, with little success.

I’ve written before about the pitfalls of poor data literacy in the social sector, but data literacy is something that can be overcome and hired into an organization. What cannot be acquired through new hires is a data culture. A data culture requires an organization, from top to bottom, to commit not only to investing in the ability to mine data for feedback, but to taking that feedback and turning it into organizational change.

Given the nonprofit model of developing a theory of change, then fundraising around that model, it’s not terribly surprising that nonprofits struggle to be data driven. A data driven nonprofit must not only be willing to accept that its theory of change might be wrong; it must expect that it most likely is.

Indeed, in graduate school my econometrics professor taught me that “all models are wrong, but some are useful.” An analyst’s role is to develop models that are knowingly wrong and that hopefully get less wrong over time, as more data is acquired. This approach of iterative improvement and willingness to shift key assumptions is antithetical to how nonprofits are largely financed. Too often, a funder invests in a nonprofit on the assumption that the nonprofit’s theory of change is correct, leaving the nonprofit to use data to justify the funder’s investment rather than to identify where the organization is wrong, and how to improve.

Investing in people, not nonprofits

I did not get into the social sector because I love nonprofits. I got into the social sector because I love people. Somewhere along the way, my career became about serving nonprofits, not serving people, not serving communities.

The word “service” is a popular way to describe our line of work in the nonprofit sector. Of course, when I receive “service” I expect to get what I want. I seek out services to help me achieve a goal, my goal, not a goal someone else has determined for me.

At FII, my job is to use data to learn from families how they improve themselves and their communities. I’m not tasked with proving a particular model, instead I’m learning about how families define success on their own terms, and how we (collectively) can invest in the incredible initiatives already underway by people that we (in the social sector) for too long have considered objects of change instead of agents of change.

I couldn’t be more thrilled.

Impact calls are the future of transparency

Transparency is a building-block buzzword of the social sector. While there seems to be general consensus that transparency is important, the proprietary actions of social sector actors run contrary to that idealized vision.

Given this imbalanced rhetoric-to-reality ratio, I was especially intrigued by Guidestar’s new approach to sharing its progress with the public. Guidestar is experimenting with what it calls “impact calls,” quarterly webinars where the nonprofit’s leadership discusses its finances, impact, and strategic road map. The concept of the impact call is modeled on the quarterly earnings calls publicly traded companies hold for shareholders.

On May 12, Guidestar held its second quarterly impact call, the first I have had the opportunity to listen in on. The impact call provided a solid overview of the organization’s finances and projected revenues, as well as a short- and mid-term strategic road map, although the call was quite a bit lighter on actual impact reporting.

During the call, Guidestar CEO Jacob Harold explained that Guidestar’s impact assessment strategy is still evolving, and that the organization is developing an impact measurement dashboard it may present at the next impact call. The insinuation seems to be that as the impact measurement tool evolves, Guidestar will be better positioned to report its outcomes.

Although I was disappointed not to hear much about Guidestar’s impact on its impact call, I was nonetheless impressed with the concept, and even found the sharing of less exhilarating (although more easily enumerated) metrics, such as subscribers and web-usage statistics, a great step toward real nonprofit transparency.

A criticism of earnings calls is that quarterly reporting encourages companies to focus on short-term gains at the expense of long-term progress. Guidestar CFO James Lum wisely cautioned that while Guidestar is committed to reporting quarterly results, the organization’s focus is on its long-term strategy. I think this is the right sentiment, and I hope Guidestar doesn’t feel pressured over time to start optimizing for short-term gains to score favorable headlines in philanthropy media at the expense of the big picture.

This should be a trend

The impact call is an obnoxiously obvious idea. Everyone should be doing this, although I’m not sure many organizations will. Kudos to Guidestar for taking this step; I would love to see, at the very least, foundations follow suit.

While it would be great for every nonprofit to host quarterly impact calls, I’m not sure many folks would care to tune in. Guidestar is the right organization to pioneer this approach because many of its constituents are nonprofits themselves, and are more likely to consume this type of information. Similarly, foundations invest directly in nonprofits, which would not only be interested in hearing more about how foundations think, but could benefit from learning about foundations’ thought processes, strategic planning, and overall claims of impact.

Transparency is easy when you’re winning. It will be interesting to see if this type of hyper-transparency holds when findings are less than stellar. The Hewlett Foundation has demonstrated a willingness to embrace this type of transparency in its recent decision to discontinue the Nonprofit Marketplace Initiative, which it announced with the explanation that evaluators found “our grants have not made much of a dent” in the intended outcomes.

Publicizing wins and losses is the future of transparency. Impact calls are a compelling medium to communicate those findings. I look forward to the next Guidestar impact call, especially if the next call has more impact in it.

Using word clouds to select answer options

Selecting the right questions for your survey instruments can be tough. Equally difficult is identifying the right answer options for the questions you ask. When selecting answer options, ideally you would provide enough options to get meaningful feedback and variation in responses, but not so many as to overwhelm survey respondents.

Before launching any survey instrument, it’s preferable to do what is called a survey pretest. In a pretest, you recruit a subsample of people who are like your intended survey audience and ask them for feedback on each of your survey questions and answer options. However, pretesting isn’t always possible.

I’ve been working with a nonprofit called Team Tassy that provides workforce services to families in Menelas, Haiti. Team Tassy wanted to learn more about the employability of families in its targeted communities by conducting a survey at a free medical clinic day the organization sponsored.

One of the questions on the survey asked what work related skills each of the respondents possessed. The problem was that we didn’t know whether we were providing the right answer options to the skills question.

Ideally we would have pretested the question to get feedback on what types of skills should be included in the answer options. However, pulling together a focus group abroad would have posed logistical challenges, making pretesting impractical.

Since we were not able to pretest the answer options, Team Tassy took its best guess at what the answer options should be, and provided an option for respondents to fill in any other skills not included in the question’s answer options. We planned to use the free-form responses to learn which job skills options should have been included.

Team Tassy collected more than 250 surveys at the medical clinic it sponsored. Given the relatively large number of surveys, reading through each of the free-form answers wasn’t practical. Instead, we built a word cloud of the free-form skills responses to get a visual sense of which skills were mentioned most.

[Word cloud of free-form “additional skills” responses]

The word cloud revealed that several individuals reported having merchant and dressmaking-related skills, options that were not included among the original answer options. Going forward, Team Tassy will include these options in future skills questions.

Word clouds are a pretty low-tech approach to data analysis. But they can be really effective, especially for getting quick feedback on what types of answer options you might include on your surveys.
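For anyone curious what building one might look like, here is a minimal sketch using the open-source `wordcloud` Python package (this isn’t the tool we used, just an illustration); the input file of free-form responses is hypothetical.

```python
# A minimal sketch, assuming a hypothetical text file with one
# free-form survey response per line.
import matplotlib.pyplot as plt
from wordcloud import WordCloud

with open("additional_skills.txt") as f:
    text = " ".join(line.strip() for line in f)

cloud = WordCloud(width=600, height=400, background_color="white").generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")   # hide axis ticks; we only want the image
plt.show()
```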

Help yourself to my ideas

I spent too much time and effort at my now defunct company worrying about people stealing my ideas. By the time I was wrapping up Idealistics, I thought about open sourcing the code I paid Github a monthly fee to keep private, only to realize it would have taken a ton of effort to get anyone to care that I was giving my software away for free.

If I had Idealistics to do again, I would have spent my energy spreading my ideas rather than paying to keep them secret.

I think there is a lot of value in being open regardless of one’s industry, but there is especially value in openness in the social sector. We’re supposed to be in the business of solving social problems, after all.

Given the value of openness, and the general rallying cry around nonprofit transparency, I can’t help but wonder why I’ve been coming across so many nonprofits intent on lawyering up to “protect” their intellectual property.

I’ve been doing a lot of contracting work recently. Really interesting stuff, and a bunch of insights that could probably help out a whole slew of social interventions. And I can’t tell you about any of it. I’m contractually obligated not to.

The social sector exists in this weird space between the public and private sector. We run private entities intended for public benefit. In the process we develop proprietary solutions to public problems.

That last sentence makes my head hurt.

We need to make a decision as to what we are, and what we stand for. You can’t have proprietary collective impact. The patent system is designed to allow companies to lock in competitive advantages to singularly reap the benefits of their investments over a period of time. Our investments are supposed to be public. So why the hell are startup social enterprises seeking patents on technological solutions to connect low-income families to social programs? Who benefits from those patents? Certainly not the public. Definitely not the poor.

I’m grateful to be working on an exciting range of contracts. While I’m under contract not to say anything about that work, going forward I’ll certainly be more open about my own ideas, even if I intend to monetize them.

If my ideas are any good, and can actually create real social value, be my guest and help yourself to my ideas.

Hire a Chief Data Officer

I’ve been doing a lot of consulting recently, which has resulted in a several-months-long hiatus from writing on this site. Happily, my greater volume of consulting engagements has given me more opportunities to give people bad (and hopefully some good) advice, which means more content for Full Contact Philanthropy.

Recently I have been reflecting on some particularly bad advice I gave to one of my customers. Over the summer I was hired by a large provider of housing and homeless services to improve the operational speed at which chronically homeless clients were being placed into housing. The project went well, and by the end the executive team was seeing the value data can bring to their organization on an ongoing basis.

The executive director asked me to draft a memo outlining what the organization should look for in an internal analytics hire. Ideally, the executive said, the hire would be able to work both on social outcomes data and on helping the development team improve its use of donor data. I advised the executive against hiring one person to oversee all of the organization’s data needs, as I felt there was value in having specific domain experience (such as a background in homeless services or fundraising) before jumping into an issue-specific data set.

I was wrong.

By telling the executive he should hire two different analysts, I scared him off bringing in more data talent entirely, turning what already looked like a budget stretch (one new hire with a non-trivial skill set) into something completely out of reach.

Furthermore, while domain experience is important, the organization already had sufficient internal domain expertise. The development folks know development really well. And the program team is top notch. What they didn’t have was internal capacity in sifting through the volumes of data the organization collected each day.

Instead of arguing for hiring analysts who intimately know the organization’s core mission, I should have advised a management structure where the development and program teams make data requests to a data team, allowing development and program staff to identify the right questions, and letting the data team (or an individual, to start) focus on answering those questions with the data available.

As I’ve thought about this issue further and gotten a closer look at the data needs of both the program and development sides of nonprofits, I’ve become more convinced that having a Chief Data Officer (someone whose sole responsibility is the data needs of the entire organization) makes a lot of sense.

The idea of a Chief Data Officer has been growing in popularity in the for-profit world, and some nonprofits have had success employing Chief Data Officers as well. However, the idea has not permeated the social sector. Instead, the nonprofits that have employed heads of data tend to be those with more obviously quantifiable interventions, generally nonprofits focused on measuring online engagement, like DoSomething.

However, there is an exciting, and much broader, opportunity for various types of organizations to bring in Chief Data Officers. Regardless of what your organization does, every organization (business, nonprofit, foundation, whatever) traffics in some sort of information. Given the importance of data, not just now but historically as well, a Chief Data Officer is as logical, and essential, a hire as a good director of programs, a director of development, or a Chief Financial Officer.

I’ve complained before that the rhetoric around data in the social sector is too hollow, and the thinking too shallow. Part of the block in moving from concept to action in realizing the value of data is that organizations have not invested sufficiently in figuring out how data works in their managerial structures.

Mario Morino rightly encouraged the sector to think more intelligently about how to manage to outcomes. Managing to outcomes is not just about outcomes reporting software, but investing in people and process. I couldn’t be more excited about the fact that my work is giving me the opportunity to help organizations think more seriously about how to build data cultures. It’s a theme I’m passionate about and plan to expand on more in subsequent posts.

Tying charitable deductions to outcomes

While the jury is out on the effectiveness of social impact bonds (SIBs), the fundamental idea of rewarding investment in effective social interventions makes a lot of sense. That core tenet of social impact bonds is so compelling that I’m surprised such thinking has not spilled into the charitable deduction debate.

Ideally, the charitable deduction allows donors to write off investments in the betterment of society. But with 1.5 million nonprofits in the United States, our definition of public benefit is broad, a point underscored by the clear divide in the types of charities middle-income individuals donate to versus the wealthy.

Borrowing from social impact bonds, I started thinking about a charitable deduction schedule that would allow donors to write off outcomes, rather than our current approach, which limits donors to writing off inputs.

Under this tax deduction scheme, charitable organizations’ deduction rates would be tiered based on the marginal benefit of each additional dollar donated. The marginal benefit component allows the deduction rate to be tempered not just by societal outcomes (for example, Carnegie Mellon University has a good argument that its students create a lot of economic value, present company excluded), but by the effect each additional dollar has on social outcomes. This caveat is similar to GiveWell’s consideration of not just an organization’s effectiveness, but also its room for additional funding.
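To make the idea concrete, here is a purely hypothetical sketch of such a schedule in Python. The tiers, the rates, and the notion of a single “outcome score” standing in for measured marginal benefit are all invented for illustration, not a proposal for actual rates.

```python
# A purely hypothetical tiered deduction schedule. The outcome score
# (a stand-in for measured marginal benefit per dollar) and the rates
# below are invented for illustration only.
def deduction_rate(outcome_score: float) -> float:
    if outcome_score >= 0.8:
        return 1.00   # full write-off for the highest marginal benefit
    if outcome_score >= 0.5:
        return 0.75
    if outcome_score >= 0.2:
        return 0.50
    return 0.25       # minimal deduction for low marginal benefit

donation = 1_000
print(donation * deduction_rate(0.6))   # a $1,000 gift yields a $750 deduction
```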

Tying charitable deductions to outcomes would open up much of the promise of SIBs to all nonprofits, allowing high-functioning nonprofits to market higher deduction rates to potential donors. Obviously such an approach would be fraught with evaluative difficulties, although no more so than SIBs.

I have lamented in the past how our current funding environment rewards nonprofits for investing in marketing over outcomes. A tiered system that assigns deduction rates based on outcomes would better align organizations around maximizing social value. Wasn’t that the point of the charitable deduction in the first place?

Cash Transfer Equivalency Calculator

Closing my company has given me the time to pursue a number of small projects. One of those projects is a concept I wrote about last month called the Cash Transfer Equivalency (CTE). The CTE is a simple investment standard that a program officer or social investor can use to assess whether a social program might deliver more value than simply giving equal amounts of cash away.

To make the CTE easier to use, I wrote a web-based CTE calculator that allows users to enter a program’s cost, the number of people the program intends to serve, and the estimated value of that service to each of the intended beneficiaries. Based on those inputs, the CTE calculator estimates whether the proposed social intervention will provide more value than simply giving money away.

The CTE calculator is an easy-to-use initial assessment tool for grantmaking institutions evaluating new grant opportunities. Importantly, because the CTE translates social value into monetary terms, one could use the CTE calculator to compare two or more unlike funding opportunities.

Example

There isn’t much to the CTE calculator, so if you are so inclined you can skip this quick tutorial and give it a try now. But for clarity’s sake, let’s run through the following example.

Let’s say we are approached to fund a youth-focused musical enrichment event. The potential grantee is requesting $7,500 to hold a one-day concert for low-income kids. Our first step in the CTE calculator is to enter the program cost, in this case $7,500.

[Screenshot: step one, entering the program cost]

The youth concert expects 200 kids to attend. In step two, we enter the expected number of people affected by the program as 200.

[Screenshot: step two, entering the number of people affected]

You’ll notice that in step two we didn’t just enter the number of people; we also needed to fill in the “average value” column. The “average value” is our best guess as to how much each kid would have been willing to pay to attend the concert were the program not provided free of charge. In this case, we put in an estimate of $35 per person.

With those three simple inputs, the calculator computes the CTE and suggests whether the program is worth investing in.

[Screenshot: step three, the calculated CTE]

With our youth concert example, the system calculates a CTE of 0.93. Because the CTE is below 1 (the point of indifference between running the program and giving away an equal amount of cash), the calculator determines that the program is not worth investing in.

More simply, the basic mechanics of the CTE come down to average cost versus expected average value. At a program cost of $7,500 with 200 concert attendees, the average cost per person is $37.50. However, the expected average value we entered was just $35 per youth. Therefore, the program costs more per person, on average, than the value we expect each youth to receive.

This is a pretty straightforward example. Where the CTE calculator gets more interesting is when a program targets more than one recipient group. Using the youth concert example, you could imagine not just calculating the return to the kids, but perhaps their parents as well. The calculator allows you to enter any number of target groups, calculating the CTE for each group as well as a weighted average CTE across groups.
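The arithmetic is simple enough to sketch in a few lines of Python. This is my reading of the method as described above, not the calculator’s actual code; in particular, allocating cost evenly per person across groups is an assumption.

```python
# A sketch of the CTE arithmetic as described above; the even per-person
# cost allocation across groups is an assumption, not the calculator's code.
def cte(program_cost, groups):
    """groups: list of (people_served, avg_value_per_person) tuples."""
    total_people = sum(n for n, _ in groups)
    cost_per_person = program_cost / total_people
    # CTE per group: expected value per person vs. cost per person
    per_group = [value / cost_per_person for _, value in groups]
    # Weighted average CTE, weighting each group by people served
    weighted = sum(n * value for n, value in groups) / program_cost
    return per_group, weighted

# The youth concert: $7,500 for 200 kids valued at $35 each
per_group, weighted = cte(7500, [(200, 35)])
print(per_group, round(weighted, 2))   # [0.933...] 0.93 -- below 1, cash wins
```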

Using the CTE calculator

I wish I had had the CTE calculator when I was working at a financial intermediary making grants to community development corporations in Pittsburgh. The calculator would have allowed me to weed out bad investments more quickly, and, more importantly, would have provided a much-needed standard method for preliminarily assessing the high volume of incoming grant requests.

If I were still working as a grantmaker, I would make the CTE part of our initial grant application assessment. Each application would be assigned a CTE score by an individual program officer. The grants with the highest CTE scores would then go to committee for further consideration.

Because the CTE score hinges on the assumed monetary value per program recipient, the investment committee would likely debate the value assumptions in the model. This is a good thing, and illustrates the CTE method’s strength. Because the CTE score is driven as much by our best guess of the monetary value to beneficiaries as by cost, the CTE forces investment committees to have frank discussions about the value they believe their grantmaking will create.

You can check out the CTE calculator here and use it as you’d like.

Nonprofit consultants, beware of window shoppers

I’m no fan of nonprofit consultants, despite being one myself. But nonprofit consultants are people too, although we’re not always treated as such by the organizations we serve.

As knowledge workers, what we know is what we sell. Yet the courting process for securing work (multiple meetings, requests for proposals, etc.) requires that we disclose our methodologies to potential customers.

I get that outlining approaches to potential customers is a necessary part of the process. It allows both parties to determine whether the consultant is a good fit. But every consultant has stories of laying out a methodology, entertaining a number of questions from excited-sounding staff and board members, only to have those same ideas implemented by another vendor or the organization’s staff.

No hire, no attribution, nothing.

This is a pretty messed up approach, and if you are a nonprofit consulting looky-loo (you know who you are), please stop.

I’m not perfect at avoiding nonprofit consulting window-shoppers, but with some experience under my belt I’ve certainly gotten better at avoiding these organizations. Here are a few tips to avoid being a victim of thought theft.

  1. Qualify customers – Before filling out a request for proposal (RFP) or agreeing to meetings, look up an organization’s 990 on Guidestar and check out their annual revenue. If revenue is tight and the proposed scope looks to be outside their budget, you might have a window shopper on your hands.
  2. Be wary of unsolicited requests for proposals – Organizations are typically required to get more than one bid for a project, even if they have a preferred vendor in mind. I’ve certainly had some luck with organizations sending me RFPs out of the blue, but I’m generally wary of these “opportunities”, as they tend to be fishing expeditions for pre-selected vendors.
  3. Be judicious with your time – Window shoppers have a nasty habit of setting up multiple meetings, wasting your time while sucking you dry of your hard-fought good ideas. Value your time. If you don’t, they won’t. And if a nonprofit is asking for too much face time without any commitment, it might be time to walk.
  4. Ask around – Ask other consultants about nonprofits you are thinking of working with. I’ve avoided some bad contracts by tapping my network.

My tendency, like that of other (good) nonprofit consultants, is to be helpful. I love geeking out on all things social sector. While the nonprofit sector is accustomed to receiving pro bono help, manipulating nonprofit consultants looking for work into offering up their ideas for nothing is contrary to the principles of our do-gooding industry.