
Future of Philanthropy Pt. 3: The Social Scientist

by Adam Sobsey

Lance Potter is the Director of Evaluation for New Profit, a venture philanthropy organization that backs breakthrough social entrepreneurs who are advancing equity and opportunity in America. Prior to joining New Profit in 2011, he was Senior Evaluation Officer at the Bill & Melinda Gates Foundation.

Potter’s milieu is social science; his metier, impact measurement and analysis. His work is guided, in his own words, “by the notion that attention to data and evidence can improve the effectiveness of social programs. Done right, the benefits of measurement can exceed the costs.”

How do we do measurement right — and how does philanthropy do right by measurement? Those are the essential questions in Potter’s field. He addressed them in his 2021 interview with Adapt, the latest in our multipart series, The Future of Philanthropy.

As with every interview in this series, this one has been edited for length and clarity.

 

How are we measuring the impact of philanthropy, and does the way we measure need to change?

Measurement in philanthropy is very, very difficult, because the incentives around our theory of change are all misaligned with respect to measurement. Strategies ultimately turn to measurable outcomes, but there’s a misalignment between how we think about the world and how measurement work actually gets done.

You’ve got a lot of what I’ll call responsive grantmakers, who give grants of $25,000 to $50,000 each. And then they want to measure the effect of [those grants]. Well, they’re pouring the money into organizations [whose operating budgets are] $5 million to $10 million apiece. So those $25,000 grants are very difficult to measure, because attribution is overwhelmed by the number of other context variables in whatever those organizations are producing.

 

Is it easier to measure the impact of larger grants?

The big so-called “strategic philanthropies,” your Gates Foundations and the like, want to see the needle move at the national scale. But the reality is that the amount of money they’re using as the lever is not going to do that. They are aware that although their resources are vast compared to the rest of us, their resources are trivial compared to the actual scale of the social sector in America. The entire corpus of the Gates Foundation would pay for, I believe, one year of public education in California. But they continue to have a strategy that looks at moving the needle on vast pieces of education.

If you’re measuring at the individual grant level, you’re measuring things that are happening in small direct service activities. How those things connect to this larger vision of what the foundation is trying to measure is not always clear. [Foundations] struggle with the role of measurement in their organizations, and with whether they are trying to use measurement to improve their grantmaking, or to add knowledge to the field, or, frankly, just as a communications tool. Evaluation is frequently used as one of the tools in the belt of communications, and looking for wins is how we all operate. I think evaluators understand that, but because philanthropies don’t answer to anyone, their big reward is, effectively, good publicity. And so it’s very difficult for them to say, “We didn’t really learn anything this year, none of our grants played out, but we’re going to keep at it.” They’re committed to changing the world and so they need to demonstrate that they’ve changed the world.

Measurement is much more banal than that in reality. Social scientists do not run around saying we’ve changed the world. So there’s always tension there.

The entire corpus of the Gates Foundation would pay for, I believe, one year of public education in California.

 

How can philanthropies use evaluation and impact measurement more meaningfully, rather than as a way to “prove” wins?

Good philanthropies understand what measurement can do for them. It can help them understand, at the grant level, the variability around approaches: where things are working, and how they’re working. It can help them improve strategy. It can help them identify organizations that are really developing a meaningful kind of success, to try and scale them. It can help them connect larger systems-level initiatives with the systems they’re trying to reach. But as social scientists, we have to use maybe the worst data source in the world, which is people. [Laughter.] People are a terrible data source. We don’t even get to measure their biologics. We have to ask them to talk, and people change their minds a lot. And they don’t know for sure what they believe. They’re variable in so many ways. And so our basic data source is often difficult to interpret.

As social scientists, we have to use maybe the worst data source in the world, which is people.

 

What are some other difficulties in interpretation?

There’s a lot of what we call silver bullet thinking. And this is another large problem with social programming in general. If you’re a young person who is from an economically distressed neighborhood, and of color, and not in a social environment where you have a lot of role models who have been very successful, and in a school that’s underfunded and struggling in the environment that it operates in, [you’re facing] systemic racism in the everyday living of your life. Let’s say I bring you a reading intervention that improves your ability to read by a quarter of a standard deviation. I have taken a laser beam to this huge box of your problems, and I’ve cut out one little spot, but it didn’t actually change that huge box of the barriers that you’re facing. I just drilled one little piece of light through it. We hope that that one little piece of light is enough to make a difference. And we can prove that we taught you to read better. But the difference between being able to read better and overcoming all of those systemic barriers is vast.
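To make “a quarter of a standard deviation” concrete, here is a minimal sketch in Python using entirely simulated reading scores. The numbers, sample sizes, and variable names are invented for illustration; the point is only that a standardized effect size divides the difference in group means by the pooled standard deviation, so an intervention can “work” in this sense while still being a thin slice of what a student faces.

```python
# Minimal sketch: what "a quarter of a standard deviation" means.
# The data are simulated; nothing here comes from a real evaluation.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(loc=500, scale=40, size=200)    # scores without the reading intervention
treatment = rng.normal(loc=510, scale=40, size=200)  # scores with it (true gain: 10 points)

# Pooled standard deviation of the two groups
pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)

# Standardized effect size: mean difference expressed in standard-deviation units.
# By construction the true effect here is 10 / 40 = 0.25 of a standard deviation.
cohens_d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Estimated effect size: {cohens_d:.2f} standard deviations")
```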

When philanthropy makes quote-unquote “big bets” on things, essentially the bet is that that one initiative is going to be robust enough to overcome all of those [barriers]. If you’re the measurement person, what are you going to measure? Are you going to measure whether this reading program teaches kids to read, or whether this reading program is robust enough to overcome all of the barriers that this young person is facing? They’re all braided and interconnected in really, really complex ways. 

Philanthropy is wisely beginning to acknowledge this broad set of influences and forces that are operating on people. And so we’ve seen a great deal of interest in systems entrepreneurship, systems thinking, systems approaches to problems. The challenge of that from a measurement point of view is that these systems are extremely large, and their boundaries, their mechanics, and their nature are not concretely described. And it’s difficult to tell what you’re measuring. We often learn that [a program’s] effect isn’t very large. Generally we don’t find very big effects. And philanthropists get very upset when they learn that the effect isn’t very large, because they’re looking for the win. Well, you’re not going to get the win. You’re not going to change this kid’s entire life by teaching third-grade reading.

 

But then what do you fund and how do you measure it? 

A lot of philanthropy is trying to avoid false concreteness, but it’s also falling back on measuring outputs, just measuring the things that got done rather than the effect of those things, because it’s easier to measure the things that got done. Over the last 40 or 50 years, we moved from trying to count those things done to [evaluating] their effect: short term, and then longer term, and then trying to do it by isolating the effect of that eighth grader’s classroom from all the other influences. That constant narrowing and adding of rigor is where measurement has gone, to the point that we can be very precise, and very, very technically crisp on how much change can be attributed to this one reading intervention only. We do that with statistical design methods that are able to isolate the effect of the program. That’s a powerful tool, but it’s also a very narrow and specific tool. So we’re now moving back towards just funding and measuring those outputs, measuring the things done: campaigns delivered, lawmakers engaged, white papers produced. But what do you predict will be the effect of this white paper? Who do you expect this white paper to land in front of? What do you think the influence of this white paper might be?
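As a rough illustration of the “statistical design methods” Potter mentions, the sketch below simulates a randomized reading program and recovers its effect with an ordinary regression. Everything in it is hypothetical (the data, the 10-point effect, the variable names); the point is that random assignment is what lets the estimated coefficient be attributed to the program rather than to the many other influences on the outcome.

```python
# Sketch of isolating a program effect via randomization (simulated data only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1_000
baseline = rng.normal(500, 40, n)        # prior reading score
treated = rng.integers(0, 2, n)          # random assignment to the reading program
noise = rng.normal(0, 30, n)             # everything else affecting the outcome
score = baseline + 10 * treated + noise  # true program effect: +10 points

df = pd.DataFrame({"score": score, "treated": treated, "baseline": baseline})

# Regressing on the treatment indicator (plus the baseline covariate for precision)
# recovers the isolated program effect, with a confidence interval around it.
fit = smf.ols("score ~ treated + baseline", data=df).fit()
print(fit.params["treated"])          # close to the true +10
print(fit.conf_int().loc["treated"])  # 95% confidence interval
```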

Many philanthropists want to know for their next annual report.

And we have to [ask] those questions for everything if we want to try to get that bigger system view of how this all works. And that takes years. Look at any big social movement of the last 50 or 70 years, and how long they take to come to fruition — whether it’s reducing tobacco use, promoting seatbelt use, gay marriage legislation, or racial equity. Some of these things are nowhere near achieved, and it’s been hundreds of years. But many philanthropists want to know for their next annual report.

 

Should philanthropy focus less on the macro level and more on the micro — on finding proximate organizations and leaders who can effect change, at the local or regional level, that is easier to evaluate and measure?

Measurement people love place-based investing. We love it because we can get in there and define the system. And we can measure things that go on in there and get our heads around that. One of the biggest challenges of understanding how to scale social innovations is teasing apart the innovation itself from the people who are delivering the innovation. If you develop a program in Eastern Kentucky and it is changing lives [there], it may not be a thing that scales. You may have a whole binder full of how to do this in Eastern Kentucky, but if I take your binder and go to Western Nevada, it’s not going to work because I don’t know anybody there.

The social sector has a set of systems that has a powerful overlay of human reality and human culture that you can capture and incorporate when you do place-based work. We have a really technical way of talking about this, which is whether these are fixed effects or random effects. Place-based work is inherently a fixed effect. When we find the effect in Eastern Kentucky, we’re not saying it’ll work in Northern Minnesota. If it were a random effect, we could say this would work anywhere, but those are harder to achieve.
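Here is a rough sketch of the fixed-effects versus random-effects distinction, with simulated multi-site data. The sites, numbers, and variable names are all hypothetical. A fixed-effects specification estimates the program effect for these particular places; a mixed (random-effects) model treats the sites as draws from a broader population of places, which is the assumption that would license generalizing beyond them.

```python
# Sketch: fixed vs. random site effects on simulated multi-site program data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
sites = ["eastern_ky", "western_nv", "northern_mn", "site_d", "site_e"]
rows = []
for site in sites:
    site_shift = rng.normal(0, 5)  # each place's own context
    for _ in range(200):
        treated = rng.integers(0, 2)
        outcome = 50 + site_shift + 3 * treated + rng.normal(0, 10)
        rows.append({"site": site, "treated": treated, "outcome": outcome})
df = pd.DataFrame(rows)

# Fixed effects: site enters as a set of dummies; the estimate speaks only
# to the sites actually observed.
fixed = smf.ols("outcome ~ treated + C(site)", data=df).fit()

# Random effects: sites are modeled as draws from a wider population of places,
# so the program effect is framed as something that should travel.
mixed = smf.mixedlm("outcome ~ treated", data=df, groups=df["site"]).fit()

print(fixed.params["treated"], mixed.params["treated"])
```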

The social sector has a set of systems that has a powerful overlay of human reality and human culture that you can capture and incorporate when you do place-based work.

 

Why can’t we just say, “Our work in Eastern Kentucky was great; now we want to go work somewhere else, but we’re going to have to just build a whole new way of doing it”?

Because of the pressure to change the world, and to do it immediately. And if that’s your motivation, if that’s your driving impetus, then patience is not going to be a virtue. If it took you seven years to make it work in Eastern Kentucky, it’s going to take you another seven years to make it work in Western Nevada. You’re going to get through your entire career and have only done four places. It’s too slow. Finding things that can immediately be scaled to work in nine places is going to be the way to go.

 

So cookie-cutter programs get greenlighted even if they might not work in eight of those places.

Measurement people don’t drive the strategy. We’re there to reflect on how the strategy is working out, and try to learn from it, and try to inform the next strategy. But very frequently, the closer you get to the decision of what the strategy will be, the less the data matter. Other influences take hold. It’s the same in philanthropy as it is in Washington, which is where I live. The closer you get to the policymakers and the actual policy decisions, the further the evidence fades into the distance, and the more the politics matter.

 

When you’re doing your work, measuring the impact of a program, what kind of impact, what kind of data, makes you feel best about the program? What makes you feel like you can actually demonstrate that something’s working or not working, as the case may be?

It’s layers of evidence that tell a coherent story. It starts with a good theory that there’s going to be a relationship between what your program does and system change. The next layer is collecting data that shows us that the program did the things we thought it was going to do, in the way it was intended to do them, so that we stay connected to that theory. We call that implementation with fidelity, meaning it looks like what they wanted it to look like. And we can measure things like how much work got done and what the dose was.

It’s layers of evidence that tell a coherent story.

Layered on top of that, we have a mix of quantitative and qualitative measures of change: the perceptions of people who received services, those for whom the intervention was done; whether they believe that things happened, that they saw it happening around them, that they were engaged with it in a positive way. Measurable change doesn’t happen all at once. People don’t work like that. I like to be able to see that, maybe in the beginning, all we got was changing the perception of the people who were involved. And then over time, we begin to see the effect of the program itself. And you can follow that long enough to see that the effect can persist. So that whole chain then gives me real confidence that something happened here that was connected to what we thought would happen, and we delivered it. It happened.

Measurable change doesn’t happen all at once. People don’t work like that.

 

In the future, what changes in philanthropy would you like to see?

Three things. One is that I would like to see more patience in waiting for outcomes, which entails more commitment to ideas over a longer period of time. Many foundations periodically do what they call a strategic refresh. When you dig into that, you’ll find it means they basically stopped funding what they were funding before, and they switched to funding a new thing. Philanthropy needs to be much more patient and much more measured; much more humble and much more patient. 

Second, the amount of money invested in measurement work is very, very small. Philanthropists should really invest somewhere between seven and 15 percent of their programmatic money in evaluation. On a brand new thing where you have no idea what’s really going on, it could be much more, could be 20 percent. But most are probably investing one percent, maybe not even that much.

Philanthropy needs to be much more patient and much more measured; much more humble and much more patient.

Third, we need to put less pressure on grantees to have quantifiable outcomes. Especially when they’re doing broader system work, there aren’t going to be quantifiable outcomes that are satisfying in a way that’s going to be on the front page of the Washington Post. What I’m talking about is the difference between what I would call attribution and contribution. Funders need to be more accepting of contribution as an adequate measure of value: “contribution” meaning the money you gave didn’t demonstrably change anything, but it was well used. If you ran your program, it happened the way it was intended to, and we can measure that it was done well, then if this needle moved, we can say you contributed to moving that needle. We don’t know whether you moved the needle. [But] philanthropies want to know that the money they gave moved the needle. They want attribution. They feel accountable for their money and they feel responsible for ensuring that their resources are well spent and used to the maximum possible effect. But this can lead to a kind of false concreteness around demonstrating that a specific program moved this needle, which produces a very narrow program model.

So we need a little more grace there as well, more willingness to say: “We think this money was well spent. We think things are on the move. Did we do it? Hard to say. Did we try to help? Yes! Did the people we worked with do a good job? We think so. And people were happy; the community, the people involved, thought it worked. The macro measures look a little better, or maybe they don’t. And what can we learn from that? And where can we go and do it someplace else?” We need to embrace operating in the uncertainty of whether we’re going to succeed. That would be a win. 
