Grant applications and grand narratives

By Elizabeth @ 2023-07-02T02:29 (+123)

This is a crosspost, probably from LessWrong. Try viewing it there.

Stuart Buck @ 2023-07-02T22:54 (+19)

I've been a grantmaker (at Arnold Ventures, a $2 billion philanthropy), and I couldn't agree more. Those kinds of questions are good if the aim is to reward and positively select for people who are good at bullshitting. And I also worry about a broader paradox -- sometimes the highest impact comes from people who weren't thinking about impact, had no idea where their plans would lead, and serendipitously stumbled into something like penicillin while doing something else. 

Elizabeth @ 2023-07-06T15:46 (+3)

Do you have any heuristics for identifying grants worth funding despite a lack of obvious path to enormous impact? I imagine "fling money at everyone" is not viable, and "do we like the founder?" has problems of its own.

Stuart Buck @ 2023-07-12T22:17 (+4)

It's all a bit intuitive, but my heuristics were basically: Figure out the general issues that seem worth addressing; find talented people who are already trying to address those issues (perhaps in their spare time) and whose main constraint is capital; and give them more capital (e.g., time and employees) to do even better things (which they will often come up with on their own). 

calebp @ 2023-07-05T01:51 (+10)

I work as a grantmaker and have spent some time trying to improve the LTFF form. I am really only speaking for myself here and not other LTFF grantmakers.

I think this post made a bunch of interesting points, but I am just responding with my quick impressions, mostly where I disagree, as I think that will generate more useful discussion.

Pushes away the most motivated people
I think this is one of the points raised that I find most worrying (if true). I think it would be great to make grants useful for x-risk reduction to people who aren't motivated by x-risk but are likely to do useful instrumental work anyway. I feel a bit pessimistic about being able to identify such people in the current LTFF set-up (though it totally does happen) and feel more optimistic about well-scoped "requests for proposals" and "active grantmaking" (where the funder has a pretty narrow vision for the projects they want to fund and is often approaching grantees proactively or is directly involved in the projects themselves). My best guess is that passive and broad grantmaking (which is the main product of the LTFF) is not the best way of engaging with these people, and we shouldn't optimise this kind of application form for them; instead we should invest in 'active' programs.

(I also find it a little surprising that you used community building as an example here. My personal experience is that the majority of productive community building I am aware of has been led by people who were pretty cause-motivated (though I may be less familiar with the less cause-motivated CB efforts that the OP is excited about).)

The grand narrative claim
My sense is that most applicants (particularly ones in EA and adjacent communities) do not consider "what impact will my project have on the world?" to create an expectation of some kind of grand narrative. It's plausible that we are strongly selecting against people who are put off by this question but I think this is pretty unlikely (e.g. afaik this hasn't been given as feedback before and the answers I see people give don't generally give off a 'grand narrative vibe'). My best guess is that this is interpreted as something closer to "what are the expected consequences of your project?". Fwiw I do think that people find applying to funders intimidating but I don't think this question is unusually intimidating relative to other 'explain your project' type questions in the form (or your suggestions).

Confusion around the corrupting epistemics point
I didn't quite understand this point. Is the concern that people will believe that they won't be funded without making large claims and then are put off applying or that the question is indicative of the funders being much more receptive to overinflated claims which results in more projects being run by people with poor epistemics (or something else)?

Linch @ 2023-07-07T09:31 (+7)

> I didn't quite understand this point. Is the concern that people will believe that they won't be funded without making large claims and then are put off applying or that the question is indicative of the funders being much more receptive to overinflated claims which results in more projects being run by people with poor epistemics (or something else)?

I interpreted Elizabeth as saying that the form (and other forms like it) will make people believe that they won't be funded without making large claims. They then consequently adopt incorrect beliefs to justify large claims about the value of their projects. In short, a treatment effect, not a selection effect.

Elizabeth @ 2023-07-08T05:31 (+2)

I clarified some of my epistemic concerns.

I think my model might be [good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision], where good model means both reasonably accurate and continually improving based on feedback loops inherent in the project, with the latter probably being more important.  And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction. 

Elizabeth @ 2023-07-06T15:36 (+2)

Thanks Caleb. I'm heads down on a short-term project so I can't give a long reply, but I have a few short notes.

Raemon offered to do the heavy lifting on why the epistemic point is so important https://www.lesswrong.com/posts/FNPXbwKGFvXWZxHGE/grant-applications-and-grand-narratives .

What do you think of applicants giving percentile outcomes, terminal goals without justification (e.g "improve EA productivity"), and citing other people to justify why their project is high impact?

Ruby @ 2023-07-07T04:44 (+1)

Is that link correct?

Joseph Lemien @ 2023-07-02T14:35 (+9)

Thanks for posting this.

I can't count the number of times I've faced a question on an application of some sort along the lines of "prove how incredibly impressive you are" or "prove to us how massive the impact of your efforts will be", and I find myself thinking on a much more humble scale.

I wish I could see inside the selection process so I could observe what kinds of different answers people provide.

NickLaing @ 2023-07-02T16:03 (+7)

It's so true: all the social enterprise stuff wants you to describe the 1 percent (or 0.1 percent) upper tail of potential positive impact, but frame it like it's almost certainly going to happen (rather than the reality that it 99 percent most likely won't). I'm usually too honest, and when I write overly ambitiously I feel icky...

Raemon @ 2023-07-02T19:42 (+10)

Hmm, have there been applications that are like "what's your 50th percentile expected outcome?" and "what's your 95th percentile outcome?"

Elizabeth @ 2023-07-03T04:14 (+10)

I listed those on an SFF application last year, although I can't remember if they asked for it explicitly. I think it's a good idea.

NickLaing @ 2023-07-03T04:19 (+3)

Such a great idea, love it - never seen that.

I think for EA style applications that could work well, for other applications it might be hard for many people to grasp.

Seth Herd @ 2023-07-04T19:40 (+7)

The primary problem you mention is exaggerating the importance of your project. That is a fundamental issue with every grant. Every grantmaker wants to fund projects with maximum impact per dollar.

There is an incentive to aggrandize your work, but there's a counterincentive to not bullshit. A lot of the work of reviewing grants is having a well-tuned bullshit detector.

I don't think there's any way around the tension between those two factors. You can change the goalposts, but there's always a goal, and a claim of efficiency in moving toward that goal.

The other issue here is the intended use of the grant money. If these organizations really only want to fund projects that improve our chances of survival and flourishing, that is their choice. If that's their goal, there has to be a chain of logic for how that is going to happen. Sometimes grantmakers come up with that chain of logic themselves, and so they fund projects like "better understanding health psychology" because they believe accomplishing that will produce a better world. The organizations you mention are trying to be broad by allowing anyone to convince them that their unique project will make the world better with a good $/benefit ratio. This work can't be skipped, but it can be shared between grantmaker and applicant.

Therefore, I'd suggest that they add "if you don't have a grand narrative that's fine; we might have a grand narrative for your work that you're not seeing. Of course it helps your odds if you do have a convincing answer for a way your project achieves our goal (X) with a good cost ratio, in case we don't have one."

My career to date has been mostly funded by US government grants. These do not require a well thought out grand narrative, or any other sort of direct causal reasoning about impacts. I believe this is disastrous. It shifts most of the competition to cultural knowledge of the granting agency and the types of individuals who are likely to be reviewers. And by not requiring much explicit logic about likely outcomes and therefore payoff ratio, I believe the government is wasting money like crazy. They effectively fund projects that "sound like good work" to the people already doing similar work, which creates a clique mentality divorced from actual impact of the funded work. 

My experience with EA organization granting processes has been vastly better, primarily based on their focus on the careful payoff logic you seem to be arguing against.

Vasco Grilo @ 2023-07-06T21:20 (+4)

Thanks, Elizabeth!

I just wanted to note (and I am not saying you disagree!) I think applicants should strive to be as transparent and honest as possible, even at the cost of reducing their own chances of being funded.