Grant applications and grand narratives

By Elizabeth @ 2023-07-02T02:29 (+124)

The Lightspeed application asks: “What impact will [your project] have on the world? What is your project’s goal, how will you know if you’ve achieved it, and what is the path to impact?”

LTFF uses an identical question, and SFF puts it even more strongly (“What is your organization’s plan for improving humanity’s long term prospects for survival and flourishing?”). 

I’ve applied to all three of these at various points, and I’ve never liked this question. It feels like it wants a grand narrative of an amazing, systemic project that will measurably move the needle on x-risk. But I’m typically applying for narrowly defined projects, like “Give nutrition tests to EA vegans and see if there’s a problem”. I think this was a good project. I think this project is substantially more likely to pay off than underspecified alignment strategy research, and arguably has as good a long tail. But when I look at “What impact will [my project] have on the world?” the project feels small and sad. I feel an urge to make things up, and to express far more certainty about far more impact than I believe. Then I want to quit, because lying is bad but listing my true beliefs feels untenable.

I’ve gotten better at this over time, but I know other people with similar feelings, and I suspect it’s a widespread issue (I encourage you to share your experience in the comments so we can start figuring that out).

I should note that the pressure for grand narratives has its good points; funders are in fact looking for VC-style megahits. I think that narrow projects are underappreciated, but for the purposes of this post that’s beside the point: I think many grantmakers are undercutting their own preferred outcomes by using questions that implicitly push for a grand narrative. I think they should probably change the form, but I also think we applicants can partially solve the problem by changing how we interact with the current forms.

My goal here is to outline the problem, gesture at some possible solutions, and create a space for other people to share data. I didn’t think about my solutions for very long; I am undoubtedly missing a bunch, and what I do have still needs workshopping, but it’s a place to start.
 

More on the costs of the question

Pushes away the most motivated people

Even if you only care about subgoal G instrumentally, G may be best accomplished by people who care about it for its own sake. Community building (real building, not a euphemism for recruitment) benefits from knowing the organizer cares about participants and the community as people and not just as potential future grist for the x-risk mines.* People repeatedly recommended a community builder friend of mine apply for funding, but they struggled because they liked organizing for its own sake, and justifying it in x-risk terms felt bad. 

[*Although there are also downsides to organizers with sufficiently bad epistemics.]

Additionally, if G is done by someone who cares about it for its own sake, then it doesn’t need to be done by someone who’s motivated by x-risk. Highly competent, x-risk-motivated people are rare and busy, and we should be delighted by opportunities to take things off their plate.
 

Vulnerable to grift

You know who’s really good at creating exactly the grand narrative a grantmaker wants to hear? People who feel no constraint to be truthful. You can try to compensate for this by looking for costly signals of loyalty or care, but those have their own problems. 

 

Punishes underconfidence

Sometimes people aren’t grifting, they really really believe in their project, but they’re wrong. Hopefully grantmakers are pretty good at filtering out those people. But it’s fairly hard to correct for people who are underconfident, and impossible to correct for people who never apply because they’re intimidated. 

Right now people try to solve the second problem by loudly encouraging everyone to apply to their grant. That creates a lot of work for evaluators, and I think it is bad for the people with genuinely mediocre projects who will never get funding: you’re asking them to burn their time so that you don’t miss someone else’s project. Having a form that allows for uncertainty and modest goals is a more elegant solution.
 

Corrupts epistemics

Not that much. But I think it’s pretty bad if people are forced to choose between "play the game of exaggerating impact" and "go unfunded". Even if the game is in fact learnable, it's a bad use of their time and weakens the barriers to lying in the future. 
 

Pushes projects to grow beyond their ideal scope

Recently I completed a Lightspeed application for a lit review on stimulants. I felt led by the form to create a grand narrative of how the project could expand, including developing a protocol for n-of-1 tests so individuals could tailor their medication usage. I think that having that protocol would be great and I’d be delighted if someone else developed it, but I don’t want to develop it myself. I noticed the feature creep and walked it back before I submitted the form, but the fact that the form pushes this is a cost.

This one isn’t caused by the impact question alone. The questions asking about potential expansion are a much bigger deal, but would also be costlier to change. There are many projects and organizations where “what would you do with more money?” is a straightforwardly important question.
 

Rewards cultural knowledge independent of merit

There’s nothing stopping you from submitting a grant application with the theory of change “T will improve EA epistemics” and no justification past that. I did that recently, and it worked. But I only felt comfortable doing that because I had a pretty good model of the judges and because it was a Lightspeed grant, which explicitly says they’ll ask you if they have follow-up questions. Without either of those I think I would have struggled to figure out where to stop explaining. Probably there are equally good projects from people with less knowledge of the grantmakers, and it’s bad that we’re losing those proposals.

Brainstorming fixes

I’m a grant-applier, not a grant-maker. These are some ideas I came up with over a few hours. I encourage other people to suggest more fixes, and grant-makers to tell us why they won’t work or what constraints we’re not aware of.

Some starting points, pulled from my own recent applications:

- Give percentile outcomes (e.g. your 50th and 95th percentile results) instead of a single confident prediction.
- State a terminal goal without justifying it further (e.g. “improve EA epistemics”) and let grantmakers ask follow-up questions.
- Cite someone the grantmakers respect who wants your results, instead of constructing an impact story you don’t believe.

I hope the forms change to explicitly encourage things like the above list, but I don’t think applicants need to wait. Grantmakers are reasonable people who I can only imagine are tired of reading mediocre explanations of why community building is important. I think they’d be delighted to be told “I’m doing this because I like it, but $NAME_YOU_HIGHLY_RESPECT wants my results” (grantmakers: if I’m wrong please comment as soon as possible).

Grantmakers: I would love it if you would comment with any thoughts, but especially what kinds of things you think people could do themselves to lower the implied grand-narrative pressure on applications. I'm also very interested in why you like the current forms, and what constraints shaped them.

Grant applicants: I think it will be helpful to the grantmakers if you share your own experiences, how the current questions make you feel and act, and what you think would be an improvement. I know I’m not the only person who is uncomfortable with the current forms, but I have no idea how representative I am. 



 


Stuart Buck @ 2023-07-02T22:54 (+19)

I've been a grantmaker (at Arnold Ventures, a $2 billion philanthropy), and I couldn't agree more. Those kinds of questions are good if the aim is to reward and positively select for people who are good at bullshitting. And I also worry about a broader paradox -- sometimes the highest impact comes from people who weren't thinking about impact, had no idea where their plans would lead, and serendipitously stumbled into something like penicillin while doing something else. 

Elizabeth @ 2023-07-06T15:46 (+3)

Do you have any heuristics for identifying grants worth funding despite a lack of obvious path to enormous impact? I imagine "fling money at everyone" is not viable, and "do we like the founder?" has problems of its own.

Stuart Buck @ 2023-07-12T22:17 (+4)

It's all a bit intuitive, but my heuristics were basically: Figure out the general issues that seem worth addressing; find talented people who are already trying to address those issues (perhaps in their spare time) and whose main constraint is capital; and give them more capital (e.g., time and employees) to do even better things (which they will often come up with on their own). 

calebp @ 2023-07-05T01:51 (+10)

I work as a grantmaker and have spent some time trying to improve the LTFF form. I am really only speaking for myself here and not other LTFF grantmakers.

I think this post made a bunch of interesting points, but I am just responding with my quick impressions, mostly where I disagree, as I think that will generate more useful discussion.

Pushes away the most motivated people
I think this is one of the points raised that I find most worrying (if true). I think it would be great to make grants useful for x-risk reduction to people who aren't motivated by x-risk but are likely to do useful instrumental work anyway. I feel a bit pessimistic about being able to identify such people in the current LTFF set-up (though it totally does happen) and feel more optimistic about well-scoped "requests for proposals" and "active grantmaking" (where the funder has a pretty narrow vision for the projects they want to fund and is often approaching grantees proactively or is directly involved in the projects themselves). My best guess is that passive and broad grantmaking (which is the main product of the LTFF) is not the best way of engaging with these people, and we shouldn't optimise this kind of application form for them and should instead invest in 'active' programs.

(I also find it a little surprising that you used community building as an example here. My personal experience is that the majority of productive community building I am aware of has been led by people who were pretty cause-motivated (though I may be less familiar with the less cause-motivated CB efforts that the OP is excited about).)

The grand narrative claim
My sense is that most applicants (particularly ones in EA and adjacent communities) do not consider "what impact will my project have on the world?" to create an expectation of some kind of grand narrative. It's plausible that we are strongly selecting against people who are put off by this question, but I think this is pretty unlikely (e.g. afaik this hasn't been given as feedback before, and the answers I see people give don't generally give off a 'grand narrative vibe'). My best guess is that this is interpreted as something closer to "what are the expected consequences of your project?". Fwiw I do think that people find applying to funders intimidating, but I don't think this question is unusually intimidating relative to other 'explain your project' type questions in the form (or your suggestions).

Confusion around the corrupting epistemics point
I didn't quite understand this point. Is the concern that people will believe that they won't be funded without making large claims and then are put off applying or that the question is indicative of the funders being much more receptive to overinflated claims which results in more projects being run by people with poor epistemics (or something else)?

Linch @ 2023-07-07T09:31 (+7)

I didn't quite understand this point. Is the concern that people will believe that they won't be funded without making large claims and then are put off applying or that the question is indicative of the funders being much more receptive to overinflated claims which results in more projects being run by people with poor epistemics (or something else)?

I interpreted Elizabeth as saying that the form (and other forms like it) will make people believe that they won't be funded without making large claims. They then consequently adopt incorrect beliefs to justify large claims about the value of their projects. In short, a treatment effect, not a selection effect.

Elizabeth @ 2023-07-08T05:31 (+2)

I clarified some of my epistemic concerns.

I think my model might be [good models + high impact vision grounded in that model] > [good models alone + modest goals] > [mediocre model + grand vision], where good model means both reasonably accurate and continually improving based on feedback loops inherent in the project, with the latter probably being more important.  And I think that if you reward grand vision too much, you both select for and cause worse models with less self-correction. 

Elizabeth @ 2023-07-06T15:36 (+2)

Thanks Caleb. I'm heads-down on a short-term project so I can't give a long reply, but I have a few short things.

Raemon offered to do the heavy lifting on why the epistemic point is so important: https://www.lesswrong.com/posts/FNPXbwKGFvXWZxHGE/grant-applications-and-grand-narratives .

What do you think of applicants giving percentile outcomes, stating terminal goals without justification (e.g. "improve EA productivity"), and citing other people to justify why their project is high impact?

Ruby @ 2023-07-07T04:44 (+1)

Is that link correct?

Joseph Lemien @ 2023-07-02T14:35 (+9)

Thanks for posting this.

I can't count the number of times I've faced a question on an application of some sort along the lines of "prove how incredibly impressive you are" or "prove to us how massive the impact of your efforts will be", and I find myself thinking on a much more humble scale.

I wish I could see inside the selection process so I could observe what kinds of different answers people provide.

NickLaing @ 2023-07-02T16:03 (+7)

It's so true: all the social enterprise stuff wants you to describe for them the 1 percent (or 0.1 percent) upper tail of potential positive impact, but frame it like it's almost certainly going to happen (rather than the reality that it 99 percent most likely won't). I'm usually too honest, and when I write overly ambitiously I feel icky...

Raemon @ 2023-07-02T19:42 (+10)

Hmm, have there been applications that are like "what's your 50th percentile expected outcome?" and "what's your 95th percentile outcome?"

Elizabeth @ 2023-07-03T04:14 (+10)

I listed those on an SFF application last year, although I can't remember if they asked for it explicitly. I think it's a good idea.

NickLaing @ 2023-07-03T04:19 (+3)

Such a great idea, love it - never seen that.

I think for EA-style applications that could work well; for other applications it might be hard for many people to grasp.

Seth Herd @ 2023-07-04T19:40 (+7)

The primary problem you mention is exaggerating the importance of your project. That is a fundamental issue with every grant. Every grantmaker wants to fund projects with maximum impact per dollar.

There is an incentive to aggrandize your work, but there's a counterincentive not to bullshit. A lot of the work of reviewing grants is having a well-tuned bullshit detector.

I don't think there's any way around the tension between those two factors. You can change the goalposts, but there's always a goal, and a claim of efficiency in moving toward that goal.

The other issue here is with the intended use of the grant money. If these organizations really only want to fund projects that improve our chances of survival and flourishing, that is their choice. If that's their goal, there has to be a chain of logic for how that is going to happen. Sometimes grantmakers come up with that chain of logic, and so they fund projects like "better understanding health psychology" because they believe accomplishing that will produce a better world. The organizations you mention are trying to be broad by allowing anyone to convince them that their unique project will make the world better with a good $/benefit ratio. This work can't be skipped, but it can be shared by grantmaker and applicant.

Therefore, I'd suggest that they add "if you don't have a grand narrative that's fine; we might have a grand narrative for your work that you're not seeing. Of course it helps your odds if you do have a convincing answer for a way your project achieves our goal (X) with a good cost ratio, in case we don't have one."

My career to date has been mostly funded by US government grants. These do not require a well-thought-out grand narrative, or any other sort of direct causal reasoning about impacts. I believe this is disastrous. It shifts most of the competition to cultural knowledge of the granting agency and the types of individuals who are likely to be reviewers. And by not requiring much explicit logic about likely outcomes and therefore payoff ratio, I believe the government is wasting money like crazy. They effectively fund projects that "sound like good work" to the people already doing similar work, which creates a clique mentality divorced from the actual impact of the funded work.

My experience with EA organization granting processes has been vastly better, primarily based on their focus on the careful payoff logic you seem to be arguing against.

Vasco Grilo @ 2023-07-06T21:20 (+4)

Thanks, Elizabeth!

I just wanted to note (and I am not saying you disagree!) that I think applicants should strive to be as transparent and honest as possible, even at the cost of reducing their own chances of being funded.