Issues with centralised grantmaking
By MathiasKB @ 2022-04-04T10:45 (+149)
Recently someone made a post expressing their unease with EA's recent wealth. I feel uncomfortable too. The primary reason is that a dozen people are responsible for granting out hundreds of millions of dollars, and, as smart and hardworking as these people are, they will have many blind spots. I believe there are other grantmaking structures that should supplement our current model of centralised grantmaking, as they would reduce these blind spots and get us closer to an optimal allocation of resources.
In this post I will argue:
- That we should expect centralised grantmaking to lead to suboptimal allocation of capital.
- That there exist other grantmaking structures that would get us closer to the best possible allocation.
Issues with centralised funding
Just as the USSR's economic planners struggled to determine the correct price of every good, I believe EA grantmaking departments will struggle too. Grantmakers have imperfect information! No matter how smart the grantmaker, they can't possibly know everything.
To overcome their lack of omniscience grantmakers must rely on heuristics such as:
- Is there someone in my network who can vouch for this person/team?
- Do they have impressive backgrounds?
- Does their theory of change align with my own?
These heuristics can be perfectly valid for grantmakers to use, and they may result in the best allocation achievable given their limited information. But the heuristics are biased and result in a sub-optimal allocation compared to what could theoretically be achieved with perfect information.
For example, people who have spent significant time in EA hubs are more likely to be vouched for by someone in the grantmaker's network. Having attended an Ivy League university is a great signal that someone is talented, but there is a lot of talent that didn't attend one.
My issue is not that grantmakers use these proxies. My issue is that if all of our grantmaking uses the same proxies, then a great deal of talented people with great projects that should have been funded will be overlooked. I'm not sure about this, but I imagine that some complaints about EA's perceived elitism stem from this. EA grantmakers are largely cut from the same cloth, live in the same places, and have similar networks. Two anti-virus systems that detect the same 90% of viruses are no more useful than a single anti-virus system; two uncorrelated systems will instead detect 99% of all viruses. Similarly, we should strive for our grantmakers' biases to be uncorrelated if we want the best allocation of our capital.
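To make the arithmetic concrete, here is a minimal sketch of how detection rates combine when the systems' misses are independent (the 90%/99% figures above assume exactly that independence):

```python
# Chance that at least one reviewer/system catches something, assuming
# each one's misses are independent of the others'.
def combined_detection_rate(individual_rates):
    miss_all = 1.0
    for rate in individual_rates:
        miss_all *= (1 - rate)
    return 1 - miss_all

print(combined_detection_rate([0.9, 0.9]))  # ~0.99 when the two are uncorrelated
print(combined_detection_rate([0.9]))       # 0.90 -- a second, perfectly correlated
                                            # system adds nothing beyond this
```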
In the long run, overreliance on these proxies can also lead to bad incentives and increased participation in zero-sum games such as pursuing expensive degrees to signal talent.
We shouldn't expect our current centralised grantmaking to be optimal in theory, and I don't think it is in practice either. But fortunately I think there's plenty we can do to improve it.
What we can do to improve grantmaking
The issue with centralised grantmaking is that it operates off imperfect information. To improve grantmaking we need to take steps to introduce more information into the system. I don't want to propose anything particularly radical. The system we have in place is working well, even if it has its flaws. But I do think we should be looking into ways to supplement our current centralised funding with other forms of grantmaking that have other strengths and weaknesses.
Each new type of grantmaking and grantmaker will spot talent that other grantmaking programs would have overlooked. Combined, they create a more accurate and robust ecosystem of funding.
The FTX Future Fund's regranting programme is a great example of the type of supplementary grantmaking structure I think we should be experimenting with. I feel slightly queasy that their system for selecting new grantmakers may perpetuate the biases of the current grantmakers. But I don't want to let the perfect be the enemy of the good, and their regranting programme is yet another reason I'm so excited about the FTX Future Fund.
Below are a few off-the-cuff ideas that could supplement our current centralised structure:
- Quadratic funding (a minimal sketch of the matching rule follows this list)
- Grantmaker rotation system
- Regranting programmes
- Incubator programs to discover projects and talent worth funding
- More grantmakers
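As a rough illustration of the first idea, here is a minimal sketch of the standard quadratic funding matching rule. In practice the match is scaled to the size of the matching pool; the numbers below are toy examples.

```python
import math

def quadratic_funding_match(contributions):
    """Subsidy owed from the matching pool under the standard QF rule:
    a project's total funding is (sum of sqrt(contribution))^2."""
    raw_total = sum(contributions)
    qf_total = sum(math.sqrt(c) for c in contributions) ** 2
    return qf_total - raw_total

# Broad support attracts a larger match than one big donor giving the same amount.
print(quadratic_funding_match([1.0] * 100))  # 100 donors of $1 -> $9,900 match
print(quadratic_funding_match([100.0]))      # 1 donor of $100  -> $0 match
```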
Hundreds of people spent considerable time writing applications to the FTX Future Fund's first round of funding. It seems inefficient to me that there aren't more sources of funding looking over these applications and funding the projects they think look the most promising.
Given that many people are currently receiving answers to their FTX grant applications, I think the timing of this post is unfortunate. I worry that our judgement will be clouded by emotions over whether we received a grant, and if we didn't, whether we approved of the reasoning and so forth. My goal is not to criticise our current grantmakers. I think they are doing an excellent job considering their constraints. My goal is instead to point out that it's absurd to expect them to be superhuman and somehow correctly identify every project worth funding!
No grantmaker is superhuman, but we should strive for a grantmaking ecosystem that is.
Stefan_Schubert @ 2022-04-04T11:29 (+59)
One issue is that decentralised grant-making could increase the risk that projects that are net negative get funding, as per the logic of the unilateralist's curse. The risk of that probably varies with cause area and type of project.
My hunch is that many people have a bit of an intuitive bias against centralised funding; e.g. because it conjures up images of centralised bureaucracies (cf. the reference to the USSR) or appears elitist. I think that in the end it's a tricky empirical question and that the hypothesis that relatively centralised funding is indeed best shouldn't be discarded prematurely.
I should also say that how centralised or coordinated grant-makers are isn't just a function of how many grant-makers there are, but also of how much they communicate with each other. There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. by the right kinds of coordination.
MichaelPlant @ 2022-04-05T09:43 (+21)
Right, but the unilateralist's curse is just a pro tanto reason not to have dispersed funding. It's something of a false positive (funding stuff that shouldn't get funded) but that needs to be considered against the false negatives of centralised funding (not funding stuff that should get funded). It's not obvious, as a matter of conjecture, which is larger.
Stefan_Schubert @ 2022-04-05T09:47 (+10)
Yes, but it was a consideration not mentioned in the OP, so it seemed worth mentioning.
Ivy_Mazzola @ 2022-04-06T19:20 (+15)
To be honest, the overall (including non-EA) grantmaking ecosystem is not so centralized that people can't get funding for possibly net-negative ideas elsewhere. Especially given they have already put work in, have a handful of connections, or will be working in a sort of "sexy" cause area like AI that even some rando UHNWI would take interest in.
Given that, I don't think that keeping grantmaking very centralized yields enough of a reduction in risk that it is worth protecting centralized grantmaking on that metric. And frankly, sweeping such risky applications under the rug hoping they disappear because they aren't funded (by you, that one time) seems a terrible strategy. I'm not sure that is what is effectively happening, but if it is:
I propose a 2 part protocol within the grantmaking ecosystem to reduce downside risk:
1. Overt feedback from grantmakers in the case that they think a project is potentially net-negative.
2. To take it a step further, EA could employ someone whose role it is to try to actively sway a person from an idea, or help mitigate the risks of their project if the applicants affirm they are going to keep trying.
Imagine, as an applicant, receiving an email saying:
"Hello [Your Name],
Thank you for your grant application. We are sorry to bear the bad news that we will not be funding your project. We commend you on the effort you have already put in, but we have concerns that there may be great risks to following through and we want to strongly encourage you to consider other options.
We have CC'ed [name of unilateralist's curse expert with domain expertise], who is a specialist in cases like these who contracts with various foundations. They would be willing to have a call with you about why your idea may be too risky to move forward with. If this email has not already convinced you, we hope you consider scheduling a call on their [calendly] for more details and ideas, including potential risk mitigation.
We also recommend you apply for 80k coaching [here]. They may be able to point you toward roles that are just as good or a better fit for you, but with no big downside risk and with community support. You can list us as a recommendation on your coaching application.
We hope that you do not take this too personally as this is not an uncommon reason to withhold funding (hopefully evidenced by the resources in place for such cases), and we hope to see you continuing to put your skills toward altruistic efforts.
Best,
[Name of Grantmaker]"
Should I write a quick EA forum post on this 2 part idea? (Basically I'll copy-paste this comment and add a couple paragraphs). Is there a better idea?
I realize that email will look dramatic as a response to some, but it wouldn't have to be sent in every "cursed case". I'm sure many applications are rather random ideas. I imagine that a grantmaker could tell by the applicants' resumes and their social positioning how likely the founding team are to keep trying to start or perpetuate a project.
I think giving this type of feedback when warranted also reflects well on EA. It makes EA seem less of an ivory tower/billionaire hobby and more of a conversational and collaborative movement.
*************************************
The above is a departure from the point of the post. FWIW, I do think the EA grantmaking ecosystem is so centralized that people who have potentially good ideas which stem from a bit of a different framework than those of typical EA grantmakers will struggle to get funding elsewhere. I agree decentralizing grantmaking to some extent is important, and I have my reasoning here.
konrad @ 2022-04-13T09:27 (+2)
tl;dr please write that post
I'm very strongly in favor of this level of transparency. My co-founder Max has been doing some work along those lines in coordination with CEA's community health team. But if I understand correctly, they're not that up front about why they're reaching out. Being more "on the nose" about it, paired with a clear signal of support, would be great because these people are usually well-meaning and can struggle to parse ambiguous signals. Of course, that's a question of qualified manpower - arguably our most limited resource - but we shouldn't let our limited capacity for immediate implementation stand in the way of inching ever closer to our ideal norms.
MathiasKB @ 2022-04-04T11:47 (+14)
I completely agree with this actually. I think concern over the unilateralist's curse is a great argument in favour of keeping funding central, at least for many areas. I also don't feel particularly confident that attempts to spread out or democratize funding would actually lead to net-better projects.
But I do think there is a strong argument in favour of experimenting with other types of grantmaking, seeing as we have identified weaknesses in the current form which could potentially be alleviated.
I think the unilateralist's curse can be avoided if we keep our experiments with other types of grantmaking out of hazardous funding domains.
evelynciara @ 2022-04-06T04:37 (+31)
Actually, a simple (but perhaps not easy) way to reduce the risks of funding bad projects in a decentralized system would be to have a centralized team screen out obviously bad projects. For example, in the case of quadratic funding, prospective projects would first be vetted to filter out clearly bad projects. Then, anyone using the platform would be able to direct matching funds to whichever of the approved projects they like. As an analogy, Impact CoLabs is a decentralized system for matching volunteers to projects, but it has a centralized screening process with somewhat rigorous vetting criteria.
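A minimal sketch of that two-stage flow, with entirely hypothetical project names and an entirely hypothetical screening rule:

```python
# Hypothetical two-stage flow: central screening first, then anyone may
# direct funds, but only toward proposals that survived screening.

def screen(proposals, is_clearly_bad):
    return [p for p in proposals if not is_clearly_bad(p)]

def allocate(approved, donor_choices):
    # donor_choices: {donor: {project: amount}}; choices for unapproved projects are ignored
    totals = {project: 0.0 for project in approved}
    for choices in donor_choices.values():
        for project, amount in choices.items():
            if project in totals:
                totals[project] += amount
    return totals

proposals = ["bednet distribution", "risky gain-of-function prize", "forecasting platform"]
approved = screen(proposals, is_clearly_bad=lambda p: "risky" in p)
print(allocate(approved, {"alice": {"bednet distribution": 50.0},
                          "bob": {"risky gain-of-function prize": 500.0}}))
# -> {'bednet distribution': 50.0, 'forecasting platform': 0.0}
```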
Yonatan Cale @ 2022-04-06T16:00 (+4)
(Just saying I did lots of the vetting for CoLabs, and I think it would be better if our screening were totally transparent instead of hidden, though I don't speak for the entire team)
Linda Linsefors @ 2022-04-10T11:47 (+3)
Yes! Exactly!
If you want a system to counter the unilateralist's curse, then design a system with the goal of countering the unilateralist's curse. Don't rely on an unintended side effect of a coincidental system design.
Linda Linsefors @ 2022-04-10T11:38 (+4)
I don't think there is a negative bias against centralised funding in the EA network.
I've discussed funding with quite a few people, and my experience is that EAs like experts and efficiency, which matches well with centralised funding, at least in theory. I have never heard anyone compare it to the USSR or anything similar before.
Even this post is not against centralised funding. The author is just arguing that any system has blind spots, and we should have other systems too.
Brendon_Wong @ 2022-04-05T19:38 (+3)
While it's definitely a potential issue, I don't think it's a guaranteed issue. For example, with a more distributed grantmaking system, grantmakers could agree to not fund projects that have consensus around potential harms, but fund projects that align with their specific worldviews that other funders may not be interested in funding but do not believe have significant downside risks. That structure was part of the initial design intent of the first EA Angel Group (not to be confused with the EA Angel Group that is currently operating).
Stefan_Schubert @ 2022-04-05T19:46 (+3)
Yes, cf. my ending:
There might be ways of getting many of the benefits of decentralisation while reducing the risks, e.g. by the right kinds of coordination.
Brendon_Wong @ 2022-04-05T19:57 (+1)
I see, just pointing out a specific example for readers! You mention the "hypothesis that relatively centralised funding is indeed best shouldn't be discarded prematurely." Do you think it's concerning that EA hasn't (to my understanding) tried decentralized funding at any scale?
Stefan_Schubert @ 2022-04-05T20:12 (+4)
I haven't studied EA grant-making in detail so can't say with any confidence, but if you ask me I'd say I'm not concerned, no.
freedomandutility @ 2022-04-04T14:30 (+3)
One idea I have:
Instead of increasing the number of grantmakers, which would increase the number of altruistic agents and increase the risks from the unilateralist's curse, we could work on ways for our grantmakers to have different blind spots. The simplest approach would be to recruit grantmakers from different countries, academic backgrounds, etc.
That being said, I am still in favour of a greater number of grantmakers, but in areas unrelated to AI Safety and biosecurity, so that the risks from the unilateralist's curse are much smaller - such as global health, development, farmed animal welfare, promoting evidence-based policy, promoting liberal democracy, etc.
Charles He @ 2022-04-04T16:31 (+32)
I'm not sure this comment is helping, but I don't agree with this post.
- Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
- Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution, well-aligned competent grantees are critical. Also, success of the project is only one consideration (deploying money has effects on the EA space and also into the outside world, maybe with lasting effects that can be hard to see).
- Once you solve the above problems, which benefit from a small number of grant makers, there are classes of projects into which you can deploy a lot of money (AMF, big science grants, or CSET).
- The above response doesn't cover all kinds of EA projects, like the development of people, or nascent smaller projects that are important. To address this, outreach is a focus and grant makers are often generous with small grants.
- Grant makers aren't just passively gatekeeping money, just saying yes or no to proposals. There's an extremely important and demanding role that grant makers perform (that might be unique to EA) where they develop whole new fields and programmes. So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
- The post doesn't mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many, times. (Of course this can be poorly executed, capture, etc. is possible, but someone I know is perceptive and hasn't seen evidence of this.)
- I'm not sure I'm wording this well, but inferential distance can be vast. I find it difficult to even "see" how better people are better than me. It's hard to understand this; you sort of have to experience it. To give an analogy, an Elo 1800 chess player can beat me, and an Elo 2400 chess player can beat that person. In turn, an Elo 2800 player can effortlessly beat those people. When being outplayed in this way, communication is literally impossible: I wouldn't understand what is going on in a game between me and the Elo 1800 player, even if they explained everything, move by move. In the same way, the very best experts in a field have deep and broad understanding, so they can make large, correct inferential leaps very quickly. I think this should be appreciated. I don't think it's unreasonable that EA can get the very best experts in the world and that they have insights like this. This puts constraints on the nature and number of grant makers who need to communicate and coordinate with these experts, and grantmakers themselves may have these qualities.
I think someone might see a large amount of money and a small number of people deciding where it goes. They might feel that seems wrong.
But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and have specifically set up these systems. We've seen two instances of this now. The reason why there is money at all is because these structures have been set up successfully.
I'm qualified and well positioned to give the perspective above. I'm someone who has benefitted and gotten direct insights from grant makers, and probably seen large funding offered. At the same time, I don't have this money. Due to the consequences of my actions, I've removed myself from the EA projects gene pool. I'm sort of an EA Darwin award holder. So I have no personal financial/project motivation to defend this thing if I thought it was bad.
Brendon_Wong @ 2022-04-05T19:50 (+7)
Separate from any individual grant, a small number of grant makers have unity and serve critical coordination purposes, especially in design of EA and meta projects, but in other areas as well.
There are ways to design centralized, yet decentralized grantmaking programs. For example, regranting programs that are subject to restrictions, like not funding projects that some threshold of grantmakers/other inputs consider harmful.
Can you specify what "in design of EA and meta projects" means?
Most of the hardest decisions in grant making require common culture and communicating private, sensitive information. Ideas are worth less than execution, well-aligned competent grantees are critical. Also, success of the project is only one consideration (deploying money has effects on the EA space and also into the outside world, maybe with lasting effects that can be hard to see).
EA has multiple grantmakers right now, and lots of people that are aware of various infohazards, and it doesn't seem to me like the communication of private, sensitive information has been an issue. I'm sure there's a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information) but I don't think we're close to that threshold.
I think the perception of who is a well-aligned, competent grantee can vary by person. That's more of a reason to have more decentralization in grantmaking. Also, the forecasting of effects can vary by person, and having this be centralized may lead to failures to forecast certain impacts accurately (or at all).
The post doesn't mention how advisers and peripheral experts, in and outside of EA, are used. Basically, key information to inform grant making decisions is outsourced, in the best sense, to a diverse group of people. This probably expands grant making capacity many, many, times. (Of course this can be poorly executed, capture, etc. is possible, but someone I know is perceptive and hasn't seen evidence of this.)
My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.
But what if the causal story is the exact opposite of this intuition? The people who have donated this money seem to be competent, and have specifically set up these systems. We've seen two instances of this now. The reason why there is money at all is because these structures have been set up successfully.
There have also been large amounts of funds granted with decentralized grantmaking; see Gitcoin's funding of public goods as an example.
Charles He @ 2022-04-06T02:23 (+8)
These are good questions.
So this is getting abstract and outside my competency, I'm basically LARPing now.
I wrote something below that seems not implausible.
not funding projects that some threshold of grantmakers/other inputs consider harmful.
EA has multiple grantmakers right now, and lots of people that are aware of various infohazards, and it doesn't seem to me like the communication of private, sensitive information has been an issue. I'm sure there's a threshold at which this would fail (perhaps if thousands of people were all involved with discussing private, sensitive information) but I don't think we're close to that threshold.
I didn't mean infohazards or downsides.
This is about intangible characteristics that seem really important in a grantee.
To give intuition, I guess one analogy is hiring. You wouldn't hire someone off a LinkedIn profile, there's just so much "latent" or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.
This is important because if you went in big for another CSET, or something that had to start in the millions, you better know the people, the space super well.
I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people.
My sense is that this is still fairly centralized and capacity constrained, since this only engages a very small fraction of the community. This stands in contrast to a highly distributed system, like EAs contributing to and voting in the FTX Project Ideas competition, which seems like it surfaced both some overlap and some considerable differences in opinion on certain projects.
I guess this is fair, that my answer is sort of kicking the can. More grant makers is more advisers too.
On the other hand, I think there's two other ways to look at this:
- Let's say you're in AI safety or global health,
- There may only be like say about 50 experts in malaria or agents/theorems/interpretability. So it doesn't matter how large your team is, there's no value getting 1000 grantmakers if you only need to know 200 experts in the space.
- Another point is that decentralization might make it harder to use experts, so you may not actually get deep or close understanding to use the expert.
This answer is pretty abstract and speculative. I'm not sure I'm saying anything above noise.
Can you specify what "in design of EA and meta projects" means?
Let's say Charles He starts some meta EA service, let's say an AI consultancy, "123 Fake AI".
Charles's service is actually pretty bad: his methods are obscure, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches, rewards his friends, etc., so any normal individual raising issues is shouted down.
Someone has to kibosh this, and a set of unified grant makers could do this.
Brendon_Wong @ 2022-04-06T03:00 (+7)
This is about intangible characteristics that seem really important in a grantee.
To give intuition, I guess one analogy is hiring. You wouldn't hire someone off a LinkedIn profile, there's just so much "latent" or unknown information and fit that matters. To solve this problem, people often have pretty deep networks and do reference checks on people.
This is important because if you went in big for another CSET, or something that had to start in the millions, you better know the people, the space super well.
I think this means you need to communicate well with other grant makers. For any given major grant, this might be a lot easier with 3-5 close colleagues, versus a group of 100 people.
I see! Interestingly there are organizations, like DAOs, that do hiring in a decentralized manner (lots of people deciding on one candidate). There probably isn't much efficacy data on that compared to more centralized hiring, but it's something I'm interested in knowing.
I think there are ways to assess candidates that can be less centralized, like work samples, rather than reference checks. I mainly use that when hiring, given it seems some of the best correlates of future work performance are present and past work performance on related tasks.
If sensitive info matters, I can see smaller groups being more helpful, I guess I'm not sure the degree to which that's necessary. Basically I think that public info can also have pretty good signal.
So it doesn't matter how large your team is, there's no value getting 1000 grantmakers if you only need to know 200 experts in the space.
That's a good point! Hmm, I think that does go into interesting and harder to answer questions like whether experts are needed/how useful they are, whether having people ask a bunch of different subject matter experts that they are connected with (easier with a more decentralized model) is better than asking a few that a funder has vetted (common with centralized models), whether an expert interview that can be recorded and shared is as good as interviewing the expert yourself, etc., some of which may be field-by-field.
Someone has to kibosh this, and a set of unified grant makers could do this.
Is there a reason a decentralized network couldn't also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.
Charles He @ 2022-04-06T03:12 (+2)
Is there a reason a decentralized network couldn't also do this? If it turns out that there are differing views, it seems that might be a hard judgement to make, whether in a centralized model or not.
So this is borderline politics at this point, but I would expect that a malign agent could capture or entrench in some sort of voting/decentralized network more easily than in any high-quality implementation of an EA grant making system (e.g., see politicians/posturing).
(This is a little spicy and there are maybe some inferential leaps here, but) a good comment related to the need for centralization comes from what I think are very good inside views on ETH development.
In ETH development, it's clear how centralized decision-making de-facto occurs, for all important development and functionality. This is made by a central leadership, despite there technically being voting and decentralization in a mechanical sense.
That's pretty telling since this is like the canonical decentralized thing.
Your comments are really interesting and important.
I guess that public demand for my own personal comments is low, and I'll probably no longer reply, feel free to PM!
Linda Linsefors @ 2022-04-11T16:43 (+3)
Let's say Charles He starts some meta EA service, let's say an AI consultancy, "123 Fake AI".
Charles's service is actually pretty bad: his methods are obscure, and everyone suspects Charles of gatekeeping and crowding out other AI consultancies. This squatting is harmful.
Charles sort of entrenches, rewards his friends, etc., so any normal individual raising issues is shouted down.
Someone has to kibosh this, and a set of unified grant makers could do this.
I don't understand your model of crowding out. How exactly are Charles and his friends shouting everyone down? If everyone suspects 123 Fake AI to be bad, it will not be hard to get funding to set up a competing service.
In a centralised system, Charles only has to convince the unified grantmakers that he is better to stay on top. In a decentralised system, he has to convince everyone.
As far as I can tell, EA grantmakers and leadership are overly worried about crowding-out effects. They don't want to give money to a project if there might be a similar but better funding option later, because they think funding the first will crowd out the latter. But my experience from the other side (applying and talking to other applicants) is that the effect is the complete opposite. If you fund a type of project, others will see that this is the type of project that can be funded, and you'll get more similar applications.
Charles He @ 2022-04-14T12:12 (+2)
Ok, so either you have a service funded by EA money that claims to support EAs, or one not funded by EA money that claims to support EAs.
(Off topic: if it's not funded by EA money, this is a yellow flag. There are many valuable services, like coaching and mental health support, targeting EAs. But it's good to be skeptical of a commercial service that seems to try hard to aim at an EA audience: why isn't it successful in the real world?)
The premise of my statement is that you have an EA service funded by EA money. There are many issues if this is done poorly.
Often, the customers/decision makers (CEOs) are sitting ducks because they don't know the domain being offered (law/ML/IT/country expertise or what have you) very well. At the same time, they aren't going to pass up a service that is free or subsidized by EA money, even more so a service with the imprimatur of EA funds.
This subsidized service and money gives a toehold to bad actors. One can perform a lot of mischief and put down competitors with a little technical skill and a lot of brashness and art. (I want to show, not tell, but this is costly and I don't need to become a dark thought or something.)
I think there are subtler issues. Like, once you start off with a low funding environment and slowly raise funding bit by bit, until you get first entry, this is sort of perfectly searching the supply curve for adverse selection.
But really, your response/objection is about something else.
There's a lot of stuff going on, but I think it's fair to say I was really pointing out one pathology specifically (of a rainbow of potential issues just on this one area). This wasn't some giant statement about the color and shape of institutional space in general.
Charles He @ 2022-04-14T15:32 (+2)
Ok, my above comment is pretty badly written; I'm not sure I'm right, and if I am right, I don't think it's for the reason stated. Linda may be right, but I don't agree.
In particular, I don't answer this:
"In a centralised system, Charles only has to convince the unified grantmakers that he is better to stay on top. In a decentralised system, he has to convince everyone."
I'm describing a situation of bad first movers and malign incentives, because this is what should be most concerning in general to EAs.
I think an answer is that actually, to start something, you shouldn't have to convince everyone in a decentralized system. That seems unworkable and won't happen. Instead, the likely outcome is that you only need to convince enough people to get seed funding.
This isn't good, because you have the same adverse-selection or self-selection problems as in my comment above. I think that for many services, first-mover/lock-in effects are big and (as mentioned, but not really explained) there are malign incentives, where people can entrench and principled founders aren't willing to wrestle in the mud (because their opportunity costs are higher or the adversarial skills are disjoint from good execution of the actual work).
PeterSlattery @ 2022-04-05T20:48 (+6)
(on phone again - I really need to change this wake-up routine!)
This was helpful. Alongside further consideration of risks, it has made me update to thinking about an intermediate approach. Will be interested to hear what people think!
This approach could be a platform like Kickstarter that is managed and moderated by EA funders. It would be a home for projects that fall in the gap between those good enough to be funded centrally by EA orgs and those judged best never to fund.
For instance, if you submit to FTX and they think you had a good idea but they weren't quite sure you could pull it off, or that it wasn't high value relative to competitors, then you get the opportunity to rework the application into a funding request for this platform.
It then lives there so that others can see it and support it if they want. Maybe your local community members know you better or there is a single large donor who is more sympathetic to your theory of change and together these are sufficient to give you some initial funding to test the idea.
Having such a platform therefore helps aggregate interesting projects and helps individuals and organisations find and support them. It also reduces the effort involved in seeking funding by reducing it to something closer to submitting a single application.
It addresses several of the issues raised in the post and elsewhere without much additional risk and also provides a better way to do innovation competitions and store and leverage the ideas.
Charles He @ 2022-04-06T02:17 (+4)
(I'm just writing fan fiction here, I don't know much about your project, this is like "discount hacker news" level advice. )
This seems great and could work!
I guess an obvious issue is "adverse selection". You're getting proposals that couldn't make the cut, so I would be concerned about the quality of the pool of proposals.
At some point, average quality might be too low for viability, so the fund can't sustain itself or justify resources. Related considerations:
- Adverse selection probably gets worse the more generous FTX or other funders get
- Related to the above, I guess it's relatively common for funders to be generous with smaller starter grants, so the niche might be particularly crowded.
- Note that many grant makers ask for revise-and-resubmits; the process is relationship-focused, not grant-focused.
Note that adverse selection often happens on complex, hard-to-see characteristics. E.g. people are hucksters asking for money for a business, the cause area is implausible and this is camouflaged, or the founding team is bad or misguided and this isn't observable from their resume.
Adverse selection can get to the point it might be a stigma, e.g. good projects don't even want to be part of this fund.
This might be perfectly viable and I'm wrong. Another suggestion that would help is to have a different angle or source of projects besides those "not quite over the line" at FTX/Open Phil.
Linda Linsefors @ 2022-04-11T16:26 (+2)
The chess analogy doesn't work. We don't have grantmaking experts in the same way we have chess experts.
Expertise is created by experience coupled with high-quality feedback. This type of expertise exists in chess, but not so much in grantmaking. EA grantmaking is not old enough to have experts. This is extra true in longtermist grantmaking, where you don't get true feedback at all but have to rely on proxies.
I'm not saying that there are no differences in relevant skills. Being generally smart and having related knowledge is very useful in areas where no one is an expert. But the level of skill you seem to be claiming is not believable. And if they have convinced themselves of that level of superiority, that's evidence of groupthink.
Multiple grantmakers with different heuristics will help develop expertise, since this means that we can compare different strategies, and sometimes a grantmaker gets to see what happens to projects they rejected that got funding somewhere else.
So grant makers fund and build institutions to create and influence generations of projects. This needs longevity and independence.
I agree, but this doesn't require that there are only a few funders.
Now we happen to be in a situation where almost all EA money comes from a few rich people. That's just how things are, whether I like it or not. It's their money to distribute as they want. Trying to argue that the EA billionaires should not have the right to direct their donations as they want would be pointless or counterproductive.
Also, I do think that these big donors are awesome people and that the world is better for their generosity. As far as I can see, they are spending their money on very important projects.
But they are not perfect! (This is not an attack!)
I think it would be very bad for EA to spread the idea that the large EA funders are somehow infallible, and that small donors should avoid making their own grant decisions.
Charles He @ 2022-04-09T20:09 (+2)
The end of the above comment included a statement about no funding, which suggested that my comment is entirely disinterested.
I've since learned (this morning) of additional funding and/or interest in funding and this statement about no funding is no longer true. It was probably also misleading or unfair to have made it in the first place.
tobyj @ 2022-04-04T11:20 (+22)
Hundreds of people spent considerable time writing applications to the FTX Future Fund's first round of funding. It seems inefficient to me that there aren't more sources of funding looking over these applications and funding the projects they think look the most promising.
This wouldn't directly address your main concern, but I'd be really interested to see more full grant applications posted publicly (both successful and non-successful).
Linch @ 2022-04-04T16:50 (+7)
Manifold Markets (which I have a COI with) posted their FTX FF grant application here.
Charles He @ 2022-04-04T16:38 (+2)
I want you to know there isn't some secret sauce or special formula in the words of a grant proposal itself. I don't think there is really anything canonically correct.
There might be one such grant application shared publicly, if that person ever gets around to it.
This grant is interesting because it was both successful and unsuccessful at the same time: it attracted interest but was rejected due to the founder, so the project might be "open".
PabloAMC @ 2022-04-04T16:22 (+16)
One advantage of centralized grantmaking, though, is that it can convey more information, due to the experience of the grantmakers. In particular, centralized decision-making allows for better comparisons between proposals. This can lead to only the most effective projects being carried out, as would be the case with startups if one restricted oneself to only top venture capitalists.
Brendon_Wong @ 2022-04-05T19:53 (+5)
Do you have any evidence for this? There's definitely evidence to suggest that decentralized decision making can outperform centralized decision making; for example, prediction markets and crowdsourcing. I think it's dangerous to automatically assume that all centralized thinking and institutions are better than decentralized thinking and institutions.
PabloAMC @ 2022-04-05T22:03 (+1)
I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around. That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?
On the other hand, prediction markets are useful; I'm just wondering how much of a feedback signal there is for altruistic donations, and if it is sufficient for some level of efficiency.
Brendon_Wong @ 2022-04-06T02:47 (+6)
I recall reading that top VCs are able to outperform the startup investing market, although the causal relationship may go the other way around.
Yep, there's definitely return persistence with top VCs, and the last time I checked I recall there was uncertainty around whether that was due to enhanced deal flow or actual better judgement.
That being said, the very fact that superforecasters are able to outperform prediction markets should signal that there are (small groups of) people able to outperform the average, shouldn't it?
I think that just taking the average is one decentralized approach, but certainly not representative of decentralized decision making systems and approaches as a whole.
Even the Good Judgement Project can be considered a decentralized system to identify good grantmakers. Identifying superforecasters requires having everyone do predictions and then find the best forecasters among them, whereas I do not believe the route to become a funder/grantmaker is that democratized. For example, there's currently no way to measure what various people think of a grant proposal, fund that regardless of what occurs (there can be rules about not funding downside risk stuff, of course), and then look back and see who was actually accurate.
There haven't actually been real prediction markets implemented at a large scale (Kalshi aside, which is very new), so it's not clear whether that's true. Denise quotes Tetlock mentioning that objection here.
I also think that determining what to fund requires certain values and preferences, not necessarily assessing what's successful. So viewpoint diversity would be valuable. For example, before longtermism became mainstream in EA, it would have been better to allocate some fraction of funding towards that viewpoint, and likewise with other viewpoints that exist today. A test of who makes grants to successful individuals doesn't protect against funding the wrong aims altogether, or certain theories of change that turn out to not be that impactful. Centralized funding isn't representative of the diversity of community views and theories of change by default (I don't see funding orgs allocating some fraction of funding towards novel theories of change as a policy).
PabloAMC @ 2022-04-06T10:08 (+1)
So viewpoint diversity would be valuable.
Definitely. In particular, this is valuable when the community also pivots around cause neutrality. So I think it would be good to have people with different opinions on what cause areas are better to support.
PeterSlattery @ 2022-04-04T19:59 (+12)
(On phone, early in the morning!)
Thanks for this.
I agree with nearly all of it.
I'd like us to have a community fundraising platform and a coexisting crowdfunding norm so that more good ideas get proposed and backed. Also, so that the community (including centralised funders) have a better read on what the community wants and why.
As an example, I have several desires for changes and innovations that I'd be happy to help fund. For instance, I would like to be able to read and refer to a really detailed assessment and guesstimate model for whether, when, and how best to decide on giving now vs. saving and giving later. I'd help fund an effective bequest or volunteer pledge program. I know others who share my views. I'd like to know the collective interest in funding either of these. I'd also like centralised funders to know that information, as that community willingness to fund something might make them decide to fund it in conjunction or instead. I don't currently have any easy way to do this.
I suspect there are many ideas in EA that would possibly attract crowdfunding but not centralised funding (at least initially) because many people in some part of the EA community have some individually small, but collectively important need that funders don't realise.
With regard to Stefan's point, rather than reduce risk by reducing and centralising access to funding like we do now, we could reduce it in other ways. We could have community feedback. We could also have contingencies within grants (e.g., projects only funded after a risk assessment is conducted). We could have something modelled on ethics committees to assess what project types are higher risk.
Ivy_Mazzola @ 2022-04-06T19:17 (+9)
As a community manager, I care a lot about maximizing the potential of any community member who is already deep enough on the EA engagement funnel to even be applying for a grant. In addition to the (very good) reasons in OP's post, I want to see the grantmaking ecosystem become less centralized because:
1. Founders, scalers, and new projects are a bottleneck for EA and it is surprisingly hard to prompt people to take such a route. It seems to be a personality thing, so we should look twice before dismissing people who want to try.
2. Even if a project ends up underperforming, the opportunity to try scaling or starting up a project does give a dedicated and self-starting EA valuable experience. That innovator-EA may get more potential benefit from being funded than a lot of other ways that one might slowly gain experience. And funding the project should come with some potential positive impact, even if it isn't the most impactful and exciting project to many grantmakers.
Similar tactics exist in the movement already: EA/80K recommends people enter the for-profit world to gain experience, which comes with near-zero positive impact potential during that time. EA also subsidizes career trainings, workshops, and even advanced degrees toward filling bottlenecks of all types.
Therefore, I'd also advocate for being a bit more lax in funding/subsidizing relatively cheap new projects or scale-ups which can help dedicated innovator/self-starter EAs gain career experience and yield some altruistic wins. (I admit that some funders may already be thinking this way, I don't know!)
3. It is sad to me that dedicated EAs can essentially be blackballed in what I'd still like to think of as an egalitarian movement. I don't think it is anyone's fault (mad props to grantmakers and funders), but if the funding ecosystem evolves to be a bit more diverse, I think it would be good for the movement's impact and reputation, at least via the mental health and value drift levels of EAs themselves. I'm not saying "fund everything that isn't risky", but that being gatekept/blackballed is a uniquely frustrating experience that can sour one's involvement with the movement. Despite good intentions and a mature personality, it seems natural to stick more to the sidelines after being rejected the first time you stick your neck out and not given any recommendations for where else to apply for funding. The more avenues the movement has and the more obvious these avenues are, the less a rejection will feel like a blackball and prompt people to stop trying.
FWIW I really like the vetted kickstarter idea posted by Peter Slattery below. A bonus with an idea like that is that it will also keep E2Gers engaged. It is a lot more interesting than, say, donating to EAIF every year, and maybe they can get their warm fuzzies there too.
Brendon_Wong @ 2022-04-05T20:37 (+9)
I agree with the issues related to centralized grantmaking flagged by this article! I wrote a bit about this back in 2018. To my understanding, EA has not been trying forms of decentralized/collective thinking, including decentralized grantmaking. I think that this is definitely a very promising area of inquiry worthy of further research and experimentation.
One example of the blind spots and differences in theories of change you mention is reflected in the results of the Future Fund's Project Ideas Competition. Highly upvoted ideas like "Investment strategies for longtermist funders" and "Highly effective enhancement of productivity, health, and wellbeing for people in high-impact roles," which came in at #3 and #4 respectively, did not win any awards or mention. This suggests that there is decent community interest and consensus around projects and project areas that aren't being funded or funded sufficiently by centralized entities. For those project areas, there are a decent number of people within EA, project leads, and smaller-scale funders (BERI, EA Funds, various HNWIs) that I am aware of that either believe such efforts are valuable and underfunded or have funded projects in those areas in the past. The specific grantmaking team at The Future Fund may have interests and theories of change that aren't the same as other grantmaking teams and EAs. It's definitely fine to have specialized interests and theories of change, and indeed everyone does, but the issue is only one set of those is coming through to decide how to allocate all of the Future Fund's funding. As you point out, that's basically guaranteed to be suboptimal.
Chris Leong @ 2022-04-04T14:22 (+5)
This is yet another reason why I'd love to see mini-EA hotels in major cities around the world as I described in this Twitter thread. Obviously, this wouldn't remove the bias towards people in major cities, but it would decrease geographical bias overall and the perfect shouldn't be the enemy of the good.
ElliotJDavies @ 2022-04-05T20:26 (+1)
I would be very interested in doing this in Copenhagen. If anybody going to EA Global has strong opinions on this, I would love to set up a meeting and chat about it.
Chris Leong @ 2022-04-05T20:38 (+4)
I'll be at EAGlobal. Feel free to reach out to me.
Jamie_Harris @ 2022-04-30T14:01 (+4)
I agree that centralised grant-making might mean that some promising projects are missed. But we're not solely interested in this. We're overall interested in:
Average cost-effectiveness per $ granted * Number of $ we're able to grant
My intuition would be that the more decentralised the grant-making process, the more $ we're able to grant.
But this also requires us to invest more talent in grant-making, which means, in practice, fewer promising people applying for grants themselves, which might non-negligibly reduce average cost-effectiveness per $ granted.
Beyond the above consideration, it seems unclear whether decentralised grant-making would overall increase or decrease the average cost-effectiveness. Sure, fewer projects above the current average cost-effectiveness would slip through the net, but so too would fewer projects below the current average cost-effectiveness. So I'd expect these things to balance each other out roughly UNLESS we're making a separate claim that the current grantmakers are making poor / miscalibrated decisions. But at that point, this is not an argument in favour of decentralising grant-making, but an argument in favour of replacing (or competing with) the current grantmakers.
So maybe overall, decentralising grant-making would trade an increase in $ we're able to grant for a small decrease in average cost-effectiveness of granted $.
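A toy illustration of that trade-off, with entirely made-up numbers:

```python
# Made-up numbers: decentralising grows the pool of granted $ but lowers
# average cost-effectiveness per $; the product is what we ultimately care about.
def total_impact(dollars_granted, avg_cost_effectiveness):
    return dollars_granted * avg_cost_effectiveness

centralised = total_impact(100, 1.00)    # baseline
decentralised = total_impact(130, 0.90)  # 30% more $ granted, 10% less effective per $
print(decentralised / centralised)       # ~1.17 -> a net gain under these assumptions
```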
(I felt pretty confused writing these comments and suspect I've missed many relevant considerations, but thought I'd flesh out and share my intuitive concerns with the central argument of this post, rather than just sit on them.)
ElliotJDavies @ 2022-04-05T20:09 (+3)
[Quick thoughts whilst on mobile]
My takeaway: interested to hear what said grant makers think about this idea.
I find the arguments re: the efficient market hypothesis pretty compelling, but also find the arguments re: "inferential distance" and the unilateralist's curse compelling.
One last point: so far, I think one of EA's biggest achievements is its truly unusually good epistemics, and I'm particularly concerned about how centralised small groups could damage that - especially since more funding could exacerbate this effect.
Yitz @ 2022-04-04T22:06 (+1)
Posted on my shortform, but thought it's worth putting here as well, given that I was inspired by this post to write it:
Thinking about what I'd do if I were a grantmaker that others wouldn't do. One course of action I'd strongly consider is to reach out to my non-EA friends (most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high value, and who live around the world) and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently have. I'd expect some of them to be interested (though some would decline), and they'd likely be coming from a very different angle than most people in this space. This may not be the most efficient use of money, but making use of my peculiar/unique network of friends is something only I can do, and may be of value.