80,000 Hours: Anonymous contributors on flaws of the EA community
By Aaron Gertler @ 2020-03-04T00:32 (+46)
This is a linkpost to https://80000hours.org/2020/03/anonymous-answers-flaws-effective-altruism-community/
The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don't represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it's valuable to showcase the range of views on difficult topics where reasonable people might disagree.
This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.
But it's the thirteenth in this series of posts with anonymous answers - many of which are likely to be useful to everyone. Four of the most popular entries have been:
- "What's some underrated general life advice?"
- "Is there any career advice you'd be hesitant to give if it were going to be attributed to you?"
- "How have you seen talented people fail in their work?"
- "What's the thing people most overrate in their career?"
What are the biggest flaws of the effective altruism community?
Groupthink
Groupthink seems like a problem to me. I've noticed that if one really respected member of the community changes their mind on something, a lot of other people quickly do too. And there is some merit to that: if you think someone is really smart and shares your values, it does make sense to update somewhat. But I see it happening a lot more than it probably should.
Something I feel is radically undersupplied at the moment is just people who are really trying to figure stuff out - which takes years. So the person I'm mainly thinking about as the kind of paragon of this is Carl Shulman, who has spent years and years just really working out for himself all the most important arguments related to having a positive influence in the long run, and moral philosophy, and meta-ethics, and anthropics and, well - basically everything. And the number of people doing that is very small at the moment, because there's not really a path for it.
If you go into academia, then you write papers. But that's just one narrow piece of the puzzle. It's a similar case in most research organisations.
Whereas just trying to understand basically everything, and how it all fits together - not really deferring to others, and actually trying to work out everything yourself - is so valuable. And I feel like very few people are trying to do that. Maybe Carl counts, Paul Christiano counts, Brian Tomasik counts, I think Eric Drexler as well.
If you're someone who's considering research in general, I think there's enormous value here, because there are just so few people doing it.
I think there are plenty of people who are intellectually capable of this, but it does require a certain personality. If we were in a culture where having your own worldview - even if it didn't seem that plausible - was an activity that was really valued, and really praised, then a lot more people could be doing this.
Whereas I think the culture can be more like "well, there's a very narrow band of super-geniuses who are allowed to do that. And if you do it, you're going to be punished for not believing the median views of the community."
I'm extremely pro peer-updating in general, but from the perspective of the community as a whole, I'd much rather have a lot of people with a lot of personally formed views. I feel like I learn a lot more from reading opinions on a subject from ten people who each have different, strong, honest views that they've figured out themselves, rather than from ten people who are trying to peer-update on each other all the time.
Everyone's trying to work at effective altruism (EA) orgs.
Too many people think that there's some group of people who have thought things through really carefully - and then go with those views. As opposed to acknowledging that things are often chaotic and unpredictable, and that while there might be some wisdom in these views, it's probably only a little bit.
Disagreeableness
I'm concerned that some of the social norms of EA are turning off people who would otherwise find the ideas compelling. There's such a norm of disagreeableness in EA that it can seem like every conversation is a semi-dispute between smart people. I think it's not clear to a lot of people who have been around EA for a long time just how unusual that norm is. For people new to EA, it can be pretty off-putting to see people fighting about small details. I don't think this problem is obvious to everyone, but it seems concerning.
Too much focus on "the community"
Sometimes it isn't that fun to be around the EA community.
I'd much prefer an emphasis on specific intellectual projects rather than a community. It sometimes feels like you're held to this vague jurisdiction of the EA community - are you upholding the norms? Are you going to be subject to someone's decision about whether this is appropriate for the community? It can seem like you're assumed to have opted in to something you didn't opt in to, something that has unclear norms and rules that maybe don't represent your values.
I think sometimes people are too focused on what the community does, thinks, etc. What you're doing shouldn't depend too much on what other people are doing unless you personally agree with it. If the effective altruism community ended tomorrow, it honestly wouldn't affect what I'm doing with my life - I do what I do because I think the arguments for it are good, and not because the effective altruism community thinks it's good.
So I think the ideas would survive the non-existence of the community. And I think we should generally focus on the ideas independently (though if you really value the community, I understand why that might be important).
A "holier than thou" attitude
Something that seems kinda bad is people having a "holier than thou" attitude: thinking that they've worked out what's important, and most other people haven't.
But the important part of EA is less the answers we've arrived at, and more the virtues in thinking that we've cultivated. If you want other people to pick up on your virtues, being a jerk isn't the best way to do it.
Failing to give more people a vision of how they can contribute
I don't think EA ever settled the question of "how big a mass movement does it want to be?" We raised a lot of good points on both sides, and then just ambivalently proceeded.
If we want to be a mass movement, weāre really failing to give average people, and even some well-above average people, a vision of how they can contribute.
A lot of people get convinced of the arguments for longtermism, and then encounter the fact that there aren't really good places to donate for far-future stuff - and donating money is the most accessible way to contribute for a lot of people.
I worry that this creates a fairly large pool of money that may actually end up being spent on net-negative projects, because it's just floating around looking for somebody to take it. That creates conditions for frauds, or at the very least for people whose projects aren't well thought through - and maybe the reasons they haven't received funding through official sources yet are good ones.
But there are a lot of people who want to help, and who haven't been given any good opportunities. If we want to be a mass movement, I think we're really failing by being too elitist and too hostile towards regular people.
We're also not giving people good, clear ways to donate to improving the far future. I think that even if you're convinced by the arguments for longtermism, unless you have a really good reason to think that a particular giving opportunity is going to be underrated by the institutions that are meant to be evaluating these things, you should consider donating to animal welfare or global development charities - both of which are very important.
The arguments for why those causes are important are not undermined by the possibility of short AI timelines. If anything, saving someone's life is a bigger deal if it means they make it to the singularity. It's fine to say, "yep, I'm persuaded by these long-term future arguments, but I don't actually see a way for my money to make a difference there right now, so I'm going to make donations to other areas where it's clearer that my donation will have a positive effect."
The community should be more willing to say this. I don't think I'm the only person convinced by longtermist arguments who doesn't think that a lot of people should donate to longtermist stuff, because there just aren't that many good giving opportunities. People can be unwilling to say that, because "we don't want your money" can sound snobby, etc.
Deemphasizing growth
One way of countering lock-in in the media is to have new media stories covering additional facets of EA. I think there are a lot of problems that it would be great to have more EAs working on and donating to. EAs have expressed concern that recruiting more people would dilute the movement in terms of ability. But I think that it is okay to have different levels of ability in EA. You generally need to be near the top to be at an EA organisation or contributing to the EA Forum. But if someone wants to donate 10% of their money to a charity recommended by EA, and not engage further, I think that's definitely beneficial.
I'd like to see a part of EA devoted to a GiveWell-type ranking of charities working on the reduction of global catastrophic risks.
Longtermism has become a status symbol
Believing the arguments for longtermism has become something of a status thing. A lot of EAs will tend to think less of people if they either haven't engaged with those arguments, or haven't been convinced. I think that's a mistake - you have to create conditions where people don't lose respect for disagreeing, or your community will predictably be wrong about most things.
Not engaging enough with the outside world
I worry about there being an EA bubble - I'd like to see more engagement with the outside world. There are some people who aren't ever going to be convinced by your view of the most important things, and it's fine not to worry about them.
At the same time, there's a risk of people getting carried away talking with others who really agree with them - and then trying to transfer that to the rest of their career. They might say things at work that are too weird, or they might make overly risky career decisions that leave them without backup options.
Not following best hiring practices
There are some incompetent people in prominent positions at EA organisations - because the orgs haven't put enough time into studying how to best find successful employees.
EA orgs should study best hiring practices. If a role is important, you need to get the right person - and that shouldn't be on the basis of a cover letter, a resume, and an interview. Everybody involved in hiring should read Work Rules!, and people should be implementing those principles.
Being too unwilling to encourage high standards
I think it does make sense to have messages for highly involved EAs to make sure they don't burn out. However, this should probably happen more in person than online, as these people are typically in in-person EA communities anyway. The large majority of EAs are not giving 10% of their money, changing their career radically, or working themselves to the bone, so I think they should be encouraged to meet high standards. I think we can keep our standards high, such that you donate 10% of your money, or do direct effective work, or volunteer 10% of your free time (roughly 4 hours a week) to EA organisations or to promoting EA individually. I think EA can still grow much faster even with these high standards.
I don't know if we should have the norm that donating should end when retirement starts. But maybe it was an appropriate compromise to not have it be too intimidating.
Doing non-technical research that isn't actually useful
I'm sceptical of most forms of non-technical EA-ish research being practically useful.
I think there are a few people who do excellent macrostrategy research, like Nick Bostrom - but there's a norm in the EA community of valuing it when someone comes up with a new cool consideration or an abstract model that relates to an EA topic, and I think most of that work isn't actually valuable. It's the sort of thing where, if you're not exceptionally talented, it's really difficult to do valuable work.
There can be a temptation among EAs to think that just writing considerations on interesting topics is the most useful thing they could be doing. But I often see write-ups that are overly general, not empirically grounded enough, and that only a few people are going to read - and of the people who read them, none are likely to update their views as a result.
People can feel like if they write something and put it up on the internet, that equals impact - but that's only true if the right people read it, and it causes them to change their minds.
Abandoning projects too quickly
Often people don't commit enough time to a project. Projects can be abandoned after 6 months when they should probably have been given years to develop.
Most people live in the centre of big cities
I think it's a problem that the important organisations and individuals are mostly in EA hubs. This is especially problematic because all the EA hubs are in NATO cities, which likely would not survive full-scale nuclear war. A simple step to mitigate this problem is living in the suburbs or even outside the suburbs, but I think EAs have a bias towards city life (there is already a gradient in rent representing commuting costs, so if you actually think there is a significant chance of nuclear war, it makes sense to live outside of metros, especially if you can multitask while commuting). Even better would be locating outside NATO countries, in ones such as Australia or New Zealand (because of lower pandemic risk as well).
Lack of support for entrepreneurs
I'd love to see someone create a good EA startup incubator. I don't think anyone's doing it well at the moment.
One of the biggest problems with EA is a lack of entrepreneurs who are ready to start a project on their own. But if we could get some of the best EAs to commit to allocating some of their time systematically to help people with the best proposals - getting their new projects or orgs ready to go - I think that would be the most effective way to utilise the resources we currently have at our disposal.
Valuing exceptional work in a non-effective job too highly
Many EAs have said that if one is building career capital in a noneffective job, you have to be an exemplary performer in that job. But I think that that takes so much effort that you are not able to develop background knowledge and expertise towards your actual effective work. One example is working hard for bonuses; in my experience, the marginal dollar per hour is very low for bonuses.
Too cautious
Maybe slightly too cautious overall. I understand the reasons for focusing on possible negative consequences, but I think generally I'm more pro "doing things".
Too narrow
Thinking about the way that you are putting things, and the tone that they have, is very important. But it's one of those things where people can fail to acknowledge the importance of it.
People who disagree with an idea find it very hard to say "I disagree with this, but I don't quite know why". It's also very hard to say, "the thing is, I don't really disagree with any of the claims you made, but I really do disagree with the way they were made, or what they seem to imply".
I suspect when it comes to a lot of the criticisms of EA, people will try to present them as disagreements with the core ideas. And I think a lot of the people making these critiques don't actually disagree with the core ideas; they're really saying "it feels like you're ignoring a bunch of things that feel important to me".
So I would like to see EA grow, and be sensitive to those things. And maybe that means I want EA to be broader - I think I probably do. I would like there to be more people who disagree. I would like there to be more people who won't present things in that way. It would be nice to see more moral views presented; I think these ideas are not restricted to the groups that are currently dominantly represented in EA. And so I think an epistemically virtuous version of EA probably is broader, in terms of actually gathering, and being compelling to, people with a range of different views.
I think there is a bias in the existential risk community towards work at the global top 20 universities. Something like 90 percent of the work gets funded there, compared to general research, where it might be about a couple of percent in those universities. You could argue that for some problems you really need the smartest people in the world. But I think that a lot of progress can be made by people not at those elite universities. And it is a lot cheaper at other universities.
Neglecting less popular funding opportunities
I think one mistake is Good Ventures not diversifying their investments (last time I checked, I think nearly all was still in Facebook).
There are still funding gaps that aren't necessarily always recognised. There's talk about earning-to-give being deprioritised, but that only makes sense for higher-profile EA cause areas. For areas that aren't popular at all in the mainstream world, EA funding is essential. There are a lot of exciting projects that just don't get done purely because of funding gaps.
I think the Open Philanthropy Project putting $55 million into something [CSET] that is not even focused on transformative AI, let alone AGI, was not a good idea, considering all the other GCR reduction opportunities there are.
There are really large funding gaps, both for existing organisations and for EA-aligned organisations yet to be funded. When a group gets funded, it also doesn't mean they were able to get full funding. It can also be challenging to learn about all the different EA organisations, as there's no central hub. Lists are very scattered, and it can be challenging for the community to learn about them all and what their needs are.
A lack of focus on broader global catastrophic risks
I think a common mistake long-term future EAs make is assuming that existential risk means only extinction. In reality, there are many routes to far-future impact that do not involve extinction right away.
I've heard a number of long-term future EAs express skepticism that any GCR interventions could actually be net beneficial to the present generation. However, the book Catastrophe: Risk and Response made just this argument. Also, there are models showing that both AGI and preparation for agricultural catastrophes are highly cost-effective for the long-term future and for the present generation.
Being too siloed
I think EA is a little too siloed. I think it is useful to take into account impacts on multiple cause areas of particular interventions, like GCR interventions saving lives in the present generation.
I think it is great that EAs are proposing a lot of possible Cause Xs, but I would like to see more Guesstimate cost-effectiveness models to be able to evaluate them.
Not media savvy enough
EAs should try to be more media savvy. This applies to avoiding misconceptions around topics like earning-to-give, etc.
But EAs should also recognise the importance of telling a good story. For longtermism, this is particularly hard. Showing a video of a starving child tugs on the heartstrings, but how do you do that for future generations? How do you do that for AI safety? I think EAs could spend more time thinking about how to communicate this stuff so that it resonates.
Also focus on the positives. That everyone can be a hero. If you focus on guilt, people switch off.
When I tell people that we're trying to avoid catastrophic risk, they always think I'm talking about climate change.
How can EA better communicate that climate change isnāt the only big risk?
MichaelStJules @ 2020-03-04T06:14 (+4)
Many EAs have said that if one is building career capital in a noneffective job, you have to be an exemplary performer in that job. But I think that that takes so much effort that you are not able to develop background knowledge and expertise towards your actual effective work.
I think this really depends on the types of roles you're looking for. Project management or operations in industry and then at an EA org or EA-recommended org seems like a good transition.
One example is working hard for bonuses; in my experience, the marginal dollar per hour is very low for bonuses.
Is this true in software or finance? Also, it isn't just bonuses, you get promotions and pay increases.
Arepo @ 2020-03-07T10:52 (+1)
Kudos to 80K for both asking and publishing this. I think I literally agree with every single one of these (quite strongly with most). In particular, the hiring practices criticism - I think there was a tendency, especially with early EA orgs, to hire for EAness first and competence/experience second, and that this has led to a sort of hiring-practice lock-in where they still value those characteristics, if not to the same degree, then with a greater bias than a lean, efficiency-minded org should have.
A related concern is overinterviewing - I read somewhere (unfortunately I can't remember the source) the claim that the longer and more thorough your interview process, the more you select for people with the willingness and lack of competition for their time to go through all those steps.
This (if I'm right) would have the quadruple effect of wasting EAs' time (which you'd hope would be counterfactually valuable), wasting the organisations' time (ditto), potentially reducing the fidelity of the hiring practice, and increasing the aforementioned bias towards willingness.
Aaron Gertler @ 2020-03-09T20:47 (+6)
I'd be surprised if Open Philanthropy routinely lost good candidates to the length of their hiring process; if this happened, I don't think it came up in their analysis of their biggest (?) hiring round.
(I do think orgs should be thinking carefully about all the stages of their interview processes and looking for good tradeoffs between time and information on candidates. Open Phil certainly does this already, but I'm not sure about other orgs.)
Empirically, the single hiring process I've run for an EA org didn't lose anyone; every candidate I asked to schedule an interview did so, and every candidate who got through to the second-round work test completed it.
I think I may have wasted candidates' time in the first round by assigning an editing task that was too long, but the length of that initial test was in line with other industries' initial requirements, and I hope that I "gave back" some EA time by not requiring a formal cover letter, accepting LinkedIn in lieu of resumes, etc.
Arepo @ 2020-06-26T11:15 (+1)
I'm not sure how public the hiring methodology is, but if it's fully public then I'd expect the candidates to be 'lost' before the point of sending in a CV.
If it's less public that would be less likely, though perhaps the best candidates (assuming they consider applying for jobs at all, and aren't always just headhunted) would only apply to jobs that had a transparent methodology that revealed a short hiring process.