80,000 Hours: Anonymous contributors on flaws of the EA community

By Aaron Gertler 🔸 @ 2020-03-04T00:32 (+46)

This is a linkpost to https://80000hours.org/2020/03/anonymous-answers-flaws-effective-altruism-community/

The following are excerpts from interviews with people whose work we respect and whose answers we offered to publish without attribution. This means that these quotes don't represent the views of 80,000 Hours, and indeed in some cases, individual pieces of advice explicitly contradict our own. Nonetheless, we think it's valuable to showcase the range of views on difficult topics where reasonable people might disagree.

This entry is most likely to be of interest to people who are already aware of or involved with the effective altruism (EA) community.

But it's the thirteenth in this series of posts with anonymous answers – many of which are likely to be useful to everyone. Four of the most popular entries have been:

  1. "What's some underrated general life advice?"
  2. "Is there any career advice you'd be hesitant to give if it were going to be attributed to you?"
  3. "How have you seen talented people fail in their work?"
  4. "What's the thing people most overrate in their career?"

 

What are the biggest flaws of the effective altruism community?

Groupthink

Groupthink seems like a problem to me. I've noticed that if one really respected member of the community changes their mind on something, a lot of other people quickly do too. And there is some merit to that, if you think someone is really smart and shares your values – it does make sense to update somewhat. But I see it happening a lot more than it probably should.


Something I feel is radically undersupplied at the moment is just people who are really trying to figure stuff out – which takes years. So the person I'm mainly thinking about as the kind of paragon of this is Carl Shulman, where he's spent years and years just really working out for himself all the most important arguments related to having a positive influence in the long run, and moral philosophy, and meta-ethics, and anthropics and, well – basically everything. And the number of people doing that is very small at the moment. Because there's not really a path for it.

If you go into academia, then you write papers. But that's just one narrow piece of the puzzle. It's a similar case in most research organisations.

Whereas just trying to understand basically everything, and how it all fits together – and not really deferring to others, and actually trying to work out everything yourself – is so valuable. And I feel like very few people are trying to do that. Maybe Carl counts, Paul Christiano counts, Brian Tomasik counts, I think Eric Drexler as well.

If you're someone who's considering research in general, I think there's enormous value here, because there are just so few people doing it.

I think there are plenty of people who are intellectually capable of this, but it does require a certain personality. If we were in a culture where having your own worldview – even if it didn't seem that plausible – was an activity that was really valued, and really praised, then a lot more people could be doing this.

Whereas I think the culture can be more like "well, there's a very narrow band of super-geniuses who are allowed to do that. And if you do it, you're going to be punished for not believing the median views of the community."

I'm extremely pro peer-updating in general, but from the perspective of the community as a whole, I'd much rather have a lot of people holding a lot of personally formed views. I feel like I learn a lot more from reading opinions on a subject from ten people who each have different, strong, honest views that they've figured out themselves, rather than from ten people who are trying to peer-update on each other all the time.


Everyone's trying to work at effective altruism (EA) orgs

Too many people think that there's some group of people who have thought things through really carefully – and then go with those views. As opposed to acknowledging that things are often chaotic and unpredictable, and that while there might be some wisdom in these views, it's probably only a little bit.

Disagreeableness

I'm concerned that some of the social norms of EA are turning off people who would otherwise find the ideas compelling. There's such a norm of disagreeableness in EA that it can seem like every conversation is a semi-dispute between smart people. I think it's not clear to a lot of people who have been around EA for a long time just how unusual that norm is. For people new to EA, it can be pretty off-putting to see people fighting about small details. I don't think this problem is obvious to everyone, but it seems concerning.

Too much focus on 'the community'

Sometimes it isn't that fun to be around the EA community.

I'd much prefer an emphasis on specific intellectual projects rather than a community. It sometimes feels like you're held to this vague jurisdiction of the EA community – are you upholding the norms? Are you going to be subject to someone's decision about whether this is appropriate for the community? It can seem like you're assumed to have opted in to something you didn't opt in to, something that has unclear norms and rules that maybe don't represent your values.


I think sometimes people are too focused on what the community does, thinks, etc. What you're doing shouldn't depend too much on what other people are doing unless you personally agree with it. If the effective altruism community ended tomorrow it honestly wouldn't affect what I'm doing with my life – I do what I do because I think the arguments for it are good and not because the effective altruism community thinks it's good.

So I think the ideas would survive the non-existence of the community. And I think we should generally focus on the ideas independently (though if you really value the community, I understand why that might be important).

A 'holier than thou' attitude

Something that seems kinda bad is people having a 'holier than thou' attitude. Thinking that they've worked out what's important, and most other people haven't.

But the important part of EA is less the answers we've arrived at, and more the virtues in thinking that we've cultivated. If you want other people to pick up on your virtues, being a jerk isn't the best way to do it.

Failing to give more people a vision of how they can contribute

I don't think EA ever settled the question of "how big a mass movement does it want to be?" We raised a lot of good points on both sides, and then just ambivalently proceeded.

If we want to be a mass movement, we're really failing to give average people, and even some well-above-average people, a vision of how they can contribute.

A lot of people get convinced of the arguments for longtermism, and then encounter the fact that there aren't really good places to donate for far-future stuff – and donating money is the most accessible way to contribute for a lot of people.

I worry that this creates a fairly large pool of money that may actually end up being spent on net-negative projects, because it's just floating around looking for somebody to take it. That creates conditions for frauds, or at the very least for people whose projects aren't well thought through – and maybe the reasons they haven't received funding through official sources yet are good ones.

But there are a lot of people who want to help, and who haven't been given any good opportunities. If we want to be a mass movement, I think we're really failing by being too elitist and too hostile towards regular people.

We're also not giving people good, clear ways to donate to improving the far future. I think that even if you're convinced by the arguments for longtermism, unless you have a really good reason to think that a particular giving opportunity is going to be underrated by the institutions that are meant to be evaluating these things – you should consider donating to animal welfare or global development charities, both of which are very important.

The arguments for why those causes are important are not undermined by the possibility of short AI timelines. If anything, saving someone's life is a bigger deal if it means they make it to the singularity. It's fine to say, "yep, I'm persuaded by these long-term future arguments, but I don't actually see a way for my money to make a difference there right now, so I'm going to make donations to other areas where it's clearer that my donation will have a positive effect."

The community should be more willing to say this. I don't think I'm the only person convinced by longtermism arguments who doesn't think that a lot of people should donate to longtermist stuff, because there just aren't that many good giving opportunities. People can be unwilling to say that, because "we don't want your money" can sound snobby etc.


Deemphasizing growth. One way of countering lock-in in the media is to have new media stories covering additional facets of EA. I think there are a lot of problems that it would be great to have more EAs working on and donating to. EAs have expressed concern that recruiting more people would dilute the movement in terms of ability. But I think that it is okay to have different levels of ability in EA. You generally need to be near the top to be at an EA organisation or contributing to the EA Forum. But if someone wants to donate 10% of their money to a charity recommended by EA, and not engage further, I think that's definitely beneficial.


I'd like to see a part of EA devoted to a GiveWell-type ranking of charities working on the reduction of global catastrophic risks.

Longtermism has become a status symbol

Believing the arguments for longtermism has become something of a status thing. A lot of EAs will tend to think less of people if they either haven't engaged with those arguments, or haven't been convinced. I think that's a mistake – you have to create conditions where people don't lose respect for disagreeing, or your community will predictably be wrong about most things.

Not engaging enough with the outside world

I worry about there being an EA bubble – I'd like to see more engagement with the outside world. There are some people who aren't ever going to be convinced by your view of the most important things, and it's fine to not worry about them.

At the same time, there's a risk of people getting carried away talking with others who really agree with them – and then trying to transfer that to the rest of their careers. They might say things at work that are too weird, or they might make overly risky career decisions that leave them without backup options.

Not following best hiring practices

There are some incompetent people in prominent positions at EA organisations – because the orgs haven't put enough time into studying how to best find successful employees.

EA orgs should study best hiring practices. If a role is important, you need to get the right person – and that shouldn't be decided on the basis of a cover letter, a resume, and an interview. Everybody involved in hiring should read Work Rules!, and people should be implementing those principles.

Being too unwilling to encourage high standards

I think it does make sense to have messages for highly involved EAs to make sure they don't burn out. However, this should probably happen more in person than online, as these people are typically part of in-person EA communities anyway. The large majority of EAs are not giving 10% of their money, changing their career radically, or working themselves to the bone, so I think they should be encouraged to meet high standards. I think we can keep our standards high, such that you donate 10% of your money, or do direct effective work, or volunteer 10% of your free time (roughly 4 hours a week) to EA organisations, or maybe just promote EA individually. I think EA can still grow much faster even with these high standards.


I don't know if we should have the norm that donating should end when retirement starts. But maybe it was an appropriate compromise, so as not to make things too intimidating.

Doing non-technical research that isnā€™t actually useful

I'm sceptical of most forms of non-technical EA-ish research being practically useful.

I think there are a few people who do excellent macrostrategy research, like Nick Bostrom – but there's a norm in the EA community of valuing it when someone comes up with a cool new consideration or an abstract model that relates to an EA topic, and I think most of that work isn't actually valuable. It's the sort of thing where if you're not exceptionally talented, it's really difficult to do valuable work.


There can be a temptation among EAs to think that just writing up considerations on interesting topics is the most useful thing they could be doing. But I often see write-ups that are overly general, not empirically grounded enough, and that only a few people are going to read – and of the people who do read them, none are likely to update their views as a result.

People can feel like if they write something and put it up on the internet that equals impact – but that's only true if the right people read it, and it causes them to change their minds.

Abandoning projects too quickly

Often people don't commit enough time to a project. Projects can be abandoned after 6 months when they should probably have been given years to develop.

Most people live in the centre of big cities

I think it's a problem that the important organisations and individuals are mostly in EA hubs. This is especially problematic because all the EA hubs are in NATO cities, which likely would not survive full-scale nuclear war. A simple step to mitigate this problem is living in the suburbs or even outside the suburbs, but I think EAs have a bias towards city life. (There is already a gradient in rent representing commuting costs, so if you actually think there is a significant chance of nuclear war, it makes sense to live outside of metros, especially if you can multitask while commuting.) Even better would be locating outside NATO countries, in ones such as Australia or New Zealand (because of lower pandemic risk as well).
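
To make the tradeoff being gestured at here concrete, a minimal sketch follows, assuming entirely made-up figures for rent, commuting costs, and the yearly probability of a metro-destroying catastrophe (none of these numbers come from the answer above):

```python
# All figures are hypothetical assumptions for illustration only.
annual_rent_city = 24_000        # rent near the centre of an EA-hub metro
annual_rent_outside = 15_000     # rent in the suburbs / outside the metro
annual_commute_cost = 5_000      # extra commuting cost, partly offset if you can multitask

p_metro_destroyed = 0.002        # assumed yearly chance the metro is destroyed
cost_if_present = 5_000_000      # stand-in figure for the personal cost of being there

expected_cost_city = annual_rent_city + p_metro_destroyed * cost_if_present
expected_cost_outside = annual_rent_outside + annual_commute_cost

print(expected_cost_city, expected_cost_outside)  # 34000.0 20000
```

The answer's point, roughly, is that the rent gradient already prices in the commuting term but not the catastrophe term.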

Lack of support for entrepreneurs

I'd love to see someone create a good EA startup incubator. I don't think anyone's doing it well at the moment.

One of the biggest problems with EA is a lack of entrepreneurs who are ready to start a project on their own. But if we could get some of the best EAs to commit to allocating some of their time systematically to helping people with the best proposals – getting their new projects or orgs ready to go – I think that would be the most effective way to utilise the resources we currently have at our disposal.

Valuing exceptional work in a non-effective job too highly

Many EAs have said that if you're building career capital in a non-effective job, you have to be an exemplary performer in that job. But I think that takes so much effort that you are not able to develop background knowledge and expertise towards your actual effective work. One example is working hard for bonuses; in my experience, the marginal dollar per hour is very low for bonuses.
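
As a quick back-of-the-envelope illustration of the "marginal dollar per hour" point (the answer gives no figures, so the numbers below are purely hypothetical):

```python
# Hypothetical figures; the answer above does not give any.
extra_hours_per_year = 300      # overtime put in chasing a top performance rating
extra_bonus_before_tax = 5_000  # additional bonus that rating might earn
marginal_tax_rate = 0.35

marginal_dollars_per_hour = extra_bonus_before_tax * (1 - marginal_tax_rate) / extra_hours_per_year
print(round(marginal_dollars_per_hour, 2))  # 10.83, likely well below the base hourly rate
```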

Too cautious

Maybe slightly too cautious overall. I understand the reasons for focusing on possible negative consequences, but I think generally I'm more pro "doing things".

Too narrow

Thinking about the way that you are putting things, and the tone that they have, is very important. But it's one of those things where people can fail to acknowledge the importance of it.

People who disagree with an idea find it very hard to say "I disagree with this, but I don't quite know why". It's also very hard to say, "the thing is, I don't really disagree with any of the claims you made, but I really do disagree with the way they were made, or what they seem to imply".

I suspect when it comes to a lot of the criticisms of EA, people will try to present them as disagreements with the core ideas. And I think a lot of the people making these critiques don't actually disagree with the core ideas, they're really saying "it feels like you're ignoring a bunch of things that feel important to me".

So I would like to see EA grow, and be sensitive to those things. And maybe that means I want EA to be broader – I think I probably do. I would like there to be more people who disagree. I would like there to be more people who won't present things in that way. It would be nice to see more moral views presented; I think these ideas are not restricted to the groups that are currently dominantly represented in EA. And so I think an epistemically virtuous version of EA probably is broader, in terms of actually gathering, and being compelling to, people with a range of different views.


I think there is a bias in the existential risk community towards work at the global top 20 universities. Something like 90 percent of the work gets funded there, compared to general research, where it might be about a couple of percent in those universities. You could argue that for some problems you really need the smartest people in the world. But I think that lots of progress can be made by people not at those elite universities. And it is a lot cheaper at other universities.

Neglecting less popular funding opportunities

I think one mistake is Good Ventures not diversifying their investments (last time I checked, I think nearly all of it was still in Facebook).


There are still funding gaps that aren't necessarily always recognised. There's talk about earning-to-give being deprioritised, but that only makes sense for higher-profile EA cause areas. For areas that aren't popular at all in the mainstream world – EA funding is essential. There are a lot of exciting projects that just don't get done purely because of funding gaps.


I think the Open Philanthropy Project putting $55 million into something [CSET] that is not even focused on transformative AI, let alone AGI, was not a good idea considering all the other GCR reduction opportunities there are.


There are really large funding gaps, both for existing EA-aligned organisations and for ones yet to be funded. When a group gets funded, it also doesn't mean they were able to get full funding. It can also be challenging to learn about all the different EA organisations, as there's no central hub. Lists are very scattered, and it can be challenging for the community to learn about them all and what their needs are.

A lack of focus on broader global catastrophic risks

I think a common mistake long-term future EAs make is assuming that existential risk means only extinction. In reality, there are many routes to far-future impact that do not involve extinction right away.


I've heard a number of long-term future EAs express skepticism that any GCR interventions could actually be net beneficial to the present generation. However, the book Catastrophe: Risk and Response made just this argument. Also, there are models showing that work on AGI and preparation for agricultural catastrophes are both highly cost-effective for the long-term future and for the present generation.

Being too siloed

I think EA is a little too siloed. I think it is useful to take into account the impacts of particular interventions on multiple cause areas – like GCR interventions saving lives in the present generation.

I think it is great that EAs are proposing a lot of possible Cause Xs, but I would like to see more Guesstimate cost-effectiveness models to be able to evaluate them.
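
To illustrate what such a model could look like, here is a minimal Monte Carlo sketch in Python standing in for a Guesstimate-style model; the intervention, variable names, and all parameter values are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Hypothetical inputs for some proposed "Cause X" intervention, expressed as
# lognormal distributions (median and rough spread) – all made up.
cost = rng.lognormal(mean=np.log(1e6), sigma=0.5, size=n)           # programme cost in $
risk_reduced = rng.lognormal(mean=np.log(1e-5), sigma=1.0, size=n)  # absolute reduction in catastrophe probability
lives_at_stake = rng.lognormal(mean=np.log(1e9), sigma=0.3, size=n)

expected_lives_saved = risk_reduced * lives_at_stake
cost_per_life_saved = cost / expected_lives_saved

print(f"median cost per expected life saved: ${np.median(cost_per_life_saved):,.0f}")
print(f"90% interval: ${np.percentile(cost_per_life_saved, 5):,.0f} "
      f"to ${np.percentile(cost_per_life_saved, 95):,.0f}")
```

Even a toy model like this forces the key inputs and their uncertainty to be stated explicitly, which is what makes proposed Cause Xs easier to compare.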

Not media savvy enough

EAs should try to be more media savvy. This applies to avoiding misconceptions around topics like earning-to-give, etc.

But EAs should also recognise the importance of telling a good story. For longtermism, this is particularly hard. Showing a video of a starving child tugs on the heartstrings, but how do you do that for future generations? How do you do that for AI safety? I think EAs could spend more time thinking about how to communicate this stuff so that it resonates.

Also focus on the positives. That everyone can be a hero. If you focus on guilt, people switch off.

When I tell people that we're trying to avoid catastrophic risk, they always think I'm talking about climate change.

How can EA better communicate that climate change isn't the only big risk?


MichaelStJules @ 2020-03-04T06:14 (+4)
Many EAs have said that if you're building career capital in a non-effective job, you have to be an exemplary performer in that job. But I think that takes so much effort that you are not able to develop background knowledge and expertise towards your actual effective work.

I think this really depends on the types of roles you're looking for. Project management or operations in industry and then at an EA org or EA-recommended org seems like a good transition.

One example is working hard for bonuses; in my experience, the marginal dollar per hour is very low for bonuses.

Is this true in software or finance? Also, it isn't just bonuses – you get promotions and pay increases.

Arepo @ 2020-03-07T10:52 (+1)

Kudos to 80K for both asking and publishing this. I think I literally agree with every single one of these (quite strongly with most). In particular, the hiring practices criticism - I think there was a tendency, especially with early EA orgs, to hire for EA-ness first and competence/experience second, and that this has led to a sort of hiring-practice lock-in, where they still value those characteristics - if not to the same degree, then with a greater bias than a lean, efficiency-minded org should have.

A related concern is overinterviewing - I read somewhere (unfortunately I can't remember the source) the claim that the longer and more thorough your interview process, the more you select for people who have the willingness (and few enough competing demands on their time) to go through all those steps.

This (if I'm right) would have the quadruple effect of wasting EAs' time (which you'd hope would be counterfactually valuable), wasting the organisations' time (ditto), potentially reducing the fidelity of the hiring practice, and increasing the aforementioned bias towards willingness.

Aaron Gertler @ 2020-03-09T20:47 (+6)

I'd be surprised if Open Philanthropy routinely lost good candidates to the length of their hiring process; if this happened, I don't think it came up in their analysis of their biggest (?) hiring round.

(I do think orgs should be thinking carefully about all the stages of their interview processes and looking for good tradeoffs between time and information on candidates. Open Phil certainly does this already, but I'm not sure about other orgs.)

Empirically, the single hiring process I've run for an EA org didn't lose anyone; every candidate I asked to schedule an interview did so, and every candidate who got through to the second-round work test completed it. 

I think I may have wasted candidates' time in the first round by assigning an editing task that was too long, but the length of that initial test was in line with other industries' initial requirements, and I hope that I "gave back" some EA time by not requiring a formal cover letter, accepting LinkedIn in lieu of resumes, etc.

Arepo @ 2020-06-26T11:15 (+1)

I'm not sure how public the hiring methodology is, but if it's fully public then I'd expect the candidates to be 'lost' before the point of sending in a CV.

If it's less public that would be less likely, though perhaps the best candidates (assuming they consider applying for jobs at all, and aren't always just headhunted) would only apply to jobs that had a transparent methodology that revealed a short hiring process.