The Folly of "EAs Should"
By Davidmanheim @ 2021-01-06T07:04 (+75)
I've seen and heard many discussions about what EAs should do. William MacAskill has ventured a definition of Effective Altruism, and I think it is instructive. Will notes that "Effective altruism consists of two projects, rather than a set of normative claims." One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid. This is a technical point, and one which might seem irrelevant to practical concerns, but I think there are some pernicious consequences of some of the normative claims that get made.
So I think we should discuss why "Effective Altruism" implying that there are specific and clear preferable options for "Effective Altruists" is often harmful. Will's careful definition avoids that harm, and I think it should be taken seriously in that regard.
Mistaken Assertions
Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints. This is not because they cannot be justified, but because they can be strategic mistakes. Specifically, we should be wary of making the project exclusive rather than inclusive.
EA is Young, Small, and Weird
EA is very young. Some find this unsurprising - aren't most radical movements young? Aren't the people most willing to embrace new ideas young? - but I disagree. Many of the most popular movements sweep across age groups. Environmentalism, gay rights, and animal welfare all skewed young, but were increasingly adopted by people of all ages. In part, that is because those movements let people embrace them on their own terms. There is no widespread belief among environmentalists that doctors have wasted their careers focusing on saving lives at the retail level rather than saving the world. There is little reason anyone would hesitate to raise the pride flag because they are not doing enough for the movement. But effective altruism is often perceived differently.
To the extent that EAs embrace a single vision (a very limited extent, to be clear), they often exclude those who differ on details, intentionally or not. "Failing" to embrace longtermism, or ("worse"?) disagreeing about impartiality, is enough to start arguments. Is it any wonder that we have so few people with well-established lives and worldviews willing to consider our project, "the use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources"? Nothing about the project is exclusive - it is the community that creates exclusion. And it would be a shame for people to feel useless and excluded.
Of course, allowing more diversity will allow the ideas of effective altruism to spread - but it will also reduce the tension which seems to exist around disagreeing with the orthodoxy. People debate whether EA should be large and welcoming or small and weird. But as John Maxwell suggests, large and weird might be a fine compromise. We have seen this before: the LGBT movement, now widely embraced, famously urges people to "let your freak flag fly," though the phrase dates back to the 60s counterculture. Neither stayed small and weird, and despite each leading to a culture war, each seems to have been, at least in retrospect, very widely embraced. And neither needed to develop a single coherent worldview to get there; no one would claim that LGBT groups agree with one another on every issue. And despite the fragmentation and arguments, the key messages came through to the broader public just fine.
EA is Already Fragmented
It may come as a surprise to readers of the forum, but many of those pushing forward the EA project are only involved for their pet causes. Animal welfare activists gladly take money, logistical help, strategic guidance, and moral support to do things they have long wanted to do. AI safety researchers may or may not embrace the values of EA, but agree it's a good idea to ensure the world doesn't end in fire. Longtermists, life-extension activists, and biosecurity researchers also have groups and projects which predate EA, and they are happy to have found fellow travelers. None of this is a bad thing.
Even within "central" EA organizations, there are debates about relative priority of different goals. Feel free to disagree with Open Philanthropy's work that prioritizes US Policy - but it's one of their main cause areas. (A fact that has shocked more than one person I've mentioned it to.) Perhaps you think that longtermism is obviously correct, but Givewell focuses mainly on the short term. We are uncertain, as a community, and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions.
Human Variety
Different people have different values, and also have different skills and abilities. When optimizing almost any goal in a complex system, this diversity implies that the optimal path involves some degree of diversity of approaches. That is, most goals are better served by having a diversity of skills available. (Note that this is a positive claim about reality, not a normative one.)
In fact, we find that a diversity of skills is useful for Effective Altruism. Web and graphic designers contribute differently than philosophers and researchers, who contribute differently than operations people, who contribute differently than logistics experts for international shipping, financial analysts, etc. Yes, all of these skills can be paid for on the open market, some more expensively than others, but value alignment cannot be, and the movement benefits greatly from having value-aligned organizations, especially as it grows.
Hearing that being a doctor "isn't EA" is not just unfortunately dismissive, it's dead wrong. Among EA priorities, doctors have important roles to play in biosecurity, in longevity research, and in understanding how to implement the logistics of vaccine programs. In a different vein, if I had been involved and followed EA advice, I might have gone for a PhD in economics, which I already knew I would enjoy less than the public policy PhD I actually pursued. Of course, just as I was graduating, it turned out that EA organizations were getting more interested in policy. That was lucky for me, but unsurprising at a group level; of course disparate skills are needed. And a movement that pushes acquiring a narrow set of skills will, unsurprisingly, end up with a narrow set of skills.
Conclusion
I'm obviously not opposed to every use of the word should, and there really are many generally applicable recommendations. I'm not sure how many of them are specific to EAs - all humans should get enough sleep, and it's usually a good idea for younger people to maximize their career capital and preserve options for the future. 80,000 Hours seems to strike the balance well, but many readers see "recommended career paths" and take it as a far stronger statement than is intended.
The narrow vision that seems common when I talk to EAs, and to non-EAs who have interacted with EAs, is that we have correct answers to hand others. This is unhelpful. Instead, think of EA mentorship and advice as suggestions for those who want to follow a "priority" career path. At the same time, we should focus more on continuing to build a vision for, and paths to, improving the world. Alongside that, we have a mutable and evolving program for doing so, one that should (and will) be informed and advanced by anyone interested in being involved.
Acknowledgements: Thank you to Edo Arad for useful feedback.
Halstead @ 2021-01-06T11:22 (+53)
Thanks for taking the time to put this together.
At the start, you seem to suggest that we should not use 'should' because of moral uncertainty, and then you gloss this as a claim about cooperation. Moral uncertainty is intrapersonal, whereas moral cooperation is interpersonal. It might be the case that my credence is split between Theory 1 and Theory 2, but that everyone else has the exact same credal split. In this case, there is no need for interpersonal cooperation between people with conflicting moral beliefs because there is unanimity. Rather, the puzzle I face is to act under moral uncertainty, which is a very different point.
In general, I think you have raised some sensible considerations about whether and how we might go about making EA more popular, such as around framing. But I think the idea that we should avoid talking about what EAs should do is untenable. Even while writing this comment, I have found it impossible not to say what EAs should do. Indeed, at several points in your post you make normative claims about what EA should do:
- "So I think we should discuss why "Effective Altruism" implying that there are specific and clear preferable options for "Effective Altruists" is often harmful"
- "Specifically, we should be wary of making the project exclusive rather than inclusive."
- In the section on EA being young, small, and weird, your argument is maybe that EA should be big and weird.
- In the section on fragmentation, if I have interpreted you correctly, you are saying some people should not be overconfident about their cause commitments given peer disagreement.
- In the section on human variety, you say that EAs shouldn't have narrow career paths
Without making some normative claims about what EAs should and should not do, I don't see how EA could remain a distinctive movement. I just think it is true that EAs shouldn't donate to their local opera house, pet sanctuary, homeless shelter or to their private school, and that is what makes EA distinctive. Moreover, criticising the cause choices of EA actors just seems fundamental to the project. If our aim is to do the most good, then we should criticise approaches to that that seem unpromising.
As an example, Hauke and I wrote a piece criticising GiveWell's reliance on RCTs. I took this to be an argument about what GiveWell or other EA research orgs should do with their staff time. How would you propose reframing this?
Halstead @ 2021-01-06T11:30 (+7)
I think this is consistent with Will's definition because you can view the 'should' claims as what we should do conditional on us accepting the goal of doing the most good using reason and evidence.
Jakob_J @ 2021-01-07T20:23 (+5)
"I just think it is true that EAs shouldn't donate to their local opera house, pet sanctuary, homeless shelter or to their private school"
This is a very minor point, but I don't quite understand what EA has against cultural establishments like opera houses and museums. Of course, counting by the number of lives saved, one shouldn't donate to museums, but that kind of misses the point: these institutions might be offering free or discounted tickets in exchange for charitable donations. If they switched over to everyone paying full price, they would probably still get similar revenue, but it would be an objectively worse situation, since fewer people would get the chance to visit.
Davidmanheim @ 2021-01-08T10:45 (+22)
If you're pledging 10% of your income to EA causes, none of that money should go to the local opera house or your kid's private school. (And if you instead pledge 50%, or 5%, the same is true of the other 50%, or 95%.)
What you do with the remainder of your money is a separate question - and it has moral implications, but that's a different discussion. I've said this elsewhere, but think it's worth repeating:
Most supporters of EA don't tell people not to go out to nice restaurants and get gourmet food for themselves, or not to go to the opera, or not to support local organizations they are involved with or wish to support, including the arts. The consensus simply seems to be that people shouldn't confuse supporting a local museum with attempting to effectively maximize global good through effective altruism.
Jakob_J @ 2021-01-08T11:40 (+10)
"Most supporters of EA don't tell people not to go out to nice restaurants and get gourmet food for themselves, or not to go the the opera, or not to support local organizations they are involved with or wish to support, including the arts."
Thanks, I agree with this statement! However, Halstead's comment said
"I just think it is true that EAs shouldn't donate to their local opera house, pet sanctuary, homeless shelter or to their private school, and that is what makes EA distinctive."
I think it would be good to be clearer in our communication and say that we don't consider local opera houses, pet sanctuaries, homeless shelters, or private schools to be good cause areas, but that there might be other good reasons for you to donate to them. For example, maybe you like opera and want to help your local opera house survive during the pandemic, or you got a new dog from a pet sanctuary and want to donate some money in return, or perhaps your kid's private school is fundraising for scholarships for disadvantaged students and you want to contribute. In my view, the claim EA is making isn't that we shouldn't donate to these places, just as it's not telling us not to buy a car or go to restaurants, but that your earmarked "EA budget" should be spent on the causes that do the most good.
Davidmanheim @ 2021-01-10T11:01 (+6)
I think it would be good to be clearer in our communication and say that we don't consider local opera houses, pet sanctuaries, homeless shelters, or private schools to be good cause areas, but there might be other good reasons for you to donate to them.
I made a similar claim here, regarding carbon offsets:
https://forum.effectivealtruism.org/posts/brTXG5pS3JgTatP7i/carbon-offsets-as-an-non-altruistic-expense
Harrison D @ 2021-01-08T18:09 (+5)
I think it’s helpful to just put aside the “EA Budget” thread for a moment; I think what Halstead was trying to get at is the idea/argument “If you are trying to maximize the amount of good you do (e.g., from a utilitarian perspective), that will (almost) never involve (substantive) donations to your local opera house, pet shelter, ...” I think this is a pretty defensible claim. The thing is, nobody is a perfect utilitarian; trying to actually maximize good is very demanding, so a lot of people do it within limits. This might relate to the concept of leisure, stress relief, personal enjoyment, etc. which is a complicated subject: perhaps someone could make an argument that having a few local/ineffective donations like you describe is optimal in the long term because it makes you happier with your lifestyle and thus more likely to continue focusing on EA causes... etc. But “the EA (utilitarian) choice” would very rarely actually be to donate to the local opera house, etc.
Jakob_J @ 2021-01-08T20:05 (+14)
Yes, I agree that when we are trying to maximise the amount of good we do with limited resources, these local charities are not likely to be a good target for donations. However, as you mention, EA is different from utilitarianism because we don't believe everyone should use all or most of their resources to do as much good as possible.
So when we spend money on ourselves or others for reasons other than trying to maximise the good this might also include donations to local causes. It seems inconsistent to say that we can spend money on whatever we want for ourselves, but if we choose to spend money on others, it can't be for those in our community.
My point was therefore about communication: it's not correct to say that EAs should never donate to local causes, when what we mean is that donating to local causes is unlikely to bring about the most good (but people might have other reasons for doing so anyway).
Khorton @ 2021-01-08T20:27 (+10)
Yes, I think this point is both important and underrated - we need to stop saying "don't donate to your local theatre" or "don't be a doctor" because actually those are very alienating statements that turn out to be bad advice a lot of the time
Habryka @ 2021-01-09T19:11 (+15)
(I don't know of a practical scenario where either of those turned out to be bad advice, and multiple times when it saved someone from choosing a career that would have been much worse in terms of impact, so I don't think I understand why you think it's bad advice. At least for people I know it seems to have been really good advice, at least the doctor part.)
Khorton @ 2021-01-11T22:28 (+19)
I think there are a lot of people who are already doctors who can use that to do a lot of good, and there are some naive EAs who suggest they should drop their 25 years of medical experience to become a technical AI safety researcher. No! Maybe they should become a public health policy expert; maybe they should keep being a great doctor.
I also think a lot of people value their local community theatre and want it to continue - they enjoy it, it's a hobby. If they and others donate, the theatre continues to exist, otherwise it doesn't. I wouldn't suggest they should become freeriders.
Habryka @ 2021-01-12T06:15 (+12)
I do think anyone who has any decent shot at being an AI Safety researcher should probably stop being a doctor and try doing that instead. I do think that many people don't fit that category, though some of the most prominent doctors in the community who quit their job (Ryan Carey and Gregory Lewis) have fit that bill, and I am exceptionally glad for them to have made that decision.
I don't currently know of a reliable way to actually do a lot of good as a doctor. As such, I don't know why from an impact perspective I should suggest that people continue being a doctor. Of course there are outliers, but as career advice goes, it strikes me as one of the most reliably bad decisions I've seen people make. It also seems from a personal perspective a pretty reliably bad choice, with depression and suicide rates being far above population average.
AGB @ 2021-01-12T08:12 (+35)
The ‘any decent shot’ is doing a lot of work in that first sentence, given how hard the field is to get into. And even then you only say ‘probably stop’.
There’s a motte/bailey thing going on here, where the motte is something like ‘AI safety researchers probably do a lot more good than doctors’ and the bailey is ‘all doctors who come into contact with EA should be told to stop what they are doing and switch to becoming (e.g.) AI safety researchers, because that’s how bad being a doctor is’.
I don’t think we are making the world a better place by doing the second; where possible we should stick to ‘probably’ and communicate the first, nuance and all, as you did do here but as Khorton is noting people often don’t do in person.
Habryka @ 2021-01-13T18:57 (+6)
The "probably" there is just for the case of becoming an AI safety researcher. The argument for why being a doctor seems rarely the right choice does of course not just route through AI Alignment being important. It routes through a large number of alternative careers that seem more promising, many of which are analyzed and listed on 80k's website. That is what my second paragraph was trying to say.
I think if you take into account all of those alternatives, the "probably" turns into a "very likely" and conditioning on "any decent shot" no longer seems necessary to me.
Denise_Melchin @ 2021-01-12T11:17 (+12)
I don't currently know of a reliable way to actually do a lot of good as a doctor.
I do know of such a way, but that might be because we have different things in mind when we say 'reliably do a lot of good'.
Some specialisations for doctors are very high earning. If someone was on the path to being a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy. If they might also do a great job as a quant trader, I would also suggest checking that out. But I doubt most doctors make good quant traders, so it might still be one of the best opportunities for them.
I am less familiar with this and therefore not confident, but there are also some specialisations Doctors Without Borders has a hard time filling (while for others, there is an over-supply). I think this would be worth looking into, as well as other paths to deliver medical expertise in developing countries.
Habryka @ 2021-01-13T00:02 (+9)
Some specialisations for doctors are very high earning. If someone was on the path to being a doctor and could still specialise in one of them, that is what I would suggest as an earning-to-give strategy.
Yeah, I do think this is plausible. When I last did a Fermi estimate on this, I tended to overestimate the lifetime earnings of doctors because I didn't properly account for the many years of additional education required to become one, which often cost a ton of money and of course displace other potential career paths during that same time. So my current guess is that while being a doctor is definitely high-paying, it's not actually that great for EtG.
The key difference here does seem to be whether you are already past the point where you finished your education. Once you have finished med school, or maybe even have your own practice, it's pretty likely that being a doctor will be the best way for you to earn lots of money; but if you are trying to decide whether to become a doctor and haven't started med school, I think it's rarely the right choice from an impact perspective.
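(As a minimal sketch of what such a Fermi estimate might look like - every number below is an invented illustration of the structure of the calculation, not a claim about real salaries, training costs, or discount rates:)

```python
# Toy Fermi estimate: discounted lifetime earnings of a doctor vs. an
# alternative career. All figures are invented for illustration only.

def discounted_earnings(annual_income, start_year, end_year,
                        annual_costs=0.0, discount_rate=0.03):
    """Sum (income - costs) over [start_year, end_year), discounted to year 0."""
    return sum((annual_income - annual_costs) / (1 + discount_rate) ** y
               for y in range(start_year, end_year))

# Doctor: ~8 years of med school + residency (tuition, low stipend),
# then high earnings until retirement. Years counted from age ~22.
doctor = (discounted_earnings(30_000, 0, 8, annual_costs=40_000)
          + discounted_earnings(250_000, 8, 43))

# Alternative career: moderate salary starting immediately.
alternative = discounted_earnings(90_000, 0, 43)

print(f"Doctor:      ${doctor:,.0f}")
print(f"Alternative: ${alternative:,.0f}")
```

Even with made-up figures that favor the doctor, the ratio between the two paths shrinks considerably once the training years are priced in, and the conclusion moves around a lot under different assumptions about salaries and discount rates - which is exactly why this estimate is easy to get wrong.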
Denise_Melchin @ 2021-01-13T08:44 (+8)
Agree with all of the above!
Davidmanheim @ 2021-01-17T11:07 (+6)
I want to point out that there's something unfair that you did here. You pointed out that AI safety is more important, and that there were two doctors who left medical practice. Ryan does AI safety now, but Greg does biosecurity, and frankly, the fact that he has an MD is fairly important for his ability to interact with policymakers in the UK. So one of your examples is at least very weak, if not evidence for the opposite of what you claimed.
"A reliable way to actually do a lot of good as a doctor" doesn't just mean not practicing; many doctors are in research, or policy, making a far greater difference - and their background in clinical medicine can be anywhere from a useful credential to being critical to their work.
Habryka @ 2021-01-17T14:02 (+8)
Huh, I didn't have a sense that Greg's medical degree helped much with his work, but could totally be convinced otherwise.
Thinking more about it, I think I also just fully retract Greg as an example for other reasons. I think for many other people's epistemic states the above goes through, but I wouldn't personally think that he necessarily made the right call.
Khorton @ 2021-01-11T22:33 (+11)
I'm particularly annoyed by this because I've seen this play out in person - I've invited respected professionals to EA events who were seriously disrespected by people with dubious and overconfident ideas.
Davidmanheim @ 2021-01-10T10:58 (+17)
At least for people I know it seems to have been really good advice, at least the doctor part.
It seems like this is almost certain to be true given post-hoc selection bias, regardless of whether or not the advice is good - it doesn't differentiate between worlds where the advice is alienating or bad and some people leave the community, and worlds where it is good.
Linch @ 2021-01-10T20:32 (+16)
I'm generally leery of putting words in other people's mouths, but perhaps people are using "bad advice" to mean different things, or at least have different central examples in mind.
There are at least 3 possible interpretations of what "bad advice" can mean here:
A. Advice that, if some fraction of people are compelled to follow it across the board, can predictably lead to worse outcomes than if the advice isn't followed.
B. Advice that, if followed by people likely to follow such advice, can predictably lead to worse outcomes than if the advice isn't followed.
C. Words that can be in some sense considered "advice" that have negative outcomes/emotional affect upon hearing these words, regardless of whether such advice is actually followed.
Consider the following pieces of "advice":
1. You should self-treat covid-19 with homeopathy.
2. You should eat raw lead nails.
#1 will be considered "bad advice" in all 3 interpretations (it will be bad if everybody treats covid-19 with homeopathy (A), it will be bad if people especially susceptible to homeopathic messaging treat covid-19 with homeopathy (B), and also I will negatively judge someone for recommending self-treatment with homeopathy (C)).
#2 is only "bad advice" in at most 2 of the interpretations (forcibly eating raw lead nails is bad(A), but realistically I don't expect anybody to listen to such "recommendations" ( B), and this advice is so obviously absurd that context will determine whether I'd be upset about this suggestion (C)).
In context here, if Habryka (and for that matter me) doesn't know any EA ex-doctors who regret no longer being a doctor (whereas he has positive examples of EA ex-doctors who do not regret this), this is strong evidence that telling people to not be doctors is good advice under interpretation B*, and moderate-weak evidence that it's good advice under interpretation A.
(I was mostly reading "bad advice" in the context of B and maybe A when I first read these comments).
However, if David/Khorton interpret "bad advice" to mean something closer to C, then it makes more sense why not knowing a single person harmed by following such advice is not a lot of evidence for whether the advice is actually good or bad.
* I suppose you can posit a selection-effected world where there's a large "dark matter" of former EAs/former doctors who quit the medical profession, regretted that choice, and then quit EA in disgust. This claim is not insane to me, but will not be where I place the balance of my probabilities.
Khorton @ 2021-01-11T22:38 (+6)
Thanks this is very clear! Yes, I was thinking of outcome C - I've seen people decide not to get involved with the EA community because strangers repeatedly gave them advice they found offensive.
I think the world would be better if we didn't regularly offend respected professionals, even if it's been very helpful for 5 or 10 people - and I imagine those 5 or 10 people might have transitioned from medicine anyway when given the arguments, without their being framed as quite such a definitive answer.
Habryka @ 2021-01-11T02:54 (+11)
Yeah, I do think the selection effects here are substantial.
I do think I can identify multiple other very similarly popular pieces of advice that did turn out to be bad reasonably frequently, and caused people to regret their choices, which is evidence the selection effects aren't completely overdetermining the outcome.
Concretely, I think I know of a good number of people who regret taking the GWWC pledge, a good number of people who regret trying to get an ML PhD, and a good number of people who regret becoming active in policy. I do think those pieces of advice are a bit more controversial than the "don't become a doctor" advice within the EA Community, so the selection effects are less strong, but I do think the selection effects are not strong enough to make reasoning from experience impossible here.
Khorton @ 2021-01-11T22:39 (+4)
To be clear, I wasn't aiming to criticize "don't become a doctor", but rather "don't continue to be a doctor."
Aaron Gertler @ 2021-01-19T08:16 (+11)
I don't know of a practical scenario where either of those turned out to be bad advice
(I don't mean to pick too hard on this point, which is generally pointing at something true, but a counterexample sprang immediately to mind when I read it.)
I know one medical student who wound up perceiving EA somewhat negatively after reading 80K's early writing on the perils of being a doctor. This person is still fairly value-aligned and makes EA donations, but I saw them engage with the community much less than I'd have otherwise expected, because they thought they would face judgment for their career path and choices. (Even without being an EA specialist, this person is smart and capable and could have made substantial community contributions.)
This person would almost certainly have had greater impact in EA-aligned operations or research, but they'd also dreamed of becoming a doctor since early childhood, and their relationship with their family was somewhat contingent on their following through on those dreams. (A combination of "Mom and Dad would be heartbroken if I chose a different career with no status in their community" and "I want to have a high-paying job so I can provide financial support to my family later").
Hence the strong reaction to the idea of a movement where being a doctor was a slightly odd, suspicious thing to do (at least, that was their takeaway from the 80K piece, and I found the impression hard to shake).
This kind of story may be unusual, but I consider it to be one practical example of a time when the advice "don't become a doctor" led to a bad result -- though it's arguable whether this makes it "bad advice" even in that one case.
Habryka @ 2021-01-19T19:37 (+8)
Yeah, I feel like this should just be screened off by whether it is indeed good or bad career advice.
Like, if something is good career advice, I think we should tell people even if they don't like hearing it, and if something is bad career advice, we should tell people that even if they really want it to be true. That's a general stance I seem to disagree with lots of EAs on, but at least for me, it isn't very cruxy whether anyone didn't like what that advice sounded like.
Aaron Gertler @ 2021-01-19T20:19 (+19)
I don't disagree with elements of this stance -- this kind of career advice is probably strongly positive-EV to share in some form with the average medical student.
But I think there's a strong argument for at least trying to frame advice carefully if you have a good idea of how someone will react to different frames. And messages like "tell people X even if they don't like hearing it" can obscure the importance of framing. I think that what advice sounds like to people can often be decisive in how they react, even if the most important thing is actually giving the good advice.
Habryka @ 2021-01-19T21:44 (+7)
Yep, I totally agree.
Marginal effort on presenting the information better is totally valuable, and there is of course some level of bad presentation where improving your presentation should be a higher priority than improving your accuracy, but my guess is that in this case we are far from the relevant thresholds, and I would generally want us to value marginal accuracy quite a bit more highly than marginal palatability.
Davidmanheim @ 2021-01-20T18:14 (+2)
Strongly endorsed.
Larks @ 2021-01-10T18:10 (+7)
we need to stop saying "don't donate to your local theatre" ... because actually [that is] bad advice a lot of the time
I'm surprised you would say this - I would expect that not donating to a local theatre would have basically no negative effects for most people. I can see an argument for phrasing it more delicately - e.g. "I wouldn't donate to a local theatre because I don't think it will really help make the world a better place" - but I would be very surprised if it was actually bad advice. Most people who stop donating to a charity suffer essentially no negative consequences from doing so.
Khorton @ 2021-01-11T22:31 (+18)
I don't think donating to a theatre is done in order to "make the world a better place"; I think it's done to be able to continue to have access to a community resource you enjoy and to build your reputation in your community. It's actually a really bad idea for EAs to become known as a community of free riders.
And ultimately, it should be that person's choice - if you don't know much about their life, why would you tell them what part of their budget they should replace in order to increase donations to top causes? It's better to donate 10% to effective charities and continue donating to local community organisations than to donate 10% to effective charities and spend the rest on fast food, in my view, but ultimately it's none of my business!
Linch @ 2021-01-10T20:32 (+11)
It has a negative effect on the local theater, but hopefully a positive effect on the counterfactual recipients of that money.
Davidmanheim @ 2021-01-06T16:39 (+4)
I am not suggesting avoiding the word "should" generally, as I said in the post. I thought it was clear that I am criticizing something I keep seeing, which is harmful: overly narrowing the ideal of what is and is not EA, and unreasonably narrowing what is normatively acceptable within the movement. I think it's clear that we can avoid this narrowing without claiming that everything is EA, or refraining from making normative statements altogether.
Regarding criticising GiveWell's reliance on RCTs, I think there is room for a diversity of opinion. It's certainly reasonable to claim that, as a matter of decision analysis, non-RCT evidence should be considered, and that risk-neutrality and unbiased decision-making require treating less convincing evidence as valid, if weaker. (I'm certainly of that opinion.)
On the other hand, there is room for some effective altruists who prefer to be somewhat risk-averse to correctly view RCTs as more certain evidence than most other forms, and to prefer interventions with clear evidence of that sort. So instead of saying that GiveWell should not rely as heavily on RCTs, or that EA organizations should do other things, I think we can, and should, make the case that there is an alternative approach which treats RCTs as only a single type of evidence, and that the views of GiveWell and similar EA orgs are not the only valid way to approach effective giving. (And I think that this view is at least understood, and partly shared, by many EA organizations and individuals, including many at GiveWell.)
Halstead @ 2021-01-06T17:39 (+19)
Hi, thanks for the reply!
The argument now has a bit of a motte and bailey feel, in that case. In various places you make claims such as:
- "The Folly of "EAs Should"
- "One consequence of this is that if there are no normative claims, any supposition about what ought to happen based on EA ideas is invalid";
- "So I think we should discuss why Effective Altruism implying that there are specific and clear preferable options for Effective Altruists is often harmful";
- "Claiming something normative given moral uncertainty, i.e. that we may be incorrect, is hard to justify. There are approaches to moral uncertainty that allow a resolution, but if EAs should cooperate, I argue that it may be useful, regardless of normative goals, to avoid normative statements that exclude some viewpoints."
- "and conclusions based on the suppositions about key facts are usually unwarranted, at least without clear caveats about the positions needed to make the conclusions"
These seem to be claims to the effect that (1) we should (almost) never make normative claims, and (2) we should be strongly sceptical about knowing that one path is better from an EA point of view than another. But I don't see a defence of either of these claims in the piece. For example, I don't see a defence of the claim that it is mistaken to think/say/argue that focusing on US policy or on GiveWell charities is not the best way to do the most good.
If the claim is the weaker one that EAs can sometimes be overconfident in their view of the best way forward, or use language that can be off-putting, then that may be right. But that seems different to the "never say that some choices EAs make are better than others" claim, which is suggested elsewhere in the piece.
Davidmanheim @ 2021-01-08T10:38 (+4)
I think I agree with you on the substantive points, and didn't think that people would misread it as making the bolder claim if they read the post, given that I caveated most of the statements fairly explicitly. If this was misleading, I apologize.
Halstead @ 2021-01-11T12:01 (+4)
I don't think there's any need to apologise! I was trying to make the case that I don't think you showed how we could distinguish reasonable and unreasonable uses of normative claims.
jlemien @ 2021-01-28T00:41 (+14)
I'm glad that you mentioned EA being young. I am in my 30s and fairly new to reading about EA, having just started to read books and forum posts within the past month or two. The movement's youth is very surprising to me, and it is something I've been thinking about for the past few days. This isn't a well-thought-out thesis, but I will share my rough thoughts:
First, if a middle aged person sees a community of a bunch of 20 somethings, he/she will likely conclude "this isn't meant for me" and walk away (even if the 20 somethings are friendly). Thus, potential collaborators are turned off.
Second, there are a lot of biases and blind spots that younger people will have simply due to a lack of life experience. Maybe it is about having children, about the stress of meeting financial commitments without help from scholarships or parents, or simply about having a more "zoomed out" view of events which allows one to recognize patterns. Often it is simply the empathy and understanding that comes from having encountered situations over the course of one's life. Thus, I'd suggest that the EA movement is missing out on perspectives/wisdom/experience due to the demographics skewing so young.
That being said, I understand that equality and representation are not core values of the EA community. So I'm not sure where it leaves me, but I'll keep mulling over it.
(there is also the idea of the demographics skewing toward upper class and the perspectives/biases that come as a result, as it seems most EAs are able to afford a university education and many are able to afford to start their own organization immediately after finishing their education without any work experience, but since that isn't related to age I'll set that aside for now)
Davidmanheim @ 2021-01-28T08:26 (+14)
As someone in their late 30s with kids often identified as one of the "older" EAs, I strongly agree with this.
And to quote Monty Python: "I'm thirty seven - I'm not old!"
G Gordon Worley III @ 2021-01-06T18:59 (+12)
I often make an adjacent point to folks, which is something like:
EA is not all one thing, just like the economy is not all one thing. Just as civilization as we know it doesn't work unless we have people willing to do different things for different reasons, EA depends on different folks doing different things for different reasons to give us a rounded out basket of altruistic "goods".
Like, if everyone thought saltine crackers were the best food and everyone competed to make the best saltines, we'd ultimately all be pretty disappointed that we had a mountain of amazing saltine crackers and literally nothing else. So even in a world where saltines really are the best food and generate the most benefit by their production, it makes sense to instrumentally produce other things so we can enjoy our saltines in full.
I think the same is true of EA. I care a lot about AI x-risk and it's what I focus on, but that doesn't mean I think everyone should do the same. In fact, if they did, I'm not sure it would be so good, because then maybe we stop paying attention to other causes that, if we don't address them, end up making trying to address AI risks moot. I'm always very glad to see folks working on things, even things I don't personally think are worthwhile, both because of uncertainty about what is best and because there's multiple dimensions along which it seems we can optimize (and would be happy if we did).
Davidmanheim @ 2021-01-08T10:53 (+5)
Strongly agree with the substance of your adjacent point, and with the desire for a well-rounded world. I think it's a different thread of thought than mine, but it is worth being clear about as well. And see my reply to Jakob_J elsewhere in the comments, here, for how I think that can work even for individuals.
Luke Freeman @ 2021-01-22T01:45 (+9)
I really liked this post. I especially liked the comparison with the LGBT+ movement, and the point that being large and weird is also fine 😀
rorty @ 2021-01-06T17:37 (+7)
I take this post to raise both practical/strategic and epistemological/moral reasons to think EAs should avoid being too exclusive or narrow in what they say "EAs should do." Some good objections have been raised in the comments already.
Is it possible this post boils down to shifting from saying what EAs should do to what EAs should not do?
That sounds maybe intuitively unappealing and un-strategic because you're not presenting a compelling, positive message to the outside world. But I don't mean literally going around telling people what not to do. I mean focusing on shifting people away from clearly bad or neutral activities toward positive ones, rather than focusing so much on what the optimal paths are. I raised this before in my "low-fidelity EA" comment: https://forum.effectivealtruism.org/posts/6oaxj4GxWi5vuea4o/what-s-the-low-resolution-version-of-effective-altruism?commentId=9AsgNmts2JqibdcwY
Even if you don't think there are epistemological/moral reasons for this, there may be practical/strategic ones: A large movement that applies rationality and science to encourage all its participants to do some good may do a lot more good than a small one that uses it to do the most good.
Davidmanheim @ 2021-01-08T10:50 (+4)
I think that negative claims are often more polarizing than positive ones, but I agree that there is a reason to advocate for a large movement that applies science and reasoning to do some good. I just think it already exists, albeit in a more dispersed form than a single "EA-lite." (It's what almost every large foundation already does, for example.)
I do think that there is a clear need for an "EA-Heavy," i.e. core EA, in which we emphasize the "most" in the phrase "do the most good." My point here is that this core group should be more willing to allow for diversity of action and approach. And in fact, I think the core of EA - the central thinkers and planners, people at CEA, GiveWell, Oxford, etc. - already advocates this. I just don't think the message has been communicated as clearly as possible to everyone else.
lukasberglund @ 2021-01-06T11:51 (+6)
[Comment pointing out a minor error] Also, great post!
Davidmanheim @ 2021-01-06T16:42 (+3)
Whoops! My apologies to both individuals - this is now fixed. (I don't know what I was looking at when I wrote this, but I vaguely recall that there was a second link which I was thinking of linking to which I can no longer find where Peter made a similar point. If not, additional apologies!)