What "Effective Altruism" Means to Me
By Richard Y Chappell @ 2024-06-14T18:13 (+57)
This is a linkpost to https://www.goodthoughts.blog/p/what-effective-altruism-means-to
I previously included a link to this as part of my trilogy on anti-philanthropic misdirection, but a commenter asked me to post the full text here for the automated audio conversion. Apologies to anyone who has already read it.
As I wrote in "Why Not Effective Altruism?", I find the extreme hostility towards effective altruism from some quarters to be rather baffling. Group evaluations can be vexing: perhaps what the critics have in mind when they hate on EA has little or no overlap with what I have in mind when I support it? It's hard to know without getting into details, which the critics rarely do. So here are some concrete claims that I think are true and important. If you disagree with any of them, I'd be curious to hear which ones, and why!
What I think:
- It's good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
- It's good and virtuous to want to help others effectively: to help more rather than less with one's efforts.
- We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).
- In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
- In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what's decision-relevant.)
- Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
- So it's good and virtuous to use quantitative tools and evidence wisely.
- GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
- So it's good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
- There's no good reason to think that GiveWell's top charities are net harmful.[1]
- But even if you're the world's most extreme aid skeptic, it's clearly good and virtuous to voluntarily redistribute your own wealth to some of the world's poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
- Many are repelled by how "hands-off" effective philanthropy is compared to (e.g.) local volunteering. But it's good and virtuous to care more about saving and improving lives than about being hands-on. To prioritize the latter over the former would be morally self-indulgent.
- Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value "sure things". In such cases, this is worth doing.
- Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
- The above point encompasses much relating to politics and "systemic change", in addition to longtermist long-shots. It's very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy; just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
- Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
- In general, I don't think that doing good through one's advocacy should be treated as a substitute for "putting one's money where one's mouth is". It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I'm completely open to judging political donations (when epistemically justified) as constituting "effective philanthropy"; I don't think we should put narrow constraints on the latter concept, or limit it to traditional charities.
- Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact. (A toy numerical sketch follows at the end of this list.)
- It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.
- Ethical cosmopolitanism is correct: It's better and more virtuous for one's sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.
- Insofar as one's natural sympathy falls short, it's better and more virtuous to at least be "continent" (as Aristotle would say) and allow one's reason to set one on the path that the fully virtuous agent would follow from apt feelings.
- Since we can do so much good via effective donations, we have, in principle, excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
- Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)
- Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
- When the stakes are high, there are no "safe" options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell's top charities, would make you causally responsible for approximately ten deaths every year. That's really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
- Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don't, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
- Deliberately or negligently making the world worse is vicious, bad, and wrong.
- Most (all?) of us are not as effectively beneficent as would be morally ideal.
- Our moral motivations are very shaped by social norms and expectations: by community and culture.
- This means it is good and virtuous to be public about one's efforts to do good effectively.
- If there's a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
- In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
- For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
- That's what the "Effective Altruism" community constitutively aims to do.
- It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
- Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
- Such reflection has indeed happened. (I don't know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It's not entirely new, of course: SBF's fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more "professional" in various ways.)
- No community is foolproof against bad actors. It would not be fair or reasonable to tar others with "guilt by association", merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
- The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
- The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
- If you find the EA community annoying, it's fine to say so (and reject the "EA" label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don't want to be associated with them.
- None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
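To make the expected-value points above concrete, here is a minimal sketch (my illustration, not from the original post; the $5,000-per-life figure is an assumption, though it is consistent with the post's own arithmetic of $50k per year saving roughly ten lives, and the long-shot numbers are made up):

```python
# Toy expected-value calculations illustrating two points from the list above.
# All numbers are illustrative assumptions, not GiveWell's or the author's figures.

# "No safe options": discouraging a $50k/year donor forgoes roughly ten lives a year,
# assuming ~$5,000 to save one life via a GiveWell top charity.
cost_per_life_saved = 5_000
annual_donation = 50_000
print(f"Lives forgone per year: {annual_donation / cost_per_life_saved:.0f}")  # 10

# "Hits-based giving": a long shot can beat a sure thing in expectation.
sure_thing_value = 10.0       # units of good, delivered with certainty
long_shot_value = 1_000.0     # units of good if the long shot pays off
long_shot_probability = 0.02  # assumed 2% chance of success

ev_long_shot = long_shot_probability * long_shot_value
print(f"EV of one long shot: {ev_long_shot:.0f} vs sure thing: {sure_thing_value:.0f}")

# A portfolio of 20 independent long shots has expected value 20 * 20 = 400,
# versus 200 from 20 sure things, with about a 1-in-3 chance of at least one hit.
p_at_least_one_hit = 1 - (1 - long_shot_probability) ** 20
print(f"P(at least one hit in 20 long shots): {p_at_least_one_hit:.2f}")  # ~0.33
```

None of this settles which real-world trade-offs are worth it; it just shows how the expected-value framing makes those trade-offs explicit.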
Things I don't think
I don't think:
- that people should blindly follow crude calculations, or otherwise attempt to directly implement ideal theory in practice without properly taking into account our cognitive limitations
- that you're obligated to dedicate your entire life to maximizing the good, neglecting your loved ones and personal projects. (The suggestion is just that it would be good and virtuous for advancing impartially effective beneficence to be among one's life projects.)
- that we should care about numbers rather than people (rather, as suggested above, I think we should use numbers as a tool to enable us to help more people)
- that we should completely ignore present-day needs in pursuit of tiling the universe with digital experience machines
- that double-or-nothing existence gambles are worth taking (see the toy simulation after this list)
- that inexperienced, self-styled "rationalist" EAs are thereby competent to run important organizations (just based on a priori first principles)
- that you should trust someone with great power (e.g. unregulated control of AI) just because they identify as an "EA" (let alone a "rationalist").
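As a rough illustration of the double-or-nothing point (my own toy simulation; the 2.1x multiplier and every other number here are made-up assumptions): each such gamble can be positive in naive expected value, yet taking them repeatedly leads to losing everything almost surely.

```python
import random

# Toy simulation: repeatedly take a gamble that multiplies current value by 2.1
# with probability 0.5 and loses everything otherwise. Each gamble has expected
# multiplier 0.5 * 2.1 = 1.05 > 1, so a naive EV-maximizer accepts it every time.

random.seed(0)

def play(initial_value: float, n_gambles: int) -> float:
    value = initial_value
    for _ in range(n_gambles):
        if random.random() < 0.5:
            value *= 2.1   # the "double" branch
        else:
            return 0.0     # the "nothing" branch: everything is lost
    return value

trials = 100_000
survivors = sum(1 for _ in range(trials) if play(1.0, 20) > 0)
print(f"Fraction surviving 20 gambles: {survivors / trials:.6f}")  # ~0.5**20, i.e. ~1e-6
print(f"Naive EV multiplier after 20 gambles: {1.05 ** 20:.2f}")   # ~2.65
```

The expected value stays positive throughout, but the probability of ruin approaches 1; that gap is one reason not to treat expected value as the whole story.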
Conclusion: Beware of Stereotypes
A few months ago, Dustin Moskovitz (the billionaire funder behind Open Philanthropy) wrote some very thoughtful reflections on "the long journey to doing good better". I highly recommend it. I was especially taken by his comments on why outside perceptions of a movement can seem so alien to those within it:
When a group has a shared sense of identity, the people within it are still not all one thing, a homogenous group with one big set of shared beliefs, and yet they often are perceived that way. Necessarily, the way that you engage in characterizing a group is by giving it broad, sweeping attributes that describe how the people in the group are similar, or distinctive relative to the broader world. As an individual within a group trying to understand yourself, however, this gets flipped, and you can more easily see how you differ. Any one of those sweeping attributes do apply to some of the group, and it's hard to identify with the group when you clearly don't identify with many of the individuals, in particular the ones with the strongest beliefs. I often observe that the people with the most fringe opinions inside a group paradoxically get the most visibility outside the group, precisely because they are saying something unfamiliar and controversial.
(Though I also think that critics often just straw man their targets.)
Anyway, I hope my above listing proves illuminating to some. I would be especially curious to hear from the haters of EA about which numbered points they actually disagree with (and why).[3] Perhaps there will turn out to be such fundamental disagreements that reasoned conversation is pointless? But you never know until you try.
[1] For example, what empirical evidence we have on the question suggests that Deaton's speculative worries about political accountability are easily addressed: "Political accountability is not necessarily undermined by foreign aid: even illiterate and semi-literate folks in rural Bangladesh appear to be quite sophisticated about how they evaluate their leaders, given the information they possess. Further, any unintended negative accountability consequences were effectively countered by a simple, scalable information campaign."
[2] Not to mention the standard practical advice of the utilitarian tradition, as I've known ever since I was an undergrad (sadly many senior philosophers persist in misrepresenting it).
[3] To explain my curiosity: most anti-EA criticism I've come across to date, especially by philosophers, has struck me as ~~painfully stupid~~ entirely missing the point. It doesn't help that it's all so unrelentingly hostile, which makes me question whether it's in good faith, as it prima facie seems a rather inexplicably vicious attitude to take towards people who are trying to do good, often at significant personal cost! If any critics reading this are capable of explaining their precise disagreements with me (not an imagined straw-EA) in a civil tone, I'd be delighted to hear it.
titotal @ 2024-06-14T22:52 (+31)
You've caught me stuck in bed, and I'm probably the most EA-critical person that regularly posts here, so I'll take a stab at responding point by point to your list:
- It's good and virtuous to be beneficent and want to help others, for example by taking the Giving What We Can 10% pledge.
1. Agree.
- It's good and virtuous to want to help others effectively: to help more rather than less with one's efforts.
2. Agree.
- We have the potential to do a lot of good in the face of severe global problems (including global poverty, factory-farmed animal welfare, and protecting against global catastrophic risks such as future pandemics).
3. Agree on global poverty and animal welfare, but I think it might be difficult to do "a lot of good" in some catastrophic risk areas.
- In all these areas, it is worth making deliberate, informed efforts to act effectively. Better targeting our efforts may make even more of a difference than the initial decision to help at all.
4. Agreed, although I should note that efforts at better targeting can have diminishing returns, especially when a problem is speculative and not well understood.
- In all these areas, we can find interventions that we can reasonably be confident are very positive in expectation. (One can never be so confident of actual outcomes in any given instance, but being robustly positive in prospect is what's decision-relevant.)
5. Agreed for global poverty and animal welfare, but I'm mixed on this for speculative causes like AI risk, where there's a decent chance that efforts could backfire and make things worse, and there's no real way to tell until after the fact.
- Beneficent efforts can be expected to prove (much) more effective if guided by careful, in-depth empirical research. Quantitative tools and evidence, used wisely, can help us to do more good.
6. Agreed. Unfortunately, EA often fails to live up to this idea.
- So it's good and virtuous to use quantitative tools and evidence wisely.
7. Agreed, but see above.
- GiveWell does incredibly careful, in-depth empirical research evaluating promising-seeming global charities, using quantitative tools and evidence wisely.
8. Agreed, I like GiveWell in general.
- So it's good and virtuous to be guided by GiveWell (or comparably high-quality evaluators) rather than less-effective alternatives like choosing charities based on locality, personal passion, or gut feelings.
9. Agreed, with regard to the areas GiveWell specialises in.
- There's no good reason to think that GiveWell's top charities are net harmful.[1]
10. I think the chance that GiveWell's top charities are net good is very high, but not 100%. See mosquito-net fishing for a possible pitfall.
- But even if you're the world's most extreme aid skeptic, it's clearly good and virtuous to voluntarily redistribute your own wealth to some of the world's poorest people via GiveDirectly. (And again: more good and virtuous than typical alternatives.)
11. Agreed.
- Many are repelled by how "hands-off" effective philanthropy is compared to (e.g.) local volunteering. But it's good and virtuous to care more about saving and improving lives than about being hands-on. To prioritize the latter over the former would be morally self-indulgent.
12. Agreed, but sometimes being hands-on can be helpful with improving lives. For example, being hands-on can allow one to more easily receive feedback and understand overlooked problems with an intervention, and to ensure it goes to the right place. I don't think voluntourism is good at this, but I would like to see support for more grassroots projects by people actually from impoverished communities.
- Hits-based giving is a good idea. A portfolio of long shots can collectively be likely to do more good than putting all your resources into lower-expected-value "sure things". In such cases, this is worth doing.
13. I agree in principle, but disagree in practice, given that the "hits-based giving" of EA can be pretty bad. The effectiveness of hits-based giving very much depends on how much each miss costs and the likely effectiveness of a hit. I don't think the $100,000 grant for a failed video game was a good idea, nor the $28,000 to print out Harry Potter fanfiction that was free online anyway.
- Even in one-off cases, it is often better and more virtuous to accept some risk of inefficacy in exchange for a reasonable shot at proportionately greater positive impact. (But reasonable people can disagree about which trade-offs of this sort are worth it.)
14. This is so broad as to be trivially true, but in practice I often disagree with the judgements here.
- The above point encompasses much relating to politics and "systemic change", in addition to longtermist long-shots. It's very possible for well-targeted efforts in these areas to be even better in expectation than traditional philanthropy; just note that this potential impact comes at the cost of both (i) far greater uncertainty, contestability, and potential for bias; and often (ii) potential for immense harm if you get it wrong.
15. Generally agree.
- Anti-capitalist critics of effective altruism are absurdly overconfident about the value of their preferred political interventions. Many objections to speculative longtermism apply at least as strongly to speculative politics.
16. Anti-capitalist is a pretty broad tent. I agree that some people who adopt that label are dumb and naive, but others have pretty good ideas. I think it would be really dumb if capitalism was still the dominant system 1000 years from now, and there are political interventions that can be predicted to reliably help people. I think "overthrow the government for communism" gets the side-eye: "universal healthcare" does not.
- In general, I don't think that doing good through one's advocacy should be treated as a substitute for "putting one's money where one's mouth is". It strikes me as overly convenient, and potentially morally corrupt, when I hear people (whether political advocates or longtermists) excusing not making any personal financial sacrifices to improve the world, when we know we can do so much. But I'm completely open to judging political donations (when epistemically justified) as constituting "effective philanthropy"; I don't think we should put narrow constraints on the latter concept, or limit it to traditional charities.
Some people are poor and cannot contribute much without kneecapping themselves. I don't think those people are useless, and I think for a lot of people political action is a rational choice for how to effectively help. Similarly, some people are very good at political action, but not so good at making large amounts of money, and they should do the former, not the latter.
- Decision theory provides useful tools (in particular, the concept of expected value) for thinking about these trade-offs between certainty and potential impact.
I agree it provides useful tools. But if you take the tools like expected value too seriously, you end up doing insane things (see SBF). In general EA is way too willing to swallow the math even when it gives bad results.
- It would be very bad for humanity to go extinct. We should take reasonable precautions to try to reduce the risk of this.
Agreed, depending on what you mean by "reasonable".
- Ethical cosmopolitanism is correct: It's better and more virtuous for one's sympathy to extend to a broader moral circle (including distant strangers) than to be narrowly limited. Entering your field of sight does not make someone matter more.
Agreed, with the caveat that we are talking about beings that currently exist or have a high probability of existing in the future.
- Insofar as one's natural sympathy falls short, it's better and more virtuous to at least be "continent" (as Aristotle would say) and allow one's reason to set one on the path that the fully virtuous agent would follow from apt feelings.
The term "fully virtuous agent" raises my eyebrows. I don't think that's a thing that can actually exist.
- Since we can do so much good via effective donations, we have, in principle, excellent moral reason to want to make more money (via permissible means) in order to give it away to these good causes.
Agreed, with emphasis on the "permissible means".
- Many individuals have in fact successfully pursued this path. (So Rousseauian predictions of inevitable corruption seem misguided.)
It looks like it, although of course this could be negated if they got their fortunes from more harmful than average means. I don't see evidence that this is the case for these examples.
- Someone who shares all my above beliefs is likely to do more good as a result. (For example, they are likely to donate more to effective charities, which is indeed a good thing to do.)
Agreed.
- When the stakes are high, there are no "safe" options. For example, discouraging someone from earning to give, when they would have otherwise given $50k per year to GiveWell's top charities, would make you causally responsible for approximately ten deaths every year. That's really bad! You should only cause this clear harm if you have good grounds for believing that the alternative would be even worse. (If you do have good grounds for thinking this, then of course EA principles support your criticism.)
Agreed, although I'll note that from my perspective, persuading an EAer to donate to AI x-risk instead of GiveWell will have a similar effect, and should be subjected to the same level of scrutiny.
- Most public critics of effective altruism display reckless disregard for these predictable costs of discouraging acts of effective altruism. (They don't, for example, provide evidence to think that alternative acts would do more good for the world.) They are either deliberately or negligently making the world worse.
Agreed for some critiques of GiveWell/AMF in particular, like the recent Time article. However, I don't think this applies to critiques of AI x-risk, because I don't think AI x-risk charities are effective. If that turns them away and they donate to Oxfam or something instead, that is a net good.
- Deliberately or negligently making the world worse is vicious, bad, and wrong.
Agreed.
- Most (all?) of us are not as effectively beneficent as would be morally ideal.
Agreed
- Our moral motivations are very shaped by social norms and expectations: by community and culture.
Agreed
- This means it is good and virtuous to be public about one's efforts to do good effectively.
Generally agreed.
- If there's a risk that others will perceive you negatively (e.g. as boastful), accepting this reputational cost for the sake of better promoting norms of beneficence is even more virtuous. Staying quiet for fear of seeming arrogant or boastful would be selfish in comparison.
Agreed
- In principle, we should expect it to be good for the world to have a community of do-gooders who are explicitly aiming to be more effectively beneficent, together.
Agreed, but "in principle" is doing a lot of work here. I think the initial Bolshevik party broadly fit this description, for an example of how this could go wrong.
- For most individuals: it would be good (and improve their moral character) to be part of a community whose culture, social norms, and expectations promoted greater effective beneficence.
Depends on which community we are talking about. See again: the Bolsheviks.
- That's what the "Effective Altruism" community constitutively aims to do.
Agreed.
- It clearly failed in the case of SBF: he seems to have been influenced by EA ideas, but his fraud was not remotely effectively beneficent or good for the world (even in prospect).
Agreed on all statements.
- Community leaders (e.g. the Centre for Effective Altruism) should carefully investigate / reflect on how they can reduce the risk of the EA community generating more bad actors in future.
Agreed.
- Such reflection has indeed happened. (I don't know exactly how much.) For example, EA messaging now includes much greater attention to downside risks, and the value of moral constraints. This seems like a good development. (It's not entirely new, of course: SBF's fraud flagrantly violated extant EA norms;[2] everyone I know was genuinely shocked by it. But greater emphasis on the practical wisdom of commonsense moral constraints seems like a good idea. As does changing the culture to be more "professional" in various ways.)
There have definitely been some reflections and changes, many of which I approve of. But it has not been smooth sailing, and I think the response to other scandals leaves a lot to be desired. It remains to be seen whether ongoing efforts are enough.
- No community is foolproof against bad actors. It would not be fair or reasonable to tar others with "guilt by association", merely for sharing a community with someone who turned out to be very bad. The existence of SBF (n=1) is extremely weak evidence that EA is generally a force for ill in the world.
I agree that individuals should not be tarred by SBF, but I don't think this same protection applies to the movement as a whole. We care about outcomes. If a fringe minority does bad things, those things still occur. SBF conducted one of the largest frauds in history: you don't see Oxfam having this kind of effect. It's n=1 for billion-dollar frauds, but the n is a lot higher if we consider abuse of power, sexual harassment, and other smaller harms.
The more power and influence EA amasses, the more appropriate it is to be concerned about bad things within the community.
- The actually-existing EA community has (very) positive expected value for the world. We should expect that having more people exposed to EA ideas would result in more acts of (successful) effective beneficence, and hence we should view the prospect favorably.
I think EA has totally flubbed on AI x-risk. Therefore, if I have the choice between recommending EA in general or just GiveWell's top charities, doing the latter will be better.
- The truth of the above claims does not much depend upon how likeable or annoying EAs in general turn out to be.
Agreed, but certain types of dickish behaviour are a flaw of the community that has a detrimental effect on its health, and makes its decision-making and effectiveness worse.
- If you find the EA community annoying, it's fine to say so (and reject the "EA" label), but it would still be good and virtuous to practice, and publicly promote, the underlying principles of effective beneficence. It would be very vicious to let children die of malaria because you find EAs annoying or don't want to be associated with them.
Agreed. I generally steer people to GiveWell or its charities, rather than to EA as a whole.
- None of the above assumes utilitarianism. (Rossian pluralists and cosmopolitan virtue ethicists could plausibly agree with all the relevant normative claims.)
I think some of the claims are less valuable outside of utilitarianism, but whatever.
With that all answered, let me add my own take on why I don't recommend EA to people anymore:
I think that the non-speculative side of EA (global poverty and animal welfare) is nice and good, and is on net making the world a better place. I think the speculative side of EA, and in particular AI risk, contains some reasonable people, but also enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into net-negative territory.
Most of this bad thinking originates from the Rationalist community, which is generally a punchline in wider intellectual circles. I think the Rationalist community is on the whole epistemically atrocious, overconfident about things for baffling reasons, and prone to hero worship and to spreading factually dubious ideas with very poor justification. I find some of the heroes they adore to be unpleasant people who spread harmful norms, ideas, and behaviour.
Putting it all together, I think that overall EA is a net positive, but that recommending EA is not the most positive thing you can do. Attacking the bad parts of EA while acknowledging that malaria nets are still good seems like a completely rational and good thing to do, either to put pressure on EA to improve, or to provide impetus for the good parts of EA to split off.
JWS @ 2024-06-15T11:25 (+11)
> Says he's stuck in bed and only going to take a stab
> Posts a thorough, thoughtful, point-by-point response to the OP in good faith
> Just titotal things
- - - - - - - - - - - - - - -
On a serious note, as Richard says it seems like you agree with most of his points, at least on the 'EA values/EA-as-ideas' set of things. It sounds like atm you think that you can't recommend EA without recommending the speculative AI part of it, which I don't think has to be true.
I continue to appreciate your thoughts and contributions to the Forum and have learned a lot from them, and given the reception you get[1] I think I'm clearly not alone there :)
[1] You're probably by far the highest-upvoted person who considers themselves EA-critical here? (though maybe Habryka would also count)
David Mathers @ 2024-06-17T14:38 (+4)
"enough people who are ridiculously wrong, overconfident, and power-seeking to drag the whole operation into the net-negative territory"
Do you mean drag just longtermist EA spending into net negative territory or drag EA spend as a whole into net negative territory? Do you expect actual bad effects from longtermist EA or just wasted money that could have been spent on short-term stuff? I think AI safety money is likely wasted (even though I've ended up doing quite a lot of work paid for by it!), but probably mostly harmless. I expect the big impact of longtermist money for good or ill to come from biorisk spending, where it's clear that at least catastrophic risks are real, even if not existential ones, so I think everything you say about rationalism could be true and still longtermist spending could be quite net positive in expectation if biorisk work goes well.
Ian Turner @ 2024-06-17T14:55 (+1)
Given how many of the frontier AI labs have an EA-related origin story, I think it's totally plausible that the EA AI xrisk project has been net negative.
David Mathers @ 2024-06-17T15:19 (+3)
Yeah, that makes sense if you think X-risk from AI is a significant concern, or if you really buy reasoning about even tiny increases in X-risk being very bad. But actually, "net negative in expectation" is compatible with "probably mostly harmless". I.e. the expected value of X can be very negative, even while the chance of the claim "X did (actual not expected) harm" turning out to be true is low. If you don't really buy the arguments for AI X-risk but you do buy the argument for "very small increases in X-risk are really bad" you might think that. On some days, I think I think that, though my views on all this aren't very stable.
Richard Y Chappell @ 2024-06-14T23:46 (+4)
That seems reasonable to me! I'm most confident that the underlying principles of effective altruism are important and good, and you seem to agree on that. I agree there's plenty of room for people to disagree about speculative cause prioritization, and if you think the EA movement is getting things systematically wrong there then it makes sense to (in effect, not in these words) "do EA better" by just sticking with GiveWell or whatever you think is actually best.
Joseph Lemien @ 2024-06-17T12:49 (+2)
I enjoyed reading your responses to these points. Thanks for taking the time to write them out.
MichaelStJules @ 2024-06-15T05:39 (+25)
There's no good reason to think that GiveWell's top charities are net harmful.
The effects on farmed animals and wild animals could make GiveWell top charities net harmful in the near term. See Comparison between the hedonic utility of human life and poultry living time and Finding bugs in GiveWell's top charities by Vasco Grilo.
My own best guess is that they're net good for wild animals based on my suffering-focused views and the resulting reductions of wild arthropod populations. I also endorse hedging in portfolios of interventions.
Vasco Grilo @ 2024-06-15T18:12 (+20)
Thanks for pointing that out, Michael! I should note I Fermi-estimated that accounting for farmed animals only decreases the cost-effectiveness of GiveWell's top charities by 8.72%. However, this was without considering future increases in the consumption of animals throughout the lives of people who are saved, which usually follow economic growth. I also Fermi-estimated that the badness of the experiences of all farmed animals alive is 4.64 times the goodness of the experiences of all humans alive, which suggests saving a random human life results in a nearterm increase in suffering.
Ozzie Gooen @ 2024-06-15T13:53 (+7)
Related, this is likely a nitpick, but I think there might be some steelman-able views of "GiveWell top charities might seem net-negative through a longtermist lens, which could outweigh the shorter-term implications".
Personally, I have a ton of uncertainty here (I assume most do) and have not thought about this much. Also, I assume that from a longtermist lens, the net impact either way is likely small compared to more direct longtermist actions.
But I think that on many hard and complex issues, it's really hard to say "there's no good reason for one side" very safely. Often there are some good reasons on both sides.
I find that it's often the case where there aren't any highly-coherent arguments raised for one side of an issue - but that's a different question than asking if intelligent arguments could be raised.
MichaelStJules @ 2024-06-15T17:00 (+4)
Ya, someone might argue that the average person contributes to economic growth and technological development, and so accelerates and increases x-risk. So, saving lives and increasing incomes could increase x-risk. Some subgroups of people may be exceptions, like EAs/x-risk people or poor people in low-income countries (who are far from the frontier of technological development), but even those could be questionable.
Richard Y Chappell @ 2024-06-15T15:43 (+2)
I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth, and (if very skeptical of attempts at more "direct", explicitly longtermist long-shots) think that this category of intervention is among our best longtermist options. I suggest something along these lines as a candidate EA "worldview" here.
I'd be curious to hear more about longtermist reasons to view GiveWell top charities as net-negative.
Ozzie Gooen @ 2024-06-15T16:32 (+8)
I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth
I think I've almost never heard this argued, and I'd be surprised if it were true.
[Edit: Sorry - I just saw your link, where this was argued. I think the discussion there in the comments is good]
- GiveWell selected very heavily for QALYs gained in the next 10-40 years. Generally, when you heavily optimize for one variable (short-term welfare), you trade-off against others.
- As Robin Hanson noted, if you'd just save up money, you could often make a higher return than by donating it to people today.
- I believe that there's little evidence yet to show that funding AMF/GiveDirectly results in large (>5-7% per year) long-term economic / political gains. I would be very happy if this evidence existed! (links appreciated, at any point)
Some people have argued that EAs should fund this, to get better media support, which would be useful in the long-run, but this seems very indirect to me (though possible).
As for it being *possibly* net-negative:
- We have a lot of uncertainty on if many actions are good or bad. Arguably, we would find that many to be net-bad in the long-run. (This is arguably more of a "meta-reason" than a "reason").
- If AI is divided equally among beings, we might prefer there being a greater number of beings with values more similar to ours.
----REMINDER - PLEASE DON'T TAKE THIS OUT OF CONTEXT----
- Maybe marginal population now is net-harmful in certain populations. Perhaps these areas will have limited resources soon, and more people will lead to greater risks later on (poorer average outcomes, more immigration and instability). Related, I've heard arguments that the Black Death might have been a net-positive, as it gave workers more power and might have helped lead to the Renaissance. (Again, this is SUPER SPECULATIVE AND UNCERTAIN, just a possibility).
- If we think AI is likely to come soonish, we might want to preserve most resources for after it.
- This is an awkward/hazardous thing to discuss. If it were the case that there were good arguments, perhaps we'd expect them to not be said. This might increase the chances that there could be good arguments, if one were to really investigate it.
Again, I have an absolute ton of uncertainty on this, and my quick guess is more, "it's probably a small-ish longtermist deal, with a huge probability spread", than "I'm fairly sure it's net-negative."
I feel like it's probably important for EAs to have reasonable/nuanced views on this topic, which is why I wrote these thoughts above.
I'll get annoyed if the above gets greatly taken out of context later, as has been done for many other EAs when discussing topics. (See Beckstead's dissertation) I added that ugly line in-between to maybe help a bit here.
Ozzie Gooen @ 2024-06-15T17:27 (+2)
I should have done this earlier, but would flag that LLMs can summarize a lot of the existing literature on the topic, though most of it isn't from EAs specifically. I would argue that many of these arguments are still about "optimizing for the long-term", they just often use different underlying assumptions than EAs do.
https://chatgpt.com/share/b8a9a3f5-d2f3-4dc6-921c-dba1226d25c1
Ozzie Gooen @ 2024-06-15T16:36 (+2)
I'll also add that many direct longtermist risks have significant potential downsides too. It seems very possible to me that we'll wind up finding out that many were net-negative, or that there were good reasons for us to realize they were net-negative in advance.
Richard Y Chappell @ 2024-06-15T13:05 (+7)
Yeah, that's interesting, but the argument "we should consider just letting people die, even when we could easily save them, because they eat too much chicken," is very much not what anti-EAs like Leif Wenar have in mind when they talk about GiveWell being "harmful"!
(Aside: have you heard anyone argue for domestic policies, like cuts to health care / insurance coverage, on the grounds that more human deaths would actually be a good thing? It seems to follow from the view you mention [not your view, I understand], but one doesn't hear that implication expressed so often.)
Wei Dai @ 2024-06-19T13:19 (+2)
You probably didn't have someone like me in mind when you wrote this, but it seems a good opportunity to write down some of my thoughts about EA.
On 1, I think despite paying lip service to moral uncertainty, EA encourages too much certainty in the normative correctness of altruism (and more specific ideas like utilitarianism), perhaps attracting people like SBF with too much philosophical certainty in general (such as about how much risk aversion is normative), or even causing such general overconfidence (by implying that philosophical questions in general aren't that hard to answer, or by suggesting how much confidence is appropriate given a certain amount of argumentation/reflection).
I think EA also encourages too much certainty in descriptive assessment of people's altruism, e.g., viewing a philanthropic action or commitment as directly virtuous, instead of an instance of virtue signaling (that only gives probabilistic information about someone's true values/motivations, and that has to be interpreted through the lenses of game theory and human psychology).
On 25, I think the "safe option" is to give people information/arguments in a non-manipulative way and let them make up their own minds. If some critics are using things like social pressure or rhetoric to manipulate people into being anti-EA (as you seem to be implying - I haven't looked into it myself), then that seems bad on their part.
On 37, where has EA messaging emphasized downside risk more? A text search for "downside" and "risk" on https://www.effectivealtruism.org/articles/introduction-to-effective-altruism both came up empty, for example. In general it seems like there has been insufficient reflection on SBF and also AI safety (where EA made some clear mistakes, e.g. with OpenAI, and generally contributed to the current AGI race in a potentially net negative way, but seem to have produced no public reflections on these topics).
On 39, seeing statements like this (which seems overconfident to me) makes me more worried about EA, similar to how my concern about each AI company is inversely related to how optimistic it is about AI safety.