My independent impressions on the EA Handbook

By VictorW @ 2023-08-07T09:09 (+15)

Context

I've just finished going through the EA Handbook as part of the Introductory EA Program. My experience with EA predates this. I'd like to present some of my independent impressions and gut reactions to the ideas I came across in the handbook.

My post is intended to be low effort, i.e. my ideas here haven't gone through any rigorous thought and I'd love to receive any kind of feedback. I am totally open to hearing that my arguments are rubbish and why.

My takes are mainly targeted at the material I encountered while going through the EA Handbook, even if this ignores other knowledge and experiences with EA that predate it. As a side effect, it may appear that I'm criticizing the EA Handbook as a weak link, whereas my intention is just to express my spiciest takes in the hope of receiving feedback that lets me efficiently update my thinking. (I.e., I don't know what the steel-man versions are; assume I am poorly read but curious.)

Impression 1: Maximization (#rant)

My strongest reaction to the EA Handbook is against the idea of maximization of good, which I feel is woven through chapters 1-3. If not for the final post "You have more than one goal, and that's fine", I could easily get the impression that maximization is a core belief and that EA might be encouraging people to take it too far.

My reactive stance to what felt like an aggressive thrust looks like this:

  1. I am against maximization. (I'm not a utilitarian, but even if I were, I would still retain some arguments against maximization.)
  2. I am against rationality being a top priority (e.g., doing good being #1 and trying to do more good by relying on rationality being #2). I think this is fundamentally unhealthy for most people to try to live by.
  3. I am against all "should"s. I would disagree with a statement such as "EAs should update their beliefs if provided with decisive evidence against their beliefs", or the idea that being/acting illogically is wrong/bad/not-EA.
  4. I am against the assertion that the world is a bad place, even if it is the case that there are moral catastrophes happening all around us. Asserting that "things aren't good enough" seems to be a slippery slope. There is a natural parallel in psychology: long-term improvement is impossible without self-acceptance. Example of the dangerous slope: "we're not doing enough because there are still moral catastrophes". After reducing moral catastrophes in the world by 99%: "we're not doing enough because there are still moral catastrophes, and we can still do much better". Something something the Aristotelian idea that if something can reach a good potential state then it was already good to begin with?
  5. I personally feel that the EA Handbook goes too far in hinting that making suboptimal decisions is not-very-EA. I believe this viewpoint is unjustified on a logical level (as well as an emotional one). Firstly, in real life we don't face trade-off decisions where we know accurately that option 1 has a net expectation of saving 50 lives and option 2 has a net expectation of saving 100 lives, with "all else being equal". All else is never equal, not in the spillover effects, nor in the way making that decision affects us personally. Even if a real-life scenario appeared on paper to be exactly like that, some percentage of the time the seemingly better decision backfires, because our understanding of the two systems was inadequate and our estimates, which seemed completely logical, were wrong due to a blind spot. Secondly, choosing the suboptimal option might lead us to update our overall decision-making faster than always choosing the on-paper better option would. Mistakes are necessary for learning; I believe even an AI superintelligence would not be exempt from this. Making a mistake (or running an experiment) can be a local suboptimum that's part of a global optimum.
  6. I support the idea of individuals having a "moral budget" and that we have an absolute right to decide how big our moral budget is and what to spend it on.
  7. Utilitarianism asks too much of us. What would the world look like if everyone acted like the person who donated a kidney on the basis that their own life isn't worth more than those of 4,000 strangers? It isn't obvious to me that life would be better, or even not disastrous, if everyone applied that reasoning literally everywhere in their lives.
  8. If everyone on earth suddenly became a hard-core (maximizing) EA, the living population would be far more miserable in the short term than if we all suddenly became soft (non-maximizing) EAs. Although both scenarios could hypothetically balance out to the optimum in the long run, I would argue that the latter population would reach the optimum more quickly 100% of the time, all else being equal.

To tie these points together in a maybe-coherent way, my reasoning against maximization is that:

Impression 2: What is EA?

I would describe EA as a movement and a community. One of the questions posed was "Movements in the past have also sought to better the state of altruistic endeavors. What makes EA fundamentally distinct as a concept from any altruistic movement before now, or any that might come after?" My answer is "Maybe nothing, and that's okay."

My tentative definition of EA would currently boil down to something like "EA is a movement that promotes considering the opportunity cost of our altruistic efforts." This seems like such a "small" definition and yet I can't find immediate fault with it.

Disclosure: I consider myself an EA and I have a light interest in my personal definition of EA not ruling me out as one.

Impression 3: Equality and Pascal's mugging

...we should make it quite clear that the claim to equality does not depend on intelligence, moral capacity, physical strength, or similar matters of fact. Equality is a moral idea, not an assertion of fact. There is no logically compelling reason for assuming that a factual difference in ability between two people justifies any difference in the amount of consideration we give to their needs and interests. The principle of the equality of human beings is not a description of an alleged actual equality among humans: it is a prescription of how we should treat human beings. 

Chapter 3 makes strong claims about equality in a way that seems to come out of nowhere and is also contradicted in disturbing ways by Chapter 4 (existential risks).

If the equal claim to life extends to the vast number of possible future people, then almost any moral crime (such as wiping out 99% of the current population) that reduces the risk of human extinction by 0.01% can be justified in the name of an equal right to life. Equal deserts seem to lead to extremely non-equal treatment in the present. Many generations after us can also keep making the same justification.

Even without invoking Pascal's-mugging-style x-risk assumptions, we still run into arguments for justified discrimination. If everyone has equal rights, but some people can save or improve millions of lives rather than just the hundreds estimated from donating 10% of lifetime earnings, isn't the best way to promote equal rights ironically to empower the people who can get us closer to being able to sustain everyone, while neglecting the currently neglected, who are less likely to help us reach that state?

There are even more Pascal's mugging dilemmas with digital minds or AI superintelligence, and I feel skeptical that we should embrace being mugged.

Weird relief from Pascal's mugging

There actually appear to be some weird benefits of Pascal's mugging in the trillions-of-future-lives scenarios. One of them is that almost nothing we do in this century makes a dent in the total scale of utility of human lives, so long as we don't destroy that long-term potential. Therefore, even if we do happen to exist in the most pivotal century of all past and future time in the universe, moral catastrophes are pretty insignificant and even the timeline of progress is insignificant (all else being equal). Heck, if we had reached the industrial age 1,000 years later than we did, almost no potential would have been lost, at least on a percentage basis.
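As a back-of-the-envelope illustration of the percentage-basis point (my own numbers; the billion-year horizon is purely an assumption for illustration, not a figure from the Handbook):

```latex
% Share of long-term potential lost to a 1,000-year delay,
% assuming a hypothetical 10^9-year future for humanity:
\[
  \frac{1\,000 \ \text{years}}{10^{9} \ \text{years}} = 10^{-6} \approx 0.0001\%
\]
```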

Here's another one. If we will inevitably reach a state where resources are effectively limitless, then at that point we could naturally or artificially create billions of lives that are kept perpetually happy. It could be billions of rats on drugs, humans on happiness drugs with zero suffering (is this objectively better than more balanced alternative experiences?), or digital minds that are kept happy and live on for hundreds of thousands of years. Again, this seems like the opposite of the utility monster, where temporary goodness or badness seems insignificant as long as we end up reaching this capability of mass-producing utility.

Impression 4: Suffering and sentience

This was a pretty interesting and confusing topic for me to think about. Here are some random trinkets:

Impression 5: Animal welfare

I found the content on animal welfare unexpectedly mild and not very challenging. This seems in stark contrast to the perspectives and actions I see within the EA community. I'd be interested to hear suggestions on beginner-friendly reading material that makes a more compelling case. Basically I'd like to know why veganism is so common in EA and why I often hear the hand-waving suggestion that going vegan probably makes more of an impact than other things in EA. In case I'm missing out, as it were.

Closing remarks

All in all, I got a lot of value from the EA Handbook. There are many things that can be improved about it, but if I had to pick two:


CEvans @ 2023-08-07T11:07 (+9)

Thanks for writing this post Victor, I think your context section represents a really good, truth-seeking attitude to come into this with. From my perspective, it is also always good to have strong critiques of key EA ideas. To respond to your points:

1 and 2. I agree that the messaging about maximisation carries the danger of people taking it too far, but I think it is quite defensible as an anchor point. Maybe this should be more present in the handbook, but I think it is worth saying up front that >95% of EAs' lives don't look like that of some extreme naive optimiser, per your framing.

I think I see EA more as "how can we do the most good with X resources", where it is up to you to determine X in terms of your time, money, career etc. When phrases begin with "EAs should", I generally interpret that as "if you want to have more impact, then you should". I think the moral demandingness aspect is actually not very present in most EA discourse, and this is likely best for ensuring a healthy community.

EAs are of course human too, and the community, from what I have seen of it, is generally very supportive of people making decisions that are right for themselves when necessary (e.g. career breaks, quitting a job which was very impactful, changing jobs to have kids, etc. - an example (read the comments)). Even if you are a "hard-core utilitarian", I think placing some value on your own happiness, motivation etc. is still good for helping you achieve the best you can. Most EAs live on quite healthy salaries, in nice work environments, with a supportive community - while I don't deny that there are also mental health issues within the group, I think EA as a movement thus far hasn't caused many people to be self-sacrificial to the point of being detrimental to their wellbeing.

On whether maximisation is a good goal in the first place: the current societal default in most cases of altruistic work is to not consider optimisation or effectiveness at all. This has led to huge amounts of wasted time and money, which has by extension allowed massive amounts of suffering to continue. While your subpoint 5 about uncertainty is true, I think EA successes have proved the ability to increase the expected impact you have with careful thought and evidence, hence the value EA has placed on rationality. Of course people make mistakes and some projects aren't successful or even might be net negative, but I think it is reasonable to say that the expected value of your actions is what is important. If you buy that the effectiveness of interventions is roughly heavy-tailed, then you should also expect that the best options are much better than the "good" ones, and so it is worth taking a maximisation mindset to get the most value.
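To make the heavy-tailed intuition concrete, here is a minimal sketch; the lognormal distribution and all numbers are hypothetical assumptions, not figures from the post or the comment:

```python
# Minimal sketch: if intervention cost-effectiveness is (hypothetically)
# lognormally distributed, the best options dwarf the median "good" one.
import random
import statistics

random.seed(0)

# Hypothetical effectiveness scores for 10,000 interventions (heavy-tailed).
scores = sorted(random.lognormvariate(0, 2) for _ in range(10_000))

median = statistics.median(scores)
p99 = scores[int(0.99 * len(scores))]
best = scores[-1]

print(f"median intervention: {median:.1f}")
print(f"99th percentile:     {p99:.1f}")
print(f"best of the batch:   {best:.1f}")
# Under these assumptions the top options come out orders of magnitude
# better than the median, which is the intuition behind maximisation.
```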

I don't think saying "the world is a bad place" is a very useful or meaningful claim to make, but I think it is true that there is just so much low-hanging fruit still on the table for making it so much better, and that this is worth drawing attention to. People say things like "the world is bad" (which could be phrased in a better way) because honestly a lot of the world just doesn't care about massive issues like poverty, factory farming, or threats from e.g. pandemics or AI, and I think it is somewhat important to draw attention to the status quo being a bit messed up.

3. Ah, your initial point is a classic argument that I think targets something no EA actually endorses. I think moral uncertainty and ideas of worldview diversification are highly regarded in EA, and I think everyone would immediately reject acts that cause huge suffering today in the hope of increasing future potential, for both moral and epistemic uncertainty reasons.

I think your points regarding the insignificance of today's events for humanity's long-term trajectory seem to rely heavily on an assumption of non-path-dependency - my guess is that how the next couple of centuries go on key issues like AI, international coordination norms, factory farming, and space governance could all significantly affect the long-term expected value of the future. I think ideas of hinginess are good to think about for this, see here: Hinge of history - EA Forum (effectivealtruism.org)

4. I agree it is generally a confusing topic and don't have anything particularly useful to say besides wanting to highlight that people in the community are also very unsure. Fwiw I think most S-risk scenarios people are worried about are more to do with digital suffering/astronomical scale factory farming. I think human-slavery type situations are also quite unlikely. 

 

VictorW @ 2023-08-08T20:49 (+2)

Thanks for the clarification about how 1 and 2 may look very different in the EA communities.

I'm not particularly concerned about the thought that people might be out there taking maximization too far; the framing of my observations is more like "well, here's what going through the EA Handbook might prompt me to think about EA ideas, or about what other EAs may believe".

After thinking about your reply, I realized that I made a bunch of assumptions based on things that might just be incidental and not strongly connected. I came to the wrong impression that the EA Handbook is meant to be the most canonical and endorsed collection of EA fundamentals.

Here's how I ended up there. In my encounters with EA resources, the Handbook is the only introductory "course", and presumably because it is the only one of its kind, it's also the only one that's been promoted to me over multiple mediums. So I assumed that it must be the most official introduction, remaining alone in that spot over multiple years; seeing it bundled with EA VP also seemed like an endorsement. I also made the subconscious assumption that since there's plenty of alternative high-quality EA writing out there, as well as resources put into producing it, the Handbook as a compilation is probably designed to be the most representative collection of EA meta; otherwise it wouldn't still be promoted to me the way it has been.

I'd had almost no interaction with the EA Forum before reading the Handbook, so I had very limited prior context for gauging how "meta" the Handbook is among EA communities, or how meta any of its individual articles are. (Someone has now helpfully provided a bunch of reading material that is also fundamental but comes from quite different perspectives.)