Room for Other Things: How to adjust if EA seems overwhelming

By Lukas_Gloor @ 2015-03-26T14:10 (+49)

Overwhelming obligations

The drowning child argument has persuaded many people to join the effective altruism movement. It highlights one of EA’s central concepts: opportunity costs. The money spent on expensive clothes could instead be donated to cost-effective charities overseas, where it can save a person's life. Of course, the force of the argument does not stop there: if you're left with more money, or if you could work a few extra hours to earn more, you can donate more to help additional people. Every decision we make has opportunity costs – and this realization can feel overwhelming.

Several critics consider the ideas behind effective altruism flawed or impractical because they appear to demand too much from us. In his widely cited essay A Critique of Utilitarianism, Bernard Williams argues that the idea of always trying to bring about the best outcome[1] places too high a burden on a person by taking away their choice in what they want to do in life: 

 

It is to alienate him in a real sense from his actions and the source of his action in his own convictions. It is to make him into a channel between the input of everyone’s projects, including his own, and an output of optimific decision; but this is to neglect the extent to which his actions and his decisions have to be seen as the actions and decisions which flow from the projects and attitudes with which he is most closely identified. It is thus, in the most literal sense, an attack on his integrity.

 

When I first read Williams’ critique a few years ago, I already considered myself an effective altruist and, as such, I found the arguments surprisingly unimpressive. I felt that obviously, if someone’s goal is to make the world a better place, this is the person’s decision, her chosen life-project. However, as I’m now more aware than I was then, people tend to overestimate how similar others are to themselves. I was already so immersed in EA thinking that I forgot how things may feel for others. Williams’ critique highlights an important point: people can be altruistically motivated but have other goals in life besides doing the most good, or they may have pre-existing commitments that contribute to their identity and happiness. If so, they might – consciously or unconsciously – view all-encompassing interpretations of effective altruism as a threat to what they value. This can manifest either in rationalizations against the idea of EA, or – if the person is more introspective – in a genuine conflict of internal motivations, which often results in unhappiness. The situation becomes especially difficult if the person is also being pressured, either externally (by other people’s expectations) or internally (e.g. by comparing oneself to a very high moral standard, or to someone who gave up everything else for effective altruism). Needless to say, such an outcome is very unfortunate, both for the people themselves and for the movement, which loses their involvement. 


A healthier framing

Perhaps it cannot be entirely avoided that some people are going to have an aversive reaction, at least to some extent, when they learn more about effective altruism. There is truth to the saying “ignorance is bliss”: some ideas, like the drowning child argument, irreversibly change the way we view life. Nevertheless, I believe that the feeling of “overwhelmingness” discussed above is unwarranted.

In this article I want to present a way to see or frame effective altruism that I consider both philosophically correct and useful from a motivational point of view. I prefer to think of EA as a choice, rather than some sort of external moral obligation. In reply to people who are concerned that effective altruism is overwhelming or overly demanding, I want to point out two important considerations: 

  1. If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition. 
  2. Even if EA becomes your only goal, it does not mean, necessarily, that you should spend the majority of your time thinking about it, or change your life in drastic ways. (More on this below.)

What are goals?

Imagine you could shape yourself and the world any way you like, unconstrained by the limits of what is considered feasible. What would you do? Which changes would you make? Your answers to these questions describe your ideal world. To guide our actions in practice, we also have to specify how important various good things are compared to other good things. So, in a second step, imagine that you had the same powers to shape the world as you wish, but this time they are limited. You cannot make every change you had in mind, so you need to prioritize some changes over others. Which changes would be most important to you? The outcome of this thought experiment approximates your goals.[2]

1) Having other goals besides EA

It is perfectly possible to give a lot of weight to one's personal well-being or one's favored life-projects, while still choosing to dedicate some amount of time and money to effective altruism. There is nothing contradictory about having multiple goals – it just means that one is willing to make tradeoffs in the face of resources being limited. Some people may think that it is somehow wrong or “inelegant” to have several goals, but as long as the person herself is fine with it, that’s all that matters. 

2) EA as a goal does not necessarily imply sacrificing all other commitments

Giving up all personal commitments is bad on any account of rational goal-achievement if it would make a person psychologically unable to continue working productively toward their goals. Compared to a perfect utilitarian robot, humans have many shortcomings. This applies to everyone, but the degree varies from person to person. It makes just as little sense to compare oneself to a perfect utilitarian robot as to compare oneself to a person who is, for whatever reason, in a starting position unreachably different from one's own. 

When we think of effective altruists, what sort of person do we typically picture? Most likely, we first think of the people who went into investment banking, working excessive hours and night-shifts, to donate 50% of their income; or the people who quit promising career paths in order to work full-time for EA organizations; or the famous philosopher who churns out paper after paper. Of course, these sorts of people are outliers, high-achievers who don’t represent the typical population. It takes specific personality traits and skills to be motivated and able to do EA almost as an extreme sport. A more typical EA would be someone with a more “normal” job who donates, say, 10% of their income, or everything above a limit that is enough to live comfortably while retaining some financial security for the future. 

The “high-achievers” mentioned first are arguably the EAs who make the most difference individually, yet they represent only a small minority of EAs, even more so as EA becomes increasingly mainstream. Are these people more motivated than other EAs? Are they the only ones who “take EA seriously”? I don't think so. These people aren’t more motivated to be EAs; rather, they are more motivated and/or better suited to do the things that are most effective from an EA standpoint. This distinction is important. Outliers don’t necessarily care more, but their personalities and skills make them better positioned to contribute effectively.[3]

For the typical person, therefore, being an EA does not mean trying to do all the things the highest-achieving EAs are doing. One could be tempted to view this as a watered-down version of EA, as “EA light”, but that would be getting it wrong. If you’re doing the best you can, there’s nothing watered down about your goals or your moral ideals. Comparing yourself to the most skilled and hardworking EAs would be making the same kind of mistake, on a lower level, as comparing yourself to a perfect utilitarian robot. Even the most hardworking EAs need breaks sometimes, and in comparison with a perfect robot, they too fall short. Rationality is about making the best of what you have. Holding yourself to impossible standards is silly and counterproductive. The better approach is to find smaller but sustainable ways to contribute.

Personalities are different

People don’t only differ in regard to skills; they also differ in what they’re interested in. In my case, becoming an effective altruist came easily to me. I discovered all this information out there – LessWrong and the now less active Felicifia – and I couldn’t stop reading and having discussions with others. It wasn’t as if I had to force myself to do any of that. I think about EA-related topics most of the time because this is what I enjoy doing; if I found it strenuous (as some people will), I would be doing it less.

Personality differences also reflect what sort of things people are (de)motivated by. Some people are attracted to weird (i.e. non-mainstream) ideas because they enjoy discussions. Others might dislike having to talk about or defend their positions constantly on the internet or in social settings, and if that's the case, a lot of EA-related activities become much harder. 

Finally, personality differences also affect the way people prefer their life to go as a whole. Having balance in life is important for everyone, but some need a lot of it, while others are more okay with a life that is optimized obsessively towards a single goal. People might have life-projects or strong commitments that would make them miserable if they had to be abandoned – wanting children, for example, or a specific job one really likes. These things are compatible with EA, because EA doesn’t only work when done as an extreme sport.


Some advice

It is important to take into account that people differ from each other in many respects, including prior commitments, personality, and skills. Being rational about achieving your goals means, among other things, understanding what is within your reach and what isn’t, and not holding yourself responsible for being unable to do the impossible. I have the impression that some personality types, especially people who are very altruistic and caring, sometimes suffer from holding themselves to too high a standard, and from being unable to allow themselves to relax with all the world’s suffering on their shoulders. Ben Kuhn’s blogpost To stressed-out altruists contains a great insight that is worth quoting at length:

I think the culprit of stress for many EAs is a lack of compartmentalization. Now, to really understand the ideas of effective altruism, you need not to compartmentalize too much—for example, I have a roommate who buys the idea of altruism in the abstract, but doesn’t do anything about it because he separates his brain’s “abstract morality” module and its “decide what to do” module. Because of things like this, compartmentalization has an often-deserved poor reputation for letting people evade cognitive dissonance without really coming to terms with their conflicting ideas. But compartmentalization isn’t always maladaptive: if you do it too little, then whatever you care about completely consumes you to the point of non-functional misery.

 

Effective altruism requires less compartmentalization than the average person has, so standard effective altruism discourse, which is calibrated against the average person, tries to break down compartmentalization. But you probably aren’t the average person. If you’re stressed out about effective altruism, ignore the standard EA discourse and compartmentalize more.


Most of the advice I’d give to people struggling with EA is along the lines Ben talks about. Below, I’ve collected a list of things I use myself or would recommend to others trying to learn to compartmentalize more. Of course, not everyone will find these applicable or equally helpful. 


Note: This next example works very well for me personally, but I can imagine that the competitive aspect of “trying to score as many points as possible” is bad for some people.


[1] Williams is talking about utilitarianism as a moral theory, not about effective altruism as an idea/movement. I realize that the two are distinct; e.g., one can be an EA without subscribing to utilitarianism. Nevertheless, large parts of Williams’ critique, and especially the passage I quoted, apply well to EA. 
[2] This question is of course a very difficult one, and what someone says after thinking about it for five minutes might be quite different from what she would choose if she had heard all the ethical arguments in the world and thought about the matter for a long time. If you care about making decisions for good/informed reasons, you might want to refrain from committing too strongly to specific answers, and instead give weight to what a better-informed version of yourself would say after longer reflection. 
[3] Of course, the matter is not black-and-white. Caring/commitment does matter to a significant extent, i.e. there will be people who would be suited to do the extreme-EA thing but don’t try enough. 
[4] This also goes the other way (cf. “scope insensitivity”), but standard EA discourse talks a lot about this already. 

 


undefined @ 2015-03-26T18:01 (+12)

Oh yes, thank you for this article! Since 2011-ish I’ve gradually discovered these various mental coping strategies. I wish I had had a summary like yours back then!

The scene from Schindler’s List is powerful. Back in 2011 or maybe 2012 I often spent sleepless hours in bed crying and wondering why I’m not doing more than I am, why I’m the only one who has made these moral inferences, whether I’m crazy or overlooking something, and how I’m supposed to do it all alone. It would take me till early 2014 to find out about EA and find out that there are renowned philosophers who’ve made the same inferences, so I couldn’t be all that crazy.

What has helped me a lot is a specific kind of compartmentalization. I find it intellectually painful to compartmentalize, so I’m very hesitant to do it purposefully, but after a longer deliberative process I’ve found that my strong emotions paralyzed me more than they motivated me, and that I was sufficiently motivated anyway. So instead of wasting time crying, I trained myself to think about things without feeling them. Or “trained” is an exaggeration; it’s more that it took a tiny bit of effort to bring my empathy to this paralyzing emotional level in the first place, so I could simply avoid putting in that bit of effort. It’s still pretty hard when I see something visually, but it’s easier when I just think about it.

These emotions are not to scale anyway, and I don’t think I’m atrophying my ability to empathize by doing this, so I think this is a kind of compartmentalization that will not negatively impact my decision-making.

As Bertrand Russell said:

Three passions, simple but overwhelmingly strong, have governed my life: the longing for love, the search for knowledge, and unbearable pity for the suffering of [sentient beings]. These passions, like great winds, have blown me hither and thither, in a wayward course, over a great ocean of anguish, reaching to the very verge of despair.

It’s just too unbearable to not suppress it and still function.

There’s also the factor that many EAs who do earning to give still keep savings. I think this also gives them the ability to give large amounts with regularity sustainably. It might also give them the ability to be more enterprising in their job search and thus better at maximizing their income. There are a lot of people who already feel a need to donate more but don’t dare to out of self-preservation-related fears. Those may be in some cases disproportionate, but self-preservation is probably an instinct that is easier to pacify than to vanquish.

Conversely, there is also the problem of coping with other people who are very strongly compartmentalized. There, Robert’s and my exchange on another article seems relevant.

I’ve also written another introduction to EA article for people with low compartmentalization, which I haven’t posted here yet. I’ll probably do that one of these days.

PS: You copied a lot of in-line style. The article will look prettier when you strip it.

undefined @ 2015-03-26T21:29 (+2)

Thanks, great points! I got some help and managed to fix the layout.

undefined @ 2015-03-27T08:18 (+1)

Compassion is also helpful – sometimes visualising the suffering of the world and trying to identify with it in a cleaner way than one involving guilt can generate some really powerful energy in the right direction. Doing this is the only way I can keep going while recognising that I'm a bottom 1-10% performer among the people I've met in the movement – which is quite ego-depleting!!

Tonglen is a Tibetan Buddhist practice that seeks to build compassion in this way.

undefined @ 2015-03-27T09:00 (+1)

Yeah, I’ve been hard pressed to find any use for guilt. There is this justice-related defense against helping (“Why am I supposed to help if [random other person] doesn’t!”), and showing someone how they profit from a system of exploitation can break through that. But otherwise empathy/compassion is my central motivation. I know that this unbearable suffering exists. I just don’t let it take the emotional shape that overwhelms and paralyzes me with its sheer intensity.

I hope the ego depletion doesn’t keep you from aspiring to become a top 1–10% performer in the movement?

undefined @ 2015-03-27T10:42 (+1)

Other things stop that, like raw ability, health, resources, and availability of time due to prior commitments – but thanks for the encouragement!

Now that it's been written about, I can see the comparison between these feelings and the feeling people often have of 'oh well, there's no way I can compete with Bill Gates, so why bother'. I haven't let them affect me that much, but now I can see quite clearly that they shouldn't affect me at all / even come into play. Thank you Telofy! :)

undefined @ 2015-03-26T19:36 (+2)

Thanks for writing this, Lukas.

undefined @ 2015-03-27T10:15 (+1)

I'm part of the target audience, I think, but this post isn't very helpful to me. Mistrust of arguments which tell me to calm down may be a part of it, but it seems like you're looking for reasons to excuse caring for other things than effective altruism, rather than weighing the evidence for what works better for getting EA results.

Your "two considerations",

  1. If you view EA as a possible goal to have, there is nothing contradictory about having other goals in addition.
  2. Even if EA becomes your only goal, it does not mean that you should necessarily spend the majority of your time thinking about it, or change your life in drastic ways. (More on this below.)

, look like a two-tiered defence against EA pressures rather than convergence on a single right answer on how to consider your goals. Maybe you mean that some people are 'partial EAs' and others are 'full EAs (who are far from highly productive EA work in willpower-space)', but it isn't very clear.

Now, on 'partial EAs': If you agree that effective altruism = good (if you don't, adjust your EA accordingly, IMO), then agency attached to something with different goals is bad compared to agency towards EA. Even if those goals can't be changed right now, they would still be worse, just like death is bad even if we can't change it (yet (except maybe with cryonics)). If you are a 'partial EA' who feels guilty about not being a 'full EA', this seems like an accurate weighing of the relative moral values, only wrong if the guilt makes you weaker rather than stronger. Your explanation doesn't look like a begrudging acceptance of the circumstances, it looks almost like saying 'partial EAs' and 'full EAs' are morally equivalent.

Concerning 'full EAs who are far from being very effective EAs in willpower-space', this triggers many alarm bells in my mind, warning of the risk of it turning into an excuse to merely try. You reduce effective effective altruists' productivity to a personality trait (and 'skills', which in context sound unlearnable), which doesn't match 80,000 Hours' conclusion that people can't estimate well how good they are at things or how much they'll enjoy things before they've tried.

Your statement on compartmentalisation (and Ben Kuhn's original post) both just seem to assume that because denying yourself social contact because you could be making money itself is bad, therefore compartmentalisation is good. But the reasoning for this compartmentalisation - it causes happiness, which causes productivity - isn't (necessarily) compartmentalised, so why compartmentalise at all? Your choice isn't just between a delicious candy bar and deworming someone, it's between a delicious candy bar which empowers you to work to deworm two people and deworming one person. This choice isn't removed when you use the compartmentalisation heuristic, it's just hidden. You're "freeing your mind from the moral dilemma", but that is exactly what evading cognitive dissonance is.

I don't have a good answer. I still have an ugh field around making actual decisions and a whole bunch of stress, but this doesn't sound like it should convince anyone.

undefined @ 2015-03-27T17:32 (+2)

Thanks for this feedback! You bring up a very important point with the danger of things turning into merely "pretending to try". I see this problem, but at the same time I think many people are far closer to the other end of the spectrum.

I'm part of the target audience, I think, but this post isn't very helpful to me. Mistrust of arguments which tell me to calm down may be a part of it, but it seems like you're looking for reasons to excuse caring for other things than effective altruism, rather than weighing the evidence for what works better for getting EA results.

I suspect that many people don't really get involved in EA in the first place because they're on some level afraid that things will grow over their head. And I know of cases where people gave up EA at least partially because of these problems. This to me is enough evidence that there are people who are putting too much pressure on themselves and would benefit from doing it less. Of course, there is a possibility that a post like this one does more harm because it provides others with "ammunition" to rationalize more, but I doubt this would make much of a difference – it's unfortunately easy to rationalize in general and you don't need that much "ammunition" for it.

Your "two considerations", look like a two-tiered defence against EA pressures rather than convergence on a single right answer on how to consider your goals.

That's what they are. I think there's no criterion that makes your goals the "right" ones other than that you would in fact choose these goals upon careful reflection.

Maybe you mean that some people are 'partial EAs' and others are 'full EAs (who are far from highly productive EA work in willpower-space)', but it isn't very clear.

Yes, that's what I meant. And I agree it's unclear, because it's confusing that I'm talking only about 2) in all of what follows; I'll try to make this clearer. So to clarify: most of my post addresses 2), "full EAs (who are far from highly productive in willpower-space)", and 1) is another option that I mention and then don't explore further because the consequences are straightforward. I think there's absolutely nothing wrong with 1); if your goals are different from mine, that doesn't necessarily mean you're making a mistake about your goals. I personally focus on suffering and don't care about preventing death, all else being equal, but I don't (necessarily) consider you irrational for caring about it.

Now, on 'partial EAs': If you agree that effective altruism = good (if you don't, adjust your EA accordingly, IMO), then agency attached to something with different goals is bad compared to agency towards EA. Even if those goals can't be changed right now, they would still be worse, just like death is bad even if we can't change it (yet (except maybe with cryonics)).

I'm arguing within a framework of moral anti-realism. I basically don't understand what people mean by the term "good" that could do the philosophical work they expect it to do. A partial EA is someone who would refuse to self-modify to become more altruistic IF this conflicts with other goals (like personal happiness, specific commitments/relationships, etc.). I don't think there's any meaningful and fruitful sense in which these people are doing something bad or making some sort of mistake; all you can say is that they're being less altruistic than someone with a 100%-EA goal, and they would reply: "Yes."

Concerning 'full EAs who are far from being very effective EAs in willpower-space', this triggers many alarm bells in my mind, warning of the risk of it turning into an excuse to merely try. You reduce effective effective altruists' productivity to a personality trait (and 'skills', which in context sound unlearnable), which doesn't match 80,000 Hours' conclusion that people can't estimate well how good they are at things or how much they'll enjoy things before they've tried.

I accept that there's a danger that my post can be read as such, but that's not what I'm saying. Not all skills are learnable to the same extent, but of course there is also a component to how much people try! And I would also second the advice that it's important to try things even if they seem very hard to do at first. But the thing is, some people have tried and failed and feel miserable about it, or even the thought of trying makes them feel miserable, so that certainly cannot be ideal because these people aren't being productive at that point.

Your statement on compartmentalisation (and Ben Kuhn's original post) both just seem to assume that because denying yourself social contact because you could be making money itself is bad, therefore compartmentalisation is good. But the reasoning for this compartmentalisation - it causes happiness, which causes productivity - isn't (necessarily) compartmentalised, so why compartmentalise at all? Your choice isn't just between a delicious candy bar and deworming someone, it's between a delicious candy bar which empowers you to work to deworm two people and deworming one person. This choice isn't removed when you use the compartmentalisation heuristic, it's just hidden. You're "freeing your mind from the moral dilemma", but that is exactly what evading cognitive dissonance is.

Human brains are not designed to optimize towards a single goal; it can drive you crazy. For some it works; for others it probably does not. I'm not saying "if you're stressed sometimes, do less EA stuff" – maybe being stressed is the lesser problem. My point is: "If you're stressed to the point that the status quo is not sustainable, then change something and don't feel bad about it."

To sum up, I'm aware that rationalizing is a huge danger – it always amazes me just how irrational people can become when they are protecting a cherished belief – but I think that there are certain people who really aren't in danger of setting their expectations too low, because they have a problem with doing the opposite.

undefined @ 2015-03-30T06:45 (+2)

Thanks for replying. (note: I'm making this up as I go along. I'm forgoing self-consistency for accuracy).

You bring up a very important point with the danger of things turning into merely "pretending to try". I see this problem, but at the same time I think many people are far closer to the other end of the spectrum.

Merely trying isn't the same as pretending to try. It isn't on the same axis as emotionally caring, it's the (lack of) agency towards achieving a goal. Someone who is so emotionally affected by EA that they give up is definitely someone who 'merely tried' to affect the world, because you can't just give up if you care in an agentic sense.

What we want is for people to be emotionally healthy - not caring too much or too little, and with control over how affected they are - but with high agency. Telling people they don't need to be like highly agentic EA people affects both, and to me at least it isn't obvious if you meant that people should still try their hardest to be highly agentic but merely not beat themselves up over falling short.

Your "two considerations", look like a two-tiered defence against EA pressures rather than convergence on a single right answer on how to consider your goals.

That's what they are. I think there's no other criterion that make your goals the "right" ones other than that you would in fact choose these goals upon careful reflection.

Whose "right" are we talking about, here? If it's "right" according to effective altruism, that is obviously false: someone who discovers they like murdering is wrong by EA standards (as well as those of the general population). "Careful reflection" also isn't enough for humans to converge on an answer for themselves. If it was, tens of thousands of philosophers should have managed to map out morality, and we wouldn't need the likes of MIRI.

Why should (some) people who are partial EAs not be pushed to become full EAs? Or why should (some) full EAs not be pushed to become partial EAs? Do you expect people to just happen to have the morality which has highest utility[1] by this standard? I suppose there is the trivial solution where people should always have the morality they have, but in that case we can't judge people who like murdering.

I think there's absolutely nothing wrong with 1), if your goals are different from mine then that doesn't necessarily mean you're making a mistake about your goals. I personally focus on suffering and don't care about preventing death, all else being equal, but I don't (necessarily) consider you irrational for doing so.

People's goals can be changed and/or people can be wrong about their goals, depending on what you consider proper "goals". I'm sufficiently confident that I'm either misunderstanding you or that you're wrong about your morality that I can point out that the best way to achieve "minimise suffering, without caring about death" is to kill things as painlessly as possible (and by extension, to kill everything everywhere). I would expect people who believe they are suffering-minimisers to be objectively wrong.

I'm arguing within a framework of moral anti-realism. I basically don't understand what people mean by the term "good" that could do the philosophical work they expect it to do. A partial EA is someone who would refuse to self-modify to become more altruistic IF this conflicts with other goals (like personal happiness, specific commitments/relationships, etc). I don't think there's any meaningful and fruitful sense in which these people are doing something bad or making some sort of mistake, all you can say is that they're being less altruistic as someone with a 100%-EA goal, and they would reply: "Yes."

Just because there is no objective morality, that doesn't mean people can't be wrong about their own morality. We can observe that people can be convinced to become more altruistic, which contradicts your model: if they were true partial EAs, they would refuse because anything other than what they believe is worse. I don't expect warring ideological states to be made up of people who all happened to be born with the right moral priors at the right time to oppose one another; their environment is much more likely to play a deciding role in what they believe. And environments can be changed, for example by telling people that they're wrong and you're right.

Regarding your second confusion, not knowing how "good" works in a framework of moral anti-realism. Basically, in that case, every agent has its morality where doing good is "good" and doing bad is bad. What's good according to the cat is bad according to the mouse. Humans are sort of like agents and we're all sort of similar, so our moralities tend to always be sort of the same. So much so that I can say many things are good according to humanity, and have it make a decent amount of sense. In common speech, we drop the "according to [x]". Also note that agents can judge each other just as they can judge objects. We can say that Effective Altruism is good and murder is bad, so we can say that an agent becoming more likely to do effective altruism is good and one becoming less likely to commit murder is good.

But the thing is, some people have tried and failed and feel miserable about it, or even the thought of trying makes them feel miserable, so that certainly cannot be ideal because these people aren't being productive at that point.

That isn't trivial. If 1 out of X miserable people manages to find a way to make things work, they could eventually be more productive than Y people who chose to give up on levelling up and be 'regular' EAs instead, with Y greater than X, and in that case we should advise people to keep trying even if they're depressed and miserable. But more importantly, it's a false choice: it should be possible for people to be less miserable and still continue trying, and you could give advice on how to do that, if you know it. Signing up for a CFAR workshop might help, or showing some sort of clear evidence that happiness increases productivity. Compared to LessWrong posts, this is very light on evidence.

Human brains are not designed to optimize towards a single goal. It can drive you crazy. For some, it works, for others, it probably does not.

This looks like you're contradicting yourself, so I'm not sure if I understand you correctly. But if you mean the first two sentences, do you have a source for that, or could you otherwise explain why you believe it? It doesn't seem obvious to me, and if it's true I need to change my mind.

[1] This may include their personal happiness, EA productivity, their right not to have their minds overwritten, etc.

undefined @ 2015-03-30T12:50 (+1)

Someone who is so emotionally affected by EA that they give up is definitely someone who 'merely tried' to affect the world, because you can't just give up if you care in an agentic sense.

I strongly disagree. Why would people be so deeply affected if they didn't truly care? The way I see it, when you give up EA because it's causing you too much stress, what happens constitutes a failure of goal-preservation, which is irrational, but after you've given up, you've become a different sort of agent. Just because you don't care/try anymore does not mean that caring/trying in the earlier stages was somehow fake.

Giving up is not a rational decision made by your system-2*; it's a coping mechanism triggered by your system-1 feeling miserable, which then creates changes/rationalizations in system-2 that could become permanent. As I said before (and you expressed skepticism), humans are not designed to efficiently pursue a single goal. A neuroscientist of the future, once the remaining mysteries of the human brain have been solved, will not be able to look at people's brains and read out a clear utility function. Instead, what you have is a web of situational heuristics (system-1), combined with some explicitly or implicitly represented beliefs and goals (system-2), which can well be contradictory. There is often no clear way to extract a utility function. Of course, people can decide to do what they can to self-modify towards becoming more agenty, and some succeed quite well despite all the messy obstacles the brain throws at them. But if your ideal self-image and system-2 goals are too far removed from your system-1 intuitions and generally the way your mind works, then this will create a tension that leads to unhappiness and quite likely cognitive dissonance somewhere. If you keep going without changing anything, the outcome won't be good for either you or your goals.

You mentioned in your earlier comment that lowering your expectations is exactly what evading cognitive dissonance is. Indeed! But look at the alternatives: if your expectations are impossible for you to fulfill, then you cannot reduce cognitive dissonance by improving your behavior. So either you lower your expectations (which preserves your EA goal!), or you don't, in which case the only way to reduce the cognitive dissonance is by rationalizing and changing your goal. By choosing strategies like "Avoiding daily dilemmas", you're not changing your goals; you're only changing the expectations you set for yourself in regard to these goals.

[…] to me at least it isn't obvious if you meant that people should still try their hardest to be highly agentic but merely not beat themselves up over falling short.

Have you considered that for some people, the most agenty thing to do would be to change their decision-procedure so it becomes less "agenty"?

An analogy (adapted from Parfit): You have a combination safe at home and get robbed; the robbers want the combination from you and threaten to kill your family if you don't comply. The safe contains something that is extremely valuable to you, e.g. you and your family would be in gigantic trouble if it got stolen. You realize that the robbers are probably going to kill you and your family anyway once they've gotten the safe open, because you have all seen their faces. What do you do? Now imagine you had a magic pill that temporarily turns you, a rational, agenty person, into an irrational person whose actions don't make sense. Imagine that this state would be transparent to the robbers, i.e. they would know with certainty that you're not faking it and therefore realize that they can't get the combination from you. Should you take the pill, or would you say "I'm rational, so I can never decide to try to become less rational in the future"? Of course you'd take the pill, because the rational action for your present self here is to render your future self irrational.

Likewise: if you notice that trying to be more agenty is counterproductive in important regards, the right/rational/agenty thing for you to do would be to try to become a bit less agenty in the future. In the EA analogy, the robbers are your system-1 and personality, and the person being robbed is your idealized self-image/system-2/goals. With EA being too demanding, you don't even have to change your goals; it suffices to adjust your expectations of yourself. Both would have the same desired effect, the main difference being that, if you don't change your goals, you would want to become more agenty again if you discovered a means to overcome your obstacles in a different way.

Whose "right" are we talking about, here? If it's "right" according to effective altruism, that is obviously false: someone who discovers they like murdering is wrong by EA standards (as well as those of the general population).

We were talking about whether and to what extent people's goals contain EA components. If parts of people's goals contradict EA tenets, then of course they cannot be (fully) EA. I do agree with your implicit point that "right according to x" is a meaningful notion if "x" is sufficiently clearly defined.

"Careful reflection" also isn't enough for humans to converge on an answer for themselves. If it was, tens of thousands of philosophers should have managed to map out morality, and we wouldn't need the likes of MIRI.

Are you equating "morality" with "figuring out an answer for one's goals that converges for all humans"? If yes, then I suspect that the reference of "morality" fails because goals probably don't converge (completely). Why is there so much disagreement in moral philosophy? To a large extent, people seem to be trying to answer different questions. In addition, some people are certainly being irrational at what they're trying to do, e.g. they fail to distinguish between things that they care about terminally and things they only care about instrumentally, or they might fail to even ask fundamental questions.

People's goals can be changed and/or people can be wrong about their goals, depending on what you consider proper "goals".

I agree; see the 2nd footnote in my original post. The point where we disagree is whether you can infer, from an existing disagreement about goals, that at least one participant is necessarily being irrational/wrong about her goals. I'm saying that's not the case.

I'm sufficiently confident that I'm either misunderstanding you or that you're wrong about your morality [...]

I have probably thought about my values more than most EAs and have gone to unusual lengths to lay out my arguments and reasons. If you want to try to find mistakes, inconsistencies or thought experiments that would make me change them, feel free to send me a PM here or on FB.

Humans are sort of like agents and we're all sort of similar, so our moralities tend to always be sort of the same.

With lots of caveats, e.g. people will be more or less altruistic, and that's part of your "morality-axis" as well if morality=your_goals. In addition, people will disagree about the specifics of even such "straightforward" things as what "altruism" implies. Is it altruistic to give someone a sleeping pill against their will if they plan to engage in some activity you consider bad for them? Is it altruistic to turn rocks into happy people? People will disagree about what they would choose here, and it's entirely possible that they are not making any meaningful sort of mistake in the process of disagreeing.

That isn't trivial. If 1 out of X miserable people manages to find a way to make things work, they could eventually be more productive than Y people who chose to give up on levelling up and be 'regular' EAs instead, with Y greater than X, and in that case we should advise people to keep trying even if they're depressed and miserable.

OK, but even so, I would in that case at least be right about the theoretical possibility of there being people to whom my advice applies. For what it's worth, I consider it dangerous that EA will be associated with a lot of "bad press" if people drop out due to it being too stressful. All my experience with pitching EA so far indicates that it's bad to be too demanding. Sure, you can say you should only be demanding towards established EAs and not towards newcomers, but you won't be able to keep up a memetic barrier there.

But more importantly, it's a false choice: it should be possible to have people be less miserable but still to continue trying, and you could give advice on how to do that, if you know it.

As a general point, I object to your choice of words: I don't think my posts ever argued for people to stop trying. I'm putting a lot of emphasis on getting right the things that you do do, like splitting up separate motivations for donating so people don't end up donating only for fuzzies due to rationalizing or actually giving up. I agree with the sentiment that advice is better the more it helps people stay well and still remain super-productive, and I tried to give some advice in that direction; e.g. "Viewing life as a game" is also useful for when you're thinking about EA all the time. Of course I don't have all the advice either; I welcome more contributions, and from what I've heard, CFAR has helped a lot of people in these regards.

* I'm using terms like "system-2" in a non-technical sense here.