What should an effective altruist be committed to?
By peterhartree @ 2014-12-17T13:21 (+9)
If I were to self-identify as an effective altruist, I might take myself merely to have committed to (*).
undefined @ 2014-12-17T15:05 (+8)
I think this post is long overdue. People often get stressed about how to be more effective, and if we made it clear that we were including people who demand less severe sacrifices from themselves, then we might have more allies.
And as EA increasingly becomes a political and scientific movement, allies will be increasingly important. Getting prominent politicians and scientists like Stephen Hawking or Bill Gates to affiliate with the philosophy is much more important than getting non-prominent people to offer to meet more extreme demands. If we need to recruit allies to make a societal change, this will be easier if we define EA in a way that is not extremely demanding.
Of course, more extreme generosity is still better. But there's a cap - once you give away about half or three-quarters of your funds, you will run out. Whereas how effectively or cleverly we can donate has no obvious upper bound. If we gather greater insights, we can always start newer and better projects.
As any philosophical movement gains widespread support, its ideas get diluted. The wider public will take hold of the central message - as with women's suffrage, African-American civil rights or environmentalism - but some of the details will be lost, and some of the message will be watered down. So it's important for us to think about which issues have room for compromise and which don't. The whole idea of extreme self-sacrifice has pretty mixed effects, so we don't need it in a bigger effective altruism movement. The importance of evidence and reason in altruistic action is what's new and indispensable. So it would be nice if we could all relax our expectations of demandingness for a while.
undefined @ 2014-12-17T23:41 (+3)
I don't think it's clear whether we should insist on anything in particular - I don't see it as a no-brainer either way.
"Getting prominent politicians and scientists like Stephen Hawking or Bill Gates to affiliate..."
People respect and are impressed by those who are making big sacrifices for the benefit of others. Note how much attention and respect from important figures Toby Ord has gotten by pledging a large fraction of his income. Also the kudos generally given to doctors, soldiers, firefighters, Mother Teresa, etc.
"But there's a cap - once you give away about half or three-quarters of your funds, you will run out. Whereas how effectively or cleverly we can donate has no obvious upper bound. If we gather greater insights, we can always start newer and better projects."
We might be able to get people to give 50 times more than they do now (from an average of 1% of income to 50%). Do you think we can persuade many people, who wouldn't be motivated to give more, to instead give to a charity that is, ex ante, 50x better than the one they currently support (keeping in mind the mean of a log-normal distribution is already much higher than the median due to the right tail)?
"As any philosophical movement gains widespread support, its idea gets watered down."
This seems like an argument in favour of very high expectations to start with, knowing the message will be diluted later on anyway as more people get involved.
"The whole idea of extreme self-sacrifice has pretty mixed effects"
A standard doesn't have to and shouldn't embody extreme self-sacrifice; it can just ask for something like 10%, which is not extreme - indeed it used to be the norm.
Strong rules can make for stronger communities: http://slatestarcodex.com/2014/12/24/there-are-rules-here/
Also note the empirical regularity that churches that place high demands on their members tend to last longer and have more internal cooperation (e.g. http://ccr.sagepub.com/content/37/2/211.abstract).
Quote relating to this:
"Which is too bad, because the theology of liberal Protestantism is pretty admirable. Openness to the validity of other traditions, respect for doubters and for skeptical thinkers, acceptance of the findings of science, pro-environmentalism – if I had to pick a church off a sheet of paper, I’d choose a liberal denomination like the United Church of Christ or the Episcopalians any day. But their openness and refusal to be exclusive – to demand standards for belonging – is also their downfall. By agreeing not to erect any high threshold for belonging, the liberal Protestant churches make their boundaries so porous that everything of substance leaks out, mingling with the secular culture around them.
So what if liberal Protestants kept their open-minded, tolerant theology, but started being strict about it – kicking people out for not showing up, or for not volunteering enough? Liberals have historically been wary of authority and its abuses, and so are hesitant about being strict. But strictness matters, if for no other reason because conservatives are so good at it: most of the strict, costly requirements for belonging to Christian churches in American today have to do with believing theologies that contradict science, or see non-Christians as damned. What if liberal Protestantism flexed its muscle, stood up straight, and demanded its own standards of commitment – to service of God and other people, to the dignity of women, and to radical environmental protection? Parishioners would have to make real sacrifices in these areas, or they’d risk exclusion. They couldn’t just talk the talk. By being strict about the important things, could liberal Protestant churches make their followers walk the walk of their faith – and save their denominations in the process?"
http://www.patheos.com/blogs/scienceonreligion/2013/07/why-is-liberal-protestantism-dying-anyway/
undefined @ 2014-12-20T06:20 (+4)
Do you think we can persuade many people, who wouldn't be motivated to give more, to give to a charity that is, ex ante, 50x better than they do now on average (keeping in mind the mean of a log-normal distribution is already quite high due to the right tail)?
This is a backwards interpretation of the dynamics of log-normal distributions.
The (rough) equivalent operation of moving everyone's donations from 1% to 50% would be moving everyone's donations from the (dollar-weighted) mean charity to the best charity. Although (as you noted) the heavier tail of a log-normal distribution means that the sample mean is higher relative to the mode or median, it has an even stronger effect on the sample maximum.
This means that overall, a log-normal has a higher, not lower, maximum-to-mean ratio than a thinner-tailed distribution like the normal, for a fixed median and standard deviation. For instance, in numerical simulations I just ran with 100 samples each from a log-normal and a normal distribution, both with median 2 and variance approximately 4.6, the average ratio of sample maximum to sample mean was 5.5 for the log-normal and 3.7 for the normal.
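A simulation along these lines can be reproduced with a few lines of NumPy. The parameter choices below (mu = ln 2 for a median of 2, sigma ≈ 0.72 as an approximate back-fit to a variance of roughly 4.6) are my reconstruction, not the original code:

```python
import numpy as np

rng = np.random.default_rng(0)

def avg_max_mean_ratio(sampler, n=100, trials=5000):
    """Average, over many trials, of sample-max / sample-mean for n draws."""
    samples = sampler((trials, n))
    return float((samples.max(axis=1) / samples.mean(axis=1)).mean())

# Log-normal with median 2 (mu = ln 2); sigma ~ 0.72 gives variance ~ 4.6.
lognorm = avg_max_mean_ratio(lambda size: rng.lognormal(np.log(2), 0.72, size))
# Normal with the same median (= mean) 2 and variance 4.6.
norm = avg_max_mean_ratio(lambda size: rng.normal(2, np.sqrt(4.6), size))

print(f"log-normal: {lognorm:.1f}, normal: {norm:.1f}")
```

With these (reconstructed) parameters the log-normal's max-to-mean ratio comes out noticeably higher than the normal's, in line with the figures quoted above.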
undefined @ 2014-12-22T14:00 (+1)
Yes, but ex ante. The higher up the distribution you go, the harder the opportunities will be to identify, because 'if it's transformative it's not measurable; if it's measurable it's not transformative'. The weakness of the measurements means you're going to be hit with a lot of regression to the mean.
Also, such opportunities are likely to be weird (they must be extreme on neglectedness if they're so important yet still useful to put more money into), so just as it's hard to get someone to give enormous amounts, it will probably also be hard to move donations there.
undefined @ 2014-12-22T16:13 (+3)
Another speculative argument in favor of big asks: most charity seems to focus on making small asks, because people think getting people on the first step towards making a difference is the crucial bottleneck (e.g. see this guy), so the space of 'making big asks' is neglected. This means it's unusually effective to work in this space, even if you appeal to many fewer people.
This seems to be one of the main reasons GWWC has been much more effective than ordinary fundraising techniques.
The downside is that we're concerned with total scale as well as cost-effectiveness, and a 'big ask' approach probably has less total growth potential in the long-run.
undefined @ 2014-12-24T16:09 (+1)
Scott Alexander makes my point, that strong rules make for strong communities, better than I did here: http://slatestarcodex.com/2014/12/24/there-are-rules-here/
undefined @ 2014-12-23T09:11 (+3)
I think there are two questions here:
- How much of my time should I allocate to altruistic endeavour?
- How should I use the time I’ve allocated to altruistic endeavour?
Effective altruism clearly has a lot to say about (2). It could also say some things about (1), but I don’t think it is obliged to. These look like questions that can be addressed (fairly) independently of one another.
An aside: a weakness of the unqualified phrase “do the most good” is that it blurs these two questions. If you characterise the effective altruist as someone who wants to “do the most good”, it’s easy to give the impression that they are committed to maximising both the effectiveness of their altruistic endeavour and the amount of time they allocate to altruistic endeavour.
I’m quite keen on Rob’s proposed characterisation of an effective altruist, which remains fairly quiet on (1):
Someone who believes that to be a good altruist, you should use evidence and reason to do the most good with your altruistic actions, and puts at least some time or money behind the things they therefore believe will do the most good.
This strikes me as a substantive and inclusive idea. Complementary communities or sub-groups could form around the idea of giving 10%, giving 50%, etc, and effective altruists might be encouraged - but not obliged - to join them.
Much of the discussion in this thread has focussed on the question of which characterisation of effective altruism would have the greater impact potential in the long-run. In particular, whether a more demanding characterisation, likely to limit appeal, might nonetheless have a greater overall impact. I don't have much to add to what's been said, except to flag that an inclusive characterisation is likely to bring more diversity to the community - a quality it's somewhat lacking at present.
undefined @ 2014-12-20T18:50 (+3)
For better or for worse, I think it may be difficult to "police the door" on who should and shouldn't call themselves an effective altruist. For example, a whole lot of people call themselves environmentalists, even if they're doing little or nothing for the environment besides holding opinions positive to environmentalism. On the flip side, there are people doing more for the environment than the typical environmentalist.
In practice, I think that what words people use to describe themselves has more to do with what words their friends use to describe themselves. This applies to me too-- like Peter, I'm a GWWC member, but I don't self-identify as effective altruist, and I think this is because I don't feel very connected to the community.
I think this works in reverse too. "Queer" is a word that naively seems well defined to exclude some people, but I know people who self-identify as queer even though they are both straight and cisgender. I'm not criticizing--these people are also usually careful to communicate clearly about what this means. I say this to point out how difficult it can be to clearly define group membership.
GWWC has a well defined criterion for membership, and there could be other similar organizations with well defined criteria, but I'm not sure that we could give the movement itself a well defined criterion even if we wanted to.
undefined @ 2014-12-18T05:22 (+2)
I don't know about how to attract people to something but I certainly know a surefire way to turn them off: make them feel judged. I find that nothing will make someone like you more than making them feel validated and nothing will make someone hate you (or reject your position) more than if they feel that you're judging them. Look at videos of Singer's lectures to universities about EA and utilitarianism. Most of the students' questions afterwards are negative, sometimes strongly so. It's because they feel like he is judging them for being selfish. That's also why people tend to be negative towards vegans: they feel like the vegans probably think they're bad people for eating meat so they are resentful towards them.
Part of me likes the 10% standard, but part of me thinks that people who don't plan on giving that much will feel judged and therefore develop animosity towards the movement, dismissing the whole effectiveness thing outright. I think that since people think so little about their impact in the world, a “the more good you do the better” attitude will probably be most productive. An all-or-nothing “if you donate less than 10%, or not to a 'top' charity, you're not a real effective altruist, or you're a moral failure” attitude will probably just result in people rejecting EA altogether, just as an “abolitionist” you're-a-horrible-person-if-you-consume-any-animal-products vegan stance results in most people simply dismissing changing their diets in any way.
Having said that, it's good to have an achievable goal, and people are driven by aspiring to achieve something or be greater than they are, so I think the 10% standard would be net positive as long as it's just considered an ideal (the low end of the ideal) without any stigma for falling short of it.
undefined @ 2014-12-17T17:13 (+2)
The problem with this definition is that someone who did absolutely nothing to help others could hold that belief and qualify. That seems quite strange and confusing. At the EA Summit this year I proposed the following on my slides:
Possible standards
- ‘Significant’ altruism. One of:
  - Money: 10% of income or more?
  - Time: 10% of hours or more?
  - Or a ‘significant’ change in career path?
- Open-mindedness:
  - Willing to change beliefs in response to evidence
  - Cause neutral: if given good reasons to believe that a different cause area will do more good, will switch to that
- Must hold a ‘reasonable’ view of what is good (no Nazi clause)
Read more: https://drive.google.com/file/d/0B8_48dde-9C3WUVkTGdoUEliQ0E/view?usp=sharing
undefined @ 2014-12-17T20:39 (+4)
The problem with this definition is that someone who did absolutely nothing to help others could hold that belief and qualify.
Well zero can often be an awkward edge case but we don't really need a definition to tell us that someone who does nothing for others isn't an effective altruist. However, when someone does a small amount for others, if they're giving a small amount to highly effective causes, they can be a very important part of the extended altruism community. Take Peter Thiel, who seems to give <1%, or think of Richard Posner or Martin Rees, who have made huge contributions to the GCR-reduction space over many years, using a small fraction of their working hours.
On a related note, a lot of people think like effective altruists but don't act on it. I've found that it can be dangerous to write these kinds of people off, because often you come back and meet them again in a few years and find they have taken concrete actions, donating their time or other resources to help others.
Last, I just worry about the whole notion of applying 'standards' of effective altruism. The whole approach seems really wrong-headed. It doesn't feel useful to try to appraise whether people are sufficiently generous or "open-minded" or "willing to update" to "count" as "one of us". It's pretty common for people to say to me that they're not sure whether they "count" as an effective altruist. But that's obviously not what it's about. And I think we should be taking a loud and clear message from these kinds of cases that we're doing something wrong.
undefined @ 2014-12-17T21:45 (+1)
I think this is exactly right. Encouraging people to do more is of course great, but while in theory excluding people for not meeting a certain standard might nudge people up to that standard, I think in practice it's likely to result in a much smaller movement. Positive feedback for taking increasingly significant actions seems like a better motivator.
If we did spread the idea of effectiveness very widely but it didn't have a high level of altruism attached to it, I think that would already achieve a lot, and I think it would also be a world in which it was easier to persuade many people to be more altruistic.
undefined @ 2014-12-17T23:44 (+1)
"while in theory excluding people for not meeting a certain standard might nudge people up to that standard, I think in practice it's likely to result in a much smaller movement."
What makes you think that? I just have no idea which of the effects (encouraging people to do more; discouraging them from taking a greater interest) dominates.
undefined @ 2014-12-18T11:41 (+5)
Thanks for asking this question. I found it helpful to introspect on my reasons for thinking this.
Roughly, I picture a model where I have huge uncertainty over how far the movement will spread (~5 orders of magnitude), and merely large uncertainty over how much the average person involved will do (<2 orders of magnitude). This makes it more important right now to care about gaining percentage improvements in movement breadth than in commitment. Additionally, the growth model likely includes an exponential component, so nudging up the rate has compounding effects.
To put that another way, I see a lot of the expected value of the movement coming from scenarios where it gets very big (even though these are unlikely), so it's worth trying to maximise the chance of that happening. If we get to a point where it seems with high likelihood that it will become very big, it seems more worthwhile to start optimising value/person.
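The model above can be sketched numerically. The log-uniform ranges below (5 orders of magnitude for breadth, 2 for per-person contribution) are hypothetical stand-ins for the stated uncertainties, not actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # sampled scenarios

# Hypothetical stand-ins for the stated uncertainties:
# breadth spans ~5 orders of magnitude, per-person contribution ~2.
breadth = 10 ** rng.uniform(3, 8, n)      # people reached (illustrative range)
per_person = 10 ** rng.uniform(0, 2, n)   # contribution per person, arbitrary units
total = breadth * per_person

# Share of expected value contributed by the top 1% of scenarios.
cutoff = np.quantile(total, 0.99)
top_share = total[total >= cutoff].sum() / total.sum()
print(f"top 1% of scenarios carry {top_share:.0%} of expected value")
```

Under these toy assumptions, the top 1% of scenarios carries a hugely disproportionate share of the expected value, which is the sense in which rare very-big-movement outcomes dominate.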
Two caveats here:
(i) It might be that demanding standards will help growth rather than hinder it. My intuition says not and that it's important to make drivers feel positive rather than negative, but I'm not certain.
(ii) My reasoning suggests paying a lot more attention at the margin to the effects on growth than to individual altruism - more than you might at first think - but it doesn't say the ratio is infinite. We should err in both directions, taking at least some actions which push people to do more even if they hinder growth. The question is where we are on the spectrum right now. My perception is that we're already making noticeable trade-offs in this direction, so perhaps going too far, but I might be persuadable otherwise.
undefined @ 2014-12-20T18:26 (+2)
I have a different reason for thinking this is true, which involves fewer numbers and more personal experience and intuition.
Having a high standard--either you make major changes in your life or you're not an effective altruist--will probably fail, because people aren't used to or willing to make big, sudden changes in their lives. It's hard to imagine donating half your income from the point of view of someone currently donating nothing; it's much easier to imagine doing that if you're already donating 20% or 30%. When I was first exposed to EA, I found it very weird and vaguely threatening, and I could definitely not have jumped from that state to earning to give. Not that I have since gone that far, but I do donate 10%, and the idea of donating more is at least contemplatable. Even if you mostly care about the number of people who end up very highly committed, having low or medium standards gives people plausible first steps on a ladder towards that state.
As an analogy, take Catholics and nuns. There are many Catholics and very few nuns, and even fewer of those nuns were people who converted to Catholicism and then immediately became nuns. If there was no way to be Catholic except being a nun, the only people who could possibly be nuns would be the people who converted and then immediately became nuns.
undefined @ 2014-12-17T14:43 (+2)
Interesting topic Peter!
If someone gives 10% of their income to effective charities, I don't think anyone would say that they don't count as an EA because they're not devoting all their actions and resources to altruism. (This is not to say that giving 10% of your income is required, only that it's sufficient.)
I don't think we'd want effective altruism to make claims about what it takes to "be a good person". EA says that you have the opportunity to do a lot of good with your resources, and that there's a strong moral case for devoting a large portion of them to doing so. But there's no non-arbitrary portion of your resources that you're required to devote to this to count as a good person.
I think that many people count someone as an EA if they subscribe to (*), regardless of what actions they take on the basis of it - perhaps even if they don't take any actions at all. I'd be curious as to others' views of this.
undefined @ 2014-12-22T23:18 (+1)
One reasonable starting point for this would be to get a list of 'sufficient' rather than 'necessary' conditions, which provide definition without being necessarily exclusionary. For instance, I think being in GWWC and keeping your pledge is a clear sufficient condition that we're unlikely to want to change or contest. What are some others?
undefined @ 2014-12-17T16:11 (+1)
Worth reminding everyone of the most upvoted post on this forum to date: "Effective Altruism is a Question (Not an Ideology)".