Effective Altruism as Global Catastrophe Mitigation
By Evan_Gaensbauer @ 2018-06-08T04:35 (+7)
Global poverty seems to be the least historically contingent cause area: if Peter Singer’s Famine, Affluence, and Morality didn’t get people to help people in the developing world, Peter Unger or any of the other people working with similar ideas would have. The popularity of animal rights, however, is clearly connected to the fact that Peter Singer, a prominent early effective altruist, wrote Animal Liberation, one of the foundational books of the animal rights movement. He carried over a significant amount of his fanbase from animal rights into effective altruism.

As a cause area, existential risk reduction seems to be solely a product of Eliezer Yudkowsky, a tireless promoter of effective altruism who researches the risks of artificial general intelligence. In my experience, interest in other kinds of existential risk among effective altruists appears to be primarily a product of people who accept Eliezer’s arguments about the importance of the far future and existential risk, but who are skeptical about the importance of the specific issue of artificial general intelligence.

Existential risk reduction, global poverty, and animal rights all seem to me to be important issues. But “global poverty, plus the pet issues of people who got a lot of people into EA” doesn’t seem to me to be a cause-area-finding mechanism that eliminates blind spots. I ask myself: what are we missing?

The most obvious thing we’re missing, of course, is politics. I can hear all of my readers groaning now, because “effective altruism doesn’t pay enough attention to politics” is the single most tired criticism of effective altruism in the entire world. I do think, however, it is trotted out ad nauseam because it has a point. There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”.
And while development in Africa is a fiendishly difficult topic, so are wild-animal suffering and preventing existential risk, and effective altruists seem to have mostly approached those with an attitude of “challenge accepted”.

(It’s possible, however, that development is not sufficiently neglected for effective altruists to improve the situation much. I don’t know enough to have an opinion on the issue.)

However, the most interesting question of blind spots, to me, is not that. The three primary cause areas of effective altruism all advocate for particular groups of beings who are commonly overlooked: the global poor, animals, and people who don’t yet exist. The question arises: what beings are effective altruists overlooking?
Indeed, effective altruists joke that the four focus areas of EA come from one blog post Luke Muehlhauser wrote as a description of the nascent EA movement, which we all took as a prescription for what EA was supposed to be about. However, EA's three major focus areas aren't that historically contingent. In 'Why do effective altruists support the causes we do?', Michelle Hutchinson of the Centre for Effective Altruism (CEA) explains that while EA's three major focus areas began as historical precedent, they fit into a framework that seemingly exhausts the space of possible beneficiaries effective interventions could focus on. From Michelle's post:
I think the reason is that our current delineation of causes cuts along beneficiary lines: present humans, non-human animals, and future conscious beings. Some of the most significant insights of effective altruism in terms of finding more effective ways to help others have come from highlighting different beneficiary groups. Since the groups above seem to exhaust the space of beneficiaries (if what we care about is well-being), we can’t expect to get more effectiveness improvements in this way. In future, such improvements will have to come from finding new interventions, or intervention types. These are harder to find, and likely to lead to fewer orders of magnitude improvement. This post is on how the current ‘causes in EA’ seem to arise from distinguishing beneficiary groups. In a follow up I’ll discuss what the implications of that might be in terms of our likelihood of finding more effective causes.

‘Cause’ is a very fuzzy term. If you think of the different things that we tend to talk about as causes, they actually seem to fall in different categories. Take the causes ‘alleviating global poverty’ and ‘structural change’. These are sometimes described as alternatives to each other, yet structural change seems more naturally a way to achieve the alleviation of poverty. This is likewise true of the cause ‘meta’. This fuzziness increases all the more what could fall into the category of ‘causes we could be supporting’, and makes it all the more surprising that there would be three singled out.

Distinguishing groups of beneficiaries

The starting point of effective altruism is increasing well-being: not just of those close to and similar to us, but all over the world and into the future. EA activities therefore fall into three groups: helping people currently existing, helping non-human animals, and helping future conscious beings. This maps out the whole space of possible effective altruism activities.
There are altruistic activities which fall outside this grouping – for example, working to improve biodiversity for its own sake. But these don’t improve anyone’s well-being, and so fall outside the scope of effective altruism.
There are other ways to group beneficiaries than the three categories above. You might distinguish simply between existing sentient beings and future sentient beings. Or you might draw finer distinctions, such as between non-human animals whose suffering is caused by humans and wild animals.

There are several reasons for using the three-fold distinction amongst beneficiaries:

- The groups are systematically different in how much information we have about cost-effectiveness: we have first-hand experience of how good it is to help other humans, while it’s more difficult to know how to compare helping humans to helping other animals. Interventions that help others in the present can typically be tested for how well they work, unlike interventions aimed at the future.

- The kinds of things which will help one group will typically be somewhat more similar to each other than to those which help the other groups.

- These distinctions are typically drawn quite strongly in people’s minds, and they represent different ways in which humanity’s moral circle could do with being widened: to people spatially far away, to people temporally far away, and to species other than our own.
Let us look more closely at what would, and would not, count as a global catastrophic risk. Recall that the damage caused must be serious, and the scale global. Given this, a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage.

Global catastrophes have occurred many times in history, even if we only count the disasters causing more than 10 million deaths. A very partial list of examples might include the An Shi Rebellion (756-763), the Taiping Rebellion (1851-1864), and the famine of the Great Leap Forward in China, the Black Death in Europe, the Spanish flu pandemic, the two world wars, the Nazi genocides, the famines of British India, Stalinist totalitarianism, the decimation of the native American population through smallpox and other diseases following the arrival of European colonizers, probably the Mongol conquests, perhaps the Belgian Congo--innumerable others could be added to the list depending on how various misfortunes and chronic conditions are individuated and classified.

We can roughly characterize the severity of a risk by three variables: its scope (how many people - and other morally relevant beings - would be affected), its intensity (how badly these would be affected), and its probability (how likely the disaster is to occur, according to our best judgement, given currently available evidence). Using the first two of these variables, we can construct a qualitative diagram of different types of risk (Fig. 1.1).
(The probability dimension could be displayed along a z-axis were this diagram three-dimensional.)

The scope of a risk can be personal (affecting only one person), local, global (affecting a large part of the human population), or trans-generational (affecting not only the current world population but all generations that could come to exist in the future). The intensity of the risk can be classified as imperceptible (barely noticeable), endurable (causing significant harm but not destroying quality of life completely), or terminal (causing death or permanently and drastically reducing quality of life). In this taxonomy, global catastrophic risks occupy the four risk classes in the high-severity upper-right corner of the figure: a global catastrophic risk is of either global or trans-generational scope, and of either endurable or terminal intensity.

In principle, as suggested in the figure, the axes can be extended to encompass conceptually possible risks that are even more extreme. In particular, trans-generational risks can contain a subclass of risks so destructive that their realization would not only affect or pre-empt future human generations, but would also destroy the potential of the part of the universe in our future light cone to produce intelligent or self-aware beings (labelled ‘Cosmic’). On the other hand, according to many theories of value there can be states of being that are even worse than non-existence or death (e.g., permanent and extreme forms of slavery or mind control), so it could, in principle, be possible to extend the x-axis to the right as well (see Fig. 1.1, labelled ‘Hellish’).
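As an illustrative aside (my own sketch, not part of the excerpt above), the taxonomy just described can be encoded directly: the scope and intensity scales are ordered, and the "upper-right corner" rule is a simple threshold check. The category names below follow the taxonomy; everything else is a hypothetical illustration.

```python
# Sketch of the scope/intensity taxonomy quoted above. The orderings run
# from least to most severe; a global catastrophic risk must be at least
# global in scope AND at least endurable in intensity.
SCOPES = ["personal", "local", "global", "trans-generational"]
INTENSITIES = ["imperceptible", "endurable", "terminal"]

def is_global_catastrophic(scope: str, intensity: str) -> bool:
    """True when the risk falls in the high-severity corner of Fig. 1.1."""
    return (SCOPES.index(scope) >= SCOPES.index("global")
            and INTENSITIES.index(intensity) >= INTENSITIES.index("endurable"))

print(is_global_catastrophic("global", "endurable"))             # True
print(is_global_catastrophic("local", "terminal"))               # False
print(is_global_catastrophic("trans-generational", "terminal"))  # True
```

Exactly four of the twelve (scope, intensity) pairs pass the check, matching the "four risk classes" in the upper-right corner of the figure.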
A subset of global catastrophic risks is existential risks. An existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to reduce its quality of life (compared to what would otherwise have been possible) permanently and drastically. Existential risks share a number of features that mark them out as deserving of special consideration. For example, since it is not possible to recover from existential risks, we cannot allow even one existential disaster to happen; there would be no opportunity to learn from experience. Our approach to managing such risks must be proactive. How much worse an existential catastrophe would be than a non-existential global catastrophe depends very sensitively on controversial issues in value theory, in particular how much weight to give to the lives of possible future persons. Furthermore, assessing existential risks raises distinctive methodological problems having to do with observation selection effects and the need to avoid anthropic bias.
One major current global catastrophic risk is infectious pandemic disease. As noted earlier, infectious disease causes approximately 15 million deaths per year, of which 75% occur in Southeast Asia and Sub-Saharan Africa. These dismal statistics pose a challenge to the classification of pandemic disease as a global catastrophic risk. One could argue that infectious disease is not so much a risk as an ongoing global catastrophe. Even on a more fine-grained individuation of the hazard, based on specific infectious agents, at least some of the currently occurring pandemics (such as HIV/AIDS, which causes nearly 3 million deaths annually) would presumably qualify as global catastrophes. By similar reckoning, one could argue that cardiovascular disease (responsible for approximately 30% of world mortality, or 18 million deaths per year) and cancer (8 million deaths) are also ongoing global catastrophes. It would be perverse if the study of possible catastrophes that could occur were to drain attention away from actual catastrophes that are occurring.

It is also appropriate, at this juncture, to reflect for a moment on the biggest cause of death and disability of all, namely ageing, which accounts for perhaps two-thirds of the 57 million deaths that occur each year, along with an enormous loss of health and human capital. If ageing were not certain but merely probable, it would immediately shoot to the top of any list of global catastrophic risks. Yet the fact that ageing is not just a possible cause of future death, but a certain cause of present death, should not trick us into trivializing the matter.
To the extent that we have a realistic prospect of mitigating the problem - for example, by disseminating information about healthier lifestyles or by investing more heavily in biogerontological research - we may be able to save a much larger expected number of lives (or quality-adjusted life-years) by making partial progress on this problem than by completely eliminating some of the global catastrophic risks discussed in this volume.
Finally, embracing domain-specific effective altruism diversifies the portfolio of potential impact for effective altruism. Even within the EA movement currently, there are disagreements about the highest-potential causes to champion. Indeed, one could argue that domain-specific effective altruist organizations already exist. Take, for example, Animal Charity Evaluators (ACE) or the Machine Intelligence Research Institute (MIRI), both of which are considered effective altruist organizations by the Centre for Effective Altruism. Animal welfare and the development of “friendly” artificial intelligence are both considered causes of interest for the EA movement. But how should they be evaluated against each other? And more to the point, if it were conclusively determined that friendly AI was the optimal cause to focus on, would ACE and other animal welfare EA charities shut down to avoid diverting attention and resources away from friendly AI? Or vice versa?

The reality, as most EAs will admit, is that virtually all estimates of the expected impact of various interventions are rife with uncertainty. Small adjustments to core assumptions or the emergence of new information can change those calculations dramatically. Even a risk-friendly investor would be considered insane to bank her entire asset base on a single company or industry, and if anything, the information available in the social realm is far less plentiful and precise than is the case in business. Particularly as the EA movement seeks to grow in influence, the idea of risk mitigation is going to become increasingly applicable.

[...]

Effective altruism is a truly transformative idea that has the potential to improve billions of lives – but the movement’s rhetoric and ideology are currently limiting that potential in very significant ways. The few, wonderful people who are prepared to embrace any cause in the name of global empathy should be treasured and cultivated.
But solely relying on them to change the world is very likely a losing strategy. If effective altruists can come up with ways to additionally engage those who want to maximize their impact but are not prepared to abandon causes and geographies they care about deeply, that could be the difference between EA ending up as a footnote to history and becoming the world-changing social force it seeks to be.
-
What group of beneficiaries do the moral patients we're trying to help belong to?
-
Is the negative impact on their well-being from the problem we've identified global and trans-generational in scope?
-
Is the negative impact on well-being of this problem terminal in intensity?
-
Do the probabilities of the variables (e.g., scope, intensity, moral weight, criterion for "well-being" or "moral patienthood", etc.) we assign in our model of the problem make it competitive with existing causes in effective altruism?
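To make that last question concrete, here is a toy sketch (entirely my own illustration, with placeholder numbers rather than real estimates) of how the probabilities assigned to those variables might be multiplied into a crude expected-impact score for comparing a candidate cause against an existing one.

```python
# Toy model only: every figure below is a made-up placeholder, not a real
# estimate of any actual cause. The score is just the product of the
# probabilities assigned to each variable in the model of the problem.
def expected_score(p_global_scope, p_terminal_intensity,
                   p_moral_patienthood, p_tractable):
    """Product of the model's probability-weighted variables."""
    return (p_global_scope * p_terminal_intensity
            * p_moral_patienthood * p_tractable)

candidate_cause = expected_score(0.5, 0.3, 0.8, 0.1)  # hypothetical new cause
existing_cause = expected_score(0.9, 0.2, 1.0, 0.2)   # hypothetical benchmark
print(candidate_cause, existing_cause)
```

With these placeholder numbers the candidate scores lower than the benchmark, so it wouldn't yet look competitive; the point of the exercise is only that small adjustments to any one variable can flip the comparison.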
Here are two toy models for how effective altruism might have thus far looked for the most effective causes.

Model 1: From all the ways of helping others we can find, we hone in on those which seem most effective. We then investigate them in more detail to work out which seem most promising to work on. Some of the interventions effective altruism focuses on are quite surprising, so we shouldn’t think of this as looking at a series of interventions that are already described: we’re precisely looking for ones that haven’t been. On a model like this, the effectiveness of the interventions we’ll find in the future is quite a mystery, and it seems likely that for some time to come most of our efforts should go into trying to find new, more effective causes.

Model 2: We systematically expand our circle of caring to groups people tend to neglect, and doing so highlights novel ways to help others. People tend to care about those close by and similar to them. Over history, the circle of those we care about has gradually widened - for example in coming to understand racism as an evil. You might think that what effective altruism has tried to do is continue this progression - persuading people that they should help not just those in their country, but also on the other side of the world; not just those of their own species but all sentient creatures; not just people currently existing but any people whose lives we can affect. Then in each of those cases it tried to find the most effective way to help that group.

It seems unlikely that either of these models accurately represents how effective altruism has come to focus on the causes it does. But some of the biggest insights of effective altruism do seem to have come from expanding our circle of caring.
The importance of preventing events which could severely affect those in the future for the worse, after all, follows naturally after the realisation that the wellbeing of those in the future matters as much as that of current people.
Model 1 is indeed inaccurate in the wake of the new framework. From within this framework, the bridge between what effective altruists have done, what we're starting to do, and what we can expect we'll do in the future is no longer a mystery. Model 2 is the likelier fit. A fourth group of beneficiaries missing from Michelle's original three is future generations of non-human populations. The Hedonistic Imperative and s-risk reduction both target these populations, unlike the other focus areas of EA, and are aimed at suffering trans-generational in scope and terminal in intensity.
1. New beneficiary groups

Finding subgroups of these three beneficiary groups, or finding groups that cut across these groups, may highlight new effective ways of helping others. (H/t Daniel Dewey for this point.) ‘People currently in extreme poverty’ is an example of a sub-group (of ‘current people’), while ‘people prone to depression’ is an example of a group that cuts across current and future people. Identifying such a group might be useful because it is neglected compared to others. E.g., animal welfare activists typically do not work on the suffering of wild animals, so identifying that sub-group as worth helping was novel. Or identifying such a group might be useful because there is some particular way to help that group which is highlighted by identifying it. E.g., specifically considering the category ‘animals in factory farms’.

In some cases, identifying these groups highlights more effective interventions within a cause, such as concentrating on ending factory farming within the cause of animal rights. In other cases, it may suggest a new cause as being effective, as might be the case with trying to find a cure for depression (I’m not clear here whether the cause would be ‘medical research’, ‘improving mental health across the world’, or what).

2. New methods

Alternatively, we might be able to find new methods to help our three main beneficiary groups. That might lead to new causes for us to focus on. For example, perhaps it is possible to breed animals with a higher happiness set point, and we should be trying to forward that research rather than only working to alleviate suffering among animals. In other cases it might suggest more effective interventions within causes we already focus on, e.g. developing a cheaper way to purify water, or a more effective way to raise money to fight poverty.
The fact that expanding the circle of caring yielded such gains in effectiveness, and that it probably won’t yield more, makes it somewhat unlikely that we will be able to find another cause which eclipses the ones we currently focus on. On the other hand, finding new groupings of beneficiaries and new methods for helping beneficiary groups both seem promising ways to find more effective causes.

Our chance of finding a far more effective intervention depends in part on what overall class of beneficiaries we’re looking at and how neglected they tend to be. E.g., animals tend to get less attention than humans – that plausibly explains why the suffering of wild animals has not previously been thought to be of moral importance. Any group that includes currently existing people in rich countries, on the other hand, is comparatively likely to have had quite a bit of work put into it. Where there hasn’t been, that will often be for reasons which make the area rather intractable. E.g., there may be a strong lobby against an intervention, or it may require an untenable level of cooperation among diverse bodies. There seem to be some cases for which this doesn’t hold, though: you might think that human enhancements (as opposed to treatments of health problems), e.g. trying to slow ageing, have been neglected for the most part simply due to a perception that we don’t need them.
Relatedly, although effective altruism may have been useful in the past in highlighting particularly effective interventions within crowded areas, that does not mean it will in future. Past work by effective altruism in such areas has built on decades of research - whether by bodies like the WHO and the World Bank or by academia. Much of the value-add has been looking at the big picture of the interventions they’ve researched, and picking out the most effective ones. There might be interventions they entirely overlooked, but it’s more likely that improvements will come simply from some interventions being a bit better than they looked. This means the initial gains will have been far faster than subsequent ones.
With neglected areas like the far future, the story seems different. Because there has been little research done by others it’s more plausible that there are interventions we’ve never yet considered. I don’t know how to think about their likely value compared to those that we have. E.g. risks from AI seem quite major, potentially somewhat close in the future (~100 years?) and potentially tractable (e.g. by talking to the people doing the research), so it seems a high bar to surpass. But that’s not to say that we can’t: particularly if we could (say) find some way to improve society such that it was more robust to all possible disasters.
I didn't become part of effective altruism for what it ever was in the present, but for what it could become. I wasn't raised in a tradition of charity. I was raised in the traditions of how ingenuity can uplift the human condition, and how clear thinking can lead the conscience and the pursuit of justice, not the other way around. My family values were those of aspiring rationality. Apparently unlike most, I wasn't first drawn to rationality and skepticism because I found a community which for the first time in my life could fill an unnamed yearning. I joined because it simply felt like home. The culture of individualistic intellect, and of using enlightenment to provide that opportunity to anyone and everyone, is my native tribe.
That effective altruism hasn't been like this has led to what I think are rightful reservations about it. Pious virtue signaling dressed up as anything turns off worthy iconoclasts. If you couldn't tell, that's why I'm indifferent to expressing it myself. In my more shortsighted and dreary moments, I neglect to cultivate the virtue I feel is important in my personal life because I feel it's pointless to try for myself if even what still seems the most promising movement on Earth loses itself to flash at the cost of substance.
This is changing. Effective altruism will always contain donation as an aspect, but the throwing of money at problems which, rightly or wrongly, rings hollow for so many, will soon cease to be its strongest pillar. This year at Effective Altruism Global things were different. The whole community is different. We are going from purchasing bednets to destroying the world's deadliest monsters with gene drives. We are going from small cash transfers to entrepreneurs empowering people to pull themselves out of poverty, and running the biggest RCTs on basic income in history. We're backing all the biotech startups engineering cultured substitutes for every thinkable animal-agriculture product. We are wholesale embracing the teachings of scholars who have over thirty years discovered and designed the exciting science of predicting the future. We're not only looking at how to transform societies with robustly evidence-based policies, but learning from experienced professionals how to build the intellectual supply chain to get it done, from building coalitions to influencing policymakers to running more experiments.
Effective altruism is finally becoming what I signed up for. Let's keep it up. They say excited altruism is effective altruism. Let's get excited for effectiveness. Let's go beyond that. Let's become overwhelmingly effective. Maniacally effective. Titanically and bombastically and ballistically effective. Let's get nuts.
When the ideologues who told us we didn't do enough systemic change start telling us science is too scary to use for changing systems, we will know we're winning. Humanity will abandon the classes of dogma which prey on their fears and promise them false hopes for cheap power when they are shown it is public knowledge and technology which improves their well-being more than stirring but empty sentiments ever can. As the world becomes less tragic in all aspects of life, bad ideas won't have minds to prey on. Let's starve them. Let's not just raise the waterline of sanity. Let's drown the world in it.
undefined @ 2018-06-11T18:26 (+4)
Evan, just a data point: I don't understand a lot of what you're saying in most of your posts/comments, and I can only think of one person I find more difficult to understand out of everyone I've come across in the EA community who I've really wanted to understand. (By which I mean "I find the way you speak confusing and I often don't know what you mean", not "Boi, you crazy".)
undefined @ 2018-06-12T00:03 (+1)
Thanks. Are you referring to my posts and comments on social media? That's more transient, so I make less of an effort on social media to be legible to everyone. Do you have examples of the posts or comments of mine you mean? I don't get tons of feedback on this. Of course people tell me I'm often confusing, but the feedback isn't actionable. I can decode any posts you send me. For example, here is a post of mine where I haven't gotten any negative feedback on the content or writing style. That post was like a cross between a personal essay and a dense cause prioritization discussion, so it's something I wouldn't usually post to the EA Forum. It's gotten some downvotes, but clearly more upvotes than downvotes, so somebody is finding it useful. Again, if I get some downvotes, it's ultimately feedback on what does or doesn't work on the EA Forum. That's the kind of clearer feedback that specifies something.
undefined @ 2018-06-12T10:49 (+2)
Also the dank memes stuff...at the meta level of treating it like valuable, serious stuff... This is a separate thing as it's a case of me thinking, "Surely they're still joking...but it really sounds like they're not," but it's another reason for me to give up on trying to understand you because it's too much effort.
undefined @ 2018-06-12T10:39 (+2)
I don't want to spend too long on this, so to take the most available example (i.e. treat this more as representative than an extreme example): Your summary at the top of this post.
- General point: I get it now but I had to re-read a few times.
- I think the old "you're using long words" is a part of this, which is common in EA and non-colloquial terms are often worth the price of reduced accessibility, but you seem to do this more than most (e.g. "posit how" could be "suggest that"/"explore how", "heretofore" could be "thus far", "delineate" could be "identify"/"trace" etc....it's not that I don't recognise these words, they're just less familiar and so make reading more effort).
- Perhaps long sentences with frequent subordinate clauses - and I note the irony of my using that term - and, indeed, the irony of adding a couple here - add to the density.
- More paragraphs, subheadings, italics, proofing etc. might help a bit.
I also have the general sense that you use too many words - your comments and posts are usually long but don't seem to be saying enough to justify the length. I am reminded of Orwell:
It is easier — even quicker, once you have the habit — to say "In my opinion it is not an unjustifiable assumption that" than to say "I think".
And yes - mostly on social media. But starting to read this post prompted the comment (I feel like you have useful stuff to say so was surprised to not see many upvotes and wondered if it's because others find you hard to follow too).
undefined @ 2018-06-17T18:17 (+1)
One heuristic I use for writing is to try Writing Like I Talk, from Paul Graham. Of course, I already tend to speak differently than most people. I find keeping my head in books changes how I think internally, and thus how I speak. It comes full circle when I write like I talk, which is different than how most people talk or write. The perfect is the enemy of the good, and there are trade-offs in time taken to write. Another heuristic is to know your audience. The post in question was meant to be read by suffering reducers and those familiar with the work of the Foundational Research Institute, from whom I've already received good feedback, so I largely achieved my goal with my writing. Also, those posts are rougher on my personal blog, but I would edit them before I put them up on the EA Forum.
As long as it takes to read my stuff, I use a lot of words because it provides full context. For example, I'd hope someone familiar with academic jargon but relatively new to EA might come to fully understand the case of potential s-risks from terraforming, having come in knowing little to nothing about the subject. I'm aware I often use too many words, but when the time comes to make posts more accessible, I can and will do so. I appreciate this feedback though. Please feel free to provide feedback anytime. I update on it quite quickly, even from a single person. I wish more people felt comfortable doing so.
I wrote this post up because it will tie into a series of blog posts I'll be rolling out. When it's done, in context, I hope this post will make more sense. I'm going to be working with various EA organizations to bring remote volunteering opportunities to local EA groups to do direct work. I'm going to consult with Rethink Charity's research team to tighten up a model I have for coordinating teams together numbering in potentially hundreds of individuals. Soon time too may be a unit of caring.
undefined @ 2018-06-11T18:16 (+2)
There is a significant gap in effective altruism for structural change in between “buy bed nets” and “literally build God”.
[Laughing crying face]
[Not because I'm crying with laughter, but because I'm laughing and crying at the same time]