What posts would you like someone to write? 

By tobytrem @ 2024-02-27T10:30 (+60)

I'm posting this to tie in with the Forum's Draft Amnesty Week (March 11-17) plans, but it is also a question of more general interest. The last time this question was posted, it got some great responses. 

This post is a companion post for What posts are you thinking about writing?

When answering in this thread, I suggest putting each idea in a different answer, so that comment threads don't get too confusing. 

If you think someone has already written the answer to a user's question, consider lending a hand and linking it in the comments. 

A few suggestions for possible answers:

If you find yourself with loads of ideas, consider writing a full "posts I would like someone to write" post.

Draft Amnesty Week

If you see a post idea here which you think you might be positioned to answer, Draft Amnesty Week (March 11-17) might be a great time to post it. In Draft Amnesty Week, your posts don't have to be fully thought through, or even fully drafted. Bullet-points and missing sections are allowed, so you can have a lower bar for posting. 


Brad West @ 2024-03-07T13:09 (+35)

Would be interesting to see an argument that the EA Forum is net negative. It creates the impression that new ideas are being considered and voices are being heard, but people who have power and influence are seldom actually open to influence from EA posts, nor are there effective mechanisms by which others (like gatekeepers) disseminate such information. The most highly upvoted, and thus most accessible, posts are either cute, meta-level clever commentary that's often not actionable, or by high-status EAs or orgs that have little difficulty having their voices heard (although having a convenient place for them to share stuff is a useful function).

I do feel like as a place for new ideas to translate into research and, ultimately, impactful action, the EA forum is quite overrated. While I wouldn't agree that it's net negative, I worry that there is an assumption by community members that it is doing things that it isn't.

Jason @ 2024-03-07T16:49 (+17)

A possible reframe: Under what circumstances is writing posts and/or comments on the EA Forum more (or less) likely to be an impactful use of one's time?

For example, your answer above suggests that writing on the Forum is not impactful where the theory of change involves influencing the actions of "people who have power and influence." I don't have an opinion on that either way. However, both that assertion and "Forum writing influences the views of more junior people, some of whom will have power and influence in 3-10 years" could be true. If so, that would nudge us toward writing certain types of posts and away from writing others (e.g., those in which a decision has to be made soon or never).

Jacob_Watts @ 2024-03-14T01:14 (+3)

Adjacent to this point about how we could improve EA communication, I think it would be cool to have a post that explores how we might effectively use, like, Mastodon or some other method of dynamic, self-governed federation to get around this issue. I think this issue goes well beyond just the EA forum in some ways lol.

Good suggestion! Happy Ramadan! <3

Ulrik Horn @ 2024-03-10T11:49 (+3)

Here is some more discussion on a very similar topic, if anyone wants more ideas. Brad and I seem to have had this thought more or less at the same time! 

Joseph Lemien @ 2024-03-03T19:13 (+24)

The short version: How can people contribute to EA if they don't have lots of extra money and they don't have the skillsets to work at organizations focused on important problems?

The slightly longer version: The primary path that is promoted for contributing is something along the lines of "get skills and then do work." And I think that is a great suggestion for people who either A) have the ability to choose a field to work in (such as a college undergraduate), or B) already have the skills and can relatively easily pivot (such as experienced project managers or software developers). EA organizations don't really have a great need for nurses, for history professors, for plumbers, etc. Some people are able to afford to take a few years off work to reskill and then start a new career, but not everyone is able to do that. I know that I would be hard pressed to pay for tuition, food, and housing while spending a few years doing studies/retraining.

The secondary path is composed of variations on "donate money." This is great for people with high incomes (or more moderate incomes that are predicted to be very stable for the future, such as tenured professors), and simply isn't as feasible for people with lower incomes or with unstable future incomes.

So as a community, I'm not sure what our messaging should be for people who aren't able to easily shift their career, and who don't have much money to spare. I suspect that the answer might be some variation of "not all people can contribute to effective altruism; this community isn't for everyone." But I'm hoping that there is something else out there.

Arepo @ 2024-03-07T05:39 (+5)

EA organizations don't really have a great need for nurses, for history professors, for plumbers, etc.

Fwiw, I was involved with an EA organisation that struggled for years with the admin of finding trustworthy tradespeople (especially plumbers).

More generally, I think a lot of EA individuals would benefit a lot from access to specialist knowledge from all sorts of fields, if people with that knowledge were willing to offer it free or at a discount to others in the community. 

Jason @ 2024-03-07T12:43 (+4)

At the risk of going off-topic, look for plumbing firms that pay their employees a flat hourly rate rather than a commission based on how much revenue they generate. That's what my plumber said he looked for when researching plumbers for out-of-town family members.

In general, finding someone who has more than enough work and bills at an hourly rate is often a sound strategy when one is dependent on the contractor's professional judgment as to what needs to be done and how long it should take. Under those circumstances, the busy hourly-rate contractor has much less incentive to recommend unnecessary work or stretch it out. The downside is that, because they have more than enough work, they may not be immediately available. . . .

JP Addison @ 2024-02-27T17:02 (+23)

Maybe a report from someone with a strong network in the silicon valley scene about how AI safety's reputation is evolving post-OAI-board-stuff. (I'm sure there are lots of takes that exist, and I guess I'd be curious for either a data driven approach or a post which tries to take a levelheaded survey of different archetypes.)

Vasco Grilo @ 2024-03-05T15:16 (+2)

Interesting suggestion, JP. Somewhat relatedly, I think it would be interesting to know the extinction risk per training run that employees at Anthropic, OpenAI and Google DeepMind would be willing to endure (e.g. per order of magnitude increase in the effective compute used to train the newest model).

Lizka @ 2024-02-28T18:50 (+21)

I'm basically always interested in potential lessons for EA/EA-related projects from various social movements/fields/projects.

Note that you can find existing research that hasn't been discussed (much) on the Forum and link-post it (I bet there's a lot of useful stuff out there), maybe with some notes on your takeaways. 

Example movements/fields/topics: 

Some resources, examples, etc. (not exhaustive or even a coherent category): 

Pablo @ 2024-02-27T19:31 (+20)

I would like someone to write a post expanding on X-risks to all life v. to humans. Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.

If I were to write on this, I’d reframe the issue somewhat differently than the author does in that post. Instead of a dichotomy between two types of risks, one could see it as a gradation of risks that push things back an increasing number of possible great filters. Risks to all life and risks to humans would then be two specific instances of this more general phenomenon.

Vasco Grilo @ 2024-03-05T15:06 (+6)

Nice point, Pablo! I did not know about the post you linked, but I had noted it was not mentioned (at least very clearly) in Hilary Greaves' working paper Concepts of existential catastrophe[1], at least in the version of September 2023.

Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.

I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than those from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.

A related seemingly underexplored question is determining under which conditions human disempowerment (including extinction) would be bad from an impartial perspective. Humans have arguably played a role in the extinction of many species, including maybe some of the genus Homo (there are 13!), but that was not an existential risk given humans are thought to be better steerers of the future. The same might apply to AI under some conditions. Matthew Barnett has a quick take somewhat related to this. Here is the 1st paragraph:

I'm curious why there hasn't been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:

  1. Unlike existential risk from other sources (e.g. an asteroid) AI x-risk is unique because humans would be replaced by other beings, rather than completely dying out. This means you can't simply apply a naive argument that AI threatens total extinction of value to make the case that AI safety is astronomically important, in the sense that you can for other x-risks. You generally need additional assumptions.
  2. Total utilitarianism is generally seen as non-speciesist, and therefore has no intrinsic preference for human values over unaligned AI values. If AIs are conscious, there don't appear to be strong prima facie reasons for preferring humans to AIs under hedonistic utilitarianism. Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.
  3. Total utilitarianism generally recommends large population sizes. Accelerating AI can be modeled as a kind of "population accelerationism". Extremely large AI populations could be preferable under utilitarianism compared to small human populations, even those with high per-capita incomes. Indeed, humans populations have recently stagnated via low population growth rates, and AI promises to lift this bottleneck. 
  4. Therefore, AI accelerationism seems straightforwardly recommended by total utilitarianism under some plausible theories.
  1. ^

    So I sent her an email a few days ago about this.

Pablo @ 2024-03-05T18:17 (+4)

I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than those from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.

Yes, this seems right.

As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by ‘important consideration’ one could also mean a consideration that could cause us to significantly alter our priorities.[1] "X-risks to all life v. to humans” may be important in the first sense but not in the second sense.

  1. ^

    Perhaps one could distinguish between ‘axiological importance’ and ‘deontic importance’ to disambiguate these two notions.

Dave Cortright @ 2024-03-05T16:55 (+1)

I wrote this post asking what success for sentience looks like. There's a good chance we humans are just another stepping stone on the path toward an even higher form of intelligence and sentience.

Tyner @ 2024-02-28T21:39 (+16)

I would really appreciate further analysis of family planning as an intervention.  Some specific questions I’d like to see tackled:

Here are some posts that provide a start:

https://forum.effectivealtruism.org/posts/WYmJoDxJZToDcA9Bq/population-size-growth-and-reproductive-choice-highly

https://forum.effectivealtruism.org/posts/zgBmSgyWECJcbhmpc/family-planning-a-significant-opportunity-for-impact

https://forum.effectivealtruism.org/posts/BMzmCohuPYRaGPcZD/maybe-family-planning-charities-are-better-for-farmed

And here’s a really good report on one org:

https://rethinkpriorities.org/publications/family-empowerment-media

And CE has some good reports on some interventions:

https://www.charityentrepreneurship.com/health-reports

NickLaing @ 2024-03-12T15:47 (+4)

In terms of cost-effectiveness, Lafiya Nigeria made a great cost-effectiveness analysis for their org, which I used for OneDay Health; it looks at most of your health metrics but doesn't include other potential externalities.

https://forum.effectivealtruism.org/posts/sJpCYcHDGjHFG2Qvr/introducing-lafiya-nigeria

Julia_Wise @ 2024-03-05T17:54 (+4)

Love this topic!

>Do these interventions lead to a permanent reduction in family size, or a temporary one?

Note that even if total number of children ends up the same, there are benefits to spacing children by at least 18 months in terms of health (mother has more chance to recover between pregnancies, mother and baby are better nourished, better care for older siblings). Families may also be able to better afford to educate children who are more widely spaced.

This isn't relevant to all the impacts you list, though. Still worth thinking about those separately!

Max Görlitz @ 2024-03-04T16:32 (+15)

What a biosafe world looks like

Basically like "What Success Looks Like" (which is about transformative AI) but instead about what a world would look like that is really well protected from catastrophic pandemics. 

It could be set in e.g. 2035, and describe what technologies and (political) mechanisms have been implemented to make the world "biosafe"—i.e. safe from global catastrophic biological risks. 

I could even imagine versions of this that are a fictional story, maybe describing the life of someone living in that potential future.

tobytrem @ 2024-03-05T09:35 (+6)

@xander_balwit 

Max Görlitz @ 2024-03-05T11:44 (+2)

Xander, lmk if you have thought about this, and we can chat. 

tobytrem @ 2024-03-05T11:50 (+2)

(I'm tagging Xander because I have a hunch that her publication might be interested in commissioning something here/ she might be interested in writing a fiction piece.) 

Tejas Subramaniam @ 2024-03-05T15:57 (+3)

This post by Carl Shulman is very similar to this, I think.

Max Görlitz @ 2024-03-05T17:22 (+2)

Very cool, thanks for pointing that out! I think I might have seen it before but had forgotten about it; I'll check it out again.

Max Görlitz @ 2024-03-04T16:25 (+14)

List of theory of change documents of EA orgs.

I think it would be cool to have an overview of how different organizations think about their theory of change and how they present it. This would be helpful for organizations that don't yet have a public theory of change but would like to create one. It would also be useful for getting a clearer picture of what the high-level plans of different orgs are.

Arepo @ 2024-03-02T04:40 (+12)

How can we make effective altruism more appealing to political conservatives without alienating engaged liberals? If there is an inevitable trade-off between the two, what is the optimal equilibrium, how close to it are we, and can we get closer?

Arepo @ 2024-03-02T04:30 (+11)

Investigating incentives in EA organisations. Is money still the primary incentive? If not, how should we think about the intra-EA economy?

Stan Pinsent @ 2024-03-08T15:33 (+1)

By incentives do you mean incentives for taking one job over another, like pay, benefits, type of work, etc.?

Arepo @ 2024-03-10T10:37 (+2)

More generally, what incentives exist? In a normal for-profit environment there are various reasons for individuals to start their own company, to seek promotion, to do a good job, to do a bad job, to commit institutional fraud, etc. We typically think of these as mainly financial, and often use the adage 'follow the money' as a methodology to try to discover these phenomena, to encourage the good ones and discourage the bad.

I want to know what the equivalent methodology would be to find out equivalent phenomena at EA organisations.

Oisín Considine @ 2024-02-28T21:41 (+11)

I would really love to see someone, ideally someone with a background in philosophy, explore what effective altruism (either EA as a whole or various sub-causes of EA) would look like if the Epicurean view of death were actually taken seriously. On this view, death is neither bad nor good (nor "neutral") for the individual who dies, since they themselves cannot experience the sensation, and by extension the badness (or goodness), of death[1].

I am not a philosopher, nor am I studying philosophy, and thus I believe I would not be able to tackle this project with the rigour and depth I feel it needs. Despite my limited knowledge of the subject, and as unintuitive as this idea of how to treat death may appear, I am as yet unable to rationalise my way out of the overall idea, which seems trivial when I really think about it. Given this, I am quite disappointed that it doesn't appear to be taken seriously, or even mentioned, within EA (there are a few brief mentions of it, most prominently here by the Happier Lives Institute, but unfortunately not much elsewhere).

I have been reading[2] "Epicurus and the Singularity of Death" by David B. Suits[3], through which the author attempts to defend the Epicurean view of death, first in its abstract form and then by testing its implications against real-life cases such as premature death, deprivation, killing and suicide, among others. I believe this book may be of use to some who wish to go down this particular rabbit hole.

Death happens to all living beings, it is central to almost all ethical theories and beliefs throughout history, and Epicurus' idea of death is, I assume, commonly studied in courses on the philosophy of death. An issue as important as death, therefore, ought to be explored and discussed fully and thoroughly, which I do not at all see happening in the EA community regarding the Epicurean view.

  1. ^

    Epicurus' Letter to Menoeceus; translated by Cyril Bailey (1926)

  2. ^

    I had to take a break from it about a third of the way through in order to focus on my studies, but I will return to it once I am able to.

  3. ^

    Get the PDF for free off LibGen if you want

Vasco Grilo @ 2024-03-08T13:21 (+5)

Hi Oisín,

Epicurean view of death, namely that death is neither bad nor good (nor "neutral")

Does this imply that pressing a button which would lead to the total annihilation of all life would be neither bad, good, nor neutral? I would certainly not press such a button! So I suppose I find the Epicurean view very, very implausible.

Oisín Considine @ 2024-03-08T14:38 (+8)

Hi Vasco, thank you for your question!

Yes, this view would probably imply that if pressing this button would wipe out all life instantaneously and without anyone anticipating it, it would not be bad, nor good, nor neutral. I mean, who would it be good/bad/neutral for exactly, when there is nobody to judge or perceive it? It makes no sense to say that death is something which "happens" to you, because there is no "you" when "you" "are" dead, just a memory of you held by others. How can you prefer something over death (or death over something) when death is not something one can experience in any way? In order to prefer one thing over another, both things need to have some common property which you favour more/less of in one than in the other. I would guess that for most people this common property would be some form of pleasure (be it short- or long-term). But then you are comparing the magnitude of your perception of this pleasure when you are in some given state (of being) against when you are dead. And you ("you") cannot perceive anything when dead, and to even see death as a state (in which you are) is mistaken (at least according to Epicurus, i.e. no afterlife etc.).

No sentient being can experience death, so in order to understand death we look at what it is not (which is everything), and it seems that we are quite selective about what it is not, and so (I'm speculating here) when we say that we prefer to continue living instead of dying, we usually mean that we prefer having or being able to have (perceive) some amount of pleasure over not being able to have it. However, we do not perceive the loss of missing out on these experiences since "we" cannot experience anything. We can only know about death from looking at others who die, and since we miss being with them, we think of death as not preferable to life (unless it is e.g. a life of lots of suffering, in which case we usually see death as better, but this too suffers from the same error). But to the person themselves, death cannot be perceived, and thus judged or valued over/below anything.

So sure, this way of looking at death can possibly have some unintuitive-seeming implications. However, the common idea of death is quite shallow and short-sighted as we are (at least implicitly) trying to look at our death by imposing the experiences of another (alive) person who perceives our death (and the associated good/bad/indifferent sensations) onto how we would experience our own death, which is fundamentally mistaken.

So, if all life got instantaneously wiped out, then this can be neither good nor bad (nor neutral, given that a neutral experience is understood as one which one does not see as good nor bad, but is an experience nonetheless, if a neutral experience understood as such is even possible for one to obtain).

One immediate consequence of taking this view which comes to my mind would be something like in-ovo sexing (and I am saying this as a vegan of 3 years, and a committed one at that). If baby chicks do not have much self-awareness, and if their deaths are near-instantaneous, then the intervention may not have much direct positive impact (if any at all) on welfare, aside from maybe some secondary (social) impacts like greater awareness of their intrinsic value, which alone may make this type of intervention net-positive, but probably not nearly as cost-effective as other interventions. And the case of in-ovo sexing is just one example of how looking at death realistically (in my opinion) can have drastic implications for how EA looks at impact. This (along with its neglectedness) is why I believe it could be of huge importance for someone more intellectually equipped than myself to do a more detailed dive into how this view (or similar variants of it) would affect EA.

I'd love to hear if you or anyone else have any thoughts/criticisms about what I've said here :)

Vasco Grilo @ 2024-03-08T15:02 (+3)

Thanks for elaborating, Oisín! I strongly upvoted the comment just above because I think it is nice when people make an effort to explain their views.

So, if all life got instantaneously wiped out, then this can be neither good nor bad

I agree painless total annihilation of all life would not be good/bad in itself, in the sense there would be no difference in value relative to a world where it did not occur if we just consider the instant of the annihilation. However, I would still consider it extremely bad for instrumental reasons, as it would prevent future flourishing.

Imagining I had to press one of the following buttons:

  • Button A) eliminates all sentient beings, such that there is no more pain nor pleasure forever.
  • Button B) does nothing.

I would certainly press B. Would the Epicurean view say we should be indifferent between the two options?

Oisín Considine @ 2024-03-08T15:55 (+3)

I feel like consciously pushing the button to cause said consequences is a slightly different case from those consequences occurring spontaneously, without anyone actually pushing either button. I'm not 100% sure whether Epicurus would push button B or be indifferent, as in his words:

But the wise man neither seeks to escape life nor fears the cessation of life, for neither does life offend him nor does the absence of life seem to be any evil.[1]

However, I still believe that one should be indifferent to which outcome occurs, in order to remain consistent with this view. I do feel as though this view would probably lean towards being indifferent to which button one chooses to push.

Having said this, I too would push button B, but this is due to my deep-rooted biases about my life and death, however irrational they may be. Maybe I would be better off changing this stance, though, since according to Epicurus:

And therefore a right understanding that death is nothing to us makes the mortality of life enjoyable, not because it adds to it an infinite span of time, but because it takes away the craving for immortality. For there is nothing terrible in life for the man who has truly comprehended that there is nothing terrible in not living. So that the man speaks but idly who says that he fears death not because it will be painful when it comes, but because it is painful in anticipation.[2]

Also I just want to add that, on your point that annihilation would be bad because it prevents future flourishing, for whom would this be bad? It can't be bad for counterfactual non-existent beings, since they don't exist to perceive the badness of missing out on (the good bits of) life. Or am I misunderstanding your claim? And what exactly do you mean by instrumental reasons in this case? Could you give some examples?

  1. ^
  2. ^

Vasco Grilo @ 2024-03-08T17:21 (+3)

However, I still believe that one should be indifferent to which outcome occurs, in order to remain consistent with this view.

FWIW, I think one should put ~0 weight on a view which is indifferent between doing nothing and eliminating all sentient beings forever, as I consider the latter way way worse.

Also I just want to add that, on your point that annihilation would be bad because it prevents future flourishing, for whom would this be bad?

My understanding is that you are saying that killing all sentient beings alive today would not be good/bad/neutral, whereas I think it would be extremely bad, even if there is a remote sense in which it would not be good/bad for anyone. I encourage you to imagine there is an actual person in the real world who for some (impossible) reason had the power to kill all life. I would worry about what that person would do. Would you not, just because there is a sense in which killing all life would not be good/bad for anyone?

And what exactly do you mean by instrumental reasons in this case? Could you give some examples?

Sorry for the lack of clarity. I meant survival is instrumentally valuable to have positive conscious experiences. For example, if someone kills me, I can no longer spend time with my family and friends.

Oisín Considine @ 2024-03-08T18:41 (+3)

FWIW, I think one should put ~0 weight on a view which is indifferent between doing nothing and eliminating all sentient beings forever, as I consider the latter way way worse.

In which way is it worse? Again, you cannot compare a state of existence where one can experience and perceive things against a "state" of nonexistence in a way which leads to a preference of one over the other. As in, you cannot compare positive/negative experiences to no experience whatsoever, because then what is the common factor which you are comparing in order to prefer one over the other? And anyway, you would need to compare being dead from the perspective of the dead "person" against being alive from the perspective of the alive person. "They" cannot experience anything (as "they" don't exist) and thus they cannot have a preference for life. So this is, as I stated in my reply to your first comment, an example of mistakenly imposing our experience of life onto death, the former of which is an actual experience and the latter of which doesn't exist.

I encourage you to imagine there is an actual person in the real world who for some (impossible) reason had the power to kill all life. I would worry about what that person would do. Would you not, just because there is a sense in which killing all life would not be good/bad for anyone?

I would too, but only due to the vast amount of suffering they could potentially bring upon the world.

survival is instrumentally valuable to have positive conscious experiences

But what is it that makes positive experiences preferable to no experience at all? Sure, they are preferable to less positive experiences, because you can experience (or can understand the experience of) that worse event as well as the better one, and thus you can make a preference between them. This is not the case for death, since death must be understood as fundamentally different from anything else in life (we can only understand death as a (fuzzy) abstract concept, and never intrinsically).

For example, if someone kills me, I can no longer spend time with my family and friends.

Yes, but again you are imposing your experience of perceiving someone else's death and how that affects you (or how you believe it would affect you if you did experience losing someone close) onto you experiencing ("experiencing") your own death, which are fundamentally different since one is an actual experience, and the other is simply nothing. And as a consequence, "you" also don't have anything like a memory when "you" are dead.

Vasco Grilo @ 2024-03-09T10:03 (+4)

I would too, but only due to the vast amount of suffering they could potentially bring upon the world.

Ok, but I would still very much worry about someone having the power to painlessly kill all life, whereas you would not (assuming there would be no suffering involved, although net future welfare would massively decrease)? My understanding is that you would be indifferent about pressing a button (or taking any other action with a negligible cost to you) which would remove that power from that person. In contrast, I would be willing to die myself to ensure lots of beings could continue to have net positive experiences (e.g. to allow lots of people to continue talking with their friends and family).

But what is it that makes positive experiences preferable to no experience at all?

Are you indifferent between continuing your life as expected and being painlessly killed, assuming the net welfare of the rest of the world is the same in both scenarios? Even if you are, I think we should pay attention to the desires of other beings. Arguing that all life being painlessly killed would not be good/bad seems quite selfish to me, because most beings would rather continue to live instead of being painlessly killed. By saying that all life being painlessly killed is not good/bad, I would say you are being indifferent to what most beings want.

RedTeam @ 2024-03-15T04:17 (+3)

What is your basis for the statement that "most beings would rather continue to live instead of being painlessly killed"? This seems to me to be a huge assumption.

Vinding and many others who write from a suffering-focused ethics perspective highlight that non-human animals in the wild experience a large amount of suffering, and there's even greater consensus on non-human animals bred for food experiencing a large amount of suffering. Is there research suggesting that the majority of beings would actively choose to continue to live over a painless death if they had an informed choice, or is this an assumption?

Even just considering humans, we have millions of people in extreme poverty, and an unknown number of humans suffering daily physical and/or sexual abuse. Too often there's both a significant underestimation of the number of beings experiencing extreme suffering, and a cursory disregard for their lived experience, with statements like 'oh well, if it was that bad they'd kill themselves', which completely ignores that a large proportion of humans follow religions in which they believe they will go to hell for eternity (or similar) if they die via suicide.

I would counter your selfishness statement with: 'If we accept the theory that ceasing to live is a painless nothingness, and we say there is a button to kill all life painlessly, is it not selfish for those who want to continue to live to not push the button, causing the continuation of extreme suffering for other beings?'

Oisín Considine's point may well be uncomfortable for many to think about, and therefore unpopular, but I think it's a sound question/point to make. And one with potentially very significant implications when it comes to s-risks. If death (or non-existence) is neutral while suffering is negative, that might imply we should dedicate more resources to preventing extreme suffering scenarios than to preventing extinction scenarios, for example. 

Vasco Grilo @ 2024-03-15T14:36 (+2)

Welcome to the EA Forum, RedTeam!

What is your basis for the statement that "most beings would rather continue to live instead of being painlessly killed"?

I just mean that most beings have a preference for continuing to live. If an animal or person is at risk of being killed, chances are they will try to avoid being killed, i.e. they have a preference to continue to live. An aversion to being killed makes complete sense from the point of view of evolutionary biology: beings who do not have an aversion to death will more often be killed before having offspring, so their populations will tend to collapse.

This seems to me to be a huge assumption. Vinding and many others who write from a suffering-focused ethics perspective highlight that non-human animals in the wild experience a large amount of suffering, and there's even greater consensus on non-human animals bred for food experiencing a large amount of suffering; is there research suggesting that the majority of beings would actively choose to continue to live over a painless death if they had an informed choice or is this an assumption? Even just considering humans, we have millions of people in extreme poverty; and an unknown number of humans suffering daily physical and / or sexual abuse.

I estimated the scale of wild animal welfare is 50.8 M times that of human welfare, and I agree with Browning 2023 that it is unclear whether wild animals have positive/negative lives, so I think it is unclear whether the total welfare of all beings on Earth is positive/negative. However, in the same way I am not compelled to painlessly kill against their will people whose lives could be positive or negative, I am not compelled to press a button which would painlessly kill all beings. Would you painlessly kill people who are severely depressed or live in extreme poverty against their will if there was no risk of you being arrested or similar? I would not! Note I am in favour of euthanasia and assisted suicide, but these are very different because they do not involve going against the will of the people being killed.

RedTeam @ 2024-03-17T00:29 (+11)

Thank you for clarifying, Vasco - and for the welcome. I think it's important to distinguish between active, reasoned preferences and instinctive responses. There are lots of things that humans and other animals do instinctively that they might also choose not to do if given an informed choice. A trivial example: I scratch bug bites instinctively, including sometimes in my sleep, even though my preference is not to scratch them. There are lots of other examples in the world, from criminals who look directly at CCTV cameras when they hear certain sounds, to turtles that head towards man-made lights instead of the ocean - and I'm sure there are many examples better than these ones I am thinking of off the top of my head. But in short, I am very reluctant to draw inferences about preferences from instinctive behaviour. I don't think the two are always linked. I'm also not sure - if we could theoretically communicate such a question to them - what proportion of non-human animals are capable of the level of thinking needed to consider whether they would want to continue living if given the option.

I agree with you that it is unclear whether the total sum of experiences on Earth is positive or negative; but I also don't necessarily believe that there is an equivalence, or that positive experiences can be netted off against negative experiences, so I'm not convinced that considering all beings' experiences as a 'total' is the moral thing to do. If we do try to total them all together to get some kind of net positive or negative, how do we balance them out - how much happiness would it take to net off someone's torture in this scenario? It feels very dangerous to me to try to infer some sort of equivalency. I personally feel that only the individuals affected by the suffering can say under what circumstances they feel the suffering is worth it - particularly as different people can respond to and interpret the same stimuli differently. 
Like you, I am certainly not inclined to start killing people off against their will ('against their will' is a qualifier which adds completely different dimensions to the scenario; killing individuals is also extremely different from a hypothetical button painlessly ending all life - if you end all life, there is no one left to mourn, to be upset, or to feel pain or injustice about individuals no longer being alive, which obviously isn't the case if you are talking about solitary deaths). If we are in favour of euthanasia and assisted suicide, though, that suggests an acceptance that death is preferable to at least some types or levels of suffering. To go back to the original post, what I was defending is the need for more active discussion of the implications of accepting that concept. I do fear that because many humans find it uncomfortable to talk about death, and because we may personally prefer to be alive, it can be uncomfortable to think about and acknowledge the volume of suffering that exists. It's a reasonably frequent lament in the EA world that not enough people care about the suffering of non-human animals, and there is criticism of people who are viewed as effectively ignoring the plight of animals in the food industry because they'd rather not know or think about it. I worry, though, that many in EA do the same thing with this kind of question. I think we too easily write off the hypothetical kill-all-painlessly button, because there's an instinctive desire to live, and those of us who are happy living would rather not think about how many beings might prefer nothingness to living if given a choice. I'm not saying I definitely would push such a button, but I am saying that I think a lot of people who say they definitely wouldn't are saying it instinctively rather than because they've given adequate consideration to the scenario. 
Is it really so black and white as we definitely shouldn't press that hypothetical button - and if it is, what are the implications of that? We value positive experiences more than we disvalue suffering? We think some level of happiness can justify or balance out extreme suffering? What's the tipping point - if every being on Earth was being endlessly tortured, should we push the button? What about if every being on Earth bar one? What if it's 50/50? 
I will readily admit I do not have a philosophy PhD, I have lots of further reading to do in this space and I am not ready myself to say definitively what my view is on the hypothetical button one way or the other, but I do personally view death or non-existence as a neutral state, I do view suffering as a negative to be avoided and I do think there's an asymmetry between suffering and happiness/positive wellbeing. With that in mind I really don't think that there is any level of human satisfaction that I would be comfortable saying 'this volume of human joy/positive wellbeing is worth/justifies the continuation of one human being subject to extreme torture'. If that's the case, can I really say it's the wrong thing to do to press the hypothetical painless end for all button in a world where we know there are beings experiencing extreme suffering? 

Vasco Grilo @ 2024-03-17T08:31 (+2)

Thanks for clarifying too! Strongly upvoted.

I am very reluctant to draw inferences on preferences from instinctive behaviour

Fair! I would say instinctive behaviour could provide a prior for what beings want, but we should remain open to going against it given enough evidence. I have complained about Our World in Data implicitly assuming that nature conservation is good.

If we are in favour of euthanasia and assisted suicide though, that suggests there is an acceptance that death is preferable to at least some types of/levels of suffering.

Agreed. For what it is worth, I estimated 6.37 % of people have negative lives. This is one reason I prefer using WELLBYs instead of DALYs/QALYs, which assume lives are always positive.

Is it really so black and white as we definitely shouldn't press that hypothetical button - and if it is, what are the implications of that?

It is quite clear to me I should not painlessly eliminate all sentient beings forever. Even though I have no idea about whether the current total welfare is positive/negative, I am more confident that future total welfare is positive. I expect intelligent beings to control an ever increasing fraction of the resources in the universe. I estimated the scale of wild animal welfare is 50.8 M times that of human welfare, but this ratio used to be orders of magnitude larger when there were only a few humans. Extrapolating how this ratio has evolved across time into the future suggests the welfare of the beings in control of the future (humans now, presumably digital beings in the future) will dominate. In addition, I expect intelligent beings like humans to have positive lives for the most part, so I am guessing the expected value of the future is positive.

Even if I thought the expected value of the future was negative, I would not want to press the button. In that case, pressing the button would be good, as it would increase the value of the future from negative to neutral. However, I guess there would be actions available to me which could make the future positive, thus being better than just pressing the button. For example, conditional on me having the chance to press such a button, I would likely have a super important position in the world government, so I could direct lots of resources towards investigating which beings are having positive and negative lives, and then painlessly eliminate or improve the negative ones to maximise total welfare.

We value positive experiences more than we disvalue suffering?

As long as positive and negative experiences are being measured in the same unit, 1 unit of welfare plus 1 unit of suffering cancel out.

We think some level of happiness can justify or balance out extreme suffering?

I think so, as I strongly endorse the total view. Yet, there are physical limits. If the amount of suffering is sufficiently large, there may not be enough energy in the universe to produce enough happiness to outweigh it.

What's the tipping point - if every being on Earth was being endlessly tortured, should we push the button?

If there was no realistic way of stopping the widespread torture apart from killing everyone involved, I would be happy with killing all humans. However, I do not think it would be good to kill all beings, as I think wild animals have good lives, although I am quite uncertain.

I do think there's an asymmetry between suffering and happiness/positive wellbeing

In which sense do you think there is an asymmetry? As I said above, I think 1 unit of welfare plus 1 unit of suffering cancel out. However, I think it is quite possible that the maximum amount of suffering Smax which can be produced with a certain amount of energy exceeds the maximum amount of happiness Hmax which can be produced with the same energy. On the other hand, I think the opposite is also possible, so I am guessing Smax = Hmax (relatedly), although the total view does not require this.

With that in mind I really don't think that there is any level of human satisfaction that I would be comfortable saying 'this volume of human joy/positive wellbeing is worth/justifies the continuation of one human being subject to extreme torture'. If that's the case, can I really say it's the wrong thing to do to press the hypothetical painless end for all button in a world where we know there are beings experiencing extreme suffering?

In the 1st sentence above, I think you are saying that "arbitrarily large amount of happiness"*"value of happiness" <= "some amount of extreme suffering"*"disvalue of extreme suffering", i.e. "value of happiness" <= "some amount of extreme suffering"*"disvalue of extreme suffering"/"arbitrarily large amount of happiness". This inequality tends to "value of happiness" <= 0 as "arbitrarily large amount of happiness" goes to infinity, and by definition "value of happiness" >= 0 (otherwise it would not be happiness, but suffering). So I believe your 1st sentence implies "value of happiness" = 0. In other words, I would say you are valuing happiness the same as non-existence. In this case, having maximally happy beings would be as valuable as non-existence. So painlessly eliminating all beings forever by pressing the button would be optimal, in the sense there is no action which would produce more value.
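This limit argument can be written out more compactly. As a sketch (the symbols H, v and D are introduced here for clarity; they do not appear in the comment itself):

```latex
% Symbols (introduced here for clarity):
%   H >= 0 : an arbitrarily large amount of happiness
%   v >= 0 : the value of one unit of happiness
%   D > 0  : the disvalue of the fixed amount of extreme suffering
% The premise "no amount of happiness outweighs the suffering" says:
\[
  vH \le D \quad \text{for all } H,
\]
% which gives, for every H > 0,
\[
  v \le \frac{D}{H} \xrightarrow{\;H \to \infty\;} 0,
\]
% and combined with v >= 0 this forces v = 0, i.e. happiness is
% valued the same as non-existence.
```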

Of course, I personally do not think it makes any sense to value happiness and non-existence the same. I assume most people would have the same view on reflection.

RedTeam @ 2024-03-17T17:45 (+3)

On asymmetry - and indeed most of the points I'm trying to make - Magnus Vinding gives better explanations than I could. On asymmetry specifically I'd recommend: https://centerforreducingsuffering.org/research/suffering-and-happiness-morally-symmetric-or-orthogonal/ 
and on whether positive can outweigh suffering: https://centerforreducingsuffering.org/research/on-purported-positive-goods-outweighing-suffering/ 
To get a better understanding of these points, I highly recommend his book 'Suffering-focused ethics' - it is the most compelling thing I've read on these topics. 

I think - probably about 90% sure rather than 100% - I agree that happiness is preferable to non-existence. However, I don't think there's an urgency/moral imperative to act to create happiness over neutral states in the same way that there is an urgency and moral imperative to reduce suffering. I.e. I think it's much more important to spend the world's resources reducing suffering (taking people from a position of suffering to a position of neutral needs met/not in suffering) than to spend resources on boosting people from a neutral needs met state (which needn't be non-existence) to a heightened 'happiness' state. 
I view that both: the value difference between neutral and suffering is much larger than the value difference between neutral and happiness AND that there is a moral imperative to reduce suffering where there isn't necessarily a moral imperative to increase happiness. 

To give an example, if presented with the option either to give someone a paracetamol for a mild headache or to give someone a bit of cake that they would enjoy (but do not need - they are not in famine/hunger), I would always choose the painkiller. And - perhaps I'm wrong - I think this would be quite a common preference in the general population. I think most people, on a case-by-case basis, would make statements that indicate they do believe we should prioritise suffering. Yet, when we talk in aggregate, suffering-prioritisation seems to be less prevalent. It reminds me of some of the examples in the Frames and Reality chapter of Thinking, Fast and Slow about how people will respond to essentially the same scenario differently depending on its framing. 

With apologies for getting a bit dark: I think people in general (with the possible exclusion of sociopaths etc.) would agree they would refuse an ice cream or the joy of being on a rollercoaster if the cost of it was that someone would be tortured or raped. My point is that I can't think of any amount of positive experience/happiness for which I would be willing to say 'yes, this extra happiness for me balances out someone else being raped'. So there are at least some examples of suffering that I just don't think can be offset by any amount of happiness, and therefore my viewpoint definitely includes an asymmetry between happiness and suffering. Morally, I just don't think I can accept a view that says some amount of happiness can offset someone else's rape or torture. 

And I am concerned that the views of people who have experienced significant suffering are very under-represented, and that we don't think about their viewpoints because it's easier not to and they often don't have a platform. What proportion of people working in population ethics have experienced destitution or been a severe burns victim? What proportion have spoken to and listened to the views of people who have experienced extreme suffering, in order to try to mitigate their own experiential gap? How does this impact their conclusions?

Oisín Considine @ 2024-03-20T23:23 (+1)

Hi, sorry if I'm a bit late here, and I don't want to repeat myself too much, but since I feel it was not properly understood, one of the main points I originally made in this thread, and which I want to really hit home, is that happiness as measured while in a state of happiness cannot be compared in any way to non-existence as "measured" in a state of non-existence, since we obviously cannot perceive sensations (or literally anything) when dead/not in existence. So the common intuition that happiness is preferable to non-existence is based upon our shallow understanding of what it is to "be" dead/non-existent, but from a rational point of view this idea simply does not hold. If I were being tortured with no way out, I would certainly want to die as quickly as I could; however, when I imagine death in that moment, I am imagining (while in the state of suffering, and not in the "state" of death) a cessation of that suffering. But to experience such a cessation, I must be able to experience something which I can compare against said experience of suffering. So technically speaking it doesn't make any sense at all to say that happiness/suffering is better than non-existence, as measured in the respective states of happiness/suffering and death/non-existence.

And it's not like death/non-existence is neutral in this case. If you picture a scale, with positive experiences (e.g. happiness/satisfaction) in the positive direction and negative experiences (e.g. pain/suffering) in the negative direction, death does NOT appear at 0 since what we are measuring is the perceived value of the experiences. Put another way in terms of utility functions, if someone has a utility function at some value, and then they die, rather than immediately going to zero, their utility function immediately ceases to exist, as a utility function must belong to someone.

Also, this idea of mine is somewhat new to me (a few months old, maybe), so I haven't thought through many implications and edge cases too thoroughly (yet). But this idea, however difficult for me to wrestle with, is something which I find myself simply unable to reason my way out of.

MichaelDickens @ 2024-03-28T20:57 (+3)

I was originally going to write an essay based on this prompt but I don't think I actually understand the Epicurean view well enough to do it justice. So instead, here's a quick list of what seem to me to be the implications. I don't exactly agree with the Epicurean view but I do tend to believe that death in itself isn't bad, it's only bad in that it prevents you from having future good experiences.

  1. Metrics like "$3000 per life saved" don't really make sense.
    • I avoid referencing dollars-per-life-saved when I'm being rigorous. I might use them when speaking casually—it's an easy way to introduce EA or GiveWell to new people.
  2. Interventions that focus on preventing deaths are not good purely because they prevent deaths. Preventing a person's death is good if that person then gets to experience a good life, and the goodness of preventing the death exactly equals the goodness of the life (minus the goodness of any life that would have existed otherwise).
    • This is most obviously relevant for life-saving global poverty charities such as the Against Malaria Foundation (AMF). Some people (including Michael Plant and me) have criticized GiveWell's recommendation of AMF on this basis—my post doesn't explicitly discuss the Epicurean view, but Michael Plant's post does (under "4. Epicureanism").
  3. One's view of death isn't relevant to most of the popular EA charities:
    • Many popular global poverty charities, like GiveDirectly, don't prevent deaths (much). Any reasonable philosophical view should agree that improving people's welfare is good, all else equal.
    • Factory farming interventions such as cage-free campaigns improve animals' welfare but don't affect death.
    • Vegetarian/vegan advocacy causes animals not to exist (by reducing demand for meat). This neither causes nor prevents deaths so it's also not affected by the Epicurean view.
    • People who prioritize preventing existential risk rarely do so because it cost-effectively prevents deaths. Instead, they want to preserve the value of the long-run future, which again applies equally well whether you adopt the Epicurean view or not.
      • One could argue that existential risk is indeed cost-effective at preventing deaths, as Carl Shulman does here. In that case, your view of the badness of death becomes relevant. But I think Carl Shulman's argument still works even under the Epicurean view.
Lizka @ 2024-03-11T18:40 (+10)

Not sure if this already exists somewhere (would love recommendations!), but I'd be really excited to see a clear and carefully linked/referenced overview or summary of what various agriculture/farming ~lobby groups do to influence laws and public opinion, and how they do it (with a focus on anything related to animal welfare concerns). This seems relevant.

BrownHairedEevee @ 2024-03-18T06:36 (+9)

A post about the current status of the Future of Humanity Institute (FHI) and a post-mortem if it has shut down. Some users including me have speculated that FHI is dead, but an official confirmation of the org's status would count as a reliable source for Wikipedia purposes.

Arepo @ 2024-03-02T04:43 (+9)

A steel manned version of the best longtermist argument(s) against AI safety as the top priority cause area.

Vasco Grilo @ 2024-03-05T15:22 (+4)

Thanks for the suggestion. For reference, readers interested in this topic can check the posts on AI risk skepticism.

Arepo @ 2024-03-02T04:17 (+9)

If we take utilitarianism at face value, what are the most likely candidates for the physical substrate of 'a utilon'? Is it plausible there are multiple such substrates? Can we usefully speculate on any interesting properties they might have?

Chris バルス @ 2024-02-29T10:27 (+9)

I would love for someone to do research on the global AI safety (AIS) upskilling pipeline/funnel, to gain a better understanding of the supply of and demand for seats. 

This would enable field builders to gain a better understanding of where the bottlenecks are (early/late stage). That way, we could hopefully create better ToCs on where and how to intervene in the system, toward the goal of creating more well-suited and cost-effective programs, or otherwise increasing the amount of talent going into the field.

The MVP version of the analysis could be done by reaching out to the current upskilling programs and ask them something along the lines of: how many of your applicants that you couldn't admit are you somewhat confident could become strong AIS contributors? 

In my mind, this would include programs such as MATS, PIBBS, ARENA, AISF, and others. 

In a perfect world with much more resources, the analysis would include how academia, the industry and governments position themselves in the global "pipeline", or how they enable people to become AIS contributors. 

Edit: minor. 

Max Görlitz @ 2024-03-04T12:28 (+7)

What would it take to eradicate all infectious diseases by 2050?

I want to see high-level, abstract research on what it would take to eliminate all infectious diseases by a certain date, e.g. 2050 or 2080.

I really liked "10 technologies that won't exist in 5 years" by Jacob Trefethen, and this post would have a similar vibe. 

The post could also do some very rough BOTECs.

NickLaing @ 2024-03-12T15:51 (+4)

This is a really interesting one and I would love to see something on it too. I think framing it just around cost could be a mistake: if the tech were there to eliminate even one disease, I think we would be doing almost all we can regardless of cost - that's almost the case with polio right now.

Vasco Grilo @ 2024-03-05T15:33 (+4)

Hi Max,

I like that you suggest doing some BOTECs. Open Philanthropy has spent 191 M$ on their focus area of "Biosecurity & Pandemic Preparedness", but I am not aware of them publishing any cost-effectiveness analysis, and they just have 2 reports on their website (one from 2014, and another from 2018).

Arepo @ 2024-03-02T03:41 (+7)

Some kind of investigation into feedback mechanisms to reward good nonprofit work and potentially penalise bad nonprofit work that are more organic/finer instruments than 'you get a grant or you don't'. 

I know impact certificates are a possibility, but I don't understand what the secondary market for those could be. They also seem more relevant to individuals and possibly organisations than regular employees at organisations.

zeshen @ 2024-03-12T07:10 (+6)

I'd be interested to understand why there are still huge shortfalls in the supposedly top effective charities.

For example, AMF has a funding gap of $300 million. The Bill and Melinda Gates Foundation has an endowment of $67 billion, which of course they intend to donate away. Bill Gates also endorses GiveWell and has an explicit focus on solving malaria (the foundation lists 20 organizations that it partners with, but AMF is not one of them).

So why isn't the AMF funding gap plugged yet, by the Gates foundation, or anyone else? As for the Foundation, is it a matter of grant evaluation process? Is there anything else relevant I should know to better understand the whole funding landscape of these issues?

DavidNash @ 2024-03-12T09:03 (+11)

There is a post about this (although it was written in 2015).

There are some good reasons for why large donors would want to not give too much money to a charity at once:

  1. Avoiding excessive reserves: Because of the opportunity costs (other charities could use money productively sooner), it is undesirable to have a charity having excessive reserves. Ideally, they would be promised a steady stream of funding if they meet specific targets over many years in order for them to be able to plan ahead.
  2. Risk diversification: Funds should be distributed to several high impact organisations in order to diversify the risk of one of them not performing well.
  3. Incentivizing others to join the cause area:
    1. Countries: By restricting funding to a particular country, one incentivizes the country to invest in very effective health interventions themselves and use their (often very limited) domestic resources to close the funding gap between donations and the full cost of delivering effective health interventions. Poorer, low-income countries (such as Ethiopia) are less able to do this than low-to-middle income countries (such as India).
    2. Charities: By restricting funding to charities, they’re being kept on their toes, so that they do not rely on a particular foundation or big grant giver exclusively and apply for other grants. For instance, in the past, the Gates foundation has heavily funded the Schistosomiasis Control Initiative. However, Gates later discontinued SCI’s funding not because of too little effectiveness, but because, since their effectiveness had been established, other funders would more readily fund them.
    3. Other donors: By restricting funding to particular charities, other donors are incentivized to also invest in the effective charities. For instance, the Against Malaria foundation has a broader appeal to small private donors than more high-expected-value interventions. Thus, even though theoretically, the Gates foundation, which is the largest private foundation in the world with an endowment of US$42.9 billion[4], could buy every person in Africa a bednet every two years (population of Africa (1 Billion) * Cost of Bednet (5 Dollars) = 5 Billion dollars) that would rapidly deplete their limited resources and then they could not spend their money on other very effective causes. They might reason that (small) more risk-averse donors (who want to be certain that their money will have an impact) will close the funding gap of very effective and established interventions and that they can instead spend more money on riskier, high expected value areas.
  4. Technological Innovation: New technological innovations—such as a very effective malaria vaccine—might be discovered, and these might be more cost-effective.
  5. High risk, high reward project:
zeshen @ 2024-03-13T18:57 (+3)

Thanks for the link! I vaguely remember reading this but probably didn't really get the answer I was hoping for. In the case of AMF, reason 1 doesn't apply, because they seem to want the money to do things now instead of building reserves. Reason 4 seems most relevant - maybe the Gates Foundation is hoping that a malaria vaccine (where recent developments have shown positive results) could render bed nets obsolete? But I don't think I buy this either, considering how effective these vaccines currently are, how long it takes to roll out vaccines in these countries, and that Bill Gates himself has previously vouched for bed nets (albeit before the vaccines were endorsed by the WHO). As for reasons 2, 3, and 5, I just don't really see how these reasons are worth killing so many babies for - I can't picture a decision maker in the Foundation saying "yeah, we have decided to let a hundred thousand people die of malaria so that we can diversify our risks and encourage others to donate". 

I may be missing something, but I only see a few reasonable scenarios:

  1. The Gates Foundation does indeed plan to donate, and they might be the 'donor of last resort'
  2. They really do not intend to fill the funding gap, perhaps because they don't think additional funding to AMF is as cost-effective as advertised
  3. They are confident that AMF will somehow get funding from other sources
Lorenzo Buonanno @ 2024-03-13T22:00 (+4)

I think the most likely explanation is that the Bill & Melinda Gates Foundation is funding bednet distribution programs that it considers at least as cost-effective as the marginal distribution funded by the AMF (and that are probably equivalent).

From this post, my high-level naive understanding is that the Gates-funded Global Fund and the AMF fund the same kind of programs.

My understanding is that the main reason these funding gaps exist is that even Gates doesn't have enough money to fund everything. From the post linked above: "The Global Fund is the world’s largest funder of malaria control activities and has a funding replenishment round every three years, with funding provided by global governments, that determines the funds it has available across three disease areas: HIV/Aids, malaria and TB. The target for the 2024 to 2026 period was raising US$18 billion, largely to stand still. The funding achieved was US$15.7 billion."

The Gates Foundation has committed to giving away $8.6 billion this year. They could cover the Global Fund's budget by themselves only if they exclusively funded those things (which they don't; they fund lots of things).

And if they did, the gap would move to the next best funding opportunity.

zeshen @ 2024-03-14T12:36 (+1)

Thanks! I think I was under the impression that the Gates Foundation was struggling to give out money (e.g. this comment from a long time ago), but I'm now learning that that's probably no longer true - they set a goal of $9 billion by 2026, and they already have a budget of $8.6 billion this year. Now it makes sense.

Arepo @ 2024-03-02T04:27 (+6)

What are the most likely scenarios in which we don't see transformative AI this century or perhaps for even longer? Do they require strong assumptions about (e.g.) theory of mind?

Vasco Grilo @ 2024-03-07T19:43 (+4)

Hi,

Relatedly, I liked Explosive Growth from AI: A Review of the Arguments.

Catherine Harries @ 2024-02-29T07:05 (+5)

The most effective way to reduce global hunger

Vasco Grilo @ 2024-03-05T15:25 (+4)

Hi Catherine,

You may be interested in GiveWell's post on Why malnutrition treatment is one of our [GiveWell's] top research priorities.

Catherine Harries @ 2024-04-17T20:06 (+3)

Sorry I’ve just seen this - thank you!

Lizka @ 2024-02-28T18:24 (+5)

I'd love to see two types of posts that were already requested in the last version of this thread:

Ulrik Horn @ 2024-03-06T10:09 (+4)

A post on voting statistics on the EAF. I am (perhaps unsurprisingly by now!) especially interested in the gender breakdown. I would have liked to do this myself, using the Forum API with help from an AI code assistant, and perhaps using usernames or profile descriptions to guess at gender, but I just don't think I have the time. I would be super interested to see whether there are indications that posts from users perceived to be female get fewer votes and/or are downvoted more. I guess this is less about what I want someone to write about than work I would love for someone to do. I think the data may be right there in front of us.

MvK @ 2024-03-06T15:52 (+3)

Interesting idea. Say we DO find that - what implications would this have?

It seems to me that this data point alone wouldn't be sufficient to derive any actionable consequences from it, in the absence of the even more interesting but harder-to-get data on WHY this is the case.

Or maybe you think that this is knowledge that is intrinsically rather than instrumentally valuable to have?

Ulrik Horn @ 2024-03-07T04:54 (+1)

That is a good point. If the work is quick for someone highly skilled at this, perhaps it is almost quicker to do the work than to try to anticipate its effects? I have some hope that if the results turn out to be shockingly bad (something like women getting 3 times fewer votes, with 90% confidence), it might inspire this rationality-driven community to take action. Ideally it would just mean that people keep this in the back of their heads when reading and voting, and perhaps try to compensate for it - much as you might force yourself to read things you disagree with to overcome confirmation bias. I am not sure. Another idea is for someone to correct for this bias across Forum users and see whether there are some women users whose voices might be much more important than current karma tallies indicate. I really am not sure here; I lean towards thinking the data is instrumentally valuable. If we could have as many women as men in EA by snapping our fingers, we would be almost twice the size we are today! I am super open to suggestions, and I might well be naive about how quick a job this is.

Ben_West @ 2024-03-05T17:39 (+4)

Bumping my list of EA Communication Project Ideas

Arepo @ 2024-03-02T04:50 (+4)

Are there reasonably engaging narrative tropes (or could we invent effective new ones) that could easily be recycled in genre fiction to promote effective altruist principles, in much the same way that e.g. the noble-savage trope can easily be used to promote ecocentric philosophies, the no-one-gets-left-behind trope promotes localism, etc.?

Arepo @ 2024-03-02T04:34 (+4)

Write a concrete proposal for a scalable bunker system that would be robust and reliable enough to preserve technological civilisation in the event of human extinction on the surface due to e.g. nuclear winter or biopandemics. How much would it cost? Given that many people assert it would be much easier than settling other planets, why hasn't anyone started building such systems en masse, and how could we remove whatever the blocker is?

Vasco Grilo @ 2024-03-05T15:48 (+2)

Thanks for the suggestion. @Ulrik Horn, who is working on a project related to refuges, may have some thoughts.

Given that many people assert it would be much easier than settling other planets, why hasn't anyone started building such systems en mass, and how could we remove whatever the blocker is?

I think the reason is that they would be very far from passing a standard cost-benefit analysis. I estimated the cost-effectiveness of decreasing nearterm annual extinction risk from asteroids and comets via refuges at 6.04*10^-10 bp/T$. For a population of 8 billion, and a refuge that remained effective for 10 years, that would be a cost per life saved of 207 T$ (= 10^12/(6.04*10^-10*10^-4*8*10^9*10)), i.e. one would have to spend 2 times the size of the global economy to save a life. In reality, the cost-effectiveness would be much higher because refuges would work in non-extinction catastrophes too, but it would remain very far from passing a standard governmental cost-benefit analysis.
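As a sanity check, the arithmetic in the parenthesis above can be reproduced in a few lines (a sketch; the variable names are mine, and all figures come from the comment):

```python
# Reproducing the cost-per-life-saved arithmetic from the comment above.
cost_effectiveness = 6.04e-10  # bp of extinction risk averted per T$ spent on refuges
bp = 1e-4                      # one basis point, as a fraction
population = 8e9               # people alive today
years_effective = 10           # years the refuge remains effective

# Expected lives saved per trillion dollars spent
lives_saved_per_T = cost_effectiveness * bp * population * years_effective

# Dollars per life saved
cost_per_life = 1e12 / lives_saved_per_T
print(f"{cost_per_life:.3e} $")  # ≈ 2.07e14 $, i.e. 207 T$
```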

Ulrik Horn @ 2024-03-06T04:56 (+5)

Thanks for the mention. ASB had previously estimated $100M-$300M, if I remember correctly. After that, a diverse team specified an "ultimate bunker", and I then used reference-class forecasting to arrive at a total cost (including ~20 years of operation) of $200M-$20bn. Yes, that range is super wide, but so are the uncertainties at this stage. Some examples of drivers of this uncertainty in cost:

  • Do we need some exotic SMR with complicated cooling systems (expensive) or can we locate the facility near a stable hydro resource (cheaper) and also what is the power need of the bunker (filtering and high ACH can drive this very high)?
  • Do we need to secure the air intakes from adversaries (potentially super expensive)?
  • How expensive will operations and maintenance be? We would require all such work to be done safely "from the inside" so that e.g. filter replacement would potentially be super complicated and costly.

There is some disagreement on what is stopping such shelters from being built, but we might be about to find out, as a few people are working to see if we can make progress on shelters. That said, if someone were to earmark $10bn for constructing and operating a shelter, I think the chances of actually building one would be quite high, so money is definitely a blocker at this point.

On cost effectiveness I would defer to others with better threat models. I am happy to provide cost estimates given some specifications so that people with threat models (i.e. how many % reduction in x-risk does a shelter provide) can calculate such metrics. 

Moreover, and perhaps people already do this, but I would also advocate for an "expected x-risk reduction" approach. Compared to e.g. convincing governments to enact AI legislation (it is uncertain whether they actually will), a sufficiently funded shelter project depends to a much smaller degree on the actions of others, and as such we have more control over the final outcome. And it is quite certain that shelters will give protection at least from catastrophic bio events, whereas it could be argued that it is uncertain whether a given approach to AI safety will actually make the AI safe.

Vasco Grilo @ 2024-03-06T10:03 (+9)

Thanks for the context, Ulrik!

ASB had previously estimated $100M-$300M if I remember correctly. After that, a diverse team specified an "ultimate bunker" and I then used reference class forecasting to arrive at a total cost (including ~20 years of operation) of $200M-$20bn.

Feel free to share links. Your 2nd range suggests a cost of 398 M$[1] (= 10^9/2.51). If such a bunker could halve bio extinction risk from 2031 to 2050[2], and one sets this risk to 0.00269 % based on guesses from XPT's superforecasters[3], it would reduce extinction risk with a cost-effectiveness of 0.338 bp/G$ (= 0.5*2.69*10^-5/(398*10^6)). For reference, below are some cost-effectiveness bars I collected.

| Answer | Cost-effectiveness bar (bp/G$) |
|---|---|
| Open Philanthropy (OP) | 0.05[11] |
| Anonymous Person | 1[12] |
| Oliver Habryka | 1 |
| Linchuan Zhang | 3.33[13] |
| Simon Skade | 6 |
| William Kiely | 10 |
| Median | 2.17 |

My cost-effectiveness estimate for the bunker exceeds Open Philanthropy's conservative bar (i.e. my understanding is that their actual bar is higher; see footnote). However, I think the actual cost-effectiveness of bunkers is way lower than I estimated. I think XPT's superforecasters overestimated nuclear extinction risk by 6 orders of magnitude, so I guess they are overrating bio extinction risk too.
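The 398 M$ figure and the 0.338 bp/G$ estimate can be reproduced from the assumptions stated in footnote 1 (a sketch; variable names are mine, and the 1.2816 z-score is the standard-normal 90th percentile):

```python
import math

# Footnote-1 assumption: the reciprocal of the bunker's cost is lognormal,
# with 10th and 90th percentiles of 1/20 and 1/0.2 (in G$^-1).
z90 = 1.2816                                  # standard-normal 90th percentile
p10, p90 = 1 / 20, 1 / 0.2
mu = math.log(math.sqrt(p10 * p90))           # log-median of the lognormal
sigma = (math.log(p90) - math.log(p10)) / (2 * z90)
mean_reciprocal = math.exp(mu + sigma**2 / 2) # E(1/cost) in G$^-1, ≈ 2.51

cost_G = 1 / mean_reciprocal                  # ≈ 0.398 G$, i.e. 398 M$
risk_reduction = 0.5 * 2.69e-5                # halving a 0.00269 % bio extinction risk
ce_bp_per_G = risk_reduction / cost_G / 1e-4  # ≈ 0.338 bp/G$
print(f"{cost_G:.3f} G$, {ce_bp_per_G:.3f} bp/G$")
```

This makes the point in footnote 1 concrete: the expectation is taken over 1/cost, not cost, which is why the headline number is 1/E(1/cost) rather than the lognormal's median of 2 G$.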

And it is quite certain that shelters will give protection at least from catastrophic bio events whereas it could be argued that it is uncertain if a certain approach to AI safety will make the AI safe.

Fair point. On the other hand, I think bio extinction is very unlikely to be an existential risk, because I guess another intelligent sentient species would emerge with high probability (relatedly). I wrote that:

Toby would expect an asteroid impact similar to that of the last mass extinction to be an existential catastrophe. Yet, at least ignoring anthropics, I believe the probability of not fully recovering would only be 0.0513 % (= e^(-10^9/(132*10^6))), assuming:

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time to go from i) human extinction due to such an asteroid to ii) evolving a species as capable as humans at steering the future. I supposed this on the basis that:
    • An exponential distribution with a mean of 66 M years describes the time between extinction threats as well as that to go from i) to ii) conditional on no extinction threats.
    • Given the above, extinction and full recovery are equally likely. So there is a 50 % chance of full recovery, and one should expect the time until full recovery to be 2 times (= 1/0.50) as long as that conditional on no extinction threats.
  • The above evolution could take place in the next 1 billion years during which the Earth will remain habitable.
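The 0.0513 % figure in the quoted passage follows directly from the exponential-distribution assumption above (a sketch; variable names are mine):

```python
import math

# Assumptions from the quoted text: time to re-evolve a species as capable as
# humans is exponential with mean 132 M years (= 2 * 66 M years), and Earth
# remains habitable for another 1 billion years.
mean_recovery_time = 132e6  # years
habitable_window = 1e9      # years

# For an exponential distribution, P(T > t) = exp(-t / mean)
p_no_recovery = math.exp(-habitable_window / mean_recovery_time)
print(f"{p_no_recovery:.4%}")  # ≈ 0.0513 %
```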

In contrast, AI causing human extinction would arguably prevent any future Earth-originating species from regaining control over the future. As a counterpoint to this, AI causing human extinction can be good if the AI is benevolent, but I think this is unlikely if extinction is caused this century.

  1. ^

    Reciprocal of the mean of a lognormal distribution describing the reciprocal of the cost with 10th and 90th percentiles equal to 1/20 and 1/0.2 (G$)^-1. I am using the reciprocal of the cost because the expected cost-effectiveness equals the product between it and expected benefits, not the ratio between expected benefits and cost (E(1/X) differs from 1/E(X)).

  2. ^

    If it was finished at the end of 2030, it would have 20 years of operation as you mentioned.

  3. ^

    XPT's superforecasters guessed 0.01 % between 2023 and 2100 (see Table 3), which suggests 0.00269 % (= 1 - (1 - 10^-4)^(21/78)) between 2031 and 2050.

JWS @ 2024-03-06T10:09 (+4)

As a counter point to this, AI causing human extinction can be good if the AI is benevolent

 

Uh... I think the words 'benevolent' and 'can be' are doing a lot of load-bearing here[1]

Like I think outside of the most naïve consequentialism it'd be hard to argue that this would be a moral course of action, or that this state of affairs would be best described as 'benevolent' - the AI certainly wouldn't be being 'benevolent' toward humanity

Though probably a topic for another post (or dialogue)? Appreciated both yours and Ulrik's comments above :)

  1. ^

    And 'good', but metaethics will be metaethics

Vasco Grilo @ 2024-03-06T10:49 (+4)

Thanks for the comment, JWS!

Though probably a topic for another post (or dialogue)? Appreciated both yours and Ulrik's comments above :)

I agree it is too outside scope to be discussed here, and I do not think I have enough to say to have a dialogue, but I encourage people interested in this to check Matthew Barnett's related quick take.

Ulrik Horn @ 2024-03-06T11:42 (+3)

Thanks Vasco, your cost effectiveness estimate is super helpful, thanks for putting that together (I and others have done some already but having more of them helps)!

And I had missed that post on intelligent life re-emerging - I gave your comment a strong upvote because it points to an idea I had not heard before: that one can use the existing evolutionary tree to make probability distributions for the likelihood of some branch of that tree evolving brains that could harbor intelligence.

I have not polished much of my work so far, so I prefer to share it directly with interested people. If someone had time to polish my work, it would be OK to have it more public. That said, we also might want to check for info-hazards - I feel myself becoming more relaxed about this as time goes on, and that causes occasional bouts of nervousness (like now!).

Arepo @ 2024-03-02T03:47 (+4)

Some empirical research into the fragile world hypothesis, in particular with reference to energy return on investment (EROI). Is there a less extreme version of 'The great energy descent' which implies that average societal EROI could stay at sustainable levels, but only absent shocks - such that one or two big shocks could push it below that point and make it a) impossible to recover, or b) possible to recover only after such a major restructuring of our economy that it would resemble the collapse of civilisation?

Arepo @ 2024-03-02T03:43 (+4)

An updated version of Luisa Rodriguez's 'What is the likelihood that civilizational collapse would cause technological stagnation? (outdated research)' post that took into account her subsequent concerns, and looked beyond 'reaching an industrial revolution' to 'rebuilding an economy large enough to eventually become spacefaring'.

Max Görlitz @ 2024-03-04T12:38 (+3)

Comparing different ARPAs, ARIA, and SPRIN-D

RedTeam @ 2024-03-17T17:59 (+2)

What proportion of people working in population ethics have experienced some kind of prolonged significant suffering themselves, e.g. destitution, or third-degree burns over a large proportion of the body? What proportion of people working in population ethics have spoken to and listened to the views of people who have experienced extreme suffering, in order to try to mitigate their own experiential gap? How does this impact their conclusions?

I am concerned that the views of people who have experienced significant suffering are very under-represented, and that this results in a bias in many areas of society, including population ethics.

Vasco Grilo @ 2024-03-08T07:34 (+2)

Thanks for organising this, Toby!

I think it would be interesting to have a post on "Is the EA Forum becoming polarised?". For example, the EA Forum team could see how the following metrics have evolved over time for posts and comments:

I guess the EA Forum has become more polarised according to these metrics, but this may be a bit of rosy retrospection.

Ben Millwood @ 2024-03-07T00:28 (+2)

Many of the post ideas on my list of things I want to write would be basically as good if someone else wrote them (and they come with some existing prioritisation in agreevotes)

Arepo @ 2024-03-02T04:23 (+2)

Is there an underexplored option to fund early-stage for-profits that seem to have high potential social value? Might it sometimes be worth funding them in exchange for basically 0 equity, so that it's comparatively easy for them to raise further funding the normal way?

RedTeam @ 2024-03-15T03:37 (+1)

(Somewhat adjacent to the Qs posed by Brad West and Max Görlitz) Does the EA community spend enough time and resources on outreach and public engagement activity? 
I often wonder whether, despite being fictional, Chidi Anagonye has done more for effective altruism than all EA orgs combined, given there is so much focus on an academic exchange of ideas. It's not clear to me how wide-reaching many of the existing Effective Altruism (with a capital E and A) organisations truly are; the number of pledgers on Giving What We Can is quite low, for example, and awareness of many of these organisations is low outside the EA community itself. Has there been an evaluation - via theory-of-change or impact-assessment work - of how much prioritisation should be given to public engagement, and to what extent this aligns with how much activity is currently undertaken in this space? 

Benjamin M. @ 2024-03-07T00:40 (+1)

I want somebody to flesh out some of the negative comments on Open Philanthropy's announcement about funding forecasting into an actual post.

I don't have a background in forecasting or any insider knowledge of EA community dynamics, so I'm the wrong person to write this post but I might if nobody steps forward to claim it.

Benjamin M. @ 2024-03-07T00:58 (+1)

If I wrote this it would probably mostly be links/summaries/categorization of other people's arguments against funding forecasting, plus maybe a few reasons of my own. 

carter allen @ 2024-03-06T02:47 (+1)

I want someone to write a post on bets as insurance. Sometimes, placing monetary bets against your own interests may help make worst-case scenarios less bad. For example, if one thinks Trump is an existential threat, they might bet money on Trump winning so they have more resources to deal with the fallout in the event that he does win. One could also bet against good news, e.g., last year I bet real money against the room-temperature superconductor stuff being legit, which ensured that either it was legit, or I'd make a bunch of money; I thought this guaranteed net good news in either case. 

Someone could think through the risks and benefits of this approach and/or create a larger list of promising insurance-bets.

Oisín Considine @ 2024-03-03T01:12 (+1)

I'd like to see research on methods we could use to effectively and efficiently collect data at large scales with minimal costs.

I'm not sure how much of a bottleneck (high-quality) data collection is in different cause areas, but since it is super important to so many of them, I think it would be well worth looking into. I imagine there has to be at least some low-hanging fruit in terms of ways we can obtain lots of data for various cause areas, but I'd love some proper investigation into these ways.