Increasing existential hope as an effective cause?

By Owen Cotton-Barratt @ 2015-01-10T19:55 (+10)

In a recent report, Toby Ord and I introduce the idea of 'existential hope': roughly, the chance of something extremely good happening. Decreasing existential risk is a popular cause area among effective altruists who care about the far future. Could increasing existential hope be another useful area to consider?

Trying to increase existential hope amounts to identifying something which would be very good for the expected future value of the world, and then trying to achieve that. This could include getting more long-term focused governance (where perhaps the benefit is coming from reduced existential risk after you reach that state), or effecting a value-shift in society so that it is normal to care about avoiding suffering (where the benefit may come from much lower chances of large amounts of future suffering).

What other existential hopes could we aim for?

Technical note: the idea of increasing existential hope is similar to that of a trajectory change, as explained in section 1.1.2.3 of Nick Beckstead's thesis. The two are distinct because it is extremely hard to tell when a trajectory change has actually occurred, since we don't know what the long-term future will look like; by contrast, we can form a much better idea of changes in expectations.


undefined @ 2015-01-11T21:30 (+4)

Thanks for the paper, Owen.

"Existential hope" sounds like the opposite of existential despair, rather than of existential risk, and could add to the already common confusion around that term! Of course, it's only a private paper, but since it's designed to establish terminology, it's something to think about.

undefined @ 2015-01-12T20:59 (+4)

When I first heard Bostrom's phrase "existential risk", I felt it was overly philosophical because it sounded like a concept in existentialism. I agree with Owen and Toby's paper that "extinction risk" is already adequate when talking about extinction.

Words are sticky, so it may be hard to ditch "existential risk", but if we were doing it over again, I'd choose something else, like "astronomical risks" and "astronomical benefits".

undefined @ 2015-01-12T04:26 (+3)

This is the kind of nugget I visit this forum for! I have x-dreams of veganism and EA becoming the norm globally. Other x-dreams I can think of are: making electricity out of nothing (or close to it); high-yield, drought-resistant crops; a technological breakthrough that permits people to work less, or not at all; Islam going the way of Christianity in giving up violence; a strong African Union that keeps peace on the continent; a way of preventing global warming or of cooling the earth; an end to the practice of girl-killing in Asia that is responsible for huge gender imbalances; compassion replacing domination as the prevailing worldview/lifestyle choice; and an end to the anonymous shell companies and secret bank accounts that enable the corrupt.

undefined @ 2015-01-23T10:57 (+1)

I think all those are great. But I'm more doubtful that taking work out of the equation would improve society: how would we ensure that the surplus is distributed reasonably?

My x-hopes are really a set of success criteria for the movement: a culture built around evidence, scientific reasoning, and trial and error in policy, medicine, and other important areas; a culture of prioritising which problems to solve based on suffering, human flourishing, and equality (including brakes on trial and error in some areas, such as new technologies); and a global economic/political system, arising from or giving rise to those two things, that is immensely more effective at improving the human condition than what we have currently.

undefined @ 2015-01-12T11:25 (+2)

Probably the most important "good things that can happen" after FAI are:

undefined @ 2015-01-12T11:52 (+3)

It seems like the development of these would increase expected value massively in the medium term. I'm not sure what the effect on long-term expected value would be, because we'd expect to develop these at some point anyway over the long term.