Permanent Societal Improvements

By Larks @ 2015-09-06T01:30 (+11)

Daniel Kokotajlo, Diego Caleiro, Ramana Kumar and I recently discussed the idea of Permanent Societal Improvements - non-Xrisk-related ways of affecting, and hopefully improving, the far future. These are actions we can take now that would have some multiplicative effect on the value of humanity's future. This post is intended as the beginning of a conversation, not the end of a research project, and we eagerly await feedback and further ideas. Please also bear in mind that not everyone agreed with every idea, and any mistakes remain my own.

 

A toy model:


Suppose there is a 5% chance that humanity will be destroyed in 2100, and that if we survive this great filter we will go on to colonise the light cone. Assuming this is a ‘good’ colonisation, full of happy, enlightened, virtuous people, it seems that reducing Existential Risk from 5% to 0% would increase the Expected Value of the future by roughly 5.3%. We could compare this to an action that would make the colonised universe 10% better - this would increase the EV of the future by roughly 10%. So improving the future, in this toy example, could be dramatically better than reducing Xrisk.
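The arithmetic of the toy model can be sketched in a few lines, normalising the value of a good colonised future to 1 (all numbers here are the illustrative ones from the paragraph above, not estimates):

```python
# Toy model: compare eliminating a 5% extinction risk with making the
# surviving future 10% better. Value of a 'good' colonisation is set to 1.
p_extinction = 0.05
base_ev = (1 - p_extinction) * 1.0           # EV today: 0.95

# Intervention A: reduce extinction risk from 5% to 0%.
ev_no_risk = 1.0 * 1.0
gain_a = ev_no_risk / base_ev - 1            # proportional gain in EV

# Intervention B: make the colonised universe 10% more valuable.
ev_better = (1 - p_extinction) * 1.1
gain_b = ev_better / base_ev - 1             # proportional gain in EV

print(f"Eliminating x-risk raises EV by {gain_a:.1%}")   # ~5.3%
print(f"A 10% better future raises EV by {gain_b:.1%}")  # 10.0%
```

The 5.3% figure is just 100/95 - 1: removing the risk scales the survival probability from 0.95 up to 1.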

 

What types of things could be Permanent Societal Improvements?

 

A major restriction is that they have to be things that would not otherwise be done later. If I invent something that would otherwise have been invented 20 years later, I have only improved the world by (20 years x impact of invention), not (lifespan of humanity x impact of invention).* This is quite a strong restriction on what could count as such permanent improvements.
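The restriction above can be made concrete with a back-of-the-envelope comparison (the numbers below are purely illustrative placeholders, not estimates from the post):

```python
# Counterfactual-impact restriction: if the invention would have arrived
# anyway after `delay` years, credit is delay * annual_impact,
# not lifespan_of_humanity * annual_impact.
annual_impact = 1.0          # value per year of the invention existing (illustrative)
delay = 20                   # years by which the invention was brought forward
lifespan_of_humanity = 1e9   # years (illustrative)

naive_credit = lifespan_of_humanity * annual_impact
counterfactual_credit = delay * annual_impact

# The counterfactual credit is a tiny fraction of the naive credit.
print(counterfactual_credit / naive_credit)
```

On these (made-up) numbers the counterfactual credit is only 2e-8 of the naive figure, which is why genuinely permanent improvements, whose effects would not otherwise occur later, are so much more valuable.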

 

Here are a few broad categories we came up with:

- Influencing Lock-in

- Compounding resource constraints

- Moral Progress / Decay

- Original Sin

- Coordination problems

 

* ignoring whatever else the future would-be inventor would otherwise do with their resources.

 


null @ 2015-09-07T00:33 (+3)

Or maybe we could invest in server capacity in readiness for an EM future.

This one seemed out of place to me. Conditional on the time we start expanding and the rate at which we expand, we will have access to some fixed set of resources at any given point in the future, so I don't see how investing in server capacity now affects our server capacity in the far future. (Though I do agree that affecting the start time and rate of expansion could be permanent improvements.)

Establishing norms that will protect biological humans and EMs from Hansonian competition - like a right to retire. If uploads are not conscious, it might be important to agree on this before EMs massively outnumber biological humans; after that point it would become much harder.

These seem to be about simply picking the right policies now and locking them in. It might also be important to lock in the right policies vis-à-vis privacy, the death penalty, property rights, etc., but why should we think that we can lock such policies in now? This reduces to either "minimize value drift" or "create a singleton", both of which I agree with, but you already listed them.

null @ 2015-09-07T11:43 (+2)

Have you seen Nick Beckstead's slides on 'How to compare broad and targeted attempts to shape the far future'?

He gives a lot of ideas for broad interventions, along with ways of thinking about them.

null @ 2015-09-07T00:44 (+2)

So we get astronomical stakes by multiplying a large amount of time by a large amount of space to get a large light cone of potential future value. Interventions that work along only one of those dimensions -- say, I bury a single computer that generates one utilon per year deep underground, which continues to run for the life of the universe, or I somehow grant a one-off utilon to every human alive in the year 1 billion -- are dominated by those interventions that affect the product of space and time (e.g. the interventions you listed here). But if there were just one more dimension to multiply, then interventions that addressed the product of all three might dominate all considerations that we currently think about.

null @ 2015-09-13T23:35 (+1)

"Assuming this is a ‘good’ colonisation, full of happy, enlightened, virtuous people, it seems that reducing Existential Risk by 5% to 0% would roughly increase the Expected Value of the future by 5.3%."

How did you get 5.3%?

null @ 2015-09-14T03:13 (+1)

(100/95) - 1 ≈ 5.3%

null @ 2015-09-07T20:36 (+1)

An important topic!

Potentially influencing lock-in is certainly among my motivations for wanting to work on AI friendliness, and doing things that could have a positive impact on a potential lock-in has a lot going for it, I think. (Many of these things, such as improving the morality of the general populace, or creating tools or initiatives for thinking better about such questions, could also have significant positive effects if no lock-in occurs.)

As to the example of having more children out of far-future concerns, I think this could go the other way as well (although I don't necessarily think that it would; I really don't know). If, for example, we reach a solution where it is decided that all humans have certain rights, can reproduce, etc., but also that all or a fraction of the matter in the universe we have little need for is used to increase utility in more efficient ways (e.g. by creating utilitronium, or by creating non-human sentient beings with positive and meaningful existences), then a larger human population could lead to less of that.