D0TheMath's Quick takes

By D0TheMath @ 2021-11-11T13:57 (+4)

D0TheMath @ 2021-11-11T13:57 (+9)

I saw this comment on LessWrong:

> This seems noncrazy on reflection.
>
> $10 million will probably have a very small impact on Terry Tao's decision to work on the problem.
>
> OTOH, setting up an open invitation for all world-class mathematicians/physicists/theoretical computer scientists to work on AGI safety through some sort of sabbatical system may be very impactful.
>
> Many academics, especially in theoretical areas where funding for even the very best can be scarce, would jump at the opportunity of a no-strings-attached sabbatical. The no-strings-attached part is crucial, to my mind. Despite LW/Rationalist dogma equating IQ with weirdo-points, the vast majority of brilliant (mathematical) minds are fairly conventional: see Tao, Euler, Gauss.

EA cause area?

Thoughts? 

D0TheMath @ 2021-11-15T02:23 (+1)

Are there any obvious reasons why this line of argument is wrong:

Suppose the Everett interpretation of quantum mechanics is true, and an x-risk curtailing humanity's future is >99% certain, with no leads on a solution to it. Then, given a quantum bit generator which generates some large number of bits, for any particular combination of bits there exists a world in which that combination was generated. In particular, the combination of bits encoding actions one can take to solve the x-risk is generated in some world. Thus, one should use such a quantum bit generator to generate a plan to stop the x-risk. Even though you will almost certainly see a bunch of random letters, there will exist a version of you who sees a good plan, and in that branch the world will not end.
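As a toy illustration of the scheme (a sketch only, with Python's `secrets` module standing in for the genuinely quantum bit source the argument requires): a draw of 512 bits has 2^512 possible outcomes, so any particular 512-bit plan occupies a single branch of Born measure 2^-512, roughly 10^-154, which is why almost every observer sees gibberish.

```python
import secrets

def everett_plan_lottery(n_bytes: int = 64) -> str:
    """Draw n_bytes of random bits and decode them as printable text.

    If the bits were quantum-random, each of the 2 ** (8 * n_bytes) possible
    outcomes would be realized in some branch with Born measure
    2 ** -(8 * n_bytes).
    """
    raw = secrets.token_bytes(n_bytes)
    # Map each byte onto printable ASCII (codes 32-126) so the outcome
    # reads as text rather than raw bytes.
    return "".join(chr(32 + b % 95) for b in raw)

# Almost every branch (surely including this one) prints gibberish; the
# argument rests entirely on the existence of the rare branch where the
# output happens to spell out a workable plan.
print(everett_plan_lottery())
```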

One may argue that the chance of generating a plan which produces an s-risk is just as high as that of generating one which curtails the x-risk. This only seems plausible to me if the solution produced is some optimization process, or induces some optimization process. Such scenarios should not be discounted.