AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

By Toby_Ord @ 2020-03-17T02:39 (+68)

Note: Aaron Gertler, a Forum moderator, is posting this with Toby's account. (That's why the post is written in the third person.)

 

This is a Virtual EA Global AMA: several people will be posting AMAs on the Forum, then recording their answers in videos that will be broadcast at the Virtual EA Global event this weekend.

Please post your questions by 10:00 am PDT on March 18th (Wednesday) if you can. That's when Toby plans to record his video. 

 

About Toby

Toby Ord is a moral philosopher focusing on the big picture questions facing humanity. What are the most important issues of our time? How can we best address them?

His earlier work explored the ethics of global health and global poverty, which led him to create Giving What We Can, whose members have pledged hundreds of millions of pounds to the most effective charities helping to improve the world. He also co-founded the wider effective altruism movement.

His current research is on avoiding the threat of human extinction, which he considers to be among the most pressing and neglected issues we face. He has advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science. His work has been featured more than a hundred times in the national and international media.

Toby's new book, The Precipice, is now available for purchase in the UK and pre-order in other countries. You can learn more about the book here.


Halstead @ 2020-03-18T10:23 (+53)

How likely do you think we would be to recover from a catastrophe killing 50%/90%/99% of the world population respectively?

SiebeRozendal @ 2020-03-20T09:28 (+6)

Given the high uncertainty of this question, would you (Toby) consider giving imprecise credences?

Halstead @ 2020-03-17T18:07 (+34)

Does it worry you that there are very few published peer reviewed treatments of why AGI risk should be taken seriously that are relevant to current machine learning technology?

richard_ngo @ 2020-03-17T17:26 (+31)

What would convince you that preventing s-risks is a bigger priority than preventing x-risks?

Suppose that humanity unified to pursue a common goal, and you faced a gamble where that goal would be the most morally valuable goal with probability p, and the most morally disvaluable goal with probability 1-p. Given your current beliefs about those goals, at what value of p would you prefer this gamble over extinction?
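A minimal sketch of the indifference condition, writing $V^{+} > 0 > V^{-}$ for the value of the best and the worst goal, and normalizing extinction to $0$: the gamble beats extinction in expectation whenever

$$p \, V^{+} + (1 - p) \, V^{-} \ge 0, \quad \text{i.e.} \quad p \ge \frac{-V^{-}}{V^{+} - V^{-}}.$$

For instance, if the worst goal is judged ten times as disvaluable as the best goal is valuable, the threshold is $p = 10/11 \approx 0.91$.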

NunoSempere @ 2020-03-17T21:00 (+5)

I like how you operationalized the second question.

riceissa @ 2020-03-17T03:29 (+30)

The timing of this AMA is pretty awkward, since many people will presumably not have access to the book or will not have finished reading the book. For comparison, Stuart Russell's new book was published in October, and the AMA was in December, which seems like a much more comfortable length of time for people to process the book. Personally, I will probably have a lot of questions once I read the book, and I also don't want to waste Toby's time by asking questions that will be answered in the book. Is there any way to delay the AMA or hold a second one at a later date?

AmyLabenz @ 2020-03-17T04:12 (+44)

Thanks for the comment! Toby is going to do a written AMA on the Forum later in the year too. This one is timed so that we can have video answers during Virtual EA Global.

Linch @ 2020-03-17T03:43 (+2)

Strongly concur, as someone who preordered the book and is excited to read it.

Halstead @ 2020-03-18T10:18 (+26)

What is your solution to Pascal's Mugging?

Ben Pace @ 2020-03-17T05:25 (+26)

What's a regular disagreement that you have with other researchers at FHI? What's your take on it and why do you think the other people are wrong? ;-)

Ben Pace @ 2020-03-17T05:17 (+23)

We're currently in a time of global crisis, as the number of people infected by the coronavirus continues to grow exponentially in many countries. This is a bit of a hard question, but a time of crisis is often when governments substantially refactor things because it's finally transparent that they're not working. So: can you name a feasible, concrete change in the UK government (or a broader policy for any developed government) that you think would put us in a far better position for future such situations, especially future pandemics that have a much more serious chance of being an existential catastrophe?

RandomEA @ 2020-03-18T01:47 (+21)

In an 80,000 Hours interview, Tyler Cowen states:

[44:06]
I don't think we'll ever leave the galaxy or maybe not even the solar system.
. . .
[44:27]
I see the recurrence of war in human history so frequently, and I’m not completely convinced by Steven Pinker [author of The Better Angels of Our Nature, which argues that human violence is declining]. I agree with Steven Pinker, that the chance of a very violent war indeed has gone down and is going down, maybe every year, but the tail risk is still there. And if you let the clock tick out for a long enough period of time, at some point it will happen.
Powerful abilities to manipulate energy also mean powerful weapons, eventually powerful weapons in decentralized hands. I don’t think we know how stable that process is, but again, let the clock tick out, and you should be very worried.

How likely do you think it is that humans (or post-humans) will get to a point where existential risk becomes extremely low? Have you looked into the question of whether interstellar colonization will be possible in the future, and if so, do you broadly agree with Nick Beckstead's conclusion in this piece? Do you think Cowen's argument should push EAs towards forms of existential risk reduction (referenced by you in your recent 80,000 Hours interview) that are "not just dealing with today’s threats, [but] actually fundamentally enhancing our ability to understand and manage this risk"? Does positively shaping the development of artificial intelligence fall into that category?

Edit (likely after Toby recorded his answer): This comment from Pablo Stafforini also mentions the idea of "reduc[ing] the risk of extinction for all future generations."

MichaelStJules @ 2020-03-18T06:52 (+2)

This math problem is relevant, although maybe the assumptions aren't realistic. Basically, under certain assumptions, either our population has to increase without bound, or we go extinct.

EDIT: The main assumption is effectively that extinction risk is bounded below by a constant that depends only on the current population size, and not the time (when the generation happens). But you could imagine that even for a stable population size, this risk could be decreased asymptotically to 0 over time. I think that's basically the only other way out.

So, either:

1. We go extinct,

2. Our population increases without bound, or

3. We decrease extinction risk towards 0 in the long-run.

Of course, extinction could still take a long time, and a lot of (dis)value could happen before then. This result isn't so interesting if we think extinction is almost guaranteed anyway, due to heat death, etc.
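A compressed version of the argument, under the stated assumption that per-generation extinction risk is at least some $\varepsilon_N > 0$ whenever the population is at most $N$: if the population stays bounded by $N$ forever, then

$$\Pr(\text{surviving } k \text{ generations}) \le (1 - \varepsilon_N)^k \to 0 \text{ as } k \to \infty,$$

so escaping near-certain extinction requires either unbounded population growth or per-generation risk falling towards 0 in the long run.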

Pablo_Stafforini @ 2020-03-18T14:06 (+6)

Source for the screenshot: Samuel Karlin & Howard E. Taylor, A First Course in Stochastic Processes, 2nd ed., New York: Academic Press, 1975.

Misha_Yagudin @ 2020-03-18T17:26 (+2)

re: 3 — to be more precise, one can show that $\prod_i (1 - p_i) > 0$ iff $\sum_i p_i < \infty$, where $p_i \in [0, 1)$ is the probability of extinction in a given year.

MichaelStJules @ 2020-03-18T19:20 (+2)

Should that be $\sum_i \ln(1 - p_i) > -\infty$? Just taking logarithms.

Misha_Yagudin @ 2020-03-19T06:23 (+3)

This is a valid convergence test. But I think it's easier to reason about $\sum_i p_i < \infty$. See math.SE for a proof.
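For reference, a sketch of the equivalence in the simple case where all $p_i \le c < 1$: taking logarithms, $\prod_i (1 - p_i) > 0$ iff $\sum_i \ln(1 - p_i) > -\infty$, and since

$$-\frac{p_i}{1 - c} \le \ln(1 - p_i) \le -p_i \quad \text{for } p_i \in [0, c],$$

the log-sum is finite exactly when $\sum_i p_i < \infty$.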

ishi @ 2020-03-21T11:11 (+1)

I've seen and liked that book. But I don't think there really is enough information about risks (e.g. Earth being hit by a comet or meteor that kills everything) to really say much---maybe if cosmology or other fields make major advances one could say something, but that might take centuries.

Linch @ 2020-03-17T06:42 (+20)

What do you think is the biggest mistake that the EA community is currently making?

Halstead @ 2020-03-18T10:11 (+19)

Is your view that:

(i) the main thing that matters for the long-term is whether we get to the stars

(ii) this could plausibly happen in the next few centuries

(iii) therefore the main long-termist relevance of our actions is whether we survive the next few centuries and can make it to the stars?

Or do you put some weight on the view that long-term human and post-human flourishing on Earth could also account for >1% of the total plausible potential of our actions?

RandomEA @ 2020-03-18T02:50 (+18)

Do you think that "a panel of superforecasters, after being exposed to all the arguments [about existential risk], would be closer to [MacAskill's] view [about the level of risk this century] than to the median FHI view"? If so, should we defer to such a panel out of epistemic modesty?

Davidmanheim @ 2020-03-22T10:02 (+8)

I personally, writing as a superforecaster, think that this isn't particularly useful. Superforecasters tend to be really good at evaluating and updating based on concrete evidence, but I'm far less sure about whether their ability to evaluate arguments is any better than that of a similarly educated / intelligent group. I do think that FHI is a weird test case, however, because it is selecting on the outcome variable - people who think existential risks are urgent are actively trying to work there. I'd prefer to look at, say, the views of a group of undergraduates after taking a course on existential risk. (And this seems like an easy thing to check, given that there are such courses ongoing.)

MichaelStJules @ 2020-03-18T06:55 (+3)

Do you have references/numbers for these views you can include here?

Misha_Yagudin @ 2020-03-17T09:02 (+18)

What have you changed your mind on recently?

RandomEA @ 2020-03-18T03:37 (+17)

There are many ways that technological development and economic growth could potentially affect the long-term future, including:

What do you think is the overall sign of economic growth? Is it different for developing and developed countries?

Note: The fifth bullet point was added after Toby recorded his answers.

richard_ngo @ 2020-03-17T16:12 (+17)

If you could only convey one idea from your new book to people who are already heavily involved in longtermism, what would it be?

Ben Pace @ 2020-03-17T05:16 (+16)

Can you tell us a specific insight about AI that has made you positively update on the likelihood that we can align superintelligence? And a negative one?

Ben Pace @ 2020-03-17T05:16 (+16)

What are the three most interesting ideas you've heard in the last three years? (They don't have to be the most important, just the most surprising/brilliant/unexpected/etc.)

Halstead @ 2020-03-18T10:14 (+13)

Do you think we will ever have a unified and satisfying theory of how to respond to moral uncertainty, given the huge structural and substantive differences between apparently plausible moral theories? Will MacAskill's thesis is one of the best treatments of this problem, and it seems like it would be hard to build an account of how one ought to respond to e.g. Rawlsianism, totalism, libertarianism, person-affecting views, absolutist rights-based theories, and so on, across most choice situations.

RandomEA @ 2020-03-18T03:57 (+13)

What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?

Ben Pace @ 2020-03-17T05:19 (+11)

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of misaligned artificial general intelligence?

RandomEA @ 2020-03-18T05:30 (+10)

Should non-suffering focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or given their worldview adjusted for moral uncertainty)?

RandomEA @ 2020-03-18T00:32 (+10)

Do you think there are any actions that would obviously decrease existential risk? (I took this question from here.) If not, does this significantly reduce the expected value of work to reduce existential risk or is it just something that people have to be careful about (similar to limited feedback loops, information hazards, unilateralist's curse etc.)?

richard_ngo @ 2020-03-17T17:06 (+10)

If you could convince a dozen of the world's best philosophers (who aren't already doing EA-aligned research) to work on topics of your choice, which questions would you ask them to investigate?

Linch @ 2020-03-17T07:13 (+10)

Are there any specific natural existential risks that are significant enough that more than 1% of EA resources should be devoted to it? .1%? .01%?

MichaelA @ 2020-03-18T14:47 (+3)

Good question!

Just a thought: assuming this question is essentially about natural vs anthropogenic risks, rather than also comparing against other things like animal welfare and global poverty, it might be simpler to instead ask: "Are there any specific natural existential risks that are significant enough that more than 1% of longtermist [or "existential risk focused"] resources should be devoted to it? .1%? .01%?"

Ben Pace @ 2020-03-17T05:18 (+10)

Can you tell us something funny that Nick Bostrom once said that made you laugh? We know he used to do standup in London...

Linch @ 2020-03-18T02:25 (+9)

On balance, what do you think is the probability that we are at or close to a hinge of history (either right now, this decade, or this century)?

John_Maxwell @ 2020-03-22T03:29 (+8)

What are the most important new ideas in your book for someone who's already been in the EA movement for quite a while?

MichaelA @ 2020-03-18T15:41 (+8)

You break down a "grand strategy for humanity" into reaching existential security, the long reflection, and then actually achieving our potential. I like this, and think it would be a good strategy for most risks.

But do you worry that we might not get a chance for a long reflection before having to "lock in" certain things to reach existential security?

For example, perhaps to reach existential security given a vulnerable world, we put in place "greatly amplified capacities for preventive policing and global governance" (Bostrom), and this somehow prevents a long reflection - either through permanent totalitarianism or just through something like locking in extreme norms of caution and stifling of free thought. Or perhaps in order to avoid disastrously misaligned AI systems, we have to make certain choices that are hard to reverse later, so we have to have at least some idea up-front of what we should ultimately choose to value.

(I've only started the book; this may well be addressed there already.)

RhysSouthan @ 2020-04-08T17:44 (+3)

I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection—and whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to have been completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation for how a long reflection is compatible with the creation of a human-aligned AGI.

Halstead @ 2020-03-18T10:18 (+8)

What are your top three productivity tips?

CarolineJ @ 2020-03-17T21:07 (+8)

Do you think that climate change has been neglected in the EA movement? What are some options that seem great to you at the moment for having a very large impact in steering us in a better direction on climate change?

richard_ngo @ 2020-03-17T17:10 (+8)

We have a lot of philosophers and philosophically-minded people in EA, but only a tiny number of them are working on philosophical issues related to AI safety. Yet from my perspective as an AI safety researcher, it feels like there are some crucial questions which we need good philosophy to answer (many listed here; I'm particularly thinking about philosophy of mind and agency as applied to AI, a la Dennett). How do you think this funnel could be improved?

Ben Pace @ 2020-03-17T05:16 (+8)

What's a book that you read and has impacted how you think / who you are, that you expected most people here won't have read?

Linch @ 2020-03-18T02:02 (+7)

Can you describe a typical day in your life with sufficient granularity that readers can have a sense of what "being a researcher at a place like FHI" is like?

NunoSempere @ 2020-03-17T21:21 (+7)

What's up with Pascal's Mugging? Why hasn't this pesky problem just been authoritatively solved? (and if it has, what's the solution?) What is your preferred answer? / Which bullets do you bite (e.g., bounded utility function, assigning probability 0 to events, a decision-theoretical approach cop-out, etc.)?

MichaelStJules @ 2020-03-17T18:46 (+6)

Which ethical views do you have non-negligible credence in and, if true, would substantially change what you think ought to be prioritized, and how? How much credence do you have in these views?

NunoSempere @ 2020-03-17T16:04 (+6)

Suppose your life's work ended up having negative impact. What is the most likely scenario under which this could happen?

NunoSempere @ 2020-03-17T08:27 (+6)

As a sharp mind, respected scholar, and prominent member of the EA community, you have a certain degree of agency, an ability to start new projects and make things happen, no small amount of oomph and mojo. How are you planning to use this agency in the coming decades?

NunoSempere @ 2020-03-17T21:21 (+8)

This is a genuine question. The framing is that if Toby Ord wants to get in touch with a high-ranking member of government, get an article published in a prominent newspaper, direct a large number of man-hours to a project he finds worthy, etc., he probably can; the association with Oxford alone will open doors in many cases.

This is in contrast to a box in a basement that produces the same research he would; some of the difference stems from his being endorsed by prestigious organizations, and from there being some social common knowledge around his person. The words "public intellectual" come to mind.

I'm wondering how the powers-of-being-different-from-a-box-which-produces-research will pan out.

Linch @ 2020-03-18T02:22 (+4)

What's one book that you think most EAs have not yet read and you think that they should (other than your own, of course)?

CarolineJ @ 2020-03-17T21:06 (+4)

What are some of your current challenges? (maybe someone in the audience can help!)

CarolineJ @ 2020-03-17T21:05 (+4)

What are you looking for in a research / operations colleague?

MichaelStJules @ 2020-03-17T18:48 (+4)

How robust do you think the case is for any specific longtermist intervention? E.g. do new considerations constantly affect your belief in their cost-effectiveness, and by how much?

MichaelA @ 2020-03-18T15:25 (+3)

In your book, you define an existential catastrophe as "the destruction of humanity's longterm potential". Would defining it instead as "the destruction of the vast majority of the longterm potential for value in the universe" capture the concept you wish to refer to? Would it perhaps capture that concept in a slightly more technically accurate and explicit way, just at the cost of being less accessible or emotionally resonant?

I wonder this partly because you write:

It is not that I think only humans count. Instead, it is that humans are the only beings we know of that are responsive to moral reasons and moral argument - the beings who can examine the world and decide to do what is best. If we fail, that upwards force, that capacity to push towards what is best or what is just, will vanish from the world.

It also seems to me that "the destruction of the vast majority of the longterm potential for value in the universe" would be meaningfully closer to what I'm really interested in avoiding than the destruction of humanity's potential, if/when AGI, aliens, or other intelligent life evolving on Earth becomes (or is predicted to become) an important shaper of events, either now or in the distant future.

Halstead @ 2020-03-18T10:15 (+3)

Do you think the problems of infinite ethics give us reason to reject totalism or long-termism? If so, what is the alternative?

RandomEA @ 2020-03-18T06:26 (+3)

What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on high uncertainty arguments? (See here and here at 34:38 for pushback.)

RandomEA @ 2020-03-18T04:18 (+3)

How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinities is relatively low priority?

RandomEA @ 2020-03-18T02:44 (+3)

How much uncertainty is there in your case for existential risk? What would you put as the probability that, in 2100, the expected value of a substantial reduction in existential risk over the course of this century will be viewed by EA-minded people as highly positive? Do you think we can predict what direction future crucial considerations will point based on what direction past crucial considerations have pointed?

RandomEA @ 2020-03-18T02:36 (+3)

What do you think of applying Open Phil's outlier opportunities principle to an individual EA? Do you think that, even in the absence of instrumental considerations, an early career EA who thinks longtermism is probably correct but possibly wrong should choose a substantial chance of making a major contribution to increasing access to pain relief in the developing world over a small chance of making a major contribution to reducing GCBRs?

RandomEA @ 2020-03-18T02:18 (+3)

Is the cause area of reducing great power conflict still entirely in the research stage or is there anything that people can concretely do? (Brian Tse's EA Global talk seemed to mostly call for more research.) What do you think of greater transparency about military capabilities (click here and go to 24:13 for context) or promoting a more positive view of China (same link at 25:38 for context)? Do you think EAs should refrain from criticizing China on human rights issues (click here and search the transcript for "I noticed that over the last few weeks" for context)?

RandomEA @ 2020-03-18T00:40 (+3)

What are your thoughts on these questions from page 20 of the Global Priorities Institute research agenda?

How likely is it that civilisation will converge on the correct moral theory given enough time? What implications does this have for cause prioritisation in the nearer term?
How likely is it that the correct moral theory is a ‘Theory X’, a theory radically different from any yet proposed? If likely, how likely is it that civilisation will discover it, and converge on it, given enough time? While it remains unknown, how can we properly hedge against the associated moral risk?

How important do you think those questions are for the value of existential risk reduction vs. (other) trajectory change work? (The idea for this question comes from the informal piece listed after each of the above two paragraphs in the research agenda.)

Edited to add: What is your credence in there being a correct moral theory? Conditional on there being no correct moral theory, how likely do you think it is that current humans, after reflection, would approve of the values of our descendants far in the future?

MichaelStJules @ 2020-03-17T18:49 (+3)

What are your views on the prioritization of extinction risks vs other longtermist interventions/causes?

MichaelStJules @ 2020-03-17T18:45 (+3)

Which interventions/causes do you think are best to support/work on according to views in which extra people with good or great lives not being born is not at all bad (or far outweighed by other considerations)? E.g. different person-affecting views, or the procreation asymmetry.

MichaelA @ 2020-03-18T15:29 (+2)

You seem fairly confident that we are at "the precipice", or "a uniquely important time in our story". This seems very plausible to me. But how long of a period are you imagining for the precipice?

The claim is much stronger if you mean something like a century than something like a few millennia. But even if the "hingey" period is a few millennia, then I imagine that us being somewhere in it could still be quite an important fact.

(This might be answered past chapter 1 of the book.)

MichaelStJules @ 2020-03-17T19:03 (+2)

Do you lean more towards a preferential account of value, a hedonistic one, or something else?

How do you think tradeoffs between pleasure and suffering are best grounded according to a hedonistic view? It seems like there's no objective one-size-fits-all trade-off rate, since different people could have different preferences about the same quantities of pleasure and suffering in themselves.

MichaelStJules @ 2020-03-17T18:56 (+2)

What new evidence would cause the biggest shifts in your priorities?

Peter_Hurford @ 2020-03-17T15:32 (+2)

What are the three least interesting ideas you've heard in the last three years? (They don't have to be the least important, just the least surprising/brilliant/unexpected/etc.)

Ben Pace @ 2020-03-17T17:57 (+2)

This is such an odd question. Could produce surprising answers though, if it’s something like “the least interesting ideas that people still took seriously” or “the least interesting ideas that are still a little bit interesting”. Upvoted.

Peter_Hurford @ 2020-03-17T22:48 (+2)

Sometimes the obvious is still important to discuss.

Ben Pace @ 2020-03-17T05:21 (+2)

Can you describe what you think it would look like 5 years from now if we were in a world that was making substantially good steps to deal with the existential threat of engineered pandemics?

SiebeRozendal @ 2020-03-20T09:34 (+1)

There will be a lot for global society to learn from the current pandemic. Which lesson would be most useful to "push" from EA's side?

I mean this question to sit somewhere between the "best lesson to learn" and the "lesson most likely to be learned": we probably want to push a lesson that's useful to learn, and one that our push actually helps to bring into policy.

MichaelA @ 2020-03-18T15:49 (+1)

What are your thoughts on how to evaluate or predict the impact of longtermist/x-risk interventions, or specifically efforts to generate and spread insights on these matters? E.g., how do you think about decisions like which medium to write in and whether to focus on generating ideas vs publicising ideas vs fundraising?

MichaelA @ 2020-03-18T15:46 (+1)

How would your views change (if at all) if you thought it was likely that there are intelligent beings elsewhere in the universe that "are responsive to moral reasons and moral argument" (quote from your book)? Or if you thought it's likely that, if humans suffer an existential catastrophe, other such beings would evolve on Earth later, with enough time to potentially colonise the stars?

Do your thoughts on these matters depend somewhat on your thoughts on moral realism vs antirealism/subjectivism?

Misha_Yagudin @ 2020-03-17T21:27 (+1)

What are some of your favourite theorems, proofs, algorithms, and data structures?

CarolineJ @ 2020-03-17T21:07 (+1)

What are some directions you'd like the EA movement or some parts of the EA movement to take?

CarolineJ @ 2020-03-17T21:06 (+1)

What do you like to do during your free time?

CarolineJ @ 2020-03-17T21:05 (+1)

If you've read the book 'So good they can't ignore you', what do you think are the most important skills to master to be a writer/philosopher like yourself?

CarolineJ @ 2020-03-17T18:50 (+1)

Hi Tobby! Thanks for being such a great source of inspiration for philosophy and EA. You're a great model to me!

Some questions, feel free to pick:

1) What philosophers are your sources of inspiration and why?

(I've put my other questions in separate comments.) Also, that should read "Toby"!

Ben Pace @ 2020-03-17T20:07 (+4)

I think your questions are great. I suggest that you leave 7 separate comments so that users can vote on the ones that they’re most interested in.

CarolineJ @ 2020-03-17T21:07 (+3)

Thanks Ben! I've edited the message to have only one question per post. :-)