Next week I'm interviewing Will MacAskill — what should I ask?

By Robert_Wiblin @ 2022-04-08T14:20 (+25)

Next week for The 80,000 Hours Podcast I'm again interviewing Will MacAskill. The hats he's wearing these days are:

• Author of 'What We Owe The Future'
• Associate Professor in Philosophy at Oxford's Global Priorities Institute, and
• Director of the Forethought Foundation for Global Priorities Research

What should I ask him?

(Here are interviews one and two.)


JackM @ 2022-04-08T19:02 (+5)

Baptiste Roucau @ 2022-04-09T01:27 (+1)

Great set of questions! 

I'm personally very interested in the question about educational interventions. 

Nathan Young @ 2022-04-08T18:30 (+3)

If people want inspiration, there are about 30 questions here (Robert asked on Twitter):

https://twitter.com/robertwiblin/status/1512433252438626305?s=20&t=DrnOgG_0LxGlMhbrtJ_hgg

johnburidan @ 2022-04-09T01:06 (+1)

Is the birthrate of Western countries a long-term risk, given that immigrant populations and developing countries also seem to have falling rates? And if so, what is it a risk of? What's the downside?

Michael @ 2022-04-08T20:57 (+1)

1. Will MacAskill mentions that "What We Owe The Future" is somewhat complementary to "The Precipice". What can we expect to learn from "WWOTF" having previously read "The Precipice"?

2. How would Will go about estimating the discount rate for the future? We shouldn't discriminate against future people "just because", but we still need some estimate of a discount rate, because:

a) there are reasons other than discrimination for applying a discount rate, e.g. the "possibility of extinction, expropriation, value drift, or changes in philanthropic opportunities" (see https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate#Significance_of_mis_estimating_the_discount_rate)

b) not applying a discount rate at all makes all current charity etc. negligibly effective compared to working towards a better future, e.g. because the future holds a much, much greater number of moral agents whose existence we can safeguard (people and animals, but perhaps also AIs/robots or some post-human or trans-human species). Not having any discount rate would completely de-prioritize all current charity, which many EAs would not agree with (the toy calculation below illustrates this).

In other words: How do we divide our resources (time, attention, money, career, etc.) between short-term and long-term causes?
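A toy calculation may make point (b) concrete. This is only an illustrative sketch with made-up numbers (one unit of welfare per year, horizons of 100 and 1,000,000 years), not anything from "WWOTF":

```python
def present_value(welfare_per_year: float, years: int, r: float) -> float:
    """Total welfare over `years`, discounted exponentially at rate r."""
    return sum(welfare_per_year / (1 + r) ** t for t in range(years))

# With no discounting (r = 0), the deep future swamps the near term:
near_term = present_value(welfare_per_year=1.0, years=100, r=0.0)
deep_future = present_value(welfare_per_year=1.0, years=1_000_000, r=0.0)
print(f"r = 0: next 100 years = {near_term:,.0f}, next 1M years = {deep_future:,.0f}")
# -> the deep future is 10,000x larger, so it dominates any prioritization.

# Even a small positive rate bounds the value of the deep future:
for r in (0.001, 0.01, 0.05):
    print(f"r = {r:.1%}: next 1M years = {present_value(1.0, 1_000_000, r):,.0f}")
```

Discounting one unit per year forever is worth (1 + r)/r units today, so at r = 1% the whole deep future is worth only about 101 present-years of welfare and near-term causes stay competitive, while at r = 0 the sheer size of the future de-prioritizes them entirely.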

3. What are the possible criticisms the book could receive, both from within and from outside the EA community?

4. To what extent will the book discuss value shift/drift? It seems an interesting topic, which also appears not to be discussed very extensively in other EA sources.

5. What comes next after "WWOTF"? If another book, what will it be about?

6. What is Will's stance on the war in Ukraine? How does it contribute to x-risks and s-risks, and how can it influence the future (incl. the deep future)? It appears to be one of the first major conflicts involving, to an extent unseen earlier, technologies such as social media (for shaping public opinion and organizing), cyberwarfare, AI (e.g. for analyzing open-source intelligence, face recognition), and renewable energy sources (touted as an alternative to dependence on Russian fossil fuels).