I'm interviewing Oxford philosopher, global priorities researcher and early thinker in EA, Andreas Mogensen. What should I ask him?

By Robert_Wiblin @ 2022-06-10T14:24 (+9)

Next week for the 80,000 Hours Podcast I'll be interviewing Andreas Mogensen — Oxford philosopher, All Souls College Fellow and Assistant Director at the Global Priorities Institute.

He's the author of, among other papers:

Somewhat unusually among philosophers working on effective altruist ideas, Andreas leans towards deontological approaches to ethics.

What should I ask him?


MichaelStJules @ 2022-06-10T18:47 (+4)

What kinds of procreation asymmetries would follow from plausible deontological views, if any? What would be their implications for cause prioritization?

Abby Hoskin @ 2022-06-10T18:13 (+3)
  1. Which directions in global priorities research seem most promising?
  2. Has Andreas ever tried communicating deep philosophical research to politicians/CEOs/powerful non-academics? If so, how did they react to ideas like deontic long-termism? Does he think any of them made a big behavior change after hearing about these kinds of ideas?

MichaelStJules @ 2022-06-10T16:39 (+2)

Does he think the maximality rule from Maximal Cluelessness is hopelessly permissive, e.g. that between almost any two options it will practically never tell us which is better?

I have a few ideas on ways to get more out of it that I'd be interested in his thoughts on, although they may be too technical for an interview:

  1. Portfolio approaches, e.g. hedging and diversification.
  2. More structure on your set of probability distributions, or just a smaller range of distributions to entertain. For example, you might not be willing to put precise probabilities on mutually exclusive possibilities A and B, but you might be willing to say that A is more likely than B, which cuts down the kinds of probability distributions you need to entertain. Or you might have a sense that some probability distributions are more "likely" than others while still being unable to put precise probabilities on the distributions themselves, which also gives you more structure.
  3. Discounting sufficiently low probabilities or using bounded social welfare functions (possibly with respect to the difference you make, or aggregating from 0), so that extremely unlikely but extreme possibilities (or tiny differences in probabilities for extreme outcomes) don't tip the balance.
  4. Targeting outcomes with the largest expected differences in value, with relatively small indirect effects from actions targeting them, e.g. among existential risks.
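
For readers unfamiliar with the maximality rule under imprecise probabilities, here is a minimal illustrative sketch. All names, payoffs, and distributions are hypothetical; the rule itself is just: an option is permissible unless some rival option has strictly higher expected value under every probability distribution in your credal set.

```python
# Illustrative sketch of the maximality rule under imprecise probabilities.
# All numbers and names are hypothetical, chosen only for demonstration.

def expected_value(option, dist):
    """Expected value of an option's state-indexed payoffs under one distribution."""
    return sum(p * v for p, v in zip(dist, option))

def permissible(options, credal_set):
    """Keep every option not strictly dominated in expectation by some rival
    under *every* distribution in the credal set (the maximality rule)."""
    kept = []
    for a in options:
        dominated = any(
            all(expected_value(b, d) > expected_value(a, d) for d in credal_set)
            for b in options if b is not a
        )
        if not dominated:
            kept.append(a)
    return kept

# Two states of the world; two candidate distributions we cannot decide between.
credal_set = [(0.9, 0.1), (0.4, 0.6)]
# Payoffs per state for three options.
options = [(10, 0), (0, 10), (1, 1)]
print(permissible(options, credal_set))  # → [(10, 0), (0, 10)]
```

Here only (1, 1) is ruled out; the first two options disagree in ranking across the two distributions, so both stay permissible. That is the permissiveness worry above: with a rich enough credal set, almost nothing gets ruled out.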

James Aitchison @ 2022-06-11T20:43 (+1)

What books or papers have been most important for Andreas? What books does he recommend EAs read?