Mini summaries of GPI papers

By JackM @ 2022-11-02T22:33 (+97)

I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.

The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.

With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time-poor can read just “The bottom lines”. I would welcome feedback on whether these samples are useful and whether I should continue making them - working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully convey the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.

On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)

The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.

My brief summary:

The unexpected value of the future (Hayden Wilkinson)

The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. Wilkinson develops a theory for dealing with undefined expected values, and this theory leads to an even stronger longtermist conclusion than the one we started with.

My brief summary:

Longtermism, aggregation, and catastrophic risk (Emma J. Curran)

The bottom line: If one is sceptical about aggregative views, on which sufficiently many small harms can outweigh a smaller number of large harms, one should also be sceptical about longtermism.

My brief summary:

The case for strong longtermism (Hilary Greaves and William MacAskill)

The bottom line: Humanity’s future could be vast, and we can influence its course. That suggests the truth of strong longtermism: impact on the far future is the most important feature of our actions today.

My brief summary:

The Epistemic Challenge to Longtermism (Christian Tarsney)

The bottom line: If we are happy with expected value theory and don’t mind being driven by very small probabilities, longtermism holds up well. However, if we don’t like being fanatical, the epistemic challenge against longtermism seems fairly serious.

My brief summary:


EJT @ 2022-11-03T13:04 (+15)

Nice post! Consider this a vote for more summaries.

Jack Malde @ 2022-11-04T09:15 (+8)

Thanks Elliott! I wasn’t sure how you’d react to these summaries. I’m very happy to continue to make them. It’s also for my benefit so I can easily remind myself what a paper said.

I think I’ll get back in touch with you or Rossa in the near future to offer if I can do anything else with regards to helping GPI research get heard.

rossaokod @ 2022-11-10T12:07 (+3)

+1 as a vote for more summaries, and thanks a lot for doing these! I'll check in with Sven (who's been organising our paper summaries) and we'll get in touch soon.

Jack Malde @ 2022-11-10T19:56 (+2)

Thanks Rossa, very happy to keep doing these if you think they’re useful!

I’m conscious of maximising impact and not inadvertently doing harm, so would be happy to speak to anyone at GPI about how to use my time as effectively as possible, even if that means not doing much!

EJT @ 2022-11-04T09:35 (+3)

Sounds good!

trait-feign @ 2022-11-03T09:46 (+7)

Thanks for writing this Jack! This is a really helpful collection of summarized papers, and I wish there was more work like it.

Jack Malde @ 2022-11-04T09:10 (+3)

Thanks! I am likely to continue to make these summaries and would be happy to share them.

William D'Alessandro @ 2022-11-09T18:52 (+4)

Yeah, this is cool! I recently taught a longtermism MA course, am currently doing an online fellowship version of the course, and have been reading a good amount of GPI's philosophy stuff, so I might be interested in helping out if you'd find that useful.

Jack Malde @ 2022-11-09T22:35 (+3)

Hey William. I would welcome some help and you seem highly qualified! I'll message you and perhaps we can work together on this. Thanks for getting in touch!

Vasco Grilo @ 2022-11-11T18:56 (+3)

I find these summaries quite valuable. Thanks for doing them, and hopefully there will be more!

Jack Malde @ 2022-11-11T19:20 (+3)

Glad to hear it. I do plan on doing more!

Ramiro @ 2023-02-28T11:41 (+2)

Thanks for this. I really think we should have more paper summaries like this, on a regular basis.

There’s a point that caught my attention:

Longtermism, aggregation, and catastrophic risk (Emma J. Curran)

[…]

This argument relies on an aggregative view, on which we should be driven by sufficiently many small harms outweighing a smaller number of large harms. However, there are some cases where we might say such decision-making is impermissible, e.g. letting a man get run over by a train rather than pulling a lever that would save him but also make lots of people late for work. One argument for why it’s better to save the man from death is the separateness of persons: there is no actual person who experiences the sum of the individual harms of being late, so there can be no aggregate complaint.

I really liked this paper and its whole argument. On the other hand, and here I’m probably even going against the usual deontologist literature, I’m not sure that the problem with these counter-intuitive examples of aggregating small harms / pleasures is aggregation per se, but that in such cases hedonist aggregation tends to conflict with other types of aggregation – such as through a preference-based ordinal social welfare function (for instance, if every individual prefers a slight delay to having someone killed, then nobody should be killed) – or that they might violate something like a Golden Rule (if I wouldn’t want to die to avoid millions of minor delays, then I must not want to let someone die to avoid small delays). I suspect that just saying, like Rawls and Scanlon etc., that aggregation violates the “separateness of persons” turns an interesting discussion into a “fight between strawmen”.[1]

  1. ^

    EAs sometimes ridicule people for siding with deontologists in such dilemmas. Rob Wiblin once said to A. Mogensen (during an 80kh podcast interview) that:
    “[...] at least for myself, as I mentioned, I actually don’t share this intuition at all, that there’s no number of people who could watch the World Cup where it would be justified to allow someone to die by electrocution. And in fact, I think that intuition that there’s no number is actually crazy and ridiculous and completely inconsistent with other actions that we take all the time.”
    If you agree with Rob’s statement, ask yourself questions like:
    a)    Would you die to allow millions to watch the World Cup?
    b)    Would you want someone to die to allow you to watch the World Cup - if that’s the only way?
    c)    Would you support a norm (or vote for a law) stating that it is OK to let people die so we can watch the World Cup?
    d) If we were to vote to let Bernard die for us to watch the World Cup, would you vote yes?
    e)    Do you think others would (usually) answer “yes” to these previous questions?
    Nothing here contradicts the fact that we do let people die (though in situations where they voluntarily choose to take some risk in exchange for fair prior compensation) for us to watch the World Cup; nor even that the world is a “better place” (in the sense that, e.g., there’s more welfare) if people die for our watching the World Cup. It might be the optimal policy, indeed.
    But I think that, if you answered “no” to some of the questions above, you are not entitled to say that this intuition is “crazy and ridiculous”. After all, if you prefer to save a life to watching the World Cup, and if you think others would reason similarly, why do you think that it is “crazy” to state that we should interrupt the show to save one person?
    It’s true that I might be conflating individual preferences and moral preferences / judgment here, but I am not sure how easy it is to separate them; I’d probably lose any pleasure in watching a match if I knew someone unwillingly died for it – and I would certainly not say “Well, too bad; but by the Sure Thing Principle, it should not affect my preferences – may they have not died in vain”. Just like in the literature about the connection between perception and judgment, particularly when it comes to providing context, I think our individual preferences and mental states are deeply connected to more abstract judgments regarding norms.
    Sorry for this long footnote; since it’s not exactly related to the core of the post, I felt it’d be inappropriate to insert it in the main comment.

michel @ 2022-11-13T05:13 (+2)

+1 to a desire to read GPI papers but never having actually read any because I perceive them to be big and academic at first glance. 

I have engaged with them in podcasts that felt more accessible, so maybe there's something there.

Jack Malde @ 2022-11-13T07:44 (+2)

Thanks. Did you find these summaries to be more accessible?

Joe Pusey @ 2022-11-03T20:14 (+2)

Is there any scope for people to do this on an ad-hoc/crowdsourced basis? I used to do a similar thing for medical AI papers (https://explainthispaper.com), where volunteers would summarise them and then the coordinators would vet, publish and distribute the summaries - is there a similar process that happens here?

trait-feign @ 2022-11-07T15:25 (+2)

I like this idea. One example of it within the EA sphere was the AI Safety Distillation Contest.

I would be interested in a Minimum Viable Product version of what you describe above. Perhaps a group of individuals could each attempt to make a mini summary of a paper/post of interest, holding each other accountable. If it gets sufficient traction, a more robust system like the one you describe could be put in place. Would you be interested?

For motivation: Lizka gives a good breakdown of why things like this might be useful in Distillation and research debt.

Jack Malde @ 2022-11-04T09:41 (+2)

To my knowledge this doesn’t happen, but it’s not a bad idea. There are quite a few research organisations and it would be great to have easily-digestible summaries all saved in one place.