Why I've come to think global priorities research is even more important than I thought

By Benjamin_Todd @ 2020-08-15T13:34 (+68)

We’ve rated global priorities research (GPR) as one of our top priority areas for some time, but over the last couple of years I’ve come to see it as even more promising.

The field of GPR is about rigorously investigating what the most important global problems are, how we should compare them, and what kinds of interventions best address them. For example, how to compare the relative importance of tackling global health vs. existential risks.

It also considers questions such as how much weight to put on longtermism, or whether we should give now or later.

I’d be keen to see more investment in the field, both in absolute terms and relative to the portfolio of effort within the effective altruism community.

Here are some reasons why. Each reason is weak by itself, but taken together they’ve caused me to shift my views.

Positive recent progress

I think the Global Priorities Institute has made good progress, which makes me optimistic about further work.

One form of work is putting existing ideas about global priorities on a firmer intellectual footing, of which I think Hilary Greaves and Will MacAskill’s strong longtermism paper is a great example. This kind of work is useful because it encourages the ideas to be taken seriously within academia, and also helps to uncover new flaws in them.

Another form of work is aimed at directly changing the priorities of the effective altruism community or other altruists. I think Philip Trammell’s work on optimal timing is a success, and might significantly change how we want to allocate resources. Another example is Will MacAskill’s work on whether we’re at the most influential time in history.

Implications of longtermism

I’ve come to better appreciate how little we know about what longtermism implies. Several years ago, it seemed clearer that focusing on reducing existential risks over the coming decades—especially risks posed by AI—was the key priority.

Now, though, we realise there could be a much wider range of potential longtermist priorities, such as patient longtermism, work focused on reducing risk factors rather than risks, or other trajectory changes. There has been almost no research on the pros and cons of each.

There are also many crucial considerations, which could have huge implications for our priorities. I could see myself significantly changing my views if more research was done.

Patient longtermism

I now put greater credence in patient longtermism than I did in the past (due to arguments by Will MacAskill and Phil Trammell, and to reduced credence in very short AI timelines), which makes GPR look more attractive. (And in general, GPR seems more robustly good across a variety of forms of longtermism, except for the most urgency-focused ones.)

Relative neglectedness

AI safety has caught on more broadly, while GPR hasn't. Because of that, the resources invested in AI safety seem to have increased substantially over the last few years, decreasing its neglectedness, while GPR seems to have seen a smaller increase.

Scale of the community

At a lower bound, we can think of GPR as a multiplier on the effectiveness of the rest of the effective altruism community, and so the larger the community, the more valuable the research.

The community now spends hundreds of millions of dollars each year, and thousands of community members are doing direct work (probably severalfold more than 5 years ago), and this research can have real effects on what they do. The research can also be applied beyond the community, so it hopefully has even more potential than this increase would imply.[1]

Importance of ideas

If anything, I’m even more convinced that the ideas are what matter most about EA, and that there should at least be a branch of EA that’s focused on being an intellectual project. The field of GPR is perhaps our best chance of being this project, and either way, it helps to put EA on a firmer intellectual footing.


Is there anything that has made me less keen on GPR in the last few years? There are a couple of factors, but I think they’re small.

One issue is that it’s still proving hard to attract academic economists into the field, though there has been some progress.

Some have pointed out that there haven't been paradigm-shifting new arguments in the last couple of years, which perhaps suggests progress is harder than expected, though I think progress has been reasonable or good compared to my expectations.

Another issue is that it's difficult for most people to contribute directly to the field, and this bottleneck limits how many more people can work on it than do today. Still, there are ways for more people to get involved, and anyone can contribute through donating.

Here’s some more detail on how people can contribute:

How might you contribute?

Crossposted from the 80,000 Hours blog.

Footnotes


  1. Strictly speaking, even if we just focus on the community, the immediate scale of the community is not what’s most relevant. We care more about something like the integral of the scale of the community over its entire future, and research discoveries made today only speed up future discoveries. It’s less obvious that GPR is higher impact based on this analysis, though the current scale of the community is a relevant factor that’s easier to measure. ↩︎


jared_m @ 2020-08-15T16:54 (+23)

Thank you for sharing more about GPI's priorities and non-Open Phil fundraising goals for this year. Our family will plan to contribute in November or December, after focusing on some other non-profit investments in the next few months.

To borrow a page from political fundraising in the U.S., it could make sense to create formal or informal recognition strategies (along the lines of 80,000 Hours' "Our donors" page) or social opportunities for donors to GPI - whether on the GPI site or on a Medium page a supporter might informally maintain if that's easier. Perhaps a fundraiser "Zoom" for $500 or $1,000 a head, where guests could have the chance to meet each other and ask questions of one or more game members of the GPI team? I'd be happy to help organize one of those if helpful.

Also: one suggested edit for the GPI team, in the tiny chance it has an infinitesimal impact on someone's decision re: how much or whether to give to GPI. On the following page, " We are very greatful for any support!" should read "grateful." https://globalprioritiesinstitute.org/supporting-gpi/

Benjamin_Todd @ 2020-08-16T17:16 (+5)

Thanks, it's great you're planning to contribute! I've also let GPI know about your feedback.

rossaokod @ 2020-08-28T09:47 (+21)

I just wanted to explicitly add to this post that valuable GPR can, does, and should happen outside of an academic setting. I think this is implied in this post (e.g. the mention of OpenPhil and the link to the GPR roles on the 80k website), but is not quite explicit, so I just wanted to flag it. Researchers outside of academia face a different set of incentives to academics, and can sometimes have more freedom to work on questions that are more practically relevant but less 'publishable' in academic journals. The point is made quite nicely on the 80k GPR page here: https://80000hours.org/problem-profiles/global-priorities-research/#what-are-some-top-career-options-within-this-area

"That said, we expect that other centres will be established over the coming years, and you could also pursue this research in other academic positions.

One downside of academia, however, is that you need to work on topics that are publishable, and these are often not those that are most relevant to real decisions. This means it’s also important to have researchers working elsewhere on more practical questions."

Personally, I think/hope the field of GPR will develop in a similar way to 'impact evaluation' in development economics over the last ~20 years -- i.e. significant progress has been made in academic research (including some of the more important methodological or foundational advances), but there has also been a lot of valuable non-academic impact evaluation research (including lots that is more directly relevant for decision-makers).

weeatquince @ 2020-08-16T00:27 (+8)

Thank you for writing this Ben. I strongly agree with this.

I have also been thinking and writing about this for the past few weeks. And so, in a fit of self-promotion and/or pointing readers to similar work, direct anyone interested to my post here. I suggest that the EA movement has not done enough in this space, lay out some areas I would like to see researched, make the case that new organisations (or significant growth of existing organisations) are needed and look at some of the challenges to making that happen.

Benjamin_Todd @ 2022-01-18T18:12 (+4)

This was written pretty recently and I still agree with it!

david_reinstein @ 2021-06-07T17:14 (+2)

I'm doing a series of recordings of EA Forum posts on my "found in the struce" podcast, also delving into the links and with my own comments.

I've just done an episode on the present post HERE

I also did one on @weeatquince's post HERE

Let me know your thoughts, and if it's useful. I think you can also engage directly with the Anchor app by leaving a voice response or something.

Milton @ 2020-08-15T21:36 (+2)

"There are not many of these organisations currently, but I expect that the field will grow over the next 10 years, and more centres will be established in a number of universities around the world"

What's the basis of this claim?

Benjamin_Todd @ 2020-08-16T12:02 (+11)

I think there are donors in the community who will fund this work if we can find people to run these centres (e.g. similar people to those who funded GPI).

I think we can find more people able to run more centres over 10 years. My evidence for this is mainly that we have managed to find people in the past (e.g. the people who work at GPI) and I expect that to continue. I also think GPI is making progress finding people through their seminars and fellowships. Many of these people are junior now, but in 10 years some will be senior enough to found new centres.

Milton @ 2020-08-16T15:25 (+1)

Okay, that's great! Thanks :)