What's the GiveDirectly of longtermism & existential risk?

By Nathan Young @ 2021-11-15T23:55 (+28)

GiveDirectly has huge room for additional funding and spends the money effectively. Its processes are simple and most people can see it's helping.

What is the longtermist equivalent of this?

Which organisation meets these criteria for improving the long-term future, such that it can sit as the organisation of last resort to donate to?

This question was brought to you by me stealing Ben Todd's tweets and turning them into questions. https://twitter.com/ben_j_todd/status/1459196519924604928

Please write one suggestion per answer.


Linch @ 2021-11-16T14:55 (+18)

This is probably dumb, but I wonder if there are scarce-material bottlenecks for specialized deep learning hardware/GPUs/computers (analogous to U-235 for nukes or platinum for catalytic converters) such that you could buy up mines etc. (and then shut them down) to slow AI progress slightly.

I highly doubt this is the most leveraged thing you could do, but it seems like it could scale to arbitrarily large amounts of money, certainly much more than EA currently has access to.

It also makes more sense in a worldview where AI is "the only game in town", and less so in one where you think e.g. climate, bio, nukes, or broad longtermism is >0.1x as important as AI.

Nathan Young @ 2021-11-16T00:13 (+18)

Patient Philanthropy Fund 

Edited* 

Like GiveDirectly, it gives agency to the people affected to make their own decisions. It has small overheads and could absorb far larger sums than most other options.

Unlike GiveDirectly, which solves the problem directly, the PPF kicks the can down the road.

*This was originally a placeholder: I was going to bed, so I said that someone else should write this up as a proper answer. But everyone upvoted my placeholder anyway. [Thanos voice] "Fine, I'll do it myself"

James Smith @ 2021-11-16T01:05 (+9)

I'm not sure this meets the 'spends the money effectively' criterion - it might, but we don't really know that yet. 

Stefan_Schubert @ 2021-11-16T00:17 (+2)

I guess this one feels most obviously analogous, since you can in principle just keep throwing money into a patient philanthropy fund (not saying you should).

Nathan Young @ 2021-11-16T00:25 (+2)

But can we think of a better suggestion?
 

Frank_R @ 2021-11-16T08:18 (+16)

Genomic mass screening of wastewater for unknown pathogens, as described here:

A Global Nucleic Acid Observatory for Biodefense and Planetary Health (arXiv:2108.02678): https://arxiv.org/abs/2108.02678

A few test sites could already help detect a new (natural or man-made) pandemic at an early stage. Nevertheless, there is room for a few billion dollars if you want to build a global screening network.

Unfortunately, I do not know whether any organisation working on this currently needs funding.

HaukeHillebrandt @ 2021-11-16T13:07 (+6)

Also see Carl Shulman's 'Envisioning a world immune to global catastrophic biological risks'

AppliedDivinityStudies @ 2021-11-16T09:01 (+8)

Here's Will MacAskill at EAG 2020:

I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option.

Command-f for the full context on this.

HaukeHillebrandt @ 2021-11-16T13:05 (+7)

"[Climate change interventions are] just so robustly good, especially when it comes to what Founders Pledge typically champions funding the most: clean tech. Renewables, super hot rock geothermal, and other sorts of clean energy technologies are really good in a lot of worlds, over the very long term — and we have very good evidence to think that. A lot of the other stuff we're doing is much more speculative. So I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option."

But this might be a bit outdated now (see Good news on climate change).

Larks @ 2021-11-16T16:44 (+3)

The climate change scenarios that EAs are most worried about are tail risks of extreme warming, whereas GiveDirectly's effects seem at least slightly positive in most worlds. And while the best climate change interventions might be robustly not-bad, that's not true for the entire space. Given the relatively modest damage in the median forecasts (e.g. a ~10% counterfactual hit to GDP, greatly outweighed by economic growth), many proposals, like banning all air travel or anti-natalism, would do far more harm than good. Will suggests that climate change policies are robustly good for the very long-term growth rate (not just the level), but I don't understand why: virtually all very long-term growth will not take place on this planet.
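To make the "greatly outweighed by economic growth" point concrete, here is a rough back-of-envelope sketch; the 2% growth rate and the framing are illustrative assumptions of mine, not figures from any particular forecast:

```python
import math

# Rough illustration (assumed numbers): if baseline real GDP grows ~2%/year,
# a permanent ~10% counterfactual GDP loss is equivalent to losing only about
# five years of growth on the median trajectory.
growth_rate = 0.02       # assumed annual real GDP growth
damage_fraction = 0.10   # assumed median-forecast climate damage to GDP

years_equivalent = math.log(1 / (1 - damage_fraction)) / math.log(1 + growth_rate)
print(f"A {damage_fraction:.0%} GDP loss is about {years_equivalent:.1f} years of growth at {growth_rate:.0%}/year")
# -> roughly 5.3 years under these assumptions
```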

Nathan Young @ 2021-11-16T00:08 (+8)

Vaccines

 (Originally suggested by Ben Todd)

@CEPIvaccines: a $3.5bn program to develop vaccines to stop the next pandemic.

Cons

Jackson Wagner @ 2021-11-16T02:56 (+6)

Yeah, some biosecurity work seems relatively shovel-ready. The whole "Sentinel" program of pandemic defense, as set out in the $65 billion Biden proposal, could in a worst-case scenario be implemented at least partially with philanthropic dollars if the US government fails to see the light. (The Biden proposal has sadly now disappeared from the bills under debate... it seems the hope is that it might resurface as bipartisan legislation later.) Or, for much higher cost-effectiveness, we could start an organization to advocate for versions of the Sentinel proposal in other countries, much as some global health charities go country to country advocating for salt iodization.

Jackson Wagner @ 2021-11-16T03:15 (+5)

Some relatively measurable, concrete megaprojects that might help the long-term future (super-high cost effectiveness not necessarily guaranteed):

To the extent that long-run economic growth is a longtermist goal and isn't totally overshadowed by X-risk, there are lots of ways we might encourage a faster pace of economic development:

It would be great if we had some way of putting money towards reducing "existential risk factors": if improving US-China relations were as straightforward as buying carbon offset credits, I think that would attract a hell of a lot of funding.

Nathan Young @ 2021-11-16T09:06 (+2)

Could you break this up into separate comments, one for each idea, please?

Thomas Kwa @ 2021-11-16T20:12 (+4)

Paying AI researchers to do slightly useful alignment research (or even nothing) rather than advancing capabilities.

This easily scales up to the low tens of billions per year, at which point it turns into acquiring startups. In the low hundreds of billions per year, one could even acquire significant stakes in Nvidia/Google/etc., but this seems horrendously inefficient.
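For a sense of how the "low tens of billions per year" figure could arise, here is a crude sketch; both the headcount and the compensation are hypothetical numbers of mine, not estimates from anywhere:

```python
# Crude scaling sketch (all figures are illustrative assumptions): paying a
# large share of capabilities researchers to switch to alignment work (or to
# do nothing) at well above market compensation.
researchers_paid = 20_000         # hypothetical number of researchers redirected
annual_package_usd = 1_000_000    # hypothetical per-researcher annual package

annual_cost_usd = researchers_paid * annual_package_usd
print(f"${annual_cost_usd / 1e9:.0f}B per year")  # -> $20B/year, i.e. "low tens of billions"
```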

Larks @ 2021-11-16T16:25 (+3)

GiveDirectly is about giving resources to other present people, even though they will not use them in a very targeted manner. The obvious analogy for the future is to save/invest money, which very slightly accelerates economic growth and transfers resources to future people, even if not in a very targeted manner.

It's not a great option; we should be able to do a lot better. But it does seem roughly equivalent to GiveDirectly, which is also not a great option.
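As a rough illustration of the transfer mechanism (the rate of return and horizon below are assumptions of mine, not claims about what returns are achievable):

```python
# Illustrative compounding (assumed numbers): money saved today and invested
# at a modest real return is handed to future people as a much larger sum.
principal_usd = 1_000_000   # assumed amount saved today
real_return = 0.05          # assumed annual real rate of return
years = 100                 # assumed investment horizon

future_value = principal_usd * (1 + real_return) ** years
print(f"${principal_usd:,.0f} today -> ${future_value:,.0f} in {years} years")
# -> roughly $131 million under these assumptions
```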

Nathan Young @ 2021-11-16T00:06 (+2)

Downvote me to deprive Nathan of karma.