GiveWell should use shorter TAI timelines

By OscarD🔸 @ 2022-10-27T06:59 (+52)

Summary

Epistemic Status: timelines are hard, and I don't have novel takes, but I worry that perhaps GiveWell doesn't either, and that they are dissenting from expert timelines unintentionally.

In my accompanying post I argued that GiveWell should use a probability distribution over discount rates; I will set that aside here, though, and just consider whether their point estimate is appropriate.

GiveWell’s current discount rate of 4% is calculated as the sum of three factors. Quoting their explanations from this document:

I do not have a good understanding of how these numbers were derived, and have no reason to think the first two are unfair estimates. I think the third, temporal uncertainty, is a significant underestimate.

TAI is precisely the sort of “major change” meant to be captured by the temporal uncertainty factor. I have no insights to add on the question of TAI timelines, but I think, absent GiveWell providing justification to the contrary, they should default towards using the timelines of people who have thought about this a lot. One such person is Ajeya Cotra, who in August reported a 50% credence in TAI being developed by 2040. I do not claim, and nor does Ajeya, that this is authoritative; however, it seems a reasonable starting point for GiveWell to use, given they have not and (I think rightly) probably will not put significant work into forming independent timelines. Also in August, a broader survey of 738 experts by AI Impacts resulted in a median year for TAI of 2059. This source has the advantage of including many more people, but conversely most of them will not have spent much time thinking carefully about timelines.

I will not give a sophisticated instantiation of what I propose, but rather gesture at what I think a good approach would be, and give a toy example to improve on the status quo. A naive thing to do would be to imagine that there is a fixed annual probability of developing TAI, conditional on not having developed it to date. This method gives an annual probability of 3.8% under Ajeya’s timelines, and 1.9% under the AI Impacts timelines.[1] In reality, more of our probability mass should be placed on the later years between now and 2040 (or 2059), and we should not simply stop at 2040 (or 2059). A proper model would likely need to dispense with a constant discount rate entirely, and instead track the probability that the world has not seen a “major change” by each year. A model that accomplishes something like this was published on the EA Forum in July, and suggests interventions that actualise their benefit sooner should be favoured in worlds with shorter timelines.
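To make the toy calculation concrete, here is a minimal sketch in Python (the function names and the choice of 2022 as the current year are my assumptions, not anything GiveWell or the cited sources provide):

```python
def constant_annual_probability(median_year, current_year=2022):
    """Annual probability x of TAI such that (1 - x)**n = 0.5,
    where n is the number of years until the median arrival year."""
    n = median_year - current_year
    return 1 - 0.5 ** (1 / n)

print(constant_annual_probability(2040))  # Cotra's timelines: ~0.038
print(constant_annual_probability(2059))  # AI Impacts survey: ~0.019

# The richer approach gestured at above: rather than folding TAI into a
# constant discount rate, track the probability that no "major change"
# has occurred by each year, and weight each year's benefits by it.
def survival_curve(annual_probabilities):
    """Probability of reaching each successive year with no TAI, given a
    (possibly non-constant) sequence of annual arrival probabilities."""
    survival, curve = 1.0, []
    for p in annual_probabilities:
        survival *= 1 - p
        curve.append(survival)
    return curve

probs = [constant_annual_probability(2040)] * 18
print(survival_curve(probs)[-1])  # ~0.5 by construction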

Clearly, TAI is not the only such event that could upend the world, so the contribution of temporal uncertainty to the discount rate should be greater than that from AI alone. Thus, the temporal uncertainty parameter, and therefore the overall discount rate, seems to be significantly too low. If we instead use a discount rate of 7%, this reduces the cost-effectiveness of the deworming charities by 45-46%, while a discount rate of 5% would still lead to a reduction in cost-effectiveness of 19%, both compared to the current value of 4%.[2]
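As a rough sanity check on why deworming is so sensitive to the discount rate, here is a sketch that assumes its modelled income benefits accrue as a flat stream from roughly 10 to 50 years after treatment; this window is my simplification for illustration, not GiveWell’s actual model:

```python
def present_value(rate, start=10, end=49):
    """Present value of a unit annual benefit stream received from
    year `start` to year `end`, discounted at `rate`."""
    return sum((1 + rate) ** -t for t in range(start, end + 1))

baseline = present_value(0.04)  # GiveWell's current 4% discount rate
for rate in (0.05, 0.07):
    reduction = 1 - present_value(rate) / baseline
    print(f"{rate:.0%} discount rate: ~{reduction:.0%} reduction")
# Prints roughly 20% and 48%: the same ballpark as the 19% and 45-46%
# figures above, because most of the benefit arrives decades out.
```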

All this relies on judgement calls in the challenging domain of forecasting AI timelines, where reasonable people can and do disagree dramatically, so if GiveWell decides they have long timelines, that is fine. However, in that case, this should be communicated through a better justification for the temporal uncertainty parameter.

Notes


  1. If there is an annual probability, x, of TAI for the next 18 years, and a 50% chance of TAI within 18 years, then (1-x)^18 = 0.5, so x = 1 - 2^(-1/18) ≈ 0.038. Similarly, 37 years gives x ≈ 0.0186. ↩︎

  2. The naive AI temporal uncertainty parameter of 3.8% (Ajeya’s timelines) is 2.4 percentage points higher than GiveWell’s value of 1.4%, but I assume much of that original 1.4% was for non-AI reasons (else presumably they would have called it ‘AI risk’ or at least talked about AI specifically in the description). So I will round up from 2.4 to 3 percentage points to account for whatever other factors GiveWell was thinking of (biorisk? A world war? Radically new medical technology? Unknown unknowns?). Likewise, I use 5% as indicative of the AI Impacts timelines. Data are in the accompanying spreadsheet. ↩︎
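The rounding in this footnote can be made explicit with a few lines; the decomposition below is my reading of the reasoning above, not a figure GiveWell provides:

```python
import math

givewell_total = 4.0     # GiveWell's current overall discount rate (%)
givewell_temporal = 1.4  # current temporal uncertainty component (%)

# Raise the temporal uncertainty component to the naive AI annual
# probability, rounding the increase up to leave room for non-AI
# "major changes".
for source, naive_ai in {"Cotra": 3.8, "AI Impacts": 1.9}.items():
    extra = math.ceil(naive_ai - givewell_temporal)  # 2.4 -> 3, 0.5 -> 1
    print(source, givewell_total + extra)            # 7.0 and 5.0
```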


MichaelStJules @ 2022-10-27T08:13 (+9)

If TAI arrives and doesn't cause extinction, it could still be years before the poorest countries are significantly impacted. So discounting by the full probability of TAI arrival could be too aggressive (at least for AI's contribution to the discount rate).

MichaelStJules @ 2022-10-27T15:12 (+6)

Also, a life saved might become more valuable, and life-saving charities might do more good than otherwise, if the beneficiaries' quality of life or life expectancy improves due to the arrival of TAI! I'd guess you'd only want to discount by the probability of extinction or global catastrophe for life-saving interventions. I suppose there's also a chance that, between your donation and its use saving a life, the beneficiary would have been saved anyway through the benefits of TAI, but I think GiveWell has been recommending donations for benefits realised within a couple of years of receipt, so this seems unlikely.

The extreme person-affecting tails involve far longer lives from life extension tech and mind uploading.

Income/wealth gains would probably become less valuable if GiveWell charity beneficiaries benefit from TAI.

Oscar Delaney @ 2022-10-27T15:55 (+1)

All good points. Yes, in slower take-off scenarios there would be a larger lag; I suppose I was implicitly thinking of cases where the world quickly moves to collapse or to >=20% annual economic growth, but true, this does weaken my conclusion. Ah, interesting thought about saving lives being especially valuable given the possibility of life-extension tech. Perhaps our best-guess 'life expectancy' for someone alive today should then be >100 years, and maybe far more, if there is even a small chance of entering post-death worlds.

RobertM @ 2022-10-28T04:20 (+1)

I think it requires either a disagreement in definitions, or very pessimistic views about how tractable certain scientific problems will prove to be, to think that the "transformative" bit will take long enough to impact the discount rate by more than a few percent (total).  But yes, it will be non-zero.

Sylvester Kollin @ 2022-10-27T09:56 (+8)

Related: Neartermists should consider AGI timelines in their spending decisions, by Tristan Cook.

GiveWell @ 2022-10-30T10:16 (+4)

Thanks for your entry!