Do short AI timelines demand short Giving timelines?
By ScienceMon🔸 @ 2025-02-01T22:44 (+12)
I'm a biologist in my 30s working on cures for chronic lung diseases. I've followed AI developments closely over the past 3 years. Holy smokes it's moving fast. But I have neither the technical skills nor the policy conviction to do anything about AI safety.
And I have signed the Giving What We Can pledge 🔸.
If superintelligence is coming soon and goes horribly, then I won't be around to help anyone in 2040. If superintelligence is coming soon and goes wonderfully, then no one will need my help that badly in 2040.
Those two extreme scenarios both push me to aggressively donate to global health in the near term. While I still can.
Does anyone else feel this way? Does anyone in a similar scenario to me see things differently?
CalebMaresca @ 2025-02-02T14:01 (+13)
I don't think that the possible outcomes of AGI/superintelligence are necessarily so binary. For example, I am concerned that AI could displace almost all human labor, making traditional capital more important as human capital becomes almost worthless. This could exacerbate wealth inequality and significantly decrease economic mobility, making post-AGI wealth mostly a function of how much wealth you had pre-AGI.
In this scenario, saving more now would enable you to have more capital while returns to capital are increasing. At the same time, there could be billions of people out of work without significant savings and in need of assistance.
I also think even if AGI goes well for humans, that doesn't necessarily translate into going well for animals. Animal welfare could still be a significant cause area in a post-AGI future and by saving more now, you would have more to donate then (potentially a lot more if returns to capital are high).
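To make the compounding point concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it -- the dollar amount, the horizon, and especially the assumed rates of return -- is an arbitrary illustration, not a forecast:

```python
# Rough illustration of "save now, donate later" under different assumed
# returns to capital. All figures below are hypothetical, not forecasts.

def future_value(amount: float, annual_return: float, years: int) -> float:
    """Value of `amount` compounded at `annual_return` for `years` years."""
    return amount * (1 + annual_return) ** years

donation = 10_000   # dollars available to give today (hypothetical)
horizon = 15        # assumed years until a "post-AGI" world (hypothetical)

scenarios = {
    "ordinary returns (5%/yr)": 0.05,
    "elevated returns (20%/yr)": 0.20,
    "explosive returns (50%/yr)": 0.50,
}

for label, rate in scenarios.items():
    fv = future_value(donation, rate, horizon)
    print(f"{label}: ${donation:,} invested now -> ${fv:,.0f} to donate in {horizon} years")
```

The lever that matters is the assumed rate of return: under ordinary returns the gain from waiting is modest, but if returns to capital really do spike during an AI transition, the case for saving and donating later gets much stronger.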
Tristan Cook @ 2025-02-03T13:32 (+4)
I thought about this a few years ago and have a post here.
I agree with Caleb's comment on the necessity to consider what a post-superintelligence world would look like, and whether capital could be usefully deployed. This post might be of interest.
My own guess is that it's most likely that capital won't be useful and that more aggressive donating makes sense.
ScienceMon🔸 @ 2025-02-05T03:13 (+1)
Thanks for those links, Tristan! It felt a bit like @Jackson Wagner's comment was scolding me directly:
The idea that a neartermist funder becomes convinced that world-transformative AGI is right around the corner, and then takes action by dumping all their money into fast-acting welfare enhancements, instead of trying to prepare for or influence the immense changes that will shortly occur, almost seems like parody.
Why do you believe that capital won't be useful?
Jackson Wagner @ 2025-02-12T03:58 (+3)
Hello!
I'm glad you found my comment useful! I'm sorry if it came across as scolding; I interpreted Tristan's original post to be aimed at advising giant mega-donors like Open Philanthropy, moreso than individual donors. In my book, anybody donating to effective global health charities is doing a very admirable thing -- especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.
As for my own two cents on how to navigate this situation (especially now that artificial intelligence feels much more real and pressing to me than it did a few years ago), here are a bunch of scattered thoughts (FYI these bullets have kind of a vibe of "sorry, I didn't have enough time to write you a short letter, so I wrote you a long one"):
- My scold-y comment on Tristan's post might suggest a pretty sharp dichotomy, where your choice is to either donate to proven global health interventions, or else to fully convert to longtermism and donate everything to some weird AI safety org doing hard-to-evaluate-from-the-outside technical work.
- That's a frustrating choice for a lot of reasons -- it implies totally pivoting your giving to a new field, where it might no longer feel like you have a special advantage in picking the best opportunities within the space. It also means going all-in on a very specific and uncertain theory of impact (cue the whole neartermist-vs-longtermist debate about the importance of RCTs, feedback loops, and tangible impact, versus ideas like "moral uncertainty").
- You could try to split your giving 50/50, which seems a little better (in a kind of hedging-your-bets way), but still pretty frustrating for various reasons...
- I might rather seek to construct a kind of "spectrum" of giving opportunities, where Givewell-style global health interventions and longtermist AI existential-risk mitigation define the two ends of the spectrum. This might be a dumb idea -- what kinds of things could possibly be in the middle of such a bizarre spectrum? And even if we did find some things to put in the middle, what are the chances that any of them would pass muster as a highly-effective, EA-style opportunity? But I think possibly there could actually be some worthwhile ideas here. I will come back to this thought in a moment.
- Meanwhile, I agree with Tristan's comment here that it seems like eventually money will probably cease to be useful -- maybe we go extinct, maybe we build some kind of coherent-extrapolated-volition utopia, maybe some other similarly-weird scenario happens.
- (In a big-picture philosophical sense, this seems true even without AGI? Since humanity would likely eventually get around to building a utopia and/or going extinct via other means. But AGI means that the transition might happen within our own lifetimes.)
- However, unless we very soon get a nightmare-scenario "fast takeoff" where AI recursively self-improves and seizes control of the future over the course of hours-to-weeks, it seems like there will probably be a transition period, where approximately human-level AI is rapidly transforming the economy and society, but where ordinary people like us can still substantially influence the future. There are a couple of ways we could hope to influence the long-term future:
- We could simply try to avoid going extinct at the hands of misaligned ASI (most technical AI safety work is focused on this)
- If you are a MIRI-style doomer who believes that there is a 99%+ chance that AI development leads to egregious misalignment and therefore human extinction, then indeed it kinda seems like your charitable options are "donate to technical alignment research", "donate to attempts to implement a global moratorium on AI development", and "accept death and donate to near-term global welfare charities (which now look pretty good, since the purported benefits of longtermism are an illusion if there is effectively a 100% chance that civilization ends in just a few years/decades)". But if you are more optimistic than MIRI, then IMO there are some other promising cause areas that open up...
- There are other AI catastrophic risks aside from misalignment -- gradual disempowerment is a good example, as are various categories of "misuse" (including things like "countries get into a nuclear war as they fight over who gets to deploy ASI")
- Interventions focused on minimizing the risk of these kinds of catastrophes will look different -- finding ways to ease international tensions and cooperate around AI to avoid war? Advocating for Georgism and UBI and designing new democratic mechanisms to avoid gradual disempowerment? Some of these things might also have tangible present-day benefits even aside from AI (like reducing the risks of ordinary wars, or reducing inequality, or making democracy work better), which might help them sit midway on the spectrum I mentioned earlier, from tangible GiveWell-style interventions to speculative and hard-to-evaluate direct AI safety work.
- Even among scenarios that don't involve catastrophes or human extinction, I feel like there is a HUGE variance between the best possible worlds and the median outcome. So there is still tons of value in pushing for a marginally better future -- CalebMaresca's answer mentions the idea that it's not clear whether animals would be invited along for the ride in any future utopia. This indeed seems like an important thing to fight for. I think there are lots of things like this -- there are just so many different possible futures.
- (For example, if we get aligned ASI, this doesn't answer the question of whether ordinary people will have any kind of say in crafting the future direction of civilization; maybe people like Sam Altman would ideally like to have all the power for themselves, benevolently orchestrating a nice transhumanist future wherein ordinary people get to enjoy plenty of technological advancements, but have no real influence over the direction of which kind of utopia we create. This seems worse to me than having a wider process of debate & deliberation about what kind of far future we want.)
- CalebMaresca's answer seems to imply that we should be saving all our money now, to spend during a post-AGI era that they assume will look kind of neo-feudal. This strikes me as unwise, since a neo-feudal AGI semi-utopia is a pretty specific and maybe not especially likely vision of the future! Per Tristan's comment that money will eventually cease to be useful, it seems like it probably makes the most sense to deploy cash earlier, when the future is still very malleable:
- In the post-ASI far future, we might be dead and/or money might no longer have much meaning and/or the future might already be effectively locked in / out of our control.
- In the AGI transition period, the future will still be very malleable, we will probably have more money than we do now (although so will everyone else), and it'll be clearer what the most important / neglected / tractable things are to focus on. The downside is that by this point, everyone else will have realized that AGI is a big deal, lots of crazy stuff will be happening, and it might be harder to have an impact because things are less neglected.
- Today, lots of AI-related stuff is neglected, but it's also harder to tell what's important / tractable.
For a couple of examples of interventions that could exist midway along a spectrum from GiveWell-style interventions to AI safety research, and which are also focused on influencing the transitional period of AGI, consider Dario Amodei's vision of what an aspirational AGI transition period might look like, and what it would take to bring it about:
- Dario talks about how AI-enhanced biological research could lead to amazing medical breakthroughs. To allow this to happen more quickly, it might make sense to lobby to reform the FDA or the clinical trial system. It also seems like a good idea to lobby for the most impactful breakthroughs to be quickly rolled out, even to people in poor countries who might not be able to afford them on their own. Getting AI-driven medical advances to more people, more quickly, would of course benefit the people for whom the treatments arrive just in time. But it might also have important path-dependent effects on the long-run future, by setting precedents, influencing culture, and so on.
- In the section on "neuroscience and mind", Dario talks about the potential for an "AI coach who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective". Maybe there is some way to support / accelerate the development of such tools?
- Dario is thinking of psychology and mental health here. (Imagine a kind of supercharged, AI-powered version of Happier-Lives-Institute-style wellbeing interventions like StrongMinds?) But there could be similarly wide potential for disseminating AI technology to promote economic growth in the third world (even today's LLMs can probably offer useful medical advice, engineering skills, entrepreneurial business tips, agricultural productivity best practices, etc.).
- Maybe there's no angle for philanthropy in promoting the adoption of "AI coach" tools, since people are properly incentivized to use such tools and the market will presumably race to provide them (just as charitable initiatives like OneLaptopPerChild ended up much less impactful than ordinary capitalism manufacturing bajillions of incredibly cheap smartphones). But who knows; maybe there's a clever angle somewhere.
- He mentions a similar idea, that "AI finance ministers and central bankers" could offer good economic advice, helping entire countries develop more quickly. It's not exactly clear to me why he expects nations to listen to AI finance ministers more than ordinary finance ministers. (Maybe the AIs will be more credibly neutral, or eventually have a better track record of success?) But the general theme of trying to find ways to improve policy and thereby boost economic growth in LMICs (as described by OpenPhil here) is obviously an important goal, both for the tangible benefits and potentially for its path-dependent effects on the long-run future. So trying to find some way of making poor countries more open to taking pro-growth economic advice, or encouraging governments to adopt efficiency-boosting AI tools, or convincing them to be more willing to roll out new AI advancements, all seem like promising directions.
- Finally he talks about the importance of maintaining some form of egalitarian / democratic control over humanity's future, and the idea of potentially figuring out ways to improve democracy and make it work better than it does today. I mentioned these things earlier; both seem like promising cause areas.
Tristan Cook @ 2025-02-05T13:22 (+1)
Regarding Jackson's comment, I agree that 'dumping' money last-minute is a bit silly. Spending at a higher rate (and saving less) doesn't seem so crazy -- which is what it seems you were considering.
Why do you believe that capital won't be useful?
My guess is that the modal outcome from AGI (and eventual ASI) is human disempowerment/extinction. Less confidently, I also suspect that most worlds where things go 'well' look weird and not much like business-as-normal. For example, if we eventually have a sovereign ASI implement some form of coherent extrapolated volition, I'm pretty unsure how (we would want) this to interact with individuals' capital. [Point 2 of this recent shortform feels adjacent -- discussing CEV based on population rather than wealth.]