Crucial questions about optimal timing of work and donations
By MichaelA @ 2020-08-14T08:43 (+45)
This post is part of Convergence Analysis's broader project on crucial questions for longtermists. The overview of this project explains its purpose and scope, and outlines the crucial questions we've identified; we recommend reading that post before this one.
Introduction
Suppose your top altruistic priority - or one of them - is improving the expected value of the long-term future, such as by reducing existential risks.[1] There is a wide range of strategies you could take to achieve this goal, and a wide range of questions worth asking to inform your choice of strategy. One important set of decisions you'll have to make relates to the optimal timing of work and donations.[2]
For example, if you're thinking about doing good through donations:
- Should you donate to effective charities now?
- Or should you invest to donate more later in your life?
- Or should you put your money in a foundation that's meant to disburse it effectively sometime after your death?
And if youâre thinking about doing good through your work:
- Should you try to influence "current" events that affect the future "directly"?
- Or should you try to build your ability to do "direct work" later in your life?
- E.g., gain networks and skills that position you for reducing risks from AGI development decades from now
- Or should you try to "punt to the future"?
- E.g., engage in movement-building or abstract strategic research
This post will overview the crucial questions that we (Convergence) believe do or should influence different longtermists' views and choices regarding the best timing of work and donations. People's beliefs about these questions can thus be seen as cruxes underlying their beliefs about optimal timing. This post will:
- Break these crucial questions down into some sub-questions that feed into them, and occasionally break those sub-questions down further
- Attempt to clarify each of these questions and note the connections between them
- Discuss what the implications of various beliefs regarding those questions would be
- Note some key arguments, evidence, and prior work on the matter
- Typically not provide Convergence's own preferred "answers" to each question
We hope this can serve as something of an orientation to, "research agenda" for, and structured reading list regarding the matter of optimal timing for longtermist work and donations.
Here are the questions we'll cover:
- How will "leverage over the future"[4] change over time?
- What should be our prior regarding how leverage over the future will change? What does the "outside view" say?
- How will our knowledge about what we should do change over time?
- How will the neglectedness of longtermist causes change over time?
- What "windows of opportunity" might there be? When might those windows open and close? How important are they?
- Are we biased towards thinking the leverage over the future is currently unusually high? If so, how biased?
- How often have people been wrong about such things in the past?
- If leverage over the future is higher at a later time, would longtermists notice?
- How effectively can we "punt to the future"?
- What would be the long-term growth rate of financial investments?
- What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?
- What would be the long-term "growth rate" from other punting activities?
- Would the people we'd be punting to act in ways we'd endorse?
- Which "direct" actions might have compounding positive impacts?
- Do marginal returns to "direct work" done within a given time period diminish? If so, how steeply?
Some things to note:
- We discuss what strategies various beliefs about these questions would push in favour of, holding other factors constant. We don't mean to imply that holding those beliefs is necessary or sufficient for justifying those strategies.
- For example, even if Alice thinks that leverage over the future is increasing, such that donating $1000 in 10 years would be better than donating $1000 now, she may opt for donating now due to concerns about value drift.
- Conversely, even if Bob thinks that leverage over the future is decreasing, he may think this is outweighed by how much the size of his donation would increase if he first invests for 10 years.
- One's beliefs about these crucial questions, and what implications those beliefs have, could also be influenced by one's beliefs about the other crucial questions highlighted in our overview of this project.
- For example, beliefs about AI timelines, or whether it's more important to work on biorisk or on improving institutions, will likely influence how one approaches these questions of optimal timing.
- Sometimes discussions of optimal timing for altruistic actions focus on whether to "give now vs. later". However:
- The matter of optimal timing is relevant to work as well as to donations.
- The options for timing are more continuous than just "now or later". One reason is that "later" could mean anywhere from months away to millennia away. Another reason is that, in each period, one could expend none of their resources, all of their resources, or any percentage in between.
- We expect others would identify additional questions, use different phrasings or operationalisations, and draw different (or additional) implications, and we'd be keen to hear feedback on that.
- Below, we'll link to many works relevant to specific parts of this topic. For some existing work that's relevant to this topic as a whole, see:[5]
- Phil Trammell's 80,000 Hours interview, talk, and write-up
- The timing of labour aimed at reducing existential risk
- This post's content happens to overlap in some ways with, but was not substantially influenced by, the posts Estimating the Philanthropic Discount Rate and The case for investing to give later.[6]
How will "leverage over the future" change over time?
What will be the "hinge of history", the "most influential time", or the "precipice"?[7] During which period will direct work (as opposed to punting to the future) have the highest leverage? How long will that period be? Will there be multiple such periods? Is one period now? How much higher than usual (or higher than now) will the leverage during that period be?
In general:
- The more a person thinks that leverage is now unusually high, the more inclined they may be towards doing or supporting direct work (e.g., diplomacy to reduce risks of great power wars, or ALLFED's work to improve resilience to catastrophes that could occur in the near-term).
- E.g., Toby Ord wrote a book whose central theme was that (a) we are probably currently at "the precipice", and (b) that fact strengthens the argument for currently prioritising working on existential risk reduction.
- This particular implication will be stronger the shorter the person thinks the current high-leverage period will be. E.g., if someone thinks that we're now in a high-leverage period, but that leverage will remain high for centuries, their views and choices about timing would likely be determined by other questions discussed in this post (e.g., how strongly and lastingly the impacts of various actions "compound").
- The more a person thinks that leverage isn't now unusually high, the more inclined they may be to try to punt to the future (e.g., through investment, movement-building).
- E.g., MacAskill writes that, "If we think that today is not exceptionally different from times in the past", we have good reason to find promising the actions of "saving in a long-term foundation, or movement-building, with the aim of increasing the amount of resources longtermist altruists have at a future, more hingey time".
- Strategic views and choices should also depend on how long from now one thinks higher leverage periods would be, if they aren't now. As hypothetical examples:
- People who think the highest leverage period will be just decades from now might favour very similar strategies to those favoured by people who think we're now in a multi-decade high-leverage period. E.g., supporting AI alignment work that's based on current systems.
- People who think the highest leverage period will be a century or more from now might favour strategies like setting up a foundation that will donate in effective and value-aligned ways later. These people likely won't just save to give later in their lifetimes, because they wouldn't be likely to live to see the higher leverage period.
- People who think the highest leverage period will probably be millennia from now, but might be now, might favour acting as though the highest leverage period is now, because they think millennia-long chains of impact would be too hard to predict.
- The more a person thinks that there will be no substantial difference between how high leverage is now vs. at any future time, the more likely it is that the person's time-related strategic choices will be determined by other questions discussed in this post.
Some other topics or questions this question is especially related to include:
- How high is total existential risk? How will the risk change over time?
- Timelines for and risks from various emerging technologies (perhaps especially advanced AI, and how discontinuous its development will be)
- Importance of, and best approaches to, existential security and the long reflection
(For sources relevant to those three matters, follow the links from the overview of this project.)
Some relevant existing work includes:
- Are we living at the most influential time in history? and the comments there
- The Precipice, and 80,000 Hours' interview with Toby Ord
- Estimating the Philanthropic Discount Rate
- The case for investing to give later
What follows are some of the more fine-grained "sub-questions" that inform many people's beliefs about how "leverage over the future" will change over time.
What should be our prior regarding how leverage over the future will change? What does the "outside view" say?
MacAskill, commenters on his post, and Ord have provided lengthy and technical discussion of this question, which seems central to MacAskill and Ord's differing views. I won't summarise that discussion here.
How will our knowledge about what we should do change over time?
One reason longtermists and altruists may have more leverage later than they have now is if they later have better knowledge about what to do. For example, MacAskill writes:
Perhaps we're at a really transformative moment now, and we can, in principle, do something about it, but we're so bad at predicting the consequences of our actions, or so clueless about what the right values are, that it would be better for us to save our resources and give them to future longtermists who have greater knowledge and are better able to use their resources, even at that less pivotal moment.
MacAskill also writes:
There are at least three ways in which our knowledge is changing or improving over time, and it's worth distinguishing them:
- Our basic scientific and technological understanding, including our ability to turn resources into things we want.
- Our social science understanding, including our ability to make predictions about the expected long-run effects of our actions.
- Our values.
(MacAskill provides the caveat that "It's more contentious whether we're improving on (3) - for this argument one's meta-ethics becomes crucial.")
Similar points are also discussed by Ord and Cotton-Barratt (both using the term nearsightedness), Christiano, Tomasik, Dickens, Shlegeris, and Shulman.
Hoeijmakers makes an important distinction between endogenous and exogenous learning:
Endogenous learning is the learning that the investor-philanthropist brings about themselves, e.g. by funding research or trying things out. [...]
Exogenous learning includes advances in the scientific community, new philanthropic interventions being invented and/or tried out, moral progress, and more. It also captures the time needed for relevant knowledge to become available, e.g. an experiment might take time, research might need to be done in a certain order, or there might be a talent constraint in a research area that takes time to be resolved.
The possibility for exogenous learning is the focus of this question. The more exogenous learning one expects, the later the optimal timing for work and donations is likely to be. In contrast, the possibility to cause endogenous learning can be a reason to "act soon", and is related to the questions (covered below):
- Which "direct" actions might have compounding positive impacts?
- What "windows of opportunity" might there be? When might those windows open and close? How important are they?
How will the neglectedness of longtermist causes change over time?
One reason longtermists may have less leverage later than they have now is if the sorts of work they'd wish to see done become less neglected over time. Reasons this could happen include:
- Population growth
- Growth of relevant movements/communities (e.g., EA, longtermism)
- "Moral progress" (e.g., increasing concern for future generations)
- Increased understanding and appreciation of relevant ideas (e.g., expected value reasoning, existential risks)
- One potential cause of this (among many others) would be non-existential catastrophes which serve as "warning shots"
- Increased longtermism-aligned spending from large funds that had been accruing compound interest
- This could occur due to them spending the same proportion of a bigger pie each year, or due to them ramping up their proportional spending
Similar points are discussed by MacAskill, Shulman, Trammell, and Cotton-Barratt.
Conversely, the neglectedness of longtermist causes might increase over time, for reasons including the possible collapse or fizzling out of the EA and longtermist movements. This could allow for more leverage later. This is discussed by MacAskill.
As noted by MacAskill, the implications of answers to this question also depend on how steeply marginal returns to (various types of) direct work diminish.
As with changes in knowledge:
- We can make a distinction between endogenous and exogenous changes in neglectedness
- The focus of this question is on the exogenous changes only
- The possibility to cause endogenous changes can actually be a reason to "act soon", and is related to the questions (covered below):
- Which "direct" actions might have compounding positive impacts?
- What "windows of opportunity" might there be? When might those windows open and close? How important are they?
What "windows of opportunity" might there be? When might those windows open and close? How important are they?
There may be some problems which can't be effectively worked on until a certain time, or can't be as effectively worked on before a certain time as after that time. That is, for some problems, there's a window of opportunity that opens at a particular time. In some cases, the window may already be open. For example, it seems a dedicated and resourceful group of people in 2020 stand a far better chance of deliberately influencing how quantum computing will be used than such a group of people in the 16th century would've. In other cases, the window may be yet to open. For example, perhaps it will be easier to influence space governance once humanity is closer to colonising space.
Additionally, there may be some problems which can't be effectively worked on after a certain time, or can't be worked on as effectively after a certain time as before it. That is, the window of opportunity might close; there might be a deadline. For example, it's impossible to prevent an existential catastrophe after it has occurred. For another example, people are continually making decisions about things like what jobs to take, where to donate, how to design systems, and what policies to advocate for or implement. Once each decision is made (or implemented), the window for influencing it closes. Thus, work that would influence many such decisions could be more valuable the sooner it is done.
Relatedly, Ord writes that:
[One major effect which can make earlier labour matter more is] if it helps to change course. If we are moving steadily in the wrong direction, we would do well to change our course, and this has a larger benefit the earlier we do so. For example, perhaps effective altruists are building up large resources in terms of specialist labour directed at combatting a particular existential risk, when they should be focusing on more general purpose labour. Switching to the superior course sooner matters more, so efforts to determine the better course and to switch onto it matter more the earlier they happen.
Shlegeris makes similar points in relation to work on AI safety (see his "Analogy to security"). And similar points seem to often be raised in relation to why present-day work on AI policy may be important. For example, Moës states:
So these [AI] policies are getting written right now, which at first is quite soft and then becomes harder and harder policies, and now to the point that at least in the EU, you have regulations for AI on the agenda, which is one of the hardest form[s] of legislation out there. Once these are written it is very difficult to change them. It's quite sticky. There is a lot of path dependency in legislation. So this first legislation that passes, will probably shape the box in which future legislation can evolve. Its constraints, the trajectory of future policies, and therefore it's really difficult to take future policies in another direction. So for people who are concerned about AGI, it's important to be already present right now.
That said, it's also worth noting that the more decisions one can influence and path-dependencies one can create, the larger the downside risks an action might have. For example, one might lock in suboptimal choices or crowd out other efforts (see Wiblin & Lempel).
Generally speaking, the likelier it is that there's a not-yet-open window of opportunity for working on a particular problem, and the longer it's likely to be until that window opens, the more that pushes in favour of:
- Punting to the future, rather than supporting or doing direct work
- Punting further into the future than one would've otherwise punted
- Prioritising work on other problems, whose windows of opportunity are more likely to be open, or to open sooner
In contrast, generally speaking, the likelier it is that there's a window of opportunity for working on a particular problem that's open but will close in future, and the sooner that window is likely to close, the more that pushes in favour of:
- Doing or supporting direct work
- Punting to the relatively near future (if one plans to punt)
- For example, investing or movement-building in ways targeted to "pay off" in a few years, rather than many decades from now
- Prioritising work on that particular problem, rather than work on problems that are less likely to have windows that will close in future, or whose windows are likely to close later[8]
- For example, mitigating risks that could strike in the next few years, rather than risks that seem larger but that are likely 5+ years away
- For another example, prioritising strategies for AI alignment that work if timelines to transformative AI turn out to be short, relative to strategies that work if timelines are longer
Related points are also discussed by Dickens, Denkenberger, and Dixon.
There are multiple reasons why it can make sense to prioritise work on problems that are likelier to have windows of opportunity that'll close relatively soon. One reason is that, compared to other problems, these problems may ultimately receive less work in total. This may increase the marginal returns to work on these problems, if marginal returns to work diminish. This point is discussed by Cotton-Barratt.
This question is especially related to, or perhaps hard to disentangle from, the questions:
- Which "direct" actions might have compounding positive impacts?
- How will the neglectedness of longtermist causes change over time?
- Do marginal returns to "direct work" done within a given time period diminish? If so, how steeply?
Additionally, answers to this question could inform answers to the question "How effectively can we 'punt to the future'?"
Are we biased towards thinking the leverage over the future is currently unusually high? If so, how biased?
MacAskill discusses this question. For example, he writes:
Informally, the core argument against HoH [the Hinge of History Hypothesis] is that, in trying to figure out when the most influential time is, we should consider all of the potential billions of years through which civilisation might exist. Out of all those years, there is just one time that is the most influential. According to HoH, that time is… right now. If true, that would seem like an extraordinary coincidence, which should make us suspicious of whatever reasoning led us to that conclusion, and which we should be loath to accept without extraordinary evidence in its favour.
[...] it seems to me there's a strong risk of bias in our assessment of the evidence regarding how influential our time is, for a few reasons:
Salience. It's much easier to see the importance of what's happening around us now, which we can see and is salient to us, than it is to assess the importance of events in the future, involving technologies and institutions that are unknown to us today, or (to a lesser extent) the importance of events in the past, which we take for granted and involve unsalient and unfamiliar social settings.
Confirmation. For those of us, like myself, who would very much like for the world to be taking much stronger action on extinction risk mitigation (even if the probability of extinction is low) than it is today, it would be a good outcome if people (who do not have longtermist values) think that the risk of extinction is high, even if it's low. So we might be biased (subconsciously) to overstate the case in our favour. And, in general, people have a tendency towards confirmation bias: once they have a conclusion ("we should take extinction risk a lot more seriously"), they tend to marshall arguments in its favour, rather than carefully assess arguments on either side, more than they should. Though we try our best to avoid such biases, it's very hard to overcome them.
Track record. People have a poor track record of assessing the importance of historical developments. And in particular, it seems to me, technological advances are often widely regarded as being more dangerous than they are. Some examples include assessment of risks from nuclear power, horse manure from horse-drawn carts, GMOs, the bicycle, the train, and many modern drugs.[4]
I don't like putting weight on biases as a way of dismissing an argument outright (Scott Alexander gives a good run-down of reasons why here). But being aware that long-term forecasting is an area that's very difficult to reason correctly about should make us quite cautious when updating from our prior.
A similar point is also briefly discussed by Baumann.
This question is especially related to the question "If leverage is higher at a later time, would longtermists notice?"
How often have people been wrong about such things in the past?
Some of MacAskill's above-quoted arguments would seem to predict that people in history would've often, mistakenly, believed themselves to be in high-leverage periods. So evidence on how often people have made such predictions, ideally relative to how often they've considered themselves to not be in high-leverage periods, could help us assess how biased we might be towards thinking leverage is currently unusually high.
Focusing on existential risk estimates rather than specifically discussions of leverage, Fodor writes:
[T]here is a very long history of predicting the end of the world (or the end of civilisation, or other existential catastrophes), so the baseline for accuracy of such claims is poor
On the other hand, Gwern argues that some people in history have thought their time was less "special" or less of an "exception" than it really was (though note that this isn't quite the same matter as how high leverage over the future was in those times). And Trammell writes:
On my cursory understanding of history, it's likely that for most of history people saw themselves as part of a stagnant or cyclical process which no one could really change, and were right. But I don't have any quotes on this, let alone stats. I'd love to know what proportion of people before ~1500 thought of themselves as living at a special time.
Bostrom provides some support for the idea that most people through history saw development during their times as stagnant or cyclical.
If leverage over the future is higher at a later time, would longtermists notice?
Lewis writes:
The invest for the future strategy[9] seems to rely on our descendants improving their epistemic access to the point where they can reliably determine whether they're at a 'hinge' or not, and deploying resources appropriately. There are grounds for pessimism about this ability ever being attained. Perhaps history (or the universe as a whole) is underpowered for these inferences.
[...] If we grant the ground truth is occasional 'crucial moments', but we expect evidence at-the-time for living in one of these is scant, my intuition is the optimal strategy would [be] to husband resources to spend these disproportionately when the evidence gives some (but not decisive) indication one of these crucial moments is now.
Depending on how common these 'probably false alarms' are (plus things like how reliably can we steward resources for long periods of time), this might amount to monomaniacal work on immediate challenges. E.g., the prior is (say) 1/million this decade, but if the evidence suggests it is 1%, perhaps we should drop everything to work on it, if we won't expect our credence to be this high again for another millennia.
MacAskill's reply included:
I think if that were one's credences, what you say makes sense. But it seems hard for me to imagine a (realistic) situation where I think that it's 1% chance of HoH this decade, but I'm confident that the chance will [be] much lower than that for all of the next 99 decades.
Yudkowsky makes similar points to Lewis' in relation to the idea that There's No Fire Alarm for Artificial General Intelligence. For example, he states:
So far as I can presently estimate, now that we've had AlphaGo and a couple of other maybe/maybe-not shots across the bow, and seen a huge explosion of effort invested into machine learning and an enormous flood of papers, we are probably going to occupy our present epistemic state until very near the end.
By saying we're probably going to be in roughly this epistemic state until almost the end, I don't mean to say we know that AGI is imminent, or that there won't be important new breakthroughs in AI in the intervening time. I mean that it's hard to guess how many further insights are needed for AGI, or how long it will take to reach those insights. After the next breakthrough, we still won't know how many more breakthroughs are needed, leaving us in pretty much the same epistemic state as before. Whatever discoveries and milestones come next, it will probably continue to be hard to guess how many further insights are needed, and timelines will continue to be similarly murky.[10]
How effectively can we "punt to the future"?
Even if leverage over the future will stay the same or slightly decrease over time, it may still be wise to punt to the future if that can allow more direct work, or more impactful direct work, to be done later. For example, a person may choose to:
- invest now so they can donate more later
- movement-build so that there are more people keen and able to do direct work later
- optimise the early years of their career not for "directly" having an impact, but rather for building career capital (e.g., through great training or credentials) in order to have a more impactful career down the line[11]
Conversely, even if leverage over the future will be far greater at some later date, it may still be best to "act now", or relatively soon. This could be the case if punting to the future, or punting too far into the future, is unlikely to fully succeed, for example due to value drift.
Thus, a person's beliefs about optimal timing for work and donations may depend in part on their beliefs about how effectively we can punt to the future. What follows are some of the more fine-grained "sub-questions" that inform many people's beliefs about that question.
Some relevant existing work includes:
- Estimating the Philanthropic Discount Rate
- The case for investing to give later
- Let Us Give To Future
- Parable of the Multiplier Hole
- These sources on how social movements can rise, fall, be influential, etc.[12]
What would be the long-term growth rate of financial investments?
Beliefs about the value of financially investing in order to support larger amounts of direct work later (perhaps even after one's own death), compared to the value of giving now or soon, seem to be driven in part by:
- Whether one has thought about how large investments could become after compounding for a long time
- That is, it seems that some people have simply not noticed how large investments can become, and that noticing this fact can update people towards thinking it's best to invest to give later (see the illustrative sketch after this list)
- How high one believes interest rates will be over the long term
- It may be particularly important whether interest rates will continue to exceed the overall economic growth rate (though I've only once seen the suggestion that they might not), and whether they'll continue to incorporate the pure time preference most people have
- Three questions discussed elsewhere in this post:
- What "windows of opportunity" might there be? When might those windows open and close? How important are they?
- What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?
- Would the people we'd be punting to act in ways we'd endorse?
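To make the compounding point concrete, here is a minimal sketch in Python (the principal and rates are purely illustrative assumptions, not forecasts):

```python
# Illustrative only: a lump sum compounding annually at a fixed real rate.
def future_value(principal, annual_rate, years):
    """Value of the investment after compounding for the given number of years."""
    return principal * (1 + annual_rate) ** years

for rate in [0.02, 0.05, 0.07]:
    for years in [10, 50, 100]:
        print(f"{rate:.0%} over {years:>3} years: ${future_value(10_000, rate, years):,.0f}")

# At a 5% real rate, $10,000 becomes ~$16,000 in 10 years, ~$115,000 in 50 years,
# and ~$1.3 million in 100 years - which is why long horizons can matter so much.
```

(The point is just the shape of the growth; actual long-run rates are one of the open questions here.)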
What would be the long-term rate of expropriation of financial investments? How does this vary as investments grow larger?
Growth in financial investments could be offset (or partially offset) by the annual probability that investments would be expropriated.
However, Trammell argues that that probability doesn't actually matter, as:
investors will generally be compensated for [that probability] with a higher interest rate [...] A historical case against long-term investing thus requires a demonstration that the expropriation rate grows with fund size.
On that matter, he writes: "As the fund grows dizzyingly large, [...] People might grow more inclined to seize it, for example, or it might grow better able to defend itself". He then explores the question more thoroughly. For readers interested in this question, I recommend reading his section 6.1.3 (see also Gwern's reply).
Quotes on some relevant historical case studies are provided by Hanson and Gwern.
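As a rough sketch of Trammell's point (all numbers here are assumptions for illustration, not estimates from the sources above): a constant annual expropriation probability acts like a drag on expected growth, which a higher interest rate can offset, so the real concern is an expropriation rate that rises with fund size.

```python
# Illustrative sketch: expected growth under a constant annual expropriation probability.
interest_rate = 0.06        # assumed annual return (includes any risk compensation)
expropriation_prob = 0.005  # assumed annual probability the fund is seized entirely

# The fund survives a year with probability (1 - p), growing at rate r if it does,
# so the expected growth factor is (1 + r) * (1 - p).
expected_growth = (1 + interest_rate) * (1 - expropriation_prob) - 1
print(f"Expected annual growth: {expected_growth:.2%}")  # ~5.47%, roughly r - p

# Per Trammell's argument, if markets compensate investors for p via a higher r,
# a constant p leaves long-term investing intact; the open question is whether
# p grows as the fund grows "dizzyingly large".
```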
What would be the long-term "growth rate" from punting activities other than financial investment?
Other than financial investing, punting activities include movement-building and building one's own career capital. For each of those types of activities, and each specific activity fitting within one of those types, one could ask what function of growth it would cause (in impact, resources dedicated towards doing good, or whatever). In particular, one could ask whether the growth will indeed be positive; whether it'll be more like a lump-sum increase, compounding growth, or some other function; and how large the lump sum, rate of compounding, or rate of some other function would be.
For example, Trammell states that, to count as âinvestmentâ for his purposes, movement-building has to:
look like fundraising in the sense that you're not just putting more resources toward the cause next year, but toward the whole mindset of either giving to the cause or investing to give more in two years' time to the cause. [The contrasting scenario would be one in which you] might spend all your money and get all these recruits who are passionate about the cause that you're trying to fund, but then they just do it all next year.
One factor to consider is the potential reputational and motivational impacts of dedicating large amounts of resources to punting activities (relative to the resources dedicated to direct work), and especially to punting activities designed to "compound" by causing further punting activities. Wiblin notes the risk of coming to look like "some kind of multilevel marketing scheme or some kind of Ponzi scheme". This sort of risk could reduce the long-term effective "growth rate" of punting activities. This factor relates to the question (covered later) of "Which 'direct' actions might have compounding positive impacts?"
Christiano and Bergal discuss points related to this question.
Would the people we'd be punting to act in ways we'd endorse?
If we punt to the future, examples of the people we'd be punting to might be:
- our future selves
- future members of movements we helped build
- the people deciding how to spend money from funds we contributed to
Punting to the future is a less attractive strategy the less one expects the people we'd be punting to would act in ways that we'd (a) endorse currently, (b) endorse after some process of learning and reflection, and/or (c) endorse if we had "better" values.
This question can be further broken down into (at least) the following questions:
- How far will values drift over time among people in general?
- How far will values drift among the particular people we'd be punting to?
- Would that drift be towards values we'd endorse (e.g., "progress" towards our current values or towards what we should value)? Or would it be towards "random" or "bad" values?
- If we're punting to others (rather than our future selves), can we reduce risks of "bad" value drift by selecting who we'd punt to? How effectively can we do that?
- Would we have to punt to a "successor", who punts to another "successor", who punts to another, and so on? How much distortion might this cause? (Think of the game telephone; see the sketch after this list.)
- Are our values, or the principles underlying them, simple enough to be transferred with high fidelity, and perhaps codified in something like a foundation's constitution?
- How common are our values, or the principles underlying them, and how common will they be in future?
- If these values or principles are or will be very rare, this may reduce the chances that we can punt to people who'd act in ways we'd endorse.
- On the other hand, that might also increase the likely neglectedness of longtermist causes in future, perhaps increasing the value of punting (see the above section on "How will the neglectedness of longtermist causes change over time?").
- Can we create incentives that make it likely that people we punt to will act in ways we'd endorse, even if they don't have values we'd endorse? For example, can we create legally enforceable contracts that mandate actions we'd approve of?
- A similar idea when punting to one's future self is to give to a donor-advised fund, so that the money has to be donated to some charity.[13]
- But note that this sort of strategy may reduce our ability to benefit from improvements over time in knowledge about what to do.
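As a toy illustration of the "telephone" worry noted above (the per-handoff fidelity figures are assumptions, not estimates): if each successor passes on our values with some fixed fidelity, expected alignment decays geometrically with the number of handoffs.

```python
# Toy model: how value fidelity might decay across a chain of successors.
# "fidelity" = assumed fraction of our values' intent preserved per handoff.
def alignment_after(handoffs, fidelity):
    return fidelity ** handoffs

for fidelity in [0.99, 0.95, 0.90]:
    print(f"fidelity {fidelity:.2f}: ~{alignment_after(10, fidelity):.0%} alignment after 10 handoffs")

# Even 95% fidelity per handoff leaves only ~60% after 10 successors - one reason
# simple, codifiable principles (or fewer handoffs) could matter for punting.
```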
These sorts of questions are discussed by Tomasik, by the section of the GPI research agenda on "Intergenerational governance", by Dickens, by Hoeijmakers (also here), by Ngo, by commenters in this thread, by Christiano, and in some of these sources on value drift.
This question is especially related to, or perhaps hard to disentangle from, the topic "Importance of, and best approaches to, improving institutions and/or decision-making", and from the questions:
- How will the neglectedness of longtermist causes change over time?
- How close to optimal do we expect current trajectories to be (assuming no existential catastrophe)?
- How influential will longtermism and/or altruism be?
- How close to the appropriate size are influential agents' moral circles likely to be?
- Can we maintain useful option value, and engage in a useful long reflection?
(For sources relevant to the latter four questions, follow the links from the overview of this project.)
Which "direct" actions might have compounding positive impacts?
One argument often given for certain forms of punting to the future (e.g., financial investment to fuel later giving, or certain types of movement-building) is that they could provide compounding resources or impacts over time. This could cause much greater total impact than direct work done now would. But it also seems possible that certain forms of direct work could likewise have effects that compound over time. It's thus worth asking which "direct" actions could have compounding impacts, and how strongly and lastingly those impacts would compound.
For example, some have argued that doing "object-level" research now, such as research into specific AI alignment problems, could help:
- Attract funding to the area
- Build a movement / build a field / attract talent
- Build academic credibility
- Provide a foundation that later research can be built on or be guided by
Reasons why direct work could have those sorts of compounding benefits include that such work could:
- Demonstrate that there is concrete work that can be done in the area
- Prevent motivational or reputational issues that could occur if a community is perceived as overly or solely focused on investing, movement-building, etc.
- See "What would be the long-term 'growth rate' from punting activities other than financial investment?", and Wiblin.
- Direct work done (partly) for this benefit could be seen as a complement to punting actions, as it could allow more punting actions to be taken without them creating issues.
Conversely, it also seems possible that roughly the opposite effects could occur. For example, certain direct work conducted now could be perceived as pointless or premature, and this could make it harder to attract funding, attract talent, and so on. In any case, it seems likely that different "direct" actions would differ in whether and to what extent they'd cause compounding benefits (or harms).
These sorts of points are discussed by Ord, Trammell, Gleave, Shlegeris, and Shulman.
Arguably, this question could be reframed as, or replaced by, questions such as:
- Which "direct" actions also have a "punting to the future" component?
- How much overlap is there between those two types of strategies?
- Where is that overlap?
Do marginal returns to "direct work" done within a given time period diminish? If so, how steeply?
It's possible that there are diminishing returns to additional direct work (either in general or in a particular area) within a given time.[14] For example, perhaps in each given year, the first $100 million spent on global catastrophic biological risk mitigation can support the continuation of the most cost-effective efforts, while the next $100 million can't achieve as much value. This point is noted by Shulman, Trammell, and Yudkowsky & Muehlhauser (though see also MacAskill).
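One simple way to make this idea precise (a modelling assumption of mine, not something the cited authors commit to) is to treat value as logarithmic in spending within a period, so each extra dollar buys less than the one before:

```python
import math

# Toy model: logarithmic returns to spending within a single year (an assumption).
def value(spend_millions, scale=100):
    """Value, in arbitrary units, of spending this much within one year."""
    return math.log(1 + spend_millions / scale)

first_100m = value(100) - value(0)     # value added by the first $100 million
second_100m = value(200) - value(100)  # value added by the next $100 million
print(f"first $100m: {first_100m:.2f}, next $100m: {second_100m:.2f}")

# Under this assumption, the second $100 million is worth only ~60% as much as the
# first - which is the sense in which marginal returns may "diminish steeply" or not.
```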
Relatedly, Ord writes that:
[One major effect which can make earlier labour matter more is] a matter of serial depth. Some things require a long succession of stages each of which must be complete before the next begins. If you are building a skyscraper, you will need to build the structure for one story before you can build the structure for the next. You will therefore want to allow enough time for each of these stages to be completed and might need to have some people start building soon.
Similarly, if a lot of novel and deep research needs to be done to avoid a risk, this might involve such a long pipeline that it could be worth starting it sooner to avoid the diminishing marginal returns that might come from labour applied in parallel. This effect is fairly common in computation and labour dynamics (see The Mythical Man Month), but it is the factor that I am least certain of here.
We obviously shouldn't hoard research labour (or other resources) until the last possible year, and so there is a reason based on serial depth to do some of that research earlier. But it isn't clear how many years ahead of time it needs to start getting allocated (examples from the business literature seem to have a time scale of a couple of years at most) or how this compares to the downsides of accidentally working on the wrong problem. [I added line breaks to that quote]
This question is especially related to, or perhaps hard to disentangle from, the question:
- What "windows of opportunity" might there be? When might those windows open and close? How important are they?
Additionally, answers to this question could:
- Inform answers to the question "How effectively can we 'punt to the future'?"
- Influence the implications of the question "How will the neglectedness of longtermist causes change over time?" (and vice versa)
- For example, if we expect longtermist causes to become more neglected in future, then steeply diminishing returns to spending within a given year would be an argument for punting to the future, rather than a reason against.
Directions for future work
This post aimed to serve as something of an orientation to, research agenda for, and structured reading list regarding the matter of optimal timing for longtermist work and donations. But this post is certainly not the final say on the matter. We'd be excited to do or see future work which:
- Identifies additional crucial questions on this topic
- Provides better or also useful ways of categorising and structuring these questions
- Improves the phrasings and explanations used
- Highlights additional relevant sources
- Improves or adds to our discussion of how beliefs about these questions empirically do and/or logically should relate to each other and to strategic views and choices
- Attempts to build formal models of what one should believe or do, or how the future is likely to go, based on various beliefs about these questions
- Ideally, it would be possible for readers to provide their own inputs and see what the results "should" be
- Provides further discussion and evaluation of the questions themselves; each could be the subject of at least a post, and some could warrant a whole research community
We'd very much appreciate input and feedback that could help us or others pursue such future work. Please feel free to get in touch with us if you are looking to do work on these questions.
This post was based in part on ideas and earlier writings by Justin Shovelain and David Kristoffersson, and benefitted from input from them. I'm also grateful for feedback from Michael Dickens, Phil Trammell, Arden Koehler, and Alex Holness-Tofts. This does not imply these people's endorsement of all aspects of this post.
If you don't subscribe to longtermism, many of the points and links in this post should still be relevant to you, though some might not be.
In some ways, decisions about optimal timing of work and donations can also overlap or interact with decisions about exploring vs. exploiting.
Unfortunately, this term is also used in other ways, most notably to distinguish between jobs that are "directly" impactful and those that can be impactful via allowing one to donate money. And "direct work", as we and MacAskill use the term, may still be "indirect" in other senses, such as being quite "meta". We would thus be happy to hear suggestions of alternative terms. We also considered "act-now strategies" and "present-influence strategies", but both have their own issues.
Some alternative phrases include "hingeyness", "pivotality", "criticality", "influentialness", "importance", "significance", and "momentousness".
"Leverage" was suggested by Siebe Rozendal. I prefer that term, because I think it best highlights that, as MacAskill notes, the focus here is "on how much influence a person at a time can have, rather than how much influence occurs during a time period. It could be the case, for example, that the 20th century was a bigger deal than the 17th century, but that, because there were 1/5th as many people alive during the 17th century, a longtermist altruist could have had more direct impact in the 17th century than in the 20th century".
See also Section 2.3: Discounting in GPI's research agenda.
I began writing the present post in March, and its core structure and points have been the same since April. Estimating the Philanthropic Discount Rate and The case for investing to give later were posted in July, at which point I read them, added links to them in appropriate places in this post, and added an idea from the latter post in the section "How will our knowledge about what we should do change over time?" Reading those posts did not lead to other major changes to this post.
Another phrase similar to "the precipice" is "the time of perils".
Here's Will MacAskill's proposal for defining "most influential time": "a time t_i is more influential (from a longtermist perspective) than a time t_j iff you would prefer to give an additional unit of resources,[1] that has to be spent doing direct work (rather than investment), to a longtermist altruist living at t_i rather than to a longtermist altruist living at t_j."
Note that we're focused here on what will be the highest leverage period remaining, "because we can try to save resources to affect future times, but we know we can't affect past times" (MacAskill). That said, as MacAskill also discusses, "past hingeyness might still be relevant for assessing hingeyness today".
That said, there could also be cases in which a problem's window of opportunity closing soon would be a reason not to work on that problem. This could occur if making a difference "in time" would be very unlikely, or very costly. For example, it seems to not make sense to prioritise strategies for AI alignment that are optimised for extremely short timelines, and one reason for this is that we may stand little chance anyway if timelines are that short.
Note that Lewis seems to mean "punting to the future" in general, rather than just financial investment.
See also Failures in technology forecasting? A reply to Ord and Yudkowsky and Discontinuous progress in history: an update.
80,000 Hours provides strong arguments that the best roles to take for building career capital that's relevant to future impactful roles will often themselves be "directly" impactful roles. But there are likely to be some situations where the role that's the very best for the goal of "directly" having an impact isn't also the very best for the goal of building valuable career capital. In those situations, how much weight one gives to each of those goals would matter.
I expect Deep-time organizations: Learning institutional longevity from history is also relevant, but I haven't read beyond its abstract.
To take this sort of idea to its extreme, we might wonder whether there are ways we can even avoid having to punt to people at all, by having our intentions automatically implemented somehow.
It's also possible there are diminishing returns to additional work or spending on (some) punting activities. For example, perhaps adding another EA movement-builder matters less once there are already 1000 active in that year than when there are just 100 active in that year. This possibility seems worth exploring, but we will set it aside for this post.
MichaelA @ 2020-08-14T10:12 (+5)
(Speaking for myself, not any of my employers, as per usual)
Here are my personal, tentative takeaways after reading and thinking about this topic off and on for several months:
- The case for punting and the case for doing/supporting "direct work" primarily for its "punting-like" benefits (e.g., value of information, field-building) both seem pretty strong.
- The case for doing direct work primarily for its more "direct" benefits seems less strong.
- If memory serves, I think:
- I hadn't thought about these matters much at all last year
- Then, when I heard things like Trammell's 80k episode, I began to feel that the arguments for punting were stronger than the arguments for doing/supporting direct work
- Then, in the course of working on this post, I became more confident about the arguments for punting, and started to think that the key value of direct work might be its punting-like benefits (and that decisions about direct work - e.g., which org to donate to - should perhaps be based primarily on those types of benefits)
- I think "EA in general" had undervalued the arguments for punting until 2020. But I think that a major shift has occurred in 2020 (see e.g. the many recent posts under the Patient Altruism tag).
- Our discourse may now roughly appropriately balance the case for punting and the case for "direct work now".
- It's hard for me to comment on whether our actions strike the appropriate balance. [I edited this set of points in response to MichaelDickens' comment below.]
- I think EAs may still pay too little attention to the idea that direct work might be valuable primarily for its punting-like benefits, and that that may be the key factor to consider when making decisions about direct work
- I'm quite unsure about how we should allocate resources between punting vs direct work selected for its punting-like benefits
- Next year, I think I'll give 10% of my income to "direct work" orgs/projects/people, which I'll select primarily based on their potential punting-like benefits (e.g., mentoring early-career researchers). And I'll invest as much as I easily can beyond that 10% (which I expect to be >10%) for giving later, once I've accrued interest on it and I know more.
- A good counterargument to me doing that is that I may undergo value drift. To partially address that, I might use a donor advised fund.
- It's also very possible I should invest the 10% as well. A non-negligible factor in me planning to support direct work with 10% of my income is simply that I want to (rather than that I'm confident it's morally best).
MichaelDickens @ 2020-08-14T21:31 (+6)
I think "EA in general" had undervalued the arguments for punting until 2020. But I think that a major shift has occurred in 2020 (see e.g. the many recent posts under the Patient Altruism tag), and we might now be at approximately the right point.
If punting is indeed the right move, then this only seems true with regard to the discourse, not with regard to people's actual behavior. For example, Open Phil spends somewhere around 3% of its budget per year, which is too high on pure "patient longtermist" considerations--Phil Trammell's paper suggested an optimal spend rate of ~0.5% in general, but possibly lower than that if you believe other philanthropists are spending too quickly. (Global poverty donors in particular should be giving 0% per year. This claim seems pretty robustly true.)
Edited to add: I think a rate above 0.5% can be justified based on issues with value drift/expropriation, see https://forum.effectivealtruism.org/posts/3QhcSxHTz2F7xxXdY/estimating-the-philanthropic-discount-rate. AFAIK, nobody has really put work into determining the optimal spending rate, so we don't know what the optimal spending rate is even if we accept the arguments for urgency. My best guess based on my limited research is that the optimal urgent spending rate is something like 1.5% for institutions and 6% for individuals (based on a 0.5% annual probability of existential catastrophe, 0.5% expropriation rate, 0.5% institutional value drift rate, and 5% individual value drift rate).
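A back-of-envelope sketch of how those component rates combine (simply summing the annual hazard rates, which is an approximation; the linked post models this more carefully):

```python
# Rough sketch: an "urgent" spending rate as the sum of annual hazard rates.
# All rates below are the guesses stated above, not established estimates.
x_risk = 0.005          # annual probability of existential catastrophe
expropriation = 0.005   # annual expropriation rate
value_drift = {"institution": 0.005, "individual": 0.05}

for actor, drift in value_drift.items():
    spend_rate = x_risk + expropriation + drift  # approximation: hazards add
    print(f"{actor}: ~{spend_rate:.1%} per year")

# institution: ~1.5% per year; individual: ~6.0% per year - matching the figures above.
```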
MichaelA @ 2020-08-14T23:49 (+4)
Ah, good point that we should distinguish the discourse from the behaviours, and that what I said is clearer for the discourse than for the behaviours. I actually intended those sentences to just be about the discourse, but I didn't make that clear. (I've now edited those sentences.)
Also, whether people's discourse is at an appropriate point is probably less decision-relevant than whether their actions are, because:
- it might be more worthwhile to try to push their actions towards the appropriate balance than to push their discourse towards the appropriate balance
- we might want to oversteer one way or the other to compensate for what other people are doing (and this is somewhat less true regarding what people are saying)
Unfortunately, I find it very hard to say whether EAs' actions are, in aggregate, overemphasising "direct work now", overemphasising punting, or striking roughly the right balance. (Alternative terms would be "too urgent" vs "too patient" vs roughly right.) This is because I don't have a strong sense of what balance EAs are currently striking or of what balance they should be striking. (Though I've found your work helpful on the latter point.)
Also, I realise now that I'm basing my assessment of EA's discourse primarily on what I see on the forum and what I hear from the EAs I speak to, who are mostly highly engaged. This probably gives me a misleading picture, as ideas probably diffuse faster to these groups than to EAs in general.
MichaelA @ 2020-08-14T09:25 (+2)
There are two subquestions that didn't feel important/commonly discussed enough to be worth including in the (already long!) post itself, but that felt important/commonly discussed enough to not simply delete. So I'll add them here.
The first of these subquestions fits under "How will 'leverage over the future' change over time?" The second fits under "How effectively can we 'punt to the future'?"
How has leverage changed over history?
This is relevant to MacAskill's "inductive argument against HoH".
Would punting be less likely to be effective in worlds where it'd be most useful?
Plausibly, resources that can be dedicated towards longtermist causes are especially valuable if a global catastrophe is likely to occur. But also plausibly, the likelier it is that such a catastrophe would occur, the likelier it is that punting actions will turn out to fail. This could occur due to, for example, resources being wiped out, the rule of law being disrupted, or relevant social movements unravelling.
Likewise, plausibly, resources that can be dedicated towards longtermist causes are especially valuable if EA, longtermism, and/or related values are likely to become less widespread or disappear entirely. But also plausibly, the likelier it is that that happens, the less likely it is that the people we'd be punting to would act in ways we'd endorse (reducing the effectiveness of our punting).
It seems possible that examples like these point towards a more general correlation between how valuable successful punting would be and how likely punting is to fail. In other words, this may suggest punting would be least likely to work in the worlds where it'd be most valuable. This may reduce the expected value of punting. (But this is all somewhat speculative.)
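Here's a toy sketch of the structure of that worry (all numbers invented purely to illustrate):

```python
# Toy sketch: expected value of punting when success anticorrelates with value.
worlds = [
    # (label, probability of world, chance punting succeeds there, value if it succeeds)
    ("calm world",   0.8, 0.9, 1.0),
    ("crisis world", 0.2, 0.3, 10.0),  # punting most valuable, least likely to work
]

ev = sum(p * success * value for _, p, success, value in worlds)
print(f"EV of punting: {ev:.2f}")  # 0.8*0.9*1 + 0.2*0.3*10 = 1.32

# Compare: if punting worked equally well (0.9) in both worlds, the EV would be
# 0.8*0.9*1 + 0.2*0.9*10 = 2.52, so the anticorrelation roughly halves it here.
```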
I believe Kit and Shulman discuss similar ideas, though I may be misinterpreting them.