Adding important nuances to "preserve option value" arguments
By MichaelA🔸 @ 2023-01-08T09:30 (+36)
This is a quickly produced writeup of some thoughts that are probably (a) obvious to some people, (b) basically covered in some existing writings, but (c) still useful for some people to read.[1] After writing this, I re-read the 2017 post Hard-to-reverse decisions destroy option value and concluded that that post is more useful than this one, making this one somewhat but not totally redundant; that post is consistent with what I say here but doesn't emphasise it.[2]
Key takeaways
I fairly commonly hear (and make) arguments like "This action would be irreversible. And if we don't take the action now, we can still do so later. So, to preserve option value, we shouldn't take that action, even if it would be good to do the action now if now were our only chance."[3]
This is relevant to actions such as:
- doing field-building to a new target audience for some important cause area
- publicly discussing some important issue in cases where that discussion could involve information hazards, make the issue polarized/partisan, or make our community seem wacky
I think this sort of argument is often getting at something important, but in my experience such arguments are usually oversimplified in some important ways. This post is a quickly written attempt to provide a more nuanced picture of that kind of argument. My key points are:
- "reversibility" is a matter of degree (not a binary), and is about the expected extent to which the counterfactual effects we're considering causing would (a) fade by default if we stop fuelling them, and/or (b) could be reversed by us if we actively tried to reverse them.
- Sometimes we may be surprised to find that something does seem decently reversible.
- The "option value" we retain is also a matter of degree, and we should bear in mind that delaying an action (a) often gradually reduces total benefits and (b) sometimes means missing key windows of opportunity.
- Delaying can only be better than acting now if we expect we'll be able to make a better-informed decision later and/or we expect the action to become more net-positive later.
- If we don't expect our knowledge to improve in relevant ways, nor the act to become more valuable/less harmful, or we expect only minor improvements that are outweighed by the downsides of delay, we should probably just act now if the action does seem good.
But again, I still think "option value" arguments are often getting at something important; I just think we may often make better decisions if we also consider the above three nuances when making "option value" arguments. And, to be clear, I definitely still think it's often worth avoiding, delaying, or consulting people about risky-seeming actions rather than just taking them right now.[4]
1. On "irreversibility"
In some sense, all actions are themselves irreversible - if you do that action, you can never make it literally the case that you didn't do that action. But, of course, that doesn't matter. The important question is instead something like "If we cause this variable to move from x to y, to what extent would our counterfactual impact remain even if we later start to wish we hadn't had that impact and we adjust our behaviors accordingly?" E.g., if we make a given issue something that's known by and salient to a lot of politicians and policymakers, to what extent, in expectation, will that continue to be true even if we later realise we wish it wasn't true?
And this is really a question of degree, not a binary.
There are two key reasons why something may be fairly reversible:
- Our counterfactual effects may naturally wash out
- The variable may gradually drift back to the setting it was at before our intervention
- Or it may remain at the setting we put it to, but with it becoming increasingly likely over time that that would've happened even in the absence of our intervention, such that our counterfactual impact declines
- For example, let's say we raise the salience of some issue to politicians and policymakers because it seems ~60% likely that that's a good idea, ~20% likely it's ~neutral, and ~20% likely it's a bad idea. Then we later conclude that it was a bad idea after all, so we stop taking any actions to keep salience high. In that case:
- The issue may gradually fall off these people's radars again, as other priorities force themselves higher up the agenda
- Even if the issue remains salient or increases in salience, it could be that this or some fraction of it would've happened anyway, just on a delay
- This is especially likely for issues that gradually become obviously real and important, and where we simply noticed the issue sooner than other key communities did
- We could imagine a graph with one line showing how salience of the issue would've risen by default without us, another line showing how salience rises earlier or higher if we make that happen, and a third line for if we take the action but then stop. That third line would start the same as the "we make that happen" line, then gradually revert toward the "what would've happened by default" line. (A toy simulation of this graph is sketched just after this list.)
- We may be able to actively (partially) reverse our effects
- I expect this effect would usually be less important than the "naturally wash out" effect.
- Basically because, when I tried to think of some examples, they all seemed either unlikely to achieve big results or likely to require "weird" or "common sense bad" actions, like misleading people.
- But perhaps sometimes decently large effects could be achieved from this?
- For example, we could try to actively reduce the salience of an issue we previously increased the salience of, such as by contacting the people who we convinced and who most started to increase the issue's salience themselves (e.g., academics who started publishing relevant papers), and explaining to them our reasoning for now thinking it's counterproductive to make this issue more salient.
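To make the "naturally wash out" dynamic concrete, here's a minimal sketch of the three-line graph described above, in Python. All the numbers (the baseline trend, the size of our boost, and the 40%-per-year decay rate) are made-up illustrative assumptions, not estimates for any real issue:

```python
import numpy as np

years = np.arange(0, 11)

# Line 1: how salience would've risen by default, without us.
baseline = 10 + 3 * years

# Line 2: we boost salience at t=0 and keep fuelling it.
sustained = baseline + 30

# Line 3: we boost salience at t=0 but stop fuelling it at t=3;
# the boost then decays back toward the baseline.
boost = np.where(years <= 3, 30.0, 30.0 * 0.6 ** (years - 3))
stopped = baseline + boost

# Our remaining counterfactual impact in the "stopped" scenario
# shrinks over time: 30, 30, 30, 30, 18, 10.8, 6.5, 3.9, 2.3, 1.4, 0.8
print(np.round(stopped - baseline, 1))
```

The gap between the third line and the baseline is the "reversibility" at stake: under these assumptions, most of our counterfactual impact is gone within a few years of our stopping.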
2. On "we can still do it later"
In some sense, it's always the case that if you don't take an action at a given time, you can't later do exactly that same action or achieve exactly the same effects anymore. Sometimes this hardly matters, but sometimes it's important. The important question is something like "If we don't take this action now, to what extent could we still achieve similar expected benefits with similarly low expected harms via taking a similar action later on?"
I think very often significant value is lost by delaying net-positive actions. E.g., in general and all other factors held constant (see the toy model after this list):
- delaying field-building will reduce the number of full-time-equivalent years spent on key issues before it's "too late anyway" (e.g., because an existential catastrophe has happened or the problem has already been solved)
- delaying efforts to improve prioritization & understanding of some issue will reduce the number of "policy windows" that occur between those efforts & the time when it's too late anyway
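As a toy illustration of the first bullet point (with hypothetical growth and deadline figures, not estimates for any real field), delaying field-building loses years of compounding growth:

```python
def total_fte_years(start_year: int, deadline: int,
                    initial_people: float = 10.0,
                    annual_growth: float = 0.3) -> float:
    """Total full-time-equivalent years contributed between start_year and
    deadline, assuming the field grows exponentially once building begins."""
    total, people = 0.0, initial_people
    for _ in range(max(0, deadline - start_year)):
        total += people
        people *= 1 + annual_growth
    return total

print(total_fte_years(start_year=0, deadline=15))  # start now:       ~1673 FTE-years
print(total_fte_years(start_year=3, deadline=15))  # delayed 3 years: ~743 FTE-years
```

Under these made-up numbers, a three-year delay costs over half the total work done before the deadline, because it removes the final (and largest) years of compound growth.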
I also think that sometimes delay could mean we miss a "window of opportunity" for taking an action with a similar type and balance of benefits and harms to the action we have in mind. That is, there may not just be a decay in the benefits, but rather a somewhat "qualitative" shift in whether "something like this action" is even on the table. For example, we may miss the one key policy window we were aiming to affect.
(Somewhat relevant: Crucial questions about optimal timing of work and donations.)
3. Will we plausibly have more reason to do it later than we do now?
Delaying can only be better than acting now if at least one of the following is true:
- We expect we'll be able to make a better-informed decision later
- e.g., because our relevant knowledge will improve
- We expect the action to become more net-positive later
- e.g., because we expect favorable changes in background variables - the time will become "more ripe"
The more we expect those effects, the stronger the case for delay. The less we expect those effects, the weaker the case for delay. A simplified way of saying this is "Why bother delaying your decision if you'd just later be facing the same or worse decision with the same or worse info?"
This can be weighed up against the degree to which we should worry about irreversibility and the degree to which we should worry about the costs of delay, in order to decide whether to act now. (Assuming the act does seem net positive & worth prioritizing, according to our current all-things-considered best guess.)
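One way to make this weighing-up concrete is a simple expected-value comparison, reusing the ~60/20/20 credences from the earlier salience example. The payoff values and the 25% cost of delay below are purely illustrative assumptions:

```python
# Made-up payoffs (in arbitrary units of value) if we take the action.
value_if_good, value_if_neutral, value_if_bad = 100.0, 0.0, -80.0

# Acting now, under our current ~60/20/20 credences:
ev_act_now = 0.6 * value_if_good + 0.2 * value_if_neutral + 0.2 * value_if_bad
print(ev_act_now)  # 44.0

# Delaying: suppose that by next year we'd know for sure which world we're in,
# and we'd act only in the good one - but delay shrinks the benefit by 25%.
delay_discount = 0.75
ev_delay = 0.6 * delay_discount * value_if_good  # in neutral/bad worlds we don't act
print(ev_delay)  # 45.0 - here, delay narrowly wins

# With a steeper cost of delay (e.g. delay_discount = 0.70, giving EV 42.0),
# acting now would win instead.
```

The comparison hinges on exactly the two factors above: how much we expect to learn by waiting, and how much value the delay itself destroys.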
I think it's usually true that we'll (in expectation) be able to make a better-informed decision later, but how true that is can vary a lot between cases, and the magnitude matters whenever delay is costly.
I think it's sometimes true that the action will become more net-positive later, but probably usually the opposite is true (as discussed in the prior section).
I wrote this post in a personal capacity.
- ^
E.g., I haven't recently or extensively read how economists talk about option value, and it's totally plausible to me that these nuances are made quite clear in those writings. But even if so, this post could still be useful to readers who haven't read or have forgotten those writings.
- ^
I originally read the post in ~2019 and found it useful but later forgot its details. I then re-read it after drafting this post, then added this initial note about it and footnote 4 but otherwise left my post unchanged.
- ^
A good explanation of this argument is Hard-to-reverse decisions destroy option value. That post does contain the three nuances this post covers. But in my experience people raising this sort of argument elsewhere seem to often be unaware of these nuances or to just not have them saliently in mind. (But I'm not claiming this is a terrible oversight - I suspect decently often these nuances wouldn't flip the decision anyway.)
- ^
Sometimes it's also possible to "pilot" a version of some risky, potentially hard-to-reverse action - i.e., to take a version of the action that has less upside but also is less risky or easier to reverse than the "full" action. A key reason to do that would be to gain more clarity on how high the upsides, downsides, and reversibility of the full action would be.
Vasco Grilo @ 2023-01-26T14:54 (+2)
Thanks for writing this, Michael. Somewhat relatedly, I really liked this episode of The 80,000 Hours Podcast with Brian Christian.
We tend to think of deciding whether to commit to a partner, or where to go out for dinner, as uniquely and innately human problems. The message of the book [Algorithms to Live By] is simply: they are not. In fact they correspond – really precisely in some cases – to some of the fundamental problems of computer science.