Clarifications on diminishing returns and risk aversion in giving
By Robert_Wiblin @ 2022-11-25T15:02 (+148)
In April, when we released my interview with SBF, I attempted to very quickly explain his views on expected value and risk aversion for the episode description, but unfortunately did so in a way that was both confusing and made them sound more like my own views than his.
Those few paragraphs have gotten substantial attention because Matt Yglesias pointed out how the reasoning could go wrong, and wasn't impressed, thinking that I'd presented an analytic error as "sound EA doctrine".
So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:
- Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.
- Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.
- This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.
- There are other major practical considerations that point in favour of risk-aversion as well.
(SBF appears to think the effects above are smaller than Matt or I do, but it's hard to know exactly what he believes, so I'll set that aside here.)
———
The offending paragraphs in the original post were:
"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.
But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.
This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million.
The point from the conversation that I wanted to highlight — and which is clearly true — is that an individual who is going to spend the money on themselves quickly runs out of useful ways to spend it to improve their own well-being. That makes it far more sensible to receive $1 billion with certainty than to accept a 90% chance of walking away with nothing.
On the other hand, if you plan to spend the money to help others, such as by distributing it to the world's poorest people, then the good done by the first dollar and the good done by the billionth dollar are much closer together than if you were spending them on yourself. That greatly strengthens the case for accepting a risk of receiving nothing in exchange for a larger amount on average, relative to the personal case.
But: the impacts of the first dollar and the billionth dollar aren't identical, and in fact could be very different, so calling the approach 'totally rational' was somewhere between an oversimplification and an error.
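To make the contrast concrete, here is a minimal numerical sketch. The utility functions and the $50,000 baseline are illustrative assumptions only, not figures from the interview:

```python
# A minimal sketch of the contrast above, assuming (purely for illustration)
# log utility over personal wealth and roughly linear value for donations
# that are small relative to the field being funded.
import math

def personal_utility(winnings, baseline=50_000):
    # Concave (log) utility over total personal resources; the baseline is a
    # made-up stand-in for existing income and assets.
    return math.log(baseline + winnings)

# Personal case: a certain $1m vs a 10% chance of $15m.
certain = personal_utility(1_000_000)
gamble = 0.1 * personal_utility(15_000_000) + 0.9 * personal_utility(0)
print(certain > gamble)  # True: the sure $1m wins despite lower expected dollars.

# Altruistic case at the same small scale: if impact per dollar is roughly
# constant, expected impact tracks expected dollars, so the gamble wins.
print(0.1 * 15_000_000 > 1_000_000)  # True
```

The billion-dollar donation case is where this simple picture starts to break down, as the rest of the post discusses.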
———
Before we get to that though, we should flag a practical consideration that is as important as, or maybe more important than, getting the shape of the returns curve precisely right.
As Yglesias points out, once you have begun a foundation and people are building organisations and careers in the expectation of a known minimum level of funding for their field, there are particular harms to risking your entire existing endowment in a way that could leave them and their work stranded and half-finished.
While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with all your existing funds can mean going far below zero in impact.
The fact that many risky actions can result in an outcome far worse than what would have happened if you simply did nothing is a reason for much additional caution, one that we wrote about in a 2018 piece titled 'Ways people trying to do good accidentally make things worse, and how to avoid them'. I regret that I failed to ask any questions that highlighted this critical point in the interview.
(This post won't address the many other serious issues raised by the risk-taking at FTX, which, according to news reports, have gone far beyond accepting the possibility of not earning much profit, and which can't be done justice here.
If those reports are accurate, the risk-taking at FTX was not just a coin flip that came up tails — it was immoral and perhaps criminal in itself, due to the misappropriation of other people's money for risky investments. This has resulted in incalculable harm to customers and investors, damaged trust in broader society, and set back all the causes some of FTX's staff said they wanted to help.)
———
To return to the question of declining returns and risk aversion — just as one slice of pizza is delicious but a tenth may not be enjoyable to eat at all, people trying to use philanthropy to do good do face 'declining marginal returns' as they incrementally try to give away more and more money.
How fast that happens is a difficult empirical question.
But if one is funding the fairly niche and neglected problems SBF said he cared the most about, it's fair to say that any foundation would find it difficult to disburse $15 billion to projects they were incredibly excited about.
That's because a foundation with $15 billion would end up being a majority of funding for those areas, and so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold, depending on how broad a net they tried to cast. That 'glut' of funding would result in some more mediocre projects getting the green light.
Assuming someone funds projects starting with the ones they believe will have the most impact per dollar, and then works down the list — the last grant made from such a large pot of money will be clearly worse, and will probably have less than half the expected social impact per dollar of the first.
So between $1 billion with certainty versus a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.[1]
Notice that by contrast, if I were weighing up a guaranteed $1 million against a 10% chance of $15 million, the situation would be very different. For the sectors I'd be most likely to want to fund, $15 million from me, spread out over a period of years, would represent less than a 1% increase, and so wouldn't overwhelm their capacity to sensibly grow, leading the marginal returns to decline more slowly. So in that case, setting aside my personal interests, I would opt for the 10% chance of $15 million.
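To illustrate how the answer can flip with scale, here is a rough sketch. It assumes, purely for illustration, that the social value of a cause area grows with the logarithm of its total funding, and it uses made-up round numbers for existing field budgets:

```python
# A rough sketch of why the answer can flip with scale, assuming (for
# illustration only) that the social value of a cause area grows with the
# logarithm of its total funding. The existing-budget figures are made up.
import math

def value_of_donation(amount, existing_field_budget):
    # Extra value from adding `amount` on top of what the field already receives.
    return math.log(existing_field_budget + amount) - math.log(existing_field_budget)

# Billion scale, niche field: $15b would be a multiple of current funding.
niche_budget = 7.5e9
certain_1b = value_of_donation(1e9, niche_budget)
gamble_15b = 0.1 * value_of_donation(15e9, niche_budget)
print(certain_1b > gamble_15b)   # True: the certain $1b comes out ahead.

# Million scale: $15m is well under 1% of current funding, so returns are
# nearly linear over that range and the gamble wins.
certain_1m = value_of_donation(1e6, 2e9)
gamble_15m = 0.1 * value_of_donation(15e6, 2e9)
print(certain_1m > gamble_15m)   # False: the 10% shot at $15m wins.
```

Where exactly the flip happens depends heavily on the assumed returns curve and baseline funding levels, which is the difficult empirical question mentioned above.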
———
Another massive real-world consideration we haven't mentioned yet which pushes in favour of risk aversion is the following: how much you are in a position to donate is likely to be strongly correlated with how much other donors are able to donate.
In practice, risk-taking around philanthropy mostly centres on investing in businesses. But businesses tend to do well and poorly together, in cycles, depending on broad economic conditions. So if your bets don't pay off, say, because of a recession, there's a good chance other donors will have less to give as well. As a result, you can't just take the existing giving of other donors for granted.
This is one reason for even small donors to have a reasonable degree of risk aversion. If they all adopt a risk-neutral strategy they may all get hammered at once and have to massively reduce their giving simultaneously, adding up to a big negative impact in aggregate.
This is a huge can of worms that has been written about by Christiano and Tomasik as far back as 2013, and more recently by my colleague Benjamin Todd.
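Here is a toy simulation of that aggregation problem, with entirely made-up numbers (two donors, $100m each, normally distributed investment returns):

```python
# A small simulation of the correlation point, with made-up numbers: two
# donors each hold a $100m endowment, and their annual investment returns are
# drawn from a bivariate normal. Correlated returns make *aggregate* giving
# capacity much more volatile than the independent case, even though each
# donor looks identical on their own.
import numpy as np

rng = np.random.default_rng(0)

def aggregate_budget_std(correlation, n_draws=100_000):
    cov = 0.3**2 * np.array([[1.0, correlation], [correlation, 1.0]])
    returns = rng.multivariate_normal(mean=[0.05, 0.05], cov=cov, size=n_draws)
    combined_budget = 100e6 * (1 + returns).sum(axis=1)  # both donors' funds together
    return combined_budget.std()

print(aggregate_budget_std(0.0))  # independent donors
print(aggregate_budget_std(0.8))  # correlated donors: noticeably larger swings
```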
———
This post has only scratched the surface of the analysis one could do on this question, and attempted to show how tricky it can be. For instance, we haven't even considered:
- Uncertainty about how many other donors might join or drop out of funding similar work in future.
- Indirect impacts from people funding similar work on adjacent projects.
- Uncertainty about which problems you'll want to fund solutions to in future.
I regret having swept those and other complications under the rug for the sake of simplicity in a way that may well have confused some listeners to the show and seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, which would in fact be severely misguided.
(If you'd like to hear my thoughts on FTX more generally as opposed to this technical question you can listen to some comments I put out on The 80,000 Hours Podcast feed.)
Misha_Yagudin @ 2022-11-26T16:22 (+73)
Apologies for maybe sounding harsh: but I think this is plausibly quite wrong and nonsubstantive. I am also somewhat upset that such an important topic is explored in a context where substantial personal incentives are involved.
One reason is that a post that does justice to the topic should explore possible return curves, and this post doesn't even contextualize the bet against how much money EA had at the time (~$60B) or has now (~$20B) until the middle of the post, where it mentions it in passing: "so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold." Arguing that some degree of risk aversion is indeed implied by diminishing returns is trivial and has few implications for practice.
I wish I had time to write about why I think altruistic actors probably should take a 10% chance of $15B over a 100% chance of $1B. The reverse being true would imply a very roughly ≥3x drop in marginal cost-effectiveness upon adding $15B of funding. But I basically think there would be ways to spend money scalably and at current "last dollar" margins.
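(For what it's worth, here is a minimal sketch of where a figure like 3x can come from. Assume, purely as an illustration, that the certain $1B would be spent at today's marginal cost-effectiveness $m_0$, and that marginal cost-effectiveness declines roughly linearly to $m_1$ over the extra $15B. Then preferring the certain $1B means

$$
1 \cdot m_0 \;>\; 0.1 \cdot 15 \cdot \frac{m_0 + m_1}{2}
\;\Longleftrightarrow\;
0.25\, m_0 \;>\; 0.75\, m_1
\;\Longleftrightarrow\;
m_1 \;<\; \tfrac{1}{3}\, m_0 ,
$$

i.e. the last dollar of the bigger pot would have to be worth less than a third of today's marginal dollar, under that crude linear-decline assumption.)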
In GH, this sorta follows from how OP's bar didn't change that drastically in response to a substantial change to OP funds (short of $15B, but still), and I think OP's GH last dollar cost-effectiveness changed even less.
In longtermism, it's more difficult to argue. But a bunch of grants that pass the current bar are "meh," and I think we can probably have some large investments that are better than the current ones in the future. If we had much more money in longtermism, buying a big stake in ~TSMC might be a good thing to do (and it preserves option value, among other things). And it's not unimaginable that labs like Anthropic might want to spend $10Bs in the next decade(s) to match the potential AI R&D expenses of other corporate actors (I wouldn't say it's clearly good, but having the option to do so seems beneficial).
I don't think the analysis above is conclusive or anything. I just want to illustrate what I see as a big methodological flaw of the post (not looking at actual returns curves when talking about diminishing returns) and make a somewhat grounded in reality case for taking substantial bets with positive EV.
Robert_Wiblin @ 2022-11-29T17:22 (+22)
Hi Misha — with this post I was simply trying to clarify that I understood and agreed with critics on the basic considerations here, in the face of some understandable confusion about my views (and those of 80,000 Hours).
So saying novel things to avoid being 'nonsubstantive' was not the goal.
As for the conclusion being "plausibly quite wrong" — I agree that a plausible case can be made for both the certain $1 billion or the uncertain $15 billion, depending on your empirical beliefs. I don't consider the issue settled, the points you're making are interesting, and I'd be keen to read more if you felt like writing them up in more detail.[1]
The question is sufficiently complicated that it would require concentrated analysis by multiple people over an extended period to do it full justice, which I'm not in a position to do.
That work is most naturally done by philanthropic program managers for major donors rather than 80,000 Hours.
I considered adding in some extra math regarding log returns and what that would imply in different scenarios, but opted not to because i) it would take too long to polish, ii) it would probably confuse some readers, iii) it could lead to too much weight being given to a highly simplified model that deviates from reality in important ways. So I just kept it simple.
I'd just note that maintaining a controlling stake in TSMC would tie up >$200 billion. IIRC that's on the order of 100x as much as has been spent on targeted AI alignment work so far. For that to be roughly as cost-effective as present marginal spending on AI or other existential risks, it would have to be very valuable indeed (or you'd have to think current marginal spending was of very poor value). ↩︎
pseudonym @ 2022-11-25T16:19 (+37)
Rob,
Thanks for this clarification and acknowledgement of what happened with the podcast. Hope you're doing better since your last post.
One question on how I should be interpreting the statements describing your views:
So it seems worth clarifying what I actually do think. In brief, I entirely agree with Matt Yglesias that:
- Returns to additional money are certainly not linear at large scales, which counsels in favour of risk aversion.
- Returns become sublinear more quickly when you're working on more niche cause areas like longtermism, relative to larger cause areas such as global poverty alleviation.
- This sublinearity becomes especially pronounced when you're considering giving on the scale of billions rather than millions of dollars.
- There are other major practical considerations that point in favour of risk-aversion as well.
———
While in the hypothetical your downside is meant to be capped at zero, in reality, 'swinging for the fences' with all your existing funds can mean going far below zero in impact.
The fact that many risky actions can result in an outcome far worse than what would have happened if you simply did nothing is a reason for much additional caution, one that we wrote about in a 2018 piece titled 'Ways people trying to do good accidentally make things worse, and how to avoid them'.
———
So between $1 billion with certainty versus a 10% chance of $15 billion, one could make a theoretical case for either option — but if it were me I would personally lean towards taking the $1 billion with certainty.
———
I regret having swept those and other complications under the rug for the sake of simplicity in a way that may well have confused some listeners to the show and seemed like an endorsement of an approach that is risk-neutral with respect to dollar returns, which would in fact be severely misguided.
Just wanted to clarify whether I'm meant to be interpreting these as "these are my views and they were my views at the time of the SBF podcast", or "In hindsight, I agree with these views now, but didn't hold this view at the time", or "I think I always believed this, but just didn't really think about this when we published the podcast", or something else?
The reason I ask is because the post makes it sound like the first interpretation, but if these were your views and always have been, to the point where you are saying an approach that is risk-neutral with respect to dollar returns would be "severely misguided", it seems difficult to reconcile that with the justification of publishing the relevant quote[1] as "for the sake of simplicity".
If you are happy to publish things like "you should just go with whatever has the highest expected value", "this is the totally rational approach" for the sake of simplicity when you actually don't endorse the claim (or even consider it severely misguided), what does that mean about other content on 80,000 Hours? What else has been published for the sake of "simplicity" that you actually don't endorse, or consider severely misguided? I find this option hard to believe because it's not consistent with the publication/editorial standards I expect from 80,000 Hours, nor its Director of Research, and it's an update I'm rather hesitant about making.
Sorry if this wasn't worded as politely or kindly as it could have been, and I hope you interpret me seeking clarification here as charitable. I'm aware there may be other possibilities I'm not thinking of, and wanted to ask because I didn't want to jump to any conclusions. I'm hoping this gives you an opportunity to clarify things for me and others who might be similarly confused.
Thanks!
Edit: Added this quote from the podcast, taken from davidc's comment below:
"But when it comes to doing good, you don’t hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral."
[1]
"If you were offered a 100% chance of $1 million to keep yourself, or a 10% chance of $15 million — it makes total sense to play it safe. You’d be devastated if you lost, and barely happier if you won.
But if you were offered a 100% chance of donating $1 billion, or a 10% chance of donating $15 billion, you should just go with whatever has the highest expected value — that is, probability multiplied by the goodness of the outcome [in this case $1.5 billion] — and so swing for the fences.
This is the totally rational but rarely seen high-risk approach to philanthropy championed by today’s guest, Sam Bankman-Fried. Sam founded the cryptocurrency trading platform FTX, which has grown his wealth from around $1 million to $20,000 million."
Robert_Wiblin @ 2022-11-29T17:17 (+30)
Thanks for the question Pseudonym — I had a bunch of stuff in there defending the honour of 80k/my colleagues, but took it out as it sounded too defensive.
So I'm glad you've given me a clear chance to lay out how I was thinking about the episode and the processes we use to make different kinds of content so you can judge how much to trust them.
Basically, yes — I did hold the views above about risk aversion for as long as I can recall. I could probably go find supporting references for that, but I think the claim should be believable because the idea that one should be truly risk neutral with respect to dollars at very large amounts just obviously makes no sense and would be in direct conflict with our focus on neglected areas (e.g. IIRC if you hold the tractability term of our problem framework constant then you get logarithmic returns to additional funding).
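(For readers curious about that parenthetical, here is roughly the logic as I understand it; a sketch using the usual importance × tractability × neglectedness decomposition rather than anything argued in this thread. If importance $I$ and tractability $T$ are held constant, and neglectedness means each extra dollar is a $1/R$ proportional increase in the resources $R$ already going to the problem, then

$$
\frac{dV}{dR} \;\propto\; I \cdot T \cdot \frac{1}{R}
\qquad\Longrightarrow\qquad
V(R) \;\propto\; I \cdot T \cdot \ln R ,
$$

so each doubling of a problem's funding adds roughly the same amount of value, i.e. logarithmic returns.)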
When I wrote that SBF's approach was 'totally rational', in my mind I was referring to thinking in terms of expected value in general, not to maximizing expected $ amounts, though I appreciate that was super unclear which is my fault.
Podcast interviews and their associated blog posts do not lay out 80,000 Hours staff's all-things-considered positions and never have (with the possible exception of Benjamin Todd talking about our 'key ideas').
They're a chance to explore ideas — often with people I partially disagree with — and to expose listeners to the diversity of views out there. For an instance of that from the same interview, I disagree with SBF on broad vs narrow longtermism but I let him express his views to provide a counterpoint to the ones listeners will be familiar with hearing from me.
The blog posts I or Keiran write to go with the episodes are rarely checked by anyone else on the team for substance. They're probably the only thing on the site that gets away with that lack of scrutiny, and we'll see whether that continues or not after this experience. So blame for errors should fall on us (and in this case, me).
Reasons for that looser practice include:
- They're usually more clearly summarising a guest's opinions rather than ours.
- They have to be imprecise, as podcast RSS feeds set a 4000 character limit for episode descriptions (admittedly we overrun these from time to time).
- They're written primarily to highlight the content of the episode so interested people can subscribe and/or listen to the episode.
- Even if the blog post is oversimplified, the interview itself should hopefully provide more subtlety.
By comparison our articles like key ideas or our AI problem profile are debated over and commented on endlessly. On this issue there's our short piece on 'How much risk to take'.
Not everyone agrees with every sentence of course, but little goes out without substantial review.
We could try to make the show as polished as articles, more similar to say, a highly produced show like Planet Money. But that would involve reducing output by more than half, which I think the audience would overall dislike (and would also sabotage the role the podcast plays in exposing people to ideas we don't share).
You or other readers might be curious as to what was going through my head when I decided to prioritise the aspect of expected value that I did during the interview itself:
- We hadn't explained the concept of expected value and ambition in earning to give and other careers very much before. Many listeners won't have heard of expected value, or if they have heard of it, won't know exactly what it is. So the main goal I had in mind was to get us off the ground floor and explain the basic case. As such these explanations were aimed at a different audience than Effective Altruism Forum regulars, who would probably benefit more from advanced material like the interview with Alan Hájek.
- The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others.
- I do wish I had pointed out that this only applies if they're not taking the same correlated risks as everyone else in the field — that was a predictable mistake in my view and something that wasn't as prominent in my mind as it ought to have been or is today.
- Among the tiny minority of people who are dealing with resources or careers at scales over $100 million, by that point they're now mostly thinking about these issues full-time or have advisors who do, and are likely to think up or be told the case for risk aversion (it should become obvious through personal experience to someone sensible in such a situation).
I do think I made a mistake ex ante not to connect personal and professional downside risk more into this discussion. We had mentioned it in previous episodes and an article I read which went out in audio form on the podcast feed itself, but at the time I thought of seeking upside potential, and the risk of doing more harm than good, as more conceptually and practically distinct issues than I do now after the last month.
Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero.
But if it were me I wouldn't update much on the quality of the written articles as they're produced pretty differently and by different people.
Linch @ 2022-11-30T00:26 (+6)
Overall I think I screwed up a bunch about this episode. If you were inclined to think that the content of these interviews is reliable the great majority of the time, then this is a helpful reminder that ideas are sometimes garbled on the show — and if something sounds wrong to you, it might well just be because it's wrong. I'm sorry about that, and we try to keep it at reasonable levels, though with the format we have we'll never get it to zero.
FWIW I've generally assumed that the content in those interviews is wrong pretty often; certainly I'd expect the average interview to have at least one egregious mistake.
I don't think this should be too surprising, being fully accurate for 2h+ on interesting topics is very hard.
pseudonym @ 2022-11-30T00:14 (+1)
Rob,
Thanks, I appreciated this response. I have a few thoughts, but I don't want the focus on pushbacks to give the impression I think negatively of what you said: overall I think it was a positive update. It's also easier for me to sit and push back and say things that just sound like hindsight bias, but I'm erring on the side of sharing them because I'm taking you at face value re: these being views you have held for as long as you can recall.
As you allude to below, I think it's really hard in a podcast setting to cover all the nuances and be super precise with language, and I think that's understandable. OTOH, from the 2020 EA survey: "more than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA." 80,000 Hours is one of the most public-facing EA organizations, and what it publishes will often be read as "what EA thinks". I think one initial reaction when this happened was something like "maybe 80,000 Hours doesn't really take that seriously enough" (the pushback Ben received online when tweeting the climate change problem profile was another example of how these kinds of public-facing concerns seemed to be underrated, especially because the tweet was later deleted), and I hope this will be considered more seriously when deciding what changes, if any, are appropriate going forward.
Another point: it seems a little weird to say the blog post gets away with less scrutiny because the interview provides more subtlety, and then not actually provide more subtlety in the interview, which is I think what happened here. If you can't explore the nuance during the podcast because of the format, that's understandable, but it doesn't seem reasonable to then also say you don't cover it in the accompanying blog post because you intend for the subtlety to be covered in the podcast. It's also not as though you're deciding whether to include layers 5 and 6 of the nuance, but whether to include a disclaimer about a view that you personally find severely misguided.
I guess one possible suggestion might be to review the transcript/blog post and add relevant caveats and disclaimers after the podcast (especially since there's already a relevant article you've published on it). A general disclaimer would be an even lower-cost version, but less helpful in this specific case, where you appear to have been putting aside your disagreement with SBF's views and actively not pushing back on them for the express purpose of better communication with listeners.
The great majority of listeners (99.9%+) are not dealing with resources on the scale of billions of dollars, and so in my mind the priority was to get them to see the case for not being very risk averse, inasmuch as they're a small fraction of all the effort going into their problem and aren't at risk of causing massive harm to others.
I do think it's important to consider harm to themselves and possibly their dependents as a consideration here, even if they aren't operating on the scale of billions. Also, while I agree with the point about the tiny minority etc., you probably don't want to stake the reputation of 80,000 Hours, or the EA movement more broadly, on whether or not your listeners or guests are 'sensible'.
I agree it seems valuable to let guests talk about points of disagreement, but where you do this it seems important to be clear at some stage whether you are letting them talk about their views because you want to showcase a different viewpoint, or at least to be clear that you aren't endorsing their message, especially if the message is a potentially harmful one. It also minimizes scenarios where you justify yourself quite reasonably, but outsiders or less charitable readers find it hard to tell the difference between the justification you've given in this comment and a world where you endorsed SBF's views and are now engaging in some combination of post-hoc rationalization and hindsight bias because things turned out poorly (in this case, I wouldn't consider it uncharitable if people thought you were in fact endorsing SBF's stated views, based just on the podcast and blog). I think this could be harmful not only for you, but also for 80,000 Hours and the EA movement more broadly.
Again, thanks for all your work. I'm aware it's easier for me to sit behind a pseudonym and throw critical comments over than to actually do the work you have to do, but I'm doing this with the intention of hopefully contributing to something constructive.
davidc @ 2022-11-26T02:11 (+29)
Seems worthwhile to quote the relevant bit of the interview:
====
Sam Bankman-Fried: If your goal is to have impact on the world — and in particular if your goal is to maximize the amount of impact that you have on the world — that has pretty strong implications for what you end up doing. Among other things, if you really are trying to maximize your impact, then at what point do you start hitting decreasing marginal returns? Well, in terms of doing good, there’s no such thing: more good is more good. It’s not like you did some good, so good doesn’t matter anymore. But how about money? Are you able to donate so much that money doesn’t matter anymore? And the answer is, I don’t exactly know. But you’re thinking about the scale of the world there, right? At what point are you out of ways for the world to spend money to change?
Sam Bankman-Fried: There’s eight billion people. Government budgets run in the tens of trillions per year. It’s a really massive scale. You take one disease, and that’s a billion a year to help mitigate the effects of one tropical disease. So it’s unclear exactly what the answer is, but it’s at least billions per year probably, so at least 100 billion overall before you risk running out of good things to do with money. I think that’s actually a really powerful fact. That means that you should be pretty aggressive with what you’re doing, and really trying to hit home runs rather than just have some impact — because the upside is just absolutely enormous.
Rob Wiblin: Yeah. Our instincts about how much risk to take on are trained on the fact that in day-to-day life, the upside for us as individuals is super limited. Even if you become a millionaire, there’s just only so much incrementally better that your life is going to be — and getting wiped out is very bad by contrast.
Rob Wiblin: But when it comes to doing good, you don’t hit declining returns like that at all. Or not really on the scale of the amount of money that any one person can make. So you kind of want to just be risk neutral. As an individual, to make a bet where it’s like, “I’m going to gamble my $10 billion and either get $20 billion or $0, with equal probability” would be madness. But from an altruistic point of view, it’s not so crazy. Maybe that’s an even bet, but you should be much more open to making radical gambles like that.
Sam Bankman-Fried: Completely agree. ...
Robert_Wiblin @ 2022-11-29T17:19 (+30)
Hey David, yep not our finest moment, that's for sure.
The critique writes itself so let me offer some partial explanation:
- Extemporaneous speech is full of imprecision like this where someone is focused on highlighting one point (in this case the contrast between appropriate individual vs altruistic risk aversion) and misses others. With close scrutiny I'm sure you could find many other cases of me presenting ideas as badly as that, and I'd imagine the same is true for all interview shows edited at the same level as ours.
Fortunately one upside of the conversation format is I think people don't give it undue weight, because they accurately perceive it as being scrappy in this way. (That said, I certainly do wish I had been more careful here and hopefully alarm bells will be more likely to go off in my head in a future similar case!)
I don't recall people criticising this passage earlier, and I suspect that's because prior to the FTX crash it was natural to interpret it less literally and as more pointing towards a general issue.
- You can hear that with the $10b vs $0/20b comparison, as soon as I said it I realised it wasn't right and wanted to pare it back ("Maybe that's an even bet"), because there's no expected financial gain there. I should have compared it against $5b or something but couldn't come up with the right number on the spot.
- I was primarily trying to think in terms of the sorts of sums the great majority of listeners could end up dealing with, which is only very rarely above $1b, which led me to add "or not really on the scale of the amount of money that any one person can make".
If you'd criticised me for saying this in May I would have said that I was highlighting the aspect of the issue that was novel and relevant for most listeners, and that by the time someone is a billionaire donor they will have / should have already gotten individualised advice and not be relying on an introductory interview like this to guide them. They're also likely to have become aware of the risk aversion issue just through personal experience and common sense (all the super-donors I know of certainly are aware of these issues, though I'm sure they each give it different weight).
All that said, the above passage is pretty cringe, and hopefully this experience will help us learn to steer clear of similar mistakes in future.
davidc @ 2022-11-29T19:38 (+1)
Thanks!
Nathan Young @ 2022-11-25T17:07 (+17)
While it isn't remotely what you were talking about, a point I often get confused about in my own head is how you "win" or "lose" this money. Losing financially is one thing, losing through crime is another.
If FTX had lost $10bn on a standard trade (like Meta's recent foray into VR, which may or may not turn out well) we'd be having a completely different discussion to the one we're having about what actually happened. In the FTX case, their behaviour looks to have lost far more than just the capital: they caused lots of harm and lost people's respect and goodwill as well. In that sense, the trade took their capital to 0 and then caused a load of damage besides. Ex ante it was much worse than it looked, even without a discussion of utility curves.
RobertJones @ 2022-11-26T15:10 (+4)
A confusion is introduced in the quoted passage by the shift from the personal to the general. You personally cannot lose more than all your assets, because of bankruptcy. But bankruptcy just shifts any further losses to your creditors, so once we shift to thinking about global benefits and harms, the loss is no longer capped in that way.
Sharmake @ 2022-11-25T17:14 (+1)
More generally, the assumption that returns can't be negative is the worrying assumption. Either SBF didn't realize that or he thought that depositors didn't matter.
Sam Elder @ 2022-11-27T01:53 (+6)
One under-discussed aspect of this: What shape does your utility curve look like for negative dollar returns?
I've been trying to figure out why SBF seems very much to have not just been risk-neutral in his business approach, but quite probably was actively risk-seeking, seeking to make correlated bets (mainly amounting to longs on crypto in general) that all crashed this year.
It seems quite possible to me that SBF saw the downside of Alameda/FTX losing $10B as not nearly as bad as the upside of them making $10B would be good. Consider:
- Depositors losing their money means that you're taking from people mostly in developed countries who likely have some cash to spare.
- SBF's parents are law professors who could probably help him legally if he ran into trouble.
- Even if SBF and the rest of his leadership end up in jail, that's only harm to a small number of people, compared to the many he could help in a positive situation.
- The ensuing media firestorm has at least made a larger number of people aware of the ideas of EA, which are compelling on their own independent of the goodness of their practitioners.
To be clear, I'm not endorsing this perspective at all... I'm just trying to see if SBF could have been reasoning along these lines, even if he wasn't doing so publicly.
For the rest of us, particularly those trying to act based on the funding being provided, I think it would have been far more helpful to actually examine the potential downside risk that SBF himself was already highlighting with his approach to risk.
This would have meant Rob asking questions like: "If you endorse these sorts of high-risk double-or-nothing bets, and you've made it clear that you're not letting up on that even now that you've made billions, should we anticipate a decent likelihood of hearing that FTX has gone bankrupt sometime soon?" Visualizing, and more broadly discussing, that very real possibility would hopefully have muted the impact on the EA community when it actually came to pass. And then, after dwelling on the seemingly-zero downside possibility, the natural follow-up question would dive into SBF's valuation of negative returns.
I feel like the story that Rob told fell into the classic winner's fallacy mindset of highlighting a risk someone took only after it seemed to have paid off. The issue was that those risks weren't just in the past.
david_reinstein @ 2022-11-25T19:02 (+4)
Do you think this presentation influenced/would have influenced SBF at all?
At the time, I assumed (and got the sense through his discussion) that he was extremely sophisticated, and given his quantitative and finance skills and background, would have already been taking these points on board (but just simplifying for presentation). But it's also possible for very smart and sophisticated people to overlook some obvious things, particularly if their brains are occupied in many areas at once.
Guy Raveh @ 2022-11-26T21:55 (+3)
I struggle to follow the logic that would permit this risk taking in the first place, even without all these caveats. As you said:
a foundation with $15 billion would end up being a majority of funding for those areas, and so effectively increase the resources going towards them by more than 2-fold, and perhaps as much as 5-fold... by contrast... $15 million from me, spread out over a period of years, would represent less than a 1% increase.
This is indeed a big difference. If you're looking at a small-ish donation, it makes sense to ask if it's uncorrelated with other similar donations, and if yes, to take the option with the higher expected value, because over a large number of such choices it's probable that the average donation would indeed have that value. In contrast, if you're looking at a donation in the billions of dollars, this EV logic is almost entirely irrelevant: even if it were uncorrelated with other donations, you don't have a hundred or a thousand donations of this size! The idea that we can actually expect to get the EV is just wrong. We in fact never get it.
So you can decide to be more or less risk averse, but you can't really pretend you're not risking a billion dollars here and hide behind EV maximisation.
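Here is a toy simulation of that point, with numbers picked only to match the example in the post (so nothing here models any actual donor):

```python
# One 10% shot at $15b never pays anything close to its $1.5b expected value:
# the realised outcome is either $15b or $0. A thousand independent 10% shots
# at $15m each, by contrast, land the total close to the $1.5b aggregate
# expectation almost every time.
import numpy as np

rng = np.random.default_rng(1)

one_big_bet = 15e9 * rng.binomial(1, 0.1, size=10_000)                  # $0 or $15b
many_small_bets = 15e6 * rng.binomial(1, 0.1, size=(10_000, 1000)).sum(axis=1)

print(np.mean(np.abs(one_big_bet - 1.5e9) < 0.5e9))      # ~0.0: never near the EV
print(np.mean(np.abs(many_small_bets - 1.5e9) < 0.5e9))  # ~1.0: almost always near it
```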
davidc @ 2022-11-26T02:16 (+3)
When I listened to the interview, I briefly thought to myself that that level of risk-neutrality didn't make sense. But I didn't say anything about that to anyone, and I'm pretty sure I also didn't play through in my head anything about the actual implications if Sam were serious about it.
I wonder if we could have taken that as a red flag. If you take seriously what he said, it's pretty concerning (implies a high chance of losing everything, though not necessarily anything like what actually happened)!