Why does Elon Musk suck so much at calibration?

By Evan_Gaensbauer @ 2022-11-06T18:10 (+8)

I won't moralize about Elon Musk as a personality, but what should matter more to effective altruists anyway is how the impacts of his various decisions are, for lack of a better term, high-variance, bordering on volatile. From the outside view, there are stock answers to the question of why he would be like this.

Problems like these are recognized in EA already. During the first few years, one of EA's growing pains was learning to recognize when a top young Ph.D. with no experience outside academia shouldn't be the CEO of an organization doing something totally different from academic research. Another was learning how to point that out to people in EA in positions of status, authority, or control over resources.

This isn't a snide jab at Will MacAskill. He in fact recognized this problem before most and has made the wise choice of not being CEO of CEA for a decade now, even though he could have kept the job indefinitely if he had wanted to. This is a general problem in EA: many academics have had to learn, repeatedly, that they have little to no comparative advantage, if not a comparative disadvantage, in people and operations management. There is now such a fear in EA of criticizing the decisions or views of high-status leaders like Holden Karnofsky that it has become a major liability to the movement. Meanwhile, Holden writes entire series of essays trying to make transparent his own reasoning for why he oversees an organization that hires a hundred people to tell him how the world really works and how to do the most good in umpteen different ways.

Some of the individuals about whom there is the greatest concern that they may end up at the center of a personality cult, information silo, or echo chamber, like Holden, are putting in significant effort to avoid becoming out of touch with reality and to minimize any negative, outsized impact of their own biases. Yet it's not apparent that Musk makes any similar effort. So what reasons, if any, specific to Musk as a personality cause him to be so inconsistent in the ways effective altruists should care about most?


Geoffrey Miller @ 2022-11-06T23:25 (+4)

I'm having a bit of trouble reading between the lines here.

Is this post complaining about Elon Musk taking over Twitter, as if that's a bad thing? Or about him being outspoken and controversial in general?

There is a highly coordinated smear campaign against Elon Musk happening now across many news outlets, from people who are politically opposed to free speech. But I do not think that EAs should take the smear campaign very seriously.

Evan_Gaensbauer @ 2022-11-09T00:58 (+2)

I acknowledged in some other comments that I wrote this post sloppily, so I'm sorry for the ambiguity. Musk's recent purchase of Twitter and its ongoing consequences are part of why I've made this post. It's not about it being bad that he bought Twitter; it's about the series of mistakes he has made in the course of buying it.

It's not about him being outspoken and controversial. The problem is Musk's not being sufficiently risk-averse and potentially having blindspots that could have a significant negative impact on his EA-related/longtermist efforts.

Jackson Wagner @ 2022-11-09T00:46 (+2)

Agree with Geoffrey that it is very hard to understand this post without examples of what is meant by Elon's "calibration".  What do you mean in your very last sentence: "what, if any, are the reasons specific to Musk as a personality causing him to be so inconsistent in the ways effective altruists should care about most"?  Please give some examples -- are you implying that buying Twitter in the hopes of making conversation freer and more rational is not a good EA cause area?  Or implying that maybe it is a good EA cause area, but Musk is a terrible person to run said project?  Or implying that Musk's other projects, like SpaceX and Tesla, are a waste of effort from an EA perspective?  (I would remind you that Elon's goal has not just been to work on the most important possible cause areas with the money he has, but to found profitable companies that make progress on important-ish causes, such that he can get more money to roll into more important causes in the future.  Evidently one can make lots of money in electric car manufacturing that one can't make in bednet distribution or lobbying for better pandemic preparedness policy.)  Maybe you agree with my parenthetical, but you think that Twitter will not be a moneymaking proposition for Elon, or that he should give up on trying to get richer and richer and switch now to working on the most important EA causes.

About twitter, I would note that Elon has been in charge for just a few days -- I don't think it's clear yet whether Elon had an "uncalibrated" sense of his capabilities and will ruin Twitter through incompetence, or if he will succeed at improving it.  Maybe after a few months or a few years, the answer of whether Musk's ownership has been good or bad for Twitter will be more clear.

More generally, I would think that many attempts to launch billion-dollar companies are subject to "high variance" -- that is just an unfortunate fact of life when you are trying to do ambitious things.  Many of Elon's companies have been close to bankruptcy at one point or another, but so far they have made it through.  Conversely, nobody doubts that Sam Bankman-Fried is a very smart guy, but FTX (although it may have been very close to succeeding and becoming even bigger than it was) is currently being forced to sell itself to Binance for pennies on the dollar.

Personally, I take pride in the EA community's enthusiasm for "hits-based giving", and its willingness to consider low-probability, high-consequence events seriously.  Unfortunately, taking action in this complex world requires making decisions under high uncertainty (including uncertainty about one's own capabilities and strengths/weaknesses).  For instance, I aspire to someday found an EA-aligned charitable organization, even though my only previous job experience has been as an aerospace engineer.  It's possible that I am deluded about my personal charity-running capacities, and it's possible that I'm furthermore deluded such that I'll never be able to recognize the ways in which I'm deluded about my charity-running capacities.  But I think in this situation, it is often reasonable to go ahead and found the charity anyways -- otherwise fear and uncertainty will preclude any ambitious action!  As Nathan Young says about SBF and the implosion of FTX -- "It is unclear if ex-ante this was a bad call from them. There is lots we don't know."

Evan_Gaensbauer @ 2022-11-09T04:44 (+4)

None of Musk's projects are by themselves bad ideas. None of them are obviously a waste of effort either. I agree the impacts of his businesses are mostly greater than the impact of his philanthropy, while the opposite is presumably the case for most philanthropists in EA. 

I agree his takeover of Twitter so far doesn't strongly indicate whether Twitter will be ruined. He has, however, made it much harder for himself to achieve his goals with Twitter through the many mistakes he made over the last year in the course of buying it.

The problem is that he is someone able to have an impact that is based strictly in neither business nor philanthropy. A hits-based approach built on low-probability, high-consequence events will sometimes include a low risk of highly negative consequences. The kind of risk tolerance associated with a hits-based approach doesn't work when misses could be catastrophic:

  • His attempts in the last month to intervene in the war in Ukraine and disputes over Taiwan's sovereignty seem to speak for themselves as at least a yellow flag. That's enough of a concern even ignoring whatever impacts he has on domestic politics in the United States. 
  • Some consider the founding of OpenAI, and effective altruism's involvement in it, one of the worst mistakes in the history of AI safety/alignment; it remains debatable whether the organization will be a net positive for AI alignment. Elon Musk played a crucial role in OpenAI's founding and, since distancing himself from the organization, has acknowledged he made mistakes with it. In general, the overall impact he has had on AI alignment is ambiguous. Other than world leaders, he remains one of a small number of individuals with the most capability to shape public responses to advancing AI, though it's not clear whether or how much he could be relied on to have a positive impact on AI safety/alignment in the future.

These are only a couple of examples of the potential impact and risks of decisions he makes that are unlike anything any individual in EA has done before. An actor in his position should feel a greater degree of fear and uncertainty, enough at least to inspire more caution. My assumption is that he isn't cautious enough. I asked my initial question in the hope that the causes of his recklessness can be identified, to aid in formulating adequate protocols for responding to potentially catastrophic errors he may commit in the future.