Log-normal lamentations
By Gregory Lewis🔸 @ 2015-05-19T21:07 (+28)
[Morose. Also very roughly drafted. Cross]
Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.
There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most (including the median) fall below the average, which is pulled up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population, to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
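To make the skew point concrete, here’s a minimal simulation sketch (the numbers are purely illustrative – an IQ-like score and a made-up admission cut-off): select only those above a threshold from a normal population, and the survivors form a positively skewed group whose median sits below its own mean.

```python
# Illustrative sketch: thresholded selection from a normal population
# yields a positively skewed group whose median sits below its own mean.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=100, scale=15, size=1_000_000)  # IQ-like scores

threshold = 130                                 # hypothetical admission cut-off
selected = population[population >= threshold]  # only those above the bar

print(f"selected mean:   {selected.mean():.2f}")
print(f"selected median: {np.median(selected):.2f}")
# The mean comes out above the median: most of the selected group sit below
# their own group average, which is pulled up by a thin tail of outliers.
```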
Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I am (more knowledgeable, kinder, more skilled in communication, etc.), it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you are will still be extraordinarily rare, even if you had a good yardstick for comparing them to yourself.
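The thinness of the tails can be made precise with the normal survival function; as a rough sketch, the share of a population more than k standard deviations above the mean collapses very quickly as k grows.

```python
# Illustrative sketch: the share of a normal population lying more than
# k standard deviations above the mean falls off very quickly with k.
from scipy.stats import norm

for k in range(1, 5):
    share = norm.sf(k)  # survival function: P(Z > k) for a standard normal
    print(f"above +{k} SD: {share:.6f}  (~1 in {1 / share:,.0f})")
# above +1 SD: ~0.16 (about 1 in 6)
# above +4 SD: ~0.00003 (about 1 in 31,600)
```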
Look at our thick-tailed works, ye average, and despair! 2
One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.
Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper-bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
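As a rough illustration of the difference heavy tails make (with made-up parameters rather than any real income or citation data), compare how far the 99th percentile sits above the median under a normal versus a log-normal distribution:

```python
# Illustrative sketch with made-up parameters: under a normal distribution the
# 99th percentile is only modestly above the median; under a log-normal it can
# be orders of magnitude above it.
import numpy as np

rng = np.random.default_rng(1)
normal_sample = rng.normal(loc=100, scale=15, size=1_000_000)
lognormal_sample = rng.lognormal(mean=np.log(100), sigma=1.5, size=1_000_000)

for name, sample in [("normal", normal_sample), ("log-normal", lognormal_sample)]:
    p50, p99 = np.percentile(sample, [50, 99])
    print(f"{name:10s} 99th / 50th percentile: {p99 / p50:.1f}x")
# normal:     ~1.3x – the 'best' are only a little ahead of the typical
# log-normal: ~30x  – a few superstars dwarf the median performer
```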
Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. All told, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can leave university with starting salaries higher than my peak expected salary, and so can give away more than ten times what I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error next to their work.
A shattered visage
Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields’: suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super-scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.
So there are a few ways an Effective Altruist mindset can depress our egos:
- It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.
- ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.
- (Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).
- Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high, even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.
What remains besides
I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it than to mount an implausible defence.
In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify, as (combinatorially) it will be rarer to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4
Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but such news should be broken with care. 5
‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable all else equal that rivals choose charitable donations rather than Veblen goods to be the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6
Notes:
- As further bad news, there may be a progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around the median in a positively skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around the median in the population of Fields medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).
- I wonder how much this post is a monument to the grasping vaingloriousness of my character…
- Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.
- Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.) First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object level disagreement than on trying to evaluate the strength and relevance of particular biases.
- Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them.
- cf. Max Ehrmann, who put it well:
… If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.
Enjoy your achievements as well as your plans. Keep interested in your own career, however humble…
null @ 2015-05-21T23:48 (+5)
If anyone is ever at a point where they are significantly discouraged by thoughts along these lines (as I've been at times), there's an Effective Altruist self-help group where you can find other EAs to talk to about how you're feeling (and it really does help!). The group is hidden, but if you message me, I can point you in the right direction (or you can find information about it on the sidebar of the Effective Altruist facebook group).
null @ 2015-05-22T00:26 (+3)
I wonder if there's a large amount of impact to be had in people outside of the tail trying to enhance the effectiveness of people in the tail (this might look like being someone's personal assistant or sidekick, introducing someone in the tail to someone cool outside of the EA movement, being a solid employee for someone who founds an EA startup, etc.)? Being able to improve the impact of someone in the tail (even if you can't quantify what you accomplished) might avert the social comparison aspect, as one would feel able to take at least partial credit for the accomplishments of the EA superstars.
null @ 2015-05-20T22:28 (+2)
Thanks for writing this. I often feel quite similar - I can find being in contact with so many amazing people either inspiring or oddly demotivating!
That's the old status-conscious monkey brain talking (everyone's grasping vaingloriousness), and we shouldn't feed it, but it's good to acknowledge that it's there from time to time.
Overall, I think the EA movement is pretty good at being positive. I've found such criticism as there is is usually self-criticism - if anything, I find people to be unusually generous with praise, which is lovely. I think you hit the nail on the head with your four sources of ego-damage. And yeah, I think the right thing to do is to try and not be bothered. For bonus points, remember to praise people when they do good things!
null @ 2015-05-20T03:40 (+1)
What Ryan said. There’s a lot of academic produce in the movement that I’m in awe of, but since we’re working toward a common goal, it doesn’t irk me ego-wise in the least. Rather it motivates me to learn to produce such output myself. It also makes me want to hug the authors a lot.
What I do find worrying is the prospect of having one’s own work displaced by superior work that didn’t build upon the first. That’s a pity, and I don’t know of any perfect way to avoid it. Publishing quickly and incrementally could help, and in some areas it may be possible to coordinate such things with others. But surely that doesn’t solve the problem in general.
null @ 2015-05-19T22:02 (+1)
Life's not all a competition, Greg! ;)
null @ 2015-05-19T21:38 (+1)
I can think of a few key dystopias (Brave New World, The Rise of the Meritocracy 1870-2033, We) that have utilitarian reasoning taken to a drastic conclusion in some way. Many of them point to this dynamic, and the implicit critique is that to think about human value in this way is base. But we have this kind of comparative social destruction permeating our society anyway at the moment; it's just more general than the points EA makes. Nice article and great to explore – another risk of the movement we should register! :)
Perhaps it's the virtue of decision-theoretically consistent behaviour that we should be praising and celebrating, as well as the big-splash results?
null @ 2015-05-20T13:12 (+7)
I've often found the EAs around me to be
(i) very supportive of taking on things that are ex ante good ideas, but carry significant risk of failing altogether, and
(ii) good at praising these decisions after they have turned out to fail.
It doesn't totally remove the sting to have those around you say "Great job taking that risk, it was the right decision and the EV was good!" and really mean it, but I do find that it helps, and it's a habit I'm trying to build to praise these kinds of things after the fact as much as I praise big successes.
Of course there is some tension; often, if a thing fails to produce value, it's useful to figure out how we could have anticipated that failure, and why it might not have been the right decision ex ante. Balance, I guess.