Personal fit is different from the thing that you already like
By Joris P @ 2024-03-17T20:56 (+80)
This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, fully thought through, or fully fact-checked.
This draft lacks the polish of a full post, but the content is almost there. The kind of constructive feedback you would normally put on a Forum post is very welcome.
I wrote most of this last year. I also think I’m making a pretty basic point and don’t think I’m articulating it amazingly, but I’m trying to write more and can imagine people (especially those newer to EA) finding this useful - so here we go.
Last week[1] I was at an event with a lot of people relatively new to EA - lots of them had recently finished the introductory fellowship. Talking through their plans for the future, I noticed that many of them used the concept ‘personal fit’ to justify their plans to work on a problem they had already found important before learning about EA.
They would say they wanted to work on combating climate change or increasing gender equality, because
- They had studied this and felt really motivated to work on it
- Therefore, their ‘personal fit’ was really good for working on this topic
- Therefore, surely, it was the highest-impact thing they could be doing.
I think a lot of them were likely mistaken, in one or more of the following ways:
- They overestimated their personal fit for roles in these (broad!) fields
- They underestimated the differences in impact between career options and cause areas
- They thought that they were motivated to do the most good they could, but in fact they were motivated by a specific cause
To be clear: the ideal standard here is probably unattainable, and I surely don’t live up to it. However, if I could stress one thing, it would be that people scoping out their career options could benefit from first identifying high-impact career options, and only second thinking about which ones they might have a great personal fit for - not the other way around.
[1] This was last year.
NickLaing @ 2024-03-18T05:24 (+13)
I probably agree, but this is really, really hard to do when you have spent most or all of your adult life being passionate about one cause or topic. Many would disagree with me, but my first step here would be encouraging people to maximize their impact in the path they are passionate about, while encouraging them to be part of the EA community/thinking in some way.
I think keeping someone engaged with thinking about maximizing good is more important than trying to push them hard towards cause neutrality.
David T @ 2024-03-18T20:15 (+7)
I agree narrowly with the idea that the option people could have the most impact in isn't necessarily the thing they like.
But the option a person could have the most early career impact in isn't necessarily what's on EA-recommended lists either:
- The fact that climate change is not "neglected" in terms of funding and organizations working on it actually means more good opportunities for the median nonspecialist graduate to find impactful early-career work in that field, whereas EA orgs are famously not easy to get into.
- Individual impact is not the same as organization-level or dollar-spend impact. A graduate with quantitative research skills probably won't produce very different output from anyone else GiveWell could have hired for that role instead, but they might radically change the impactfulness of a charity focused on gender-equality interventions. And there can be potentially very impactful interventions in QALY/WELLBY terms in gender equality, and low-impact or even negative interventions in x-risk or health fields. (Effective organizations probably aren't improved by hiring less engaged candidates, either.)
And they're not entirely wrong that doing stuff they're already engaged with is a "fit", even if they don't have any specialist skills in that field, because:
- There's an opportunity cost to investing time into learning about new fields rather than just doing.
- People generally perform better for longer at things they like, and comparative advantage is a thing.
- Some fields are much more accepting of "generalists" than others, especially some of the most-recommended EA fields.
So whilst it's entirely possible that people are engaging in motivated reasoning, there's also reason to be cautious about going the other way, and deferring too much to impactful cause area recommendations, or assuming that people can't be more impactful in cause areas the likes of OpenPhil think are not neglected.
Mjreard @ 2024-03-18T10:17 (+7)
I think this is a version of a more general form of motivated reasoning where one seeks out a variable in an argument which is:
- imprecise,
- ambiguous,
- dependent on multiple other hard-to-track variables, or
- a variable over which they can claim unique knowledge (here, 'what I am good at personally and how good at it I am')
which they can then ratchet up to the maximum value for things they want to believe and the minimum value for things they don't want to believe.
I noticed this acutely in the comments on the 80k/Rational Animations crossover video, namely things like "If you become a doctor, you don't know how many life-saving situations you run into" (imprecision about likelihoods) or "Dr. Nalin couldn't have achieved what he did without the help of many others, down to the bricklayers and garbagemen who provided the essentials he needed to focus" (ambiguity/dependencies about credit).
Finding low-confrontation ways to point such things out seems valuable. Maybe The Scout Mindset remains the best work here.
It is scary and painful for people to admit they were mistaken, especially about their basic narratives concerning what's valuable or what they intended to do with their lives. I'd guess highlighting that truth-seeking is a broader, more-endorsed narrative – that also implies lots of changing your mind – is one way to shake people out of these more contingent narratives.
Amber Dawn @ 2024-03-19T09:46 (+6)
I guess I weakly disagree: I think that motivation and already having roots in an issue really are a big part of personal fit - especially now that lots of "classic EA jobs" seem highly oversubscribed, even if the cause areas are more neglected than they should be.
Like, to make this more concrete: imagine your climate-change-motivated young EA thinks 'well, now that I've learnt about AI risk, I guess I should pursue that career?', but they don't feel excited about it. Even if they have the innate ability to excel in AI safety, they will still have to outcompete people who have already built up expertise there, many of whom will find it easier to motivate themselves to work hard because they are interested in AI.
(On the object level, I assume that many roles in climate change and gender equality stuff are in fact more impactful than many roles in more canonical EA cause areas).
Lorenzo Buonanno @ 2024-03-19T17:11 (+2)
See Holden Karnofsky's aptitudes-based perspective.
I definitely agree that "some people scoping out their career options could benefit from first identifying high-impact career options, and only second thinking about which ones they might have a great personal fit for". But others could benefit from the opposite consideration, especially when taking into account moral and epistemic uncertainty about the relative value of different cause areas, and replaceability in areas where they would be limited to less specialized roles.
I think there's a real tension between "it's best for everyone to just work on their favourite thing" and "it's best for everyone to go work at OpenAI on AI Policy," and people make mistakes in both directions, both in their own careers and when giving advice to others. I personally believe that there are enough high-impact opportunities in climate change (esp. considering air quality) and gender equality (esp. in a global sense) for them to be great areas in which to build aptitudes and do the most good, but it's definitely not a given.
To be clear, I don't think this post says anything wrong, and I agree with it; although I don't see the same recommendation often made to people who work on mechanistic interpretability or cause-prioritization because they already liked it. (It's usually people criticizing the EA movement that say things like: "There are a lot of people in EA who just wanted a legitimate reason or excuse to sit around and talk about these big questions. But that made it feel like it’s a real job and they’re doing something good in the world instead of just sitting in a room and talking about philosophy.")
Vasco Grilo @ 2024-03-20T15:40 (+4)
Nice point, Joris! Relatedly, readers may want to check 80,000 Hours' new series on building skills.
If we were going to summarise all our advice on how to get career capital in three words, we’d say: build useful skills.
In other words, gain abilities that are valued in the job market — which makes your work more useful and makes it easier to bargain for the ingredients of a fulfilling job — as well as those that are specifically needed in tackling the world’s most pressing problems.
So today, we’re launching our series on the most useful skills for making a difference — which you can find here. It covers why we recommend each skill, how to get started learning them, and how to work out which is the best fit for you.
Ardenlk @ 2024-03-18T13:43 (+4)
I like this post and also worry about this phenomenon.
When I talk about personal fit (and when we do so at 80k) it's basically about how good you are at a thing/the chance that you can excel.
It does increase your personal fit for something to be intuitively motivated by the issue it focuses on, but I agree that it seems way too quick to then conclude that your personal fit for that issue is higher than for other things (since there are tons of factors, and there are also lots of different jobs for each problem area), let alone that you should therefore work on that issue all things considered (since personal fit is not the only factor).