Thoughts on doing good through non-standard EA career pathways

By Buck @ 2019-12-30T02:06 (+171)

(Thanks to Beth Barnes, Asya Bergal, and especially Joshua Teperowski-Monrad for comments. A lot of the ideas in this post originated with friends of mine who didn’t want to write them up; they deserve most of the credit for the good ideas, and I just organized them and wrote them up.)

80,000 Hours writes (under the heading “Apply an unusual strength to a needed niche”):

If there’s any option in which you might excel, it’s usually worth considering, both for the potential impact and especially for the career capital; excellence in one field can often give you opportunities in others.
This is even more likely if you’re part of a community that’s coordinating or working in a small field. Communities tend to need a small number of experts covering each of their main bases.
For instance, anthropology isn’t the field we’d most often recommend someone learn, but it turned out that during the Ebola crisis, anthropologists played a vital role, since they understood how burial practices might affect transmission and how to change them. So, the biorisk community needs at least a few people with anthropology expertise.

I think that there are many people who do lots of good by pursuing a career that they’re a particularly good fit for, rather than by trying to fit themselves into a top-rated EA career. But I also think it’s pretty easy to pursue such paths in a way that isn’t very useful. In this post I’m going to try to build on this advice by describing some features of how I think these nonstandard careers should be pursued in order to maximize impact.

I’m going to use the term “nonstandard EA career”, somewhat misleadingly, to mean “a career that isn’t one of 80K’s top suggestions”. (I’m going to abbreviate 80,000 Hours as 80K.)

I’m not very confident in my advice here, but even if the advice is bad, hopefully the concepts and examples are thought-provoking.

Doing unusual amounts of good requires unusual actions

If you want to do an unusual amount of good, you probably need to take some unusual actions. (This isn’t definitionally true, but I think most EAs should agree on it; at the very least, most EAs think you can do much more good than most people do by donating an affordable but unusual share of your income to GiveWell-recommended nonprofits.)

One approach to this is working in a highly leveraged job on a highly leveraged problem. This is the approach suggested by the 80K career guide. They came up with a list of career options, like doing AI safety technical research, working at the CDC on biosecurity, or working at various EA orgs, which they think are particularly impactful and which have room for a bunch of EAs.

Another classic choice is earning to give (EtG): donating to unusually effective nonprofits. With this plan you don’t have to choose a particularly specific career path (though taking a specific career path can be extremely helpful); the unusual effectiveness comes from the choice to donate to an unusually effective place.

The nice thing about taking one of those paths is that you might not need to do anything else unusual in order to have a lot of impact.

There are also some reasons to consider doing something that isn’t EtG or an 80K recommendation. For example:

But if you go into a career that wasn’t as carefully selected for impact, then you’re going to have to do something unusual in order to be highly impactful. Phrased differently: if you want to do as much good as the 50th-percentile operations staffer at an EA org, I suspect you’re going to have to do more good than the 90th-percentile anthropologist, and probably more than the 99th-percentile one.

(I got worried that someone was going to comment on this post with incontrovertible proof that anthropologists actually do lots of great work, so I spent a wholly unreasonable amount of time investigating this. Some notes on that:

  • The American Anthropological Association (AAA) has a page about Careers in Anthropology here, which you can take a look at. It doesn’t mention any careers which look as good to me as preventing pandemics.
  • The AAA did a survey of anthropologists which found that 12% of respondents with anthropology master’s degrees worked in humanitarian work, health, or international development.
  • When I click around on anthropology career websites, they don’t mention this kind of work very much.
  • On the other hand, this paper says “The number of local anthropologists engaged as partners in malaria control efforts was impressive, although largely not published in medical anthropological journals. It is important to remember that, at this time, from the perspective of the academic field of medical anthropology, applied research for health programs as clients was considered research of low value”, which implies otherwise.
  • Also, when I skimmed most of an anthropology textbook recently, it didn’t seem to put emphasis on the possibility for anthropologists to do useful altruistic work.)

When I think about how people can do unusual amounts of good through a career path which doesn’t have good average impact outcomes, a few particular strategies stand out to me, including:

“Good judgement”

I think there’s an important distinction to be drawn between nonstandard EA career plans which require you to have great epistemics and judgement and career paths which don’t. But to describe that distinction, I first need to explain what I mean by “good judgement”.

When I say someone has good judgement, I mean that I think they’re good at the following things:

These skills allow people to do things like the following:

I think it’s likely that there exist things you can read and do which would make you better at having good judgement about what’s important in a field and at strategically pursuing high-impact opportunities within it. I suspect that other people have better ideas, but here are some guesses. (As I said, I don’t think that I’m overall great at this, though I think I’m good at some subset of this skill.)

Doing good in a way that requires self-direction and good judgement

In this kind of career path, you try to produce value by gaining some expertise that EA doesn’t have, and then doing good work that otherwise wouldn’t be done by people in that field.

EA has a lot of questions which an excellent historian could answer. I think EAs have done useful work researching (among other things) the history of long-range forecasting, early field growth, and nuclear policy. And it would be great to have more similar work.

However, to do the kind of work that I’d be excited about seeing from historians, you’d need to have skills and intellectual intuitions that are quite different from those usually expected of a historian. For example, I want people with a skeptical mindset, who take quantitative and big-picture approaches when those approaches are reasonable, who have patience for careful fact-checking, and who have a good sense of which topics would be useful to EA for them to study. Some historians have these properties, but I think they’re not the main qualities that history as an academic discipline tries to instill in the people being trained up by its PhD programs.

And so I don’t think that EA will get the kind of historians it could use by finding people who already want to be historians and who fit in great with historians, and telling them to go learn from the historians how to do history. This is tricky, because I think that might describe the majority of people who are tempted by that section of the 80K website to keep studying history.

I think that this mismatch is closely related to the fact that we want EA historians at all. If you didn’t have to be an unusual historian to maximize your EA impact, then EA could just hire normal historians to do whatever history work it needed done. (I suspect that even though biosecurity needs some anthropologists, no EAs motivated by biosecurity should go into anthropology.)

You can break down the kinds of helpful unusualness into two broad categories, which correspond to reasons that we can’t just hire historians:

For both of these categories, you’re relying on the quality of your own judgement more than you would be if you were following a more standard career path, because you’re more on your own: very few EAs will know which questions it would be really helpful for an EA historian to pursue, or which methods of historical research are effective at answering a particular type of historical question.

I think the quality of judgement required here is pretty high, and that you should consider trying to figure out how good your judgement is (or how good it could become) before you start going down career paths which won’t be helpful if you don’t have good judgement. I don’t think my judgement is good enough that I’d be comfortable trying to do good by going into a field where I wasn’t able to get good advice from EAs with better judgement than me; I feel like if I tried really hard to improve my quality of judgement, I might get good at it but I probably wouldn’t. (That said, I think it’s worth my time to practice having good judgement.)

This makes me think that for a lot of nonstandard EA career paths like this, the required level of commitment and context on EA is higher than it would be if you’re doing engineering at OpenAI or something like that. And I think that if you don’t have that commitment and context, you’re likely to fail to have much impact.

Another example

I recently talked to an EA who’s been working as a software engineer at a respectable tech company for the last two years. They were considering taking a job where they’d be developing technology which would make it more secure to use cloud computing for sensitive computations.

I think that novel mechanisms for secure computation might be helpful for AI x-risk and maybe other things. I’d like it if there was someone who was paying close attention to developments in this field, and trying to figure out which developments are important, and making friends with various experts in different parts of the field.

However, I could also imagine this person going into that field and it not being helpful at all. I imagine them getting really focused on the subproblem they happen to be employed to work on, or accepting a promotion that leaves them with expertise in something other than the technical questions that are most helpful.

Overall I said I thought that taking the job was a good idea.

Doing good, web developer style

There are lots of EAs who do a lot of good by pursuing jobs which don’t require these difficult judgement calls; in these cases, they try to do good by pursuing a career that they have good personal fit for, where the good is done via working for someone else. For example: web development, operations, management, marketing, communications.

These jobs are sometimes hard to fill. For example: somehow, Ought hasn’t yet hired an engineering team lead, despite the fact that there are many full-stack engineers in EA, Ought works on a cause area that many EAs consider a top priority, and it has strong endorsements from respected members of the EA community. I’ve seen a few other occasions where promising EA projects were bottlenecked on web development effort, for example CEA a few years ago and more recently LessWrong and OpenAI.

I think that a reasonable strategy for impact would be to say “I’m a web developer, so I’m going to become an excellent god damn web developer and then eventually someone’s going to hire me and I’m going to do an amazing job for them”.

AFAICT, if you want impact via this kind of strategy, you need to do a few things right:

I think that if you’re not willing/able to join pretty speculative or early stage roles that aren’t well defined yet, you’re missing out on like 75% of the expected impact. This is fine, but I don’t want people to do it by accident.

We can also analyse this EA career plan through the lens of why we can’t hire non-EAs to do the stuff:

So I think that you can do good via paths like this, but again, it’s not exactly an easy option: to do it well, it’s helpful to be somewhat strategic, and it’s important to have a level of flexibility which is empirically unusual. I think people following paths like this might be able to substantially increase their impact by specifically thinking about how they can increase the probability that they are hired to do useful direct work someday.

Conclusion

If you want to do an unusual amount of good, you obviously have to be doing something unusual. If you follow a standard top-recommended EA career, you might not need to do anything more unusual than that to have an impressive impact.

I don’t have solid evidence for this, but I am kind of worried that people might make a mistake where they go from “It’s possible to do lots of good in nonstandard careers” and “The thing I currently want to do involves doing a nonstandard career” to “It’s possible to do lots of good by doing the thing I currently want to do”. My guess is that if you want to do lots of good in a nonstandard EA career, you need to do something nonstandard within that career.



Stefan_Schubert @ 2019-12-30T10:48 (+55)

Thanks for this post. I think discussions about career prioritisation often become quite emotional and personal in a way that clouds people's judgements. Sometimes I think I've observed the following dynamic.

1. It's argued, more or less explicitly, that EAs should switch career into one of a small number of causes.

2. Some EAs are either not attracted to those careers, or are (or at least believe that they are) unable to successfully pursue those careers.

3. The preceding point means that there is a painful tension between the desire to do the most good, and one's personal career prospects. There is a strong desire to resolve that tension.

4. That gives strong incentives to engage in motivated reasoning: to arrive at the conclusion that actually, this tension is illusory; one doesn't need to engage in tough trade-offs to do the most good. One can stay on doing roughly what one currently does.

5. The EAs who believe in point 1 - that EAs should switch career to other causes - are often unwilling to criticise the reasoning described in 4. That's because these issues are rather emotional and personal, and some may think it's insensitive to criticise people's personal career choices.


I think similar dynamics play out with regards to cause prioritisation more generally, decisions whether to fund specific projects which many feel strongly about, and so on. The key aspects of these dynamics are 1) that people often are quite emotional about their choice, and therefore reluctant to give up on it even in the face of better evidence and 2) that others are reluctant to engage in serious criticism of the former group, precisely because the issue is so clearly emotional and personal to them.


One way to mitigate these problems and to improve the level of debate on these issues is to discuss the object-level considerations in a detached, unemotional way (e.g. obviously without snark); and to do so in some detail. That's precisely what this post does.

Denise_Melchin @ 2020-01-04T19:56 (+19)

5. also has a negative impact on the people who are trying to decide between different career options and would actually be happy to hear constructive criticism. I often feel like I cannot trust others to be honest in their feedback if I'm deciding between career options because they prefer to be 'nice'.

SiebeRozendal @ 2020-01-11T12:41 (+18)

Can I add the importance of patience and trust/faith here?

I think a lot of non-standard career paths involve doing a lot of standard stuff to build skill and reputation, while maintaining a connection with EA ideas and values and keeping an eye open for unusual opportunities. It may be 10 or 20 years before someone transitions into an impactful position, but I see a lot of people disengaging from the community after 2-3 years if they haven't gotten into an impactful position yet.

Furthermore, trusting that one's commitment to EA and self-improvement is strong enough to lead to an impactful career 10 years down the line can create a self-fulfilling prophecy where one views their career path as "on the way to impact" rather than "failing to get an EA job". (I'm not saying it's easy to build, maintain, and trust one's commitment though.)

In addition, I think having good language is really important for keeping these people motivated and involved. We have "building career capital" and Tara MacAulay's term "Journeymen", but these are not catchy enough, I'm afraid.

SiebeRozendal @ 2020-01-11T12:59 (+8)

This might just be restating what you wrote, but regarding learning unusual and valuable skills outside of standard EA career paths:

I believe there is a large difference in the context of learning a skill. Two 90th-percentile historians with the same training would come away with very different usefulness for EA topics if one learned the skills keeping EA topics in mind, while the other only started thinking about EA topics after their training. There is something about immediately relating and applying skills and knowledge to real topics that creates more tailored skills and produces useful insights throughout the process, which cannot be recreated by combining EA ideas with the content knowledge/skills at the end of the learning process. I think this relates to something Owen Cotton-Barratt said somewhere, but I can't find where. As far as I recall, his point was that 'doing work that actually makes an impact' is a skill that needs to be trained, and you can't just first get general skills and then decide to make an impact.

Personally, even though I did a master's degree in Strategic Innovation Management with longtermism ideas in mind, I didn't have enough context and engagement with ideas on emerging technology to apply the things I learned to EA topics. In addition, I didn't have the freedom to apply the skills. Besides the thesis, all grades were based on either group assignments or exams. So some degree of freedom is also an important aspect to look for in non-standard careers.

Lukas_Finnveden @ 2020-01-12T21:22 (+2)

Owen speaks about that in his 80k interview.

Aaron Gertler @ 2020-05-22T03:56 (+3)

This post was awarded an EA Forum Prize; see the prize announcement for more details.

My notes on what I liked about the post, from the announcement:

Many people who want to do a lot of good are pursuing or will pursue careers that aren’t among the top suggestions of 80,000 Hours. Thus, it seems highly valuable to consider ways in which people can aim to increase their impact across a wide range of career paths.

Aside from tackling a promising topic, Buck’s post also does some specific things I like:

  • He begins with an extended quote from another article, and later spends a lot of time addressing the particular example cited in that quote (the potential impact of anthropology). When content directly expands upon previous content, that’s often a sign that we’re making progress in an area. I’d love to see more posts that explore the deeper implications of briefly-stated ideas in previous EA material.
  • He presents most of his concrete advice in list form, making it relatively easy for someone to revisit this post and skim through it to find suggestions that might be applicable to their current position.
  • He carefully points out the ways in which his advice does and doesn’t break with “standard” advice. For example, while he discusses ways to make an impact in careers that aren’t standard 80,000 Hours recommendations, he also notes that doing so might be a lot harder, and that there are still strong reasons to consider recommended positions.

One thing I’d have been interested to see: More real-world examples of people in the community who have done a lot of good through unusual career paths. This could have provided evidentiary support (or the opposite) for some of the ideas Buck presented.

MaxRa @ 2020-01-01T14:45 (+3)

I found this post very useful for thinking about my own career, thanks for writing it up. My prospects also don't fall neatly into the top recommended paths, so I'd be interested in more discussion of how to train my "good judgement".

Summarizing your ingredients of good judgment:

  1. Spotting the important questions (e.g. what do I need to learn to improve my decision the most?)
  2. Having good research intuitions (good quick guesses, think critically about evidence)
  3. Having good sense about how the world works and what plans are likely to work.
  4. Knowing when they’re out of their depth, knowing who to ask for help, knowing who to trust.

What do you think about participating in a forecasting platform, e.g. Good Judgement Open or Metaculus? It seems to cover all the ingredients, and even to be a good signal for others to evaluate your judgement quality. When I participated in GJO for a couple of months, I was demotivated by the lack of feedback on the reasoning in my forecasts. I could only look at the reasoning of other forecasters and at my Brier score, of course.
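For reference, the Brier score is just the mean squared error between probability forecasts and binary outcomes. A minimal sketch of the calculation (my own illustration, not the platforms' actual scoring code):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    Lower is better: a perfect forecaster scores 0.0, always answering
    "50%" scores 0.25, and confidently wrong answers approach 1.0.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. forecasts of 80%, 60%, 10% against outcomes 1, 0, 0:
print(brier_score([0.8, 0.6, 0.1], [1, 0, 0]))  # (0.04 + 0.36 + 0.01) / 3 ≈ 0.137
```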

P.S.: Your thinking appears to be very clear and you appear rather competent, so I wonder if your bar for "good enough judgement" to reasonably pursue non-standard paths is too high. I also wonder whether people whose judgement you trust would agree with your diagnosis that you wouldn't have good enough judgement for a non-standard path.

Buck @ 2020-01-09T08:05 (+6)

What do you think about participating in a forecasting platform, e.g. Good Judgement Open or Metaculus? It seems to cover all the ingredients, and even to be a good signal for others to evaluate your judgement quality.

Seems pretty good for predicting things about the world that get resolved on short timescales. Sadly it seems less helpful for practicing judgement about things like the following:

  • judging arguments about things like the moral importance of wild animal suffering, plausibility of AI existential risk, and existence of mental illness
  • long-term predictions
  • predictions about small-scale things like how a project should be organized (though you can train calibration on this kind of question)
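
Calibration practice on that last kind of question can be as lightweight as logging probability estimates and later checking them against what happened. A minimal sketch of that bookkeeping (the function and its grouping choices are my own illustrative assumptions, not any platform's method):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Compare stated probabilities with observed frequencies.

    forecasts: probabilities in [0, 1]; outcomes: 0/1 for whether the
    predicted event happened. Forecasts are grouped to the nearest 10%,
    and for each group we report how often the event actually occurred.
    A well-calibrated forecaster's observed frequencies track the
    stated probabilities.
    """
    groups = defaultdict(list)
    for p, happened in zip(forecasts, outcomes):
        groups[round(p, 1)].append(happened)
    return {
        stated: (len(results), sum(results) / len(results))
        for stated, results in sorted(groups.items())
    }

# Example: "70%" was said three times and came true twice (observed ~0.67).
print(calibration_table([0.7, 0.7, 0.7, 0.2], [1, 1, 0, 0]))
# -> {0.2: (1, 0.0), 0.7: (3, 0.6666666666666666)}
```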

Re my own judgement: I appreciate your confidence in me. I spend a lot of time talking to people who have IMO better judgement than me; most of the things I say in this post (and a reasonable chunk of things I say other places) are my rephrasings of their ideas. I think that people whose judgement I trust would agree with my assessment of my judgement quality as "good in some ways" (this was the assessment of one person I asked about this in response to your comment).

cole_haus @ 2020-01-03T00:49 (+2)

Tangential point of information: Cliometrics and cliodynamics are quantitative, big-picture approaches to history. (Unfortunately, the books/articles I've seen have actually been disappointing. If anyone has reading recommendations, I'd be very enthused.)

Habryka @ 2020-01-03T03:26 (+3)

I have found the handbook of cliometrics pretty useful: https://link.springer.com/referencework/10.1007%2F978-3-642-40458-0

cole_haus @ 2020-01-03T03:39 (+3)

Thanks. I think I initially passed over this because I tend to prefer textbooks or other non-handbook books as a first introduction, but I'll give it a second look.

Tristan Williams @ 2023-05-07T19:47 (+1)

The worry section made me giggle and I really appreciated it; I felt a kinship, having undertaken a similar process when I've written things before :)