The career and the community

By richard_ngo @ 2019-03-21T12:35 (+93)

tl;dr: for the first few years of their careers, and potentially longer, most effective altruists should focus on building career capital (which isn’t just ‘skills’!) rather than doing good or working at EA orgs. However, there are social dynamics which push new grads towards working at EA orgs, which we should identify and counteract. Note that there are a lot of unsubstantiated claims in this post, and so I’d be grateful for pushback on anything I’m incorrect about (throughout the post I’ve highlighted assumptions which do a lot of work but which I haven’t thoroughly justified). This post and this post point at similar ideas more concisely.

Contents: 1. Advantages of working at EA organisations. 2. Advantages of external career pathways. 3. Social dynamics and implicit recommendations. 4. Building a community around moonshots. 5. Long-term and short-term constraints.

What are the most important bottlenecks limiting the amount of good the effective altruism movement can do? The original message (or at least, the one which was received by the many people who went into earning to give) was that we were primarily funding-constrained. The next message was that direct work was the most valuable career pathway, and was talent-constrained. This turned out to be a very misleading phrase, and led a bunch of talented people, particularly recent graduates who hadn’t yet built up on-the-job skills, to pursue EA jobs they were unable to get, and then to become disillusioned with the movement. And so 80,000 Hours recently published a blog post which tries to correct that misconception by arguing that instead of being bottlenecked on general talent, EA lacks people with specific skills which can help with our favoured cause areas, for example the skill of doing great AI safety research.

I have both a specific and a general disagreement with this line of thinking; I’ll discuss the specific one first. I worry that ‘skills-constrained’ is open to misinterpretation in much the same way as ‘talent-constrained’ was. It’s true that there are a lot of skills which are very important for EA cause areas. However, often people are able to be influential not primarily because of their skills, but because of their career capital more broadly. (Let me flag this more explicitly as assumption 1.) For example, I’m excited about CSET largely because I think that Jason Matheny and his team have excellent networks and credentials, specific familiarity with how US politics works, and generally high competence. These things seem just as important to me as their ‘skills’. Similarly, I think a large part of the value of getting into YCombinator or Harvard or the Thiel Fellowship comes from signalling + access to networks + access to money. But the more we rely on explicit arguments about what it takes to do the most good, the more likely we are to underrate these comparatively nebulous advantages. And while 80,000 Hours does talk about general career capital being valuable, we’ve already seen that the specific headline phrases they use can have a disproportionate impact. It seems plausible to me that EAs who hear about the ‘skill gap’ will prioritise developing skills over other forms of career capital, and thereby harm their long-term ability to do good compared with their default trajectory (especially since people are generally able to make the biggest difference later in their careers).

I don’t want to be too strong on this point. Credentials are often overrated, and many people fall into the trap of continually creating career capital without ever using it to pursue their true goals. In addition, becoming as skilled as possible is often the best way to both amass career capital and do good in the long term. However, this is just one example of my general disagreement with standard EA attitudes towards careers: I think that we tend to overrate new grads working at EA organisations or directly on EA causes, compared with entering other jobs which are less immediately relevant to EA but which may allow them to do more good later. I think that people underrating the value of career capital is one reason for this. Another reason, which I’ll return to later on in this post, is the social dynamics of the EA community.

Advantages of working at EA organisations

A third reason is that the arguments in favour of the latter option (building career capital in non-EA jobs first) aren’t made often enough. So let’s explore in more detail the cost/benefit analysis facing a new grad who’s trying to decide whether to join an EA org, versus working elsewhere for a few years (or analogously, deciding whether to do a PhD specifically on an EA research area, versus a different topic in the same field). The main advantages of doing the former:

  1. You’ll be able to focus on learning exactly the skills which are most valuable in doing good, rather than whichever happen to be required by your other job. Also, if you’re mistaken about what skills actually matter, you’ll get feedback about that more quickly.
  2. You’ll build a stronger EA network. You may be more motivated by being surrounded by people with the same values as you, which may make your own values less likely to drift.
  3. You’ll be able to do important work while building skills and experience.

I think these are all important points, but I have reservations about each of them. On the first point, my sense is that the most important skills can be learned in a range of positions and are fairly transferable (assumption 2). For example, I think that experience leading most teams is transferable to leading most other teams. I also think that PhDs matter more for teaching people how to do good research in their field than for building expertise on any particular topic.

On point 2, while it’s important to meet other EAs, I’d say that the returns from doing so start diminishing significantly after the point where you know enough people to have second-degree connections with most other EAs. Connecting with people outside the EA bubble is much more counterfactually valuable since it increases the contacts available to the movement as a whole, especially if they’re open to EA ideas or interested in collaboration. This also makes our community less insular. I do think motivation and value drift are serious concerns, though.

On point 3, I claim that the work you do in the first few years of your career is generally much less valuable than what you can do later, and so this shouldn’t be a major priority (assumption 3). This may not have been true a few years ago, when there were many more low-hanging fruit, but EA has now funnelled hundreds of people into careers in favoured areas. And given that as a new grad you cost your organisation money and take up the time of whoever is supervising you, my (very uninformed) guess is that it typically takes anywhere from 6 months to 2 years in your first job to actually become net positive in value. This is not counting the risk of actively doing harm, which I’ll discuss shortly.

Advantages of external career pathways

By contrast, there are quite a few advantages of spending the first few years (or potentially even decades) of your career in non-EA jobs:

  1. Build better career capital (skills, networks, reputation, money).
    1. Almost all of the best mentorship you can get for most skills exists outside the EA community, so you can learn fastest by accessing that. For research, the ideal would be getting a PhD with a professor who cares a lot about and is good at supervising your research. For ops roles, the ideal might be working in a startup under a successful serial entrepreneur.
    2. You can take more risks, because success in your current job isn’t your main goal. Experimenting and pushing boundaries is generally a good way to learn. Taking this to one extreme, you could found a startup yourself. This is particularly useful for the sort of people who learn better from doing than from any sort of supervision.
    3. EA orgs like the Open Philanthropy Project are prestigious in certain circles, but any candidate who actually gets an offer (i.e. is in the top ~1% of an already heavily pre-selected applicant pool) should also be able to access more conventionally prestigious opportunities if they aim for those.
    4. As discussed above, your networks will be less insular. Also, since EAs are generally quite young, you’ll meet more experienced and successful people in external jobs. These are generally the most valuable people to have in your network.
    5. Having exposure to a diverse range of perspectives and experiences is generally valuable. For example, if you’ve spent your entire career only interacting with EAs, you’re probably not very well-prepared for a public-facing role. Working elsewhere gives you that breadth.
  2. Better for EA orgs.
    1. Evaluating job candidates is just a very difficult process. One of the best ways to predict quality of work is to look at previous experience. If candidates for EA jobs had more experience, it would be easier to find the best ones.
    2. There’s less risk of new hires causing harm from things like producing bad external-facing work, not understanding how to interact professionally within organisations, or creating negative publicity. (I think people have quite a variety of opinions on what percentage of long-termist projects are net-negative, but some people put that number quite high).
    3. With more experienced new hires, the most impactful people currently at EA orgs will be able to spend less time on supervision and more time on their other work.
  3. Better for the EA community.
    1. I discuss this point in the Building a community around moonshots section below.
  4. Better for the world.
    1. Lots of good career development opportunities (e.g. working in consulting) are highly-paid, and so new grads working in them can funnel a fair bit of money towards important causes, as well as saving enough to provide a safety net for career changes or entrepreneurship.
    2. Having people working in a range of industries allows them to spot opportunities that EA wouldn’t otherwise discover, and also build skills that EA accidentally or mistakenly coordinated away from. (I discuss this more at the very end of the post).
    3. Having senior people in a range of industries massively improves our ability to seize those opportunities. If you go out into an industry and find that it suits you very well, and that you have good career prospects in it, you can just continue in that with the goal of leveraging your future position to do good.

I think this last point in particular is crucial. It would be really great to have EAs who are, say, senior executives at the World Bank, because such people can affect the trajectories of whole governments towards prioritising important values. But it’s difficult to tell any given student “you should gamble your career on becoming one of those executives”. After spending a couple of years on that career trajectory, though, it should become much clearer whether there’s potential for you to achieve that, and whether anything in this space is valuable to continue pursuing. If not, you can try another field (80,000 Hours emphasises the importance of experimenting with different fields in finding your personal fit). And if none of those work, you can always apply for direct work with your new expertise. How much more valuable will you be in those roles, compared with yourself as a new grad? Based on assumption 3 as flagged above, I think plausibly over an order of magnitude in many cases, but I’m very uncertain about this and would welcome more discussion of it.

One worry is that even if people go out and build important skills, there won’t be EA jobs to come back to, because they’ll have been filled in the meantime. But I predict that the number of such jobs will continue to grow fairly fast, and that they’ll grow even faster if people can be hired without needing supervision. Another concern is that the jobs will be filled by people who are less qualified but have spent more time recently engaging in the EA community. But if a strong internal hiring bias exists, then it seems like an even better idea to diversify our bets by having people working all over the place.

Compared with those concerns, I worry more about the failure mode in which we take a bunch of people who would otherwise have had tremendously (conventionally) successful careers, and then make them veer off from those careers because we underrate how much you can do from a position of conventional success, and how transferable the skills you develop in reaching that point are. I also worry about the failure mode in which we just don’t have the breadth of real-world experience to identify extremely valuable opportunities. For example, it may be that in a given country the time is ripe for an EA-driven political campaign, but all the people who would otherwise have gone into politics have made career changes. (Yes, politics is currently a recommended path, but it wasn’t a few years ago - what else have we been overlooking?) And in any case, we can’t expect the community at large to only listen to explicit recommendations when there are also very strong implicit recommendations in play.

Social dynamics and implicit recommendations

Something that has happened quite a bit lately is that people accuse 80,000 Hours of being incorrect or misleading, and 80,000 Hours responds by pointing to a bunch of their work which they claim said the correct thing all along. (See here and here and here and here and here). I don’t have strong opinions on whether or not 80,000 Hours was in fact saying the correct thing. What I want to point out, though, is that career advice happens in a social context in which working at EA orgs is high status, because the people giving out money and advice are the people who do direct work at EA orgs, who are older and more experienced in EA and have all the social connections. And of course people are encouraging about careers at these orgs, because they believe they’re important, and also don’t want to badmouth people in their social circle. Young people like that social circle and want themselves and their friends to stay in it, so overall there’s an incentive towards a default path which just seems like the best thing based on the people you admire. If you then can’t get onto that path, that has tangible effects on your happiness and health and those of the community, as has been discussed in this post and its comments.

We also have to consider that, despite the considerations in favour of external career-building I listed above, it is probably also optimal for some people to go straight into working at EA orgs, if they’re clearly an excellent fit and can pick up skills very fast. And so we don’t just face the problem of reducing social pressure to work at EA orgs, but also the much harder problem of doing so while EA orgs are trying to hire the best people. Towards that goal, each org has an incentive to encourage as many people as possible to apply to them, the overall effect of which is to build up a community-wide impression that such jobs are clearly a good idea (an information cascade from the assumption that so many other applicants can’t be wrong), and make them even more selective and therefore prestigious. In such a situation, it’s easy to gloss over the sort of counterfactual analysis which we usually consider to be crucial when making big career decisions (how much worse is the 6th best applicant to the Open Philanthropy Project than the 5th, anyway? Is that bigger than the difference between the 5th best applicant spreading EA ideas at McKinsey, versus not doing so?)

Another way of putting this: saying only true things is not sufficient for causing good outcomes. Given that there’s always going to be a social bias towards working at EA orgs, as a community we need to be proactive in compensating for that. And we need to do so in a way that minimises the perception of exclusivity. How have we done this wrong? Here’s one example: earning to give turned out to be less useful than most people thought, and so it became uncool (even without any EA thought leaders explicitly disparaging it). Here’s another: whether or not it’s true that most of the value of student EA groups is in identifying and engaging “core EAs”, it’s harmful to our ability to retain community members who either aren’t in a position to do the things that are currently considered to be “core EA” activities, or else have different judgements about what’s most effective.

(As an aside, I really dislike portrayals of EA as “doing the UTMOST good”, as opposed to “doing LOTS of good”. Measuring down from perfection rather than up from the norm is basically the textbook way to make yourself unhappy, especially for a group of people selected for high scrupulosity. It also encourages a lack of interest in the people who aren’t doing the utmost good from your perspective.)

Building a community around moonshots

I like the idea of hits-based giving. It makes a lot of sense. The problem is that if you dedicate your career to something, and then it turns out to be a ‘miss’, that sucks for you. And it particularly sucks if your community assigns status based largely on how much good you do. That provides an additional bias towards working at an EA org, so that your social position is safe.

What we really want, though, is to allow people to feel comfortable taking risks (assumption 4). Maybe that risk involves founding a startup, or starting a PhD despite being unsure whether you’ll do well or burn out. Maybe it involves committing to a strategy which is endorsed by the community, despite the chance that it will later be considered a mistake; maybe it means sticking to a strategy which the community now thinks is a mistake. Maybe it just turns out to be the case that influencing the long-term future is so hard that only 50% or 5% or 1% of EAs can actually make a meaningful difference. I think that one of the key ways we can make people feel more comfortable is by being very explicit that they are still a welcome and valued part of this community even if whatever it is they’re trying to do doesn’t turn out to be very impactful.

To be clear, this is incredibly difficult in any community. I think, however, that the higher the percentage of the EA community who works at EA orgs, the more difficult it will be to have that welcoming and inclusive community. By contrast, if more of the EAs who were most committed at university end up in a diverse range of jobs and fields, it’ll be easier for others who aren’t on the current bandwagon to feel valued. More generally, the less binary the distinction between “committed” and “uncommitted” EAs, the healthier the community in the long term (assumption 5).

I particularly like the framing of this problem used in this post: we need to find a default “task Y” by which a range of people from different backgrounds and in different life circumstances can engage with EA. Fortunately, figuring out which interventions are effective to donate to, and then donating to them, is a pretty great task Y. The “figuring out” bit doesn’t have to be original research: I think there’s a pressing need for people to collate and distill the community’s existing knowledge.* And of course the “donating” bit isn’t new, but the problem is that we’ve stopped giving nearly as much positive social reinforcement to donors, because of the “EA isn’t funding-constrained” meme, and because now there’s a different in-group who are working directly at EA orgs. (I actually think that EA is more funding-constrained than it’s often made out to be, for reasons which I’ll try to explain in a later post; in the meantime see Peter Hurford’s EA forum comments.) Regardless of what cause area or constraints you think are most pressing, though, I think it’s important for the community that if people are willing to make the considerable sacrifice of donating 10% or more of their salary, we are excited and thankful about that.

Long-term and short-term constraints

It’s important to distinguish between current constraints and the ones we’ll face in 10-20 years.** If we expect to be constrained by something in 15 years’ time, then that suggests we’re also currently constrained on our ability to build pipelines to get more of that thing. If that thing is “people with certain career capital”, and there are many talented young EAs who are in theory capable of gaining that career capital over the next decade, then we’re bottlenecked by anything that will stop them gaining it in practice. From one perspective, that’s the lack of experienced mentors and supervisors at EA orgs. But from the perspective I’ve been espousing above, our internal culture and social dynamics may be bottlenecks in the medium term, because they stop people from finding the positions where they can best develop their careers.

An alternative view is that in 15 years’ time we’ll still be constrained by a career capital gap - not because young EAs have been developing their careers in the wrong way, but because the relevant skills and connections are just so difficult to obtain that most won’t manage to do so. If that is the case, we should try to be very transparent that the bar to contributing to long-termist causes is very high - but even more importantly, take steps (such as those discussed in the previous section) to ensure that our community can remain healthy even if most of the people in it aren’t doing the most prestigious thing, or tried to do it and failed. That seems like an achievable goal as long as people know what risks they're taking - e.g. as long as they have accurate expectations of how likely they are to get a full-time EA job, or of how likely it is that their startup will receive money from EA funds.

(80,000 Hours is the easiest group to blame for people getting a misleading impression, because they’re in the business of giving career advice, but it seems unlikely that a dedicated EA who’s spent hundreds of hours discussing these topics would get a totally mistaken view of the career landscape just from a few articles. During my job search, I personally had a pretty strong view that getting into AI safety would be easy, and I don’t explicitly recall reading any 80,000 Hours articles which said that - it was more of a gestalt impression, mostly gained from the other students around me.)

Personally I don’t think that the bar to contributing to long-termism is so high that most EAs can’t have a significant positive impact. But I do think that personal fit plays a huge role, because if you’re inspired by your field, if you love it and live it and breathe it, you’ll do much better than if you only care about it for instrumental purposes. (In particular, I work in AI safety and think it’s very important, but I’d only encourage most people to pivot into the field if they are, or think they can become, fascinated by AI, AI safety, or hacking and building things.)

The opposite of choosing based on personal fit is overcoordination towards a small set of options. Effective altruism is a community, and communities are, in a sense, defined by their overcoordination towards a shared conception of what’s cool. That’s influenced by things like 80,000 Hours’ research, but the channel from careful arguments to community norms is a very lossy one, which suffers from all the biases I explained above, and which is very difficult to tailor to individual circumstances. So herd behaviour is something we need to be very cautious of (particularly in light of epistemic modesty arguments, which I find compelling. EAs have been selected for being good at basically just one thing, which is taking philosophical arguments about morality seriously. So every time our career advice diverges from standard career advice, we should be wary that we’re missing things.) Of course, the ability to coordinate is also our greatest strength. For example, I think it’s great that altruistic EAs have pivoted away from being GPs in first-world countries due to arguments about replaceability effects. But to my mind the real problem with being a GP is that the good you can do is bounded by the number of patients you can see. Hans Rosling had the same background, but leveraged it to do much more good, both through his research and through his public advocacy. So if there’s one piece of career advice I’d like to spread in EA, it’s this: find the field which most fascinates you while also having high potential for leverage if you do very well in it, and strive towards that.

Thanks to Denise Melchin, Beth Barnes and Ago Lajko for commenting on drafts. All errors and inaccuracies are mine.

* As one example, there have been a whole bunch of posts about career trajectories recently. I think these are valuable (else I wouldn’t have written my own) but there’s just so much information in so disorganised a format that efforts to clarify and summarise the arguments that have been raised, and how they relate to each other, would probably be even more valuable.

** As I wrote this, I realised just how silly it is to limit my analysis to 10-20 years given that long-termism is such a major part of EA. But I don’t know how to think about social dynamics over longer timeframes, and I’m not aware of any work on this in the EA context (this is the closest I’ve seen). If there actually isn’t any such analysis, doing that seems like a very important priority.


Michelle_Hutchinson @ 2019-03-23T21:54 (+62)

[I work for 80,000 Hours]

Thanks for your thoughts. I’m afraid I won’t be able to address everything, but I wanted to share a few considerations.

There were a few points here I particularly liked:

There are a few things I disagree with:

You seem to be fairly positive about pretty broad capital building (e.g. working at McKinsey). While we used to recommend working in consulting early in people’s careers, we’ve updated pretty substantially away from that in favour of taking a more directed approach to your career. The idea is to try to find the specific area you think is most suited to you and where you’ll have the most impact, and then to try out roles directly relevant to that. That’s not to say, of course, that it will be clear what type of role you should pursue, but rather that it seems worth thinking about which types of role seem best suited to you, and then trying out things of that type. Often, people who are able to acquire prestigious generalist jobs (like McKinsey) are able to acquire more useful targeted jobs that would be nearly as good of a credential. For example, if you think you might be interested in going into policy, it is probably better to take a job at a top think tank (especially if you can do work on a topic that’s relevant to one of our priority problems, such as national security or emerging technology policy) than to do something like management consulting. The former has nearly as much general prestige, but has much more information value to help you decide whether to pursue policy, and will allow you to build up a network, knowledge (including tacit knowledge), and skills which are more relevant to roles in priority areas that you might aim for later in your career. One heuristic we sometimes use to compare the career capital of two opportunities is to ask in which option you'd expect your career to be more advanced in a priority path 5-10 years down the line. It's sometimes the case that spending years getting broad career capital and then shifting into a relevant area will progress you faster than acquiring more targeted career capital, but in our experience narrow career capital wins out more often.

I agree that it’s really important for people to find jobs that truly interest them and which they can excel at. Having said that, I’m not that keen on the advice to start your career decision with what most fascinates you. Personally, I haven’t found it obvious what I’ll find interesting until I try it, which makes the advice not that action guiding. More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs. That makes me think it’s better to approach career decisions by first thinking through what problems in the world you think most need solving and what the biggest bottlenecks to them being solved are, followed by which of those tasks seem interesting and appealing to you, rather than starting with the question of which jobs seem most interesting and appealing.

I’m a little worried that people will take away the message from your piece that they shouldn’t apply to EA organisations early in their careers, or should turn down a job there if offered one. Like I said - the vast majority of the highest impact roles will be outside EA organisations, and of course there’ll be many people who are better suited to work elsewhere. But it still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.

I think the thing to bear in mind is that it’s important not to apply only for jobs at EA organisations. The total number of jobs advertised at EA organisations at any one time is small, and new graduates should expect to apply to tens of jobs before getting one. Typically, the cost of applying to a valuable direct work job is fairly small relative to the benefit if it turns out that you’re already in a position to start making large contributions to a priority area, as long as you’re at the same time applying to jobs that would help you generate career capital.

Unfortunately, as you say, it seems very difficult to convey accurate impressions - whether about how hard it is to get into various areas, or what kind of skill bottlenecks we currently think there are. I think this is in part due to people having such different starting points. I come across both people who had the impression that it was easy to get into AI safety or EA organisations and then struggled to do so, and people who thought it was so competitive there was no point in them even trying, but who (when strongly encouraged to do so) ended up excelling. We’re hoping that focusing more on the long-form material like the podcast will help to get a more nuanced picture across for people coming from different starting points.

richard_ngo @ 2019-03-25T13:08 (+19)

One other thing that I just noticed: looking at the list of 80k's 10 priority paths found here, the first 6 (and arguably also #8: China specialist) are all roles for which the majority of existing jobs are within an EA bubble. On one hand, this shows how well the EA community has done in creating important jobs, but it also highlights my concern about us steering people away from conventionally successful careers and engagement with non-EAs.

Michelle_Hutchinson @ 2019-03-26T17:23 (+32)

I actually don’t agree that the majority of roles for our first 6 priority paths are ‘within the EA bubble’: my view is that this is only true of ‘working in EA organisations’ and ‘operations management in EA organisations’. As a couple of examples: ‘AI policy research and implementation’ is, as you indicate, something that could be done at places like FHI or CSET. But it might also mean joining a think tank like the Center for American Security, the Belfer Center or RAND; or it could mean joining a government department. EA orgs are pretty clearly the minority in both our older and newer articles on AI policy. ‘Global priorities researcher’ in academia could be done at GPI (where I used to work), but could also be done as an independent academic, whether that simply means writing papers on relevant topics, or joining/building a research group like the Institute for Future Studies (https://www.iffs.se/en/) in Stockholm.

One thing that could be going on here is that the roles people in the EA community hear about within a priority path are skewed towards those at EA orgs. The job board is probably better than what people hear about by word of mouth in the community, but it still suffers from the same skew - which we’d like to work towards reducing.

Max_Daniel @ 2019-03-26T18:07 (+12)

Thank you, this concrete analysis seems really useful to understand where the perception of skew toward EA organizations might be coming from.

Last year I talked to maybe 10 people over email, Skype, and at EA Global, both about what priority path to focus on, and then what to do within AI strategy. Based on my own experience last year, your "word of mouth is more skewed toward jobs at EA orgs than advice in 80K articles" conjecture feels true, though not overwhelmingly so. I also got advice from several people specifically on standard PhD programs, and 80K was helpful in connecting me with some of these people, for which I'm grateful. However, my impression (which might be wrong/distorted) was that especially people who themselves were 'in the core of the EA community' (e.g. working at an EA org themselves vs. a PhD student who's very into EA but living outside of an EA hub) favored me working at EA organizations. It's interesting that I recall few people saying this explicitly but have a pretty strong sense that this was their view implicitly, which maybe means that this impression is driven by my guess about what is generally approved of within EA rather than by people's actual views. It could even be a case of pluralistic ignorance (in which case public discussions/posts like this would be particularly useful).

Anyway, here are a few other hypotheses of what might contribute to a skew toward 'EA jobs' that's stronger than what 80K literally recommends:

  • Number of people who meet the minimal bar for applying: Often, jobs recommended by 80K require specialized knowledge/skills, e.g. programming ability or speaking Chinese. By contrast, EA orgs seem to open a relatively large number of roles where roughly any smart undergraduate can apply.
  • Convenience: If you're the kind of person who naturally hears about, say, the Open Phil RA job posting, it's quite convenient to actually apply there. It costs time, but for many people 'just time' as opposed to creativity or learning how to navigate an unfamiliar field or community. For example, I'm a mathematician who was educated in Germany and considered doing a PhD in political science in the US. It felt like I had to find out a large number of small pieces of information someone familiar with the US education system or political science would know naturally. Also the option just generally seemed more scary and unattractive because it was in 'unfamiliar terrain'. Relatedly, it was much easier for me to talk to senior staff at EA organizations than it was to talk to, say, a political science professor at a top US university. None of these felt like an impossible bar to overcome, but it definitely seemed to me that they skewed my overall strategy somewhat in favor of the 'familiar' EA space. I generally felt that, given how much attention there is on career choice in EA, I had surprisingly little support and readily available knowledge after I had decided to broadly "go into AI strategy" (which I feel like my general familiarity with EA would have enabled me to figure out anyway, and was indeed my own best guess before I found out that many others agreed with this). NB as I said 80,000 Hours was definitely somewhat helpful even in this later stage, and it's not clear to me if you could feasibly have done more (e.g. clearly 80K cannot individually help everyone with my level of commitment and potential to figure out details of how to execute their career plan). [I also suspect that I find things like figuring out the practicalities of how to get into a PhD program unusually hard/annoying, but more like 90th than 99th percentile.] But maybe there's something we can collectively do to help correct this bias, e.g. the suggestion of nurturing strong profession-specific EA networks seems like it would help with enabling EAs to enter that profession as well (as can research by 80K, e.g. your recent page on US AI policy). To the extent that telling most people to work on AI prevents the start of such networks, this seems like a cost to be aware of.
  • Advice for 'EA jobs' is more unequivocal, see this comment.

richard_ngo @ 2019-03-24T14:34 (+13)

Hi Michelle, thanks for the thoughtful reply; I've responded below. Please don't feel obliged to respond in detail to my specific points if that's not a good use of your time; writing up a more general explanation of 80k's position might be more useful?

You're right that I'm positive about pretty broad capital building, but I'm not sure we disagree that much here. On a scale of breadth to narrowness of career capital, consulting is at one extreme because it's so generalist, and the other extreme is working at EA organisations or directly on EA causes straight out of university. I'm arguing against the current skew towards the latter extreme, but I'm not arguing that the former extreme is ideal. I think something like working at a top think tank (your example above) is a great first career step. (As a side note, I mention consulting twice in my post, but both times just as an illustrative example. Since this seems to have been misleading, I'll change one of those mentions to think tanks).

However, I do think that there are only a small number of jobs which are as good on so many axes as top think tanks, and it's usually quite difficult to get them as a new grad. Most new grads therefore face harsher tradeoffs between generality and narrowness.

More importantly, in order to help others as much as we can, we really need to both work on the world’s most pressing problems and find what inputs are most needed in order to make progress on them. While this will describe a huge range of roles in a wide variety of areas, it will still be the minority of jobs.

I guess my core argument is that in the past, EA has overfit to the jobs we thought were important at the time, both because of explicit career advice and because of implicit social pressure. So how do we avoid doing so going forward? I argue that given the social pressure which pushes people towards wanting to have a few very specific careers, it's better to have a community default which encourages people towards a broader range of jobs, for three reasons: to ameliorate the existing social bias, to allow a wider range of people to feel like they belong in EA, and to add a little bit of "epistemic modesty"-based deference towards existing non-EA career advice. I claim that if EA as a movement had been more epistemically modest about careers 5 years ago, we'd have a) more people with useful general career capital, b) more people in things which didn't use to be priorities, but now are, like politics, c) fewer current grads who (mistakenly/unsuccessfully) prioritised their career search specifically towards EA orgs, and maybe d) more information about a broader range of careers from people pursuing those paths. There would also have been costs to adding this epistemic modesty, of course, and I don't have a strong opinion on whether the costs outweigh the benefits, but I do think it's worth making a case for those benefits.

We’ve updated pretty substantially away from that in favour of taking a more directed approach to your career

Looking at this post on how you've changed your mind, I'm not strongly convinced by the reasons you cited. Summarised:

1. If you’re focused on our top problem areas, narrow career capital in those areas is usually more useful than flexible career capital.

Unless it turns out that there's a better form of narrow career capital which it would be useful to be able to shift towards (e.g. shifts in EA ideas, or unexpected doors opening as you get more senior).

2. You can get good career capital in positions with high immediate impact

I've argued that immediate impact is usually a fairly unimportant metric which is outweighed by the impact later on in your career.

3. Discount rates on aligned-talent are quite high in some of the priority paths, and seem to have increased, making career capital less valuable.

I am personally not very convinced by this, but I appreciate that there's a broad range of opinions and so it's a reasonable concern.

It still seems to be the case that organisations like the Open Philanthropy Project and GiveWell are occasionally interested in hiring people 0-2 years out of university. And while there seem to be some people to whom working at EA organisations seems more appealing than it should, there are also many people for whom it seems less appealing or cognitively available than it should. For example, while the people on this forum are likely to be very inclined to apply for jobs at EA organisations, many of the people I talk to in coaching don’t know that much about various EA organisations and why they might be good places to work.

Re OpenPhil and GiveWell wanting to hire new grads: in general I don't place much weight on evidence of the form "organisation x thinks their own work is unusually impactful and worth the counterfactual tradeoffs".

I agree that you have a very difficult job in trying to convey key ideas to people who are coming from totally different positions in terms of background knowledge and experience with EA. My advice is primarily aimed at people who are already committed EAs, and who are subject to the social dynamics I discuss above - hence why this is a "community" post. I think you do amazing work in introducing a wider audience to EA ideas, especially with nuance via the podcast as you mentioned.

Milan_Griffes @ 2019-03-23T22:12 (+1)

Could you add a tl;dr?

(I couldn't deal with the wall of text, but seems like there's probably a lot of good points here.)

Max_Daniel @ 2019-03-22T18:38 (+24)

Related: Julia Galef's post about 'Planners vs. Hayekians'. See in particular how she describes the Hayekians' conclusion, which sounds similar to (though stronger than) your recommendation:

Therefore, the optimal approach to improving the world is for each of us to pursue projects we find interesting or exciting. In the process, we should keep an eye out for ways those projects might yield opportunities to produce a lot of social value — but we shouldn’t aim directly at value-creation.

My impression is that I've been disagreeing for a while with many EAs (my sample is skewed toward people working full-time at EA orgs in Oxford and especially Berlin) about how large the 'Hayekian' benefits from excellence in 'conventional' careers are. That is, how many unanticipated benefits will becoming successful in some field X have? I think I've consistently been more optimistic about this than most people I've talked to, which is one of several reasons for being less excited about 'EA jobs' relative to other options than I think many EAs are. My reasoning here seems to broadly agree with yours, and I'm glad to see it spelled out that well.

(Apologies if you've linked to that in your post already, I didn't thoroughly check all links.)

vishal @ 2019-03-27T23:47 (+20)

Thread is too long to fully process, but I'll try to re-phrase what seems to be a crucial & perhaps-not-disputed point here:

If you have big enough wins on your record, early on, you can do pretty much anything.

If you're optimizing for max impact in a decades-long career (which Michelle & Richard both seem to agree is the right framing), then pursuing opportunities with extreme growth trajectories seems like a good strategy.

Where are the Elon Musks and Peter Thiels (early career trajectory-wise) in the EA community? Why are so few EAs making it into leadership positions at some of the most critical orgs?

When talking to someone really talented graduating from university and deciding what to do next, I'd probably ask them why what they're doing immediately might allow for outsize returns / unreasonably fast growth (in terms of skills, network, credibility, money, etc.). If no compelling answer, I'd say they're setting themselves up for relative mediocrity / slow path to massive impact. It's similar to the Sheryl Sandberg(?) quote implying that one should join a breakout stage company to supercharge one's career, no matter what the role: “If you're offered a seat on a rocket ship, don't ask what seat. Just get on.”

I think the point I'm making here is an extension of this: early on, don't ask which rocket ship, either. Just get on one, and you'll win. (Make sure to build systems that prevent value drift).

Path to impact is much easier once you've solved network, skills, finances, credibility, leadership ability, confidence (this last one is crucial and under-discussed). At that point, time becomes the only bottleneck.

This is written primarily for generalist (i.e. non-technical / non-research) talent. Technical & research-oriented careers probably follow different patterns, though the underlying principles probably still apply.

[@80k/Richard: I'd be curious to get a 1-ish-sentence response, e.g. "You're wrong, and need to go do some reading and come back so we can have an informed discussion" or "You're right, and this matches up with how we think about things". PS, this is my first-ever online interaction with the EA community!]

aarongertler @ 2019-03-28T00:32 (+15)

Welcome to the EA community! I liked your first post on the Forum, and I hope you'll come back to make many more.

Now that that's been said, here's my response, which may sound oppositional, but which I intend to be more along the lines of "trying to get on the same page, since I think we actually agree on a lot of stuff". Overall, I think your vision of success is pretty close to what people at 80K might say (though I could be wrong and I certainly don't speak for them).

Where are the Elon Musks and Peter Thiels (early career trajectory-wise) in the EA community? Why are so few EAs making it into leadership positions at some of the most critical orgs?

The thing about Elon Musk and Peter Thiel is that it was hard to tell that they would become Musk and Thiel. There are many more "future Elon Musks" than there are "people who become Elon Musk after everything shakes out".

For all I know, we may have some of those people in the community; I think we certainly have a higher expected number of those people per-capita than almost any other community in the world, even if that number is something like "0.3". (The tech-billionaire base rate is very low.)

I don't really know what you mean by "the most critical orgs", since EA seems to be doing well there already:

  • The Open Philanthropy Project has made hundreds of millions of dollars in grants and is set to do hundreds of millions more -- they aren't on the scale of Sequoia or Y Combinator, but they're similar to a mid-size venture fund (if my estimates about those funds aren't too off-base).
  • GiveWell is moving $40-50 million/year and doesn't seem likely to slow down. In fact, they're looking to double in size and start funding lots of new projects in areas like "changing national law".
  • DeepMind and OpenAI, both of which could become some of the most influential technical projects in history, have a lot of employees (including executives) who are familiar with EA or active participants in the community.
  • A former head of IARPA, the US intelligence community's R&D agency (roughly speaking), is now the head of an AI think tank in Washington DC whose other staffers also have really impressive resumes. (Tantum Collins, a non-executive researcher who appears halfway down the page, is a "Principal for Research and Strategy" at DeepMind and co-authored a book with Stanley McChrystal.)
  • It's true that we haven't gotten the first EA senator, or the first EA CEO of a FAANG company (Zuckerberg isn't quite there yet), but I think we're making reasonable progress for a movement that was founded ten years ago in a philosopher's house and didn't really "professionalize" until 2013 or so.

Meanwhile...

  • EA philosophy seems to have influenced, or at least caught the attention of, many people who are already extremely successful (from Gates and Musk to Vitalik Buterin and Patrick Collison).
  • We have support from some of the world's most prominent philosophers, quite a few other major-league academics (e.g. Philip Tetlock), and several of the world's best poker players (who not only donate a portion of their tournament winnings, but also spend their spare time running fundraisers for cash grants and AI safety).
  • We have a section that's at least 50% devoted to EA causes in a popular online publication.

There's definitely room to grow and improve, but the trajectory looks... well, pretty good. Anecdotally, I didn't pay much attention to new developments in EA between mid-2016 and mid-2018.

When talking to someone really talented graduating from university and deciding what to do next, I'd probably ask them why what they're doing immediately might allow for outsize returns / unreasonably fast growth (in terms of skills, network, credibility, money, etc.). If no compelling answer, I'd say they're setting themselves up for relative mediocrity / slow path to massive impact.

I generally agree with this, though one should be careful with one's rocket ship, lest it crash. Theranos is the most obvious example; Tesla may yet become another, and plenty of others burned up in the atmosphere without getting much public attention.

--

I work for CEA, but these views are my own.

richard_ngo @ 2019-03-28T11:24 (+6)

I agree that all of the things you listed are great. But note that almost all of them look like "convince already-successful people of EA ideas" rather than "talented young EAs doing exceptional things". For the purposes of this discussion, the main question isn't when we get the first EA senator, but whether the advice we're giving to young EAs will make them more likely to become senators or billion-dollar donors or other cool things. And yes, there's a strong selection bias here because obviously if you're young, you've had less time to do cool things. But I still think your argument weighs only weakly against Vishal's advocacy of what I'm tempted to call the "Silicon Valley mindset".

So the empirical question here is something like, if more EAs steer their careers based on a Silicon Valley mindset (as opposed to an EA mindset), will the movement overall be able to do more good? Personally I think that's true for driven, high-conscientiousness generalists, e.g. the sort of people OpenPhil hires. For other people, I guess what I advocate in the post above is sort of a middle ground between Vishal's "go for extreme growth" and the more standard EA advice to "go for the most important cause areas".

vishal @ 2019-03-28T23:20 (+20)

I'm ok with calling this the "Silicon Valley mindset" - since it recommends a growth-oriented career mindset, like the Breakout List philosophy, with the ultimate success metric being impact - though it's important to note that I'm not advocating for everybody to go start companies. Rather, I'm describing a shift in focus towards extreme career capital growth asap (rather than direct impact asap) in any reasonably relevant domain, subject to the constraint of robustly avoiding value drift. This seems like the optimal approach for top talent, in aggregate, if we're optimizing for cumulative impact over many decades, and if we think we can apply the venture capitalist mindset to impact (thinking of early-career talent as akin to early-stage startups).

aarongertler @ 2019-03-29T04:47 (+8)

Thanks for this reply!

Sorry for not realizing you worked at DeepMind; my comment would have looked different had I known about our shared context. (Also, consider writing a bio!)

I think we're aligned in our desire to see more early-career EAs apply to those roles (and on most other things). My post aimed to:

1. Provide some background on some of the more "successful" people associated with EA.

2. Point out that "recruiting people with lots of career capital" may be comparable to "acquiring career capital" as a strategy to maximize impact. Of course, the latter makes the former easier, if you actually succeed, but it also takes more time.

On point (2): What fraction of the money/social capital EA will someday acquire "already exists"? Is our future going to look more like "lots of EA people succeeded", or "lots of successful people found EA"?

Historically, both strategies seem to have worked for different social movements; the most successful neoliberals grew into their influence, while the Fabian Society relied on recruiting top talent. (I'm not a history expert, and this could be far too simple.)

--

One concern I have about the "maximize career capital" strategy is that it has tricky social implications; it's easy for a "most people should do X" message to become "everyone who doesn't do X is wrong", as Richard points out. But career capital acquisition doesn't lead to as much direct competition between EAs, and could produce more skill-per-person in the process, so perhaps it's actually just better for most people.

Some of my difficulty in grasping the big picture for the community as a whole is that I don't have a sense for what early-career EAs are actually working on. Sometimes, it feels like everyone is a grad student or FAANG programmer (not much potential for outsize returns). At other times, it feels like everyone is trying to start a company or a charity (lots of potential, lots of risk).

Is there any specific path you think not enough people in the community are taking from a "big wins early" perspective? Joining startups? Studying a particular field?

--

Finally, on the subject of risk, I think I'm going to take this comment and turn it into a post. (Brief summary: Someday, when we look back on the impact of EA, we'll have a good sense for whose work was "most impactful", but that shouldn't matter nearly as much to our future selves as the fact that many unsuccessful people still tried their best to do good, and were also part of the movement's "grand story".) I hope we keep respecting good strategy and careful thinking, whether those things are attached to high-risk or low-risk pursuits.

vishal @ 2019-03-29T14:30 (+4)

I don't have enough data to know if there are specific paths not enough people are taking, but I'm pretty certain there's a question that not enough people are asking within the paths they're taking: how is what I'm doing *right now* going to lead to a 10x/100x/1,000x win, in expectation? What's the Move 37 I'm making, that nobody else is seeing? This is a mentality that can be applied in pretty much any career path.

richard_ngo @ 2019-03-29T17:09 (+6)

Note that your argument here is roughly Ben Pace's position in this post which we co-wrote. I argued against Ben's position in the post because I thought it was too extreme, but I agree with both of you that most EAs aren't going far enough in that direction.

Max_Daniel @ 2019-03-22T19:01 (+20)

I don't have relevant data nor have I thought very systematically about this, but my intuition is to strongly agree with basically everything you say.

In particular, I feel that the point that "having exposure to a diverse range of perspectives and experiences is generally valuable" squares fairly well with my own experience. There are just so many moving parts to how communities and organizations work - how to moderate meetings, how to give feedback, how much hierarchy and structure to have etc. etc. - that I think it's fairly hard to even be aware of the full space of options (and impossible to experiment with a non-negligible fraction of it). Having an influx of people with diverse experiences in that respect can massively multiply the amount of information available on these intangible things. This seems particularly valuable to EA to me because I feel that relative to the community's size there's an unusual amount of conformity on these things within EA, perhaps due to the tight social connections within the community and the outsized influence of certain 'cultural icons'.

Personally, I feel that I've learned a lot of the skills (both intellectual and interpersonal) that are most useful in my work right now outside of EA; in fact, outside of EA's core focus (roughly, the practical implications of 'sufficiently consequentialist' ethics), I've learned surprisingly little within EA, even after correcting for only having been in the community for a small fraction of my life.

(Perhaps more controversially, I think this also applies to the epistemic rather than the purely cultural or organizational domain: i.e. my claim roughly is that things like phrasing lots of statements in terms of probabilities, having discussions mostly in Google docs vs. in person, the kind of people one circulates drafts to, how often one is forced to face a situation where one has to explain one's thoughts to people one has never met before, and various small things like that affect the overall epistemic process in messy ways that are hard to track or anticipate other than by actually having experienced how several alternatives play out.)

DavidNash @ 2019-03-22T22:42 (+6)

Similar to Milan, I agree with the main point of your comment, and I also think that the EA community conforms less than the majority of communities.

Maybe ironically, I also think there is a relative lack of experience with communities in general among a lot of people interested in EA, which makes it harder for people to know what is expected: things like group slang, strong identities, close connections, and group 'rituals' are very common in most communities.

Max_Daniel @ 2019-03-24T20:14 (+3)

Thank you, your comment made me realize that I maybe wasn't quite aware of the meaning and connotations 'community' has for native speakers, and that I may have been implicitly comparing EA against groups that aren't communities in that sense. I guess it's also quite unclear to me whether I think it's good for EA to be a community in this sense.

Milan_Griffes @ 2019-03-22T20:12 (+5)

+1 to the general thrust of this.

I feel that relative to the community's size there's an unusual amount of conformity on these things within EA

Nitpick: probably not? E.g. compare to US social justice or US social conservatism, which are much larger movements (EA probably < 100,000 total; both of those probably ~500,000-10 million total, depending on who you count) and seem to be much more ideologically conformist.

Max_Daniel @ 2019-03-24T20:36 (+9)

Hmm, thanks for sharing your impression; I think talking about specific examples is often very useful for spotting disagreements and helping people learn from each other.

I've never lived in the US or otherwise participated in one of these communities, so I can't tell from first-hand experience. But my loose impression is that there have been substantial disagreements both synchronically and diachronically within those movements; for example, in social justice about trans* issues or sex work, and in conservatism about interventionist vs. isolationist foreign policy. Of course, EAs disagree substantially about, say, their favored cause area. But my impression at least is that disagreements within those other movements can be much more acrimonious (jtbc, I think it's mostly good that we don't have this in EA), and also that the difference in 'cultural vibe' I would get from attending, say, a Black Lives Matter grassroots group meeting vs. a meeting of the Hillary Clinton presidential campaign team is larger than the one between the local EA group at Harvard and the EA Leaders Forum. Do your impressions of these things differ, or were you thinking of other manifestations of conformity?

(Maybe that's comparing apples to oranges because a much larger proportion of EAs are from privileged backgrounds and in their 20s, and if one 'controlled' social justice and conservatism for these demographic factors they'd be closer to EA levels of conformity. OTOH maybe it's something about EA that contributes to causing this demographic narrowness.)

Also, we have an explanation for the conformity within social justice and conservatism that on some readings might rationalize it - namely Haidt's moral foundations theory. To put it crudely, given that you're motivated by fairness and care but not authority etc., maybe it just is rational to hold the 'liberal' bundle of views. (I think that's true only to a limited but still significant extent, and also that the story for why the non-rational parts of the bundle are so correlated may differ from the corresponding story for EA in an interesting way.) By contrast, I'm not sure there is a similarly rationalizing explanation for why many EAs agree on both (i) there's a moral imperative for cost-effectiveness, and (ii) you should one-box in Newcomb's problem, and for why many know more about cognitive biases than about the leading theories for why the Industrial Revolution started in Europe rather than China.

Milan_Griffes @ 2019-03-25T17:12 (+3)
Do your impressions of these things differ, or were you thinking of other manifestations of conformity?

I think the cultural vibe you would get at a Dank EA Memes meetup (e.g. "Dank EA Global 2018") would be pretty different from the vibe at a Leverage meetup, and both of those pretty different from the vibe at a GiveWell happy hour.

Agree that there is likely more acrimony in social justice communities than in EA. I actually think this flows from their conformity, as I think there's a lot of pressure to virtue signal & a lot of calling out when a person / group hasn't virtue signaled sufficiently (for whatever criterion of "sufficient"). Somewhat related.

Milan_Griffes @ 2019-03-25T17:17 (+2)
By contrast, I'm not sure there is a similarly rationalizing explanation for why many EAs agree on both (i) there's a moral imperative for cost-effectiveness, and (ii) you should one-box in Newcomb's problem, and for why many know more about cognitive biases than about the leading theories for why the Industrial Revolution started in Europe rather than China.

Super interesting point!

I want to think about this more. Presently, I wouldn't be surprised if (i), (ii), and the point about cognitive biases all appealed more to a certain shape of mind – which could generate conformity along some axes.

richard_ngo @ 2019-03-25T17:30 (+5)

Is this not explained by founder effects from Less Wrong?

Max_Daniel @ 2019-03-26T00:31 (+2)

It probably is, but I don't think this explanation is rationalizing. I.e. I don't think this founder effect would provide a good reason to think that this distribution of knowledge and opinions is conducive to reaching the community's goals.

Milan_Griffes @ 2019-03-25T17:35 (+2)

Sure, but that just pushes the interesting question back a level – the question becomes "why was LessWrong a viable project / Eliezer a viable founder?"

Milan_Griffes @ 2019-03-22T20:42 (+3)

Which isn't to say that EA's current level of conformity is good. I think EA would benefit from having less conformity.

But I think the base rate is really, really bad.

DavidNash @ 2019-03-21T15:00 (+14)

I massively agree with most of this, and when talking to people about careers I try to help them find a field that fascinates them and has the potential to be leveraged in the future. At the risk of oversimplifying, EA organisations seem to be "experience-constrained", which can't be solved by just getting smart graduates to work in EA jobs.

I think I disagree slightly that there needs to be a "task Y"; it may be the case that some people will have an interest in EA but won't be able to contribute, just as there are people who have an interest in evidence-based medicine but don't get an opportunity to contribute to medical journals or become doctors. The aim of EA isn't to make use of all resources available, even if it may seem like a lost opportunity not to.

Also, I think the EA community is a subset of the EA movement; lots of people have a positive impact whilst rarely or never engaging online or in person, and it might be a mistake to focus on just the community part. This post, though, might lead to people being happier to focus on their own field and potentially re-engaging when it makes sense to.

richard_ngo @ 2019-03-21T17:36 (+6)

Thanks for the comment! I find your last point particularly interesting, because while I and many of my friends assume that the community part is very important, there's an obvious selection effect which makes that assumption quite biased. I'll need to think about that more.

I think I disagree slightly that there needs to be a "task Y"; it may be the case that some people will have an interest in EA but won't be able to contribute

Two problems with this. The first is that when people first encounter EA, they're usually not willing to totally change careers, and so if they get the impression that they need to either make a big shift or accept that there's no space for them in EA, they may well never start engaging. The second is that we want to encourage people to feel able to take risky (but high expected value) decisions, or to commit to EA careers. But if failure at those things means that their career is in a worse place AND there's no clear place for them in the EA community (because they're now unable to contribute in ways that other EAs care about), they will (understandably) be more risk-averse.

Milan_Griffes @ 2019-03-21T18:18 (+10)
I really dislike portrayals of EA as “doing the UTMOST good”, as opposed to “doing LOTS of good”

+1

Also, framings like "the utmost good" presume that we have ethics figured out enough to know what's best, at the end of the day. But we aren't there yet.

John_Maxwell_IV @ 2019-03-22T04:50 (+8)

This is a good post. But since we hear so much about the value of career capital, I thought it'd be useful to link this old post which encouraged people to deprioritize it, just for the sake of an alternate perspective.

Given that there’s always going to be a social bias towards working at EA orgs

I'm not sure this is true. Just a few years ago, it seemed like there was a social bias against working at EA orgs. The "prioritize talent gaps" meme was meant to address this. (I feel like there might be other historical cases of the EA movement overcorrecting in this manner, but no specific instances are coming to mind.)

Milan_Griffes @ 2019-03-25T17:05 (+1)
Similarly, I think a large part of the value of getting into YCombinator or Harvard or the Thiel Fellowship comes from signalling + access to networks + access to money. But the more we rely on explicit arguments about what it takes to do the most good, the more likely we are to underrate these comparatively nebulous advantages.

Major +1

Milan_Griffes @ 2019-03-21T18:24 (+1)
An alternative view is that in 15 years’ time we’ll still be constrained by a career capital gap...

I think there's a strain of apocalyptic thinking operating in some parts of the EA & rationality communities when it comes to career planning.

e.g. if you become emotionally convinced that AGI risk is a real thing, and that there's a substantial probability of a short AGI timeline (short = in the next 10 years), then thinking about your long-term career prospects can feel absurd.

This dynamic probably makes it feel even more important that you start contributing now, because you believe that the window for making a meaningful contribution is very short.

Milan_Griffes @ 2019-03-24T17:21 (+2)

cf. this recent take by Eliezer

richard_ngo @ 2019-03-24T18:47 (+3)

This just seems like an unusually bad joke (as he also clarifies later). I think the phenomenon you're talking about is real (although I'm unsure as to the extent) but wouldn't use this as evidence.

Milan_Griffes @ 2019-03-25T00:16 (+2)

I think he's being Straussian.

Milan_Griffes @ 2019-03-21T18:13 (+1)

Happy to see us converging independently.

Also – great title!