How feasible is it to work at non-EA organizations while still doing genuinely altruistic work on AI risks?

By jackchang110 @ 2025-11-17T13:36 (+13)

My concern is that financial security might become a real bottleneck to doing genuinely altruistic work. Even though the EA community is said to be more talent-constrained than funding-constrained, in practice it seems quite difficult to obtain EA-aligned jobs or research grants (e.g., from Open Philanthropy or related organizations). Many people may therefore need to work at non-EA companies for long periods.

However, I’m unsure how realistic it is to do impactful work in such settings. I'd like to work in the AI s-risks field in the future, but non-EA companies are profit-oriented, and although some AI companies have AI alignment-related positions, there may be very few jobs related to AI s-risk research (such as preventing AI conflict or digital suffering). My impression is that these s-risk topics are rarely commercially valuable, so opportunities might be very limited. Opportunities to work on wild animal suffering at non-EA companies also seem quite limited.

If that’s true, then perhaps a practical approach would be “earning to give for myself”: working in a high-earning but stable job (like medicine), saving a large portion of my income (e.g., $150,000–$200,000 per year), and later using that financial independence to self-fund altruistic research or projects during periods when external funding or EA jobs are unavailable. However, that would mean a long period of work unrelated to EA at all; it would still be far better if I could find EA-relevant career opportunities and contribute altruistically within non-EA organizations.

So my main question is: How easy or difficult is it, in your experience, to find or create altruistic work within non-EA organizations?

Thank you very much for your time and patience in reading this long question. Your insight would be very valuable to me.

(Background on why I'm asking this question, and why it matters to me, is in the comments section.)


jikifo5403 @ 2025-11-17T15:32 (+2)

It is possible to work at non-EA orgs while still contributing meaningfully to AI risk, but the impact usually comes from what you do outside your main job: independent research, collaborations, and earning to give to support aligned projects. Direct s-risk or alignment roles in profit-driven companies are rare, so many people build financial stability first and then self-fund research or transition later. It’s not ideal, but it’s a realistic path that still lets you stay engaged, keep learning, and contribute where opportunities exist.

JoA🔸 @ 2025-11-17T21:19 (+1)

Hi! My superficial understanding is that grantmakers in s-risks have a certain bar for what they're open to funding, and that they generally have the capacity to fund a marginal independent researcher if their work is sufficiently promising. If, in the future, you seem like an individual with a track record that is good enough in funders' views (perhaps built through independent research, fellowships, non-s-risk research at AI labs, etc.), then receiving funding will be possible, as money does not seem to be the primary constraint (at least, that's not what grantmakers in the field seem to think). But that is a high bar to pass.

If you actually manage to save $150,000 per year, Macroscopic can advise you on donations to reduce s-risks, which would be a considerable contribution to a cause you seem to care about a lot. (I have no ties to Macroscopic; the information is publicly available on their website.)

jackchang110 @ 2025-11-18T06:08 (+1)

Thanks a lot for your answer. I know that most EA organizations and grantmakers say talent is the primary constraint. In practice, though, it seems very difficult to get a job at an EA organization. I'm unsure, but it also seems difficult (perhaps less than a 50% success rate) to get independent research funding from grantmakers. Of course, if you have great research talent it's much easier to get funding, but I'll probably just be a mediocre researcher, so I probably can't rely on EA grantmakers to support me.

What do you think about my main question: Is it difficult to find or create altruistic work within non-EA organizations (especially work on reducing AI s-risks)?

jackchang110 @ 2025-11-17T13:37 (+1)

I’m actually currently a first-year university student, double-majoring in medicine and computer science. (Unlike in the US, in Taiwan medical education begins at the undergraduate level, and one obtains a medical license after completing the program.)

I’ve still been struggling with a major decision: whether I should continue my double major in medicine or focus solely on computer science. By the EA community’s reasoning, medicine seems less relevant to priorities like AI safety or s-risks. However, one major advantage of studying medicine is financial stability. Before transformative AI arrives, I suspect that computer science jobs might become increasingly competitive, whereas doctors may still earn a stable income. Given that uncertain future, I’ve considered working as a doctor temporarily (perhaps for around 10–15 years) and saving most of my earnings to reduce future financial pressure.

(Although I’m aware that future AI progress could eventually automate much of medical and dental work.)

Therefore, if it's really difficult to find EA-aligned work at non-EA companies, that would strengthen the case for double majoring in medicine or dentistry.

ThaoOnEarth🔹 @ 2025-11-18T02:15 (+1)

How difficult is it for you to double major in medicine and computer science, given that both majors are time-consuming and require heavy specialization (especially medicine)?


I think EA is a highly interdisciplinary community, and I doubt that with your background you couldn’t find a cause area that combines both (pandemic prevention, AI bio-x-risks). Also, how much time do you expect your medical career to take before it reaches the threshold of financial stability you'd need to earn to give? In the U.S., becoming a doctor usually takes 7–15 years and considerable debt before you actually make $500K. Given these factors, you might also weigh both career trajectories against each other and find multiple paths you can pursue at once.

jackchang110 @ 2025-11-18T05:44 (+1)

Hello Thao, thanks a lot for your patient reply.

I don’t think double majoring itself is difficult, but it is very time-consuming. It would require 4–5 additional years of studying medicine and doing hospital internships. Since I believe AI s-risks are probably far more important than bio x-risks and global health, I think it makes more sense to major only in CS and contribute directly, rather than spending those extra years learning medicine.

However, I’m worried that without enough financial security, I might end up working in non-EA organizations until retirement and be unable to focus on the most altruistic work. That’s my main concern: How likely is it that I could find a career outside EA organizations that still allows me to work on altruistic goals, such as reducing AI s-risks?

In Taiwan, medical and dental school tuition is very cheap, so debt wouldn’t be an issue. In fact, I’m considering switching from medicine to dentistry, because dental residency is 2–4 years shorter than medical residency. Based on my estimation, it might take around 5 years after graduating to earn about $500k if I choose the dental path.
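As a rough back-of-the-envelope check of that timeline, here is a minimal Python sketch. The $100k/year figure is my own assumption, inferred from the $500k-over-5-years estimate above rather than stated anywhere in the thread:

```python
# Back-of-the-envelope check of the "5 years to ~$500k" estimate.
# All figures are illustrative assumptions, not facts from the thread.
target_earnings = 500_000  # rough goal mentioned above, in USD
annual_income = 100_000    # assumed annual income on the dental path
years_needed = target_earnings / annual_income
print(f"~{years_needed:.0f} years to earn ${target_earnings:,}")  # -> ~5 years to earn $500,000
```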