Latest comments on the EA Forum

Comments on 2025-03-06

tlevin @ 2025-03-05T20:09 (+34) in response to levin's Quick takes

I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.

Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause.

That brings me to another problem: assessing neglectedness might sound easier than abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions? I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn't already being done within that cause. And so on.

What I mean by "disaster for EA" (despite the wins/exceptions in the previous paragraph) is that I often encounter "but that's not neglected" as a reason not to do something, whether at a personal or organizational or movement-strategy level, and it seems again like a decent initial heuristic but easily overridden by taking a closer look. Sure, maybe other people are doing that thing, and fewer or zero people are doing your alternative. But can't you just look at the existing projects and ask whether you might be able to improve on their work, or whether there still seems to be low-hanging fruit that they're not taking, or whether you could be a force multiplier rather than just an input with diminishing returns? (Plus, the fact that a bunch of other people/orgs/etc are working on that thing is also some evidence, albeit noisy evidence, that the thing is tractable/important.) It seems like the neglectedness heuristic often leads to more confusion than clarity on decisions like these, and people should basically just use importance * tractability (call it "the IT framework") instead.
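
To make the decomposition tlevin is pointing at explicit, here is a sketch of the usual ITN factorisation (this is the common 80,000 Hours-style framing rather than anything stated in the comment itself); the units telescope so that the product is marginal cost-effectiveness:

```latex
% Importance x Tractability x Neglectedness = good done per extra dollar of resources.
\frac{\text{good done}}{\text{\% of problem solved}}
\times
\frac{\text{\% of problem solved}}{\text{\% increase in resources}}
\times
\frac{\text{\% increase in resources}}{\text{extra dollar}}
= \frac{\text{good done}}{\text{extra dollar}}
```

On this framing, the "IT framework" amounts to estimating the product of the last two factors (marginal percent of the problem solved per extra dollar) directly, rather than estimating tractability and neglectedness separately.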

NickLaing @ 2025-03-06T05:51 (+2)

I love this take and I think you have some strong arguments, but I still think neglectedness makes sense to keep under "ITN". It's just a framework; it ain't clean and perfect. You're right that an issue doesn't have to be neglected to be a potentially high-impact cause area. I like the way you put it here.

"Maybe neglectedness useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability'

That's good enough for me, though.

I would also say that, especially in global development, relative "importance" might become a less "necessary" part of the framework as well. If we can spend small amounts of money solving relatively smallish issues really cost-effectively, then why not?

Your examples are exceptions too; most of the big EA causes were highly neglected before EA got involved.

When explaining EA to people who haven't heard of it, I think neglectedness might actually be the part which makes the most intuitive sense, and what helps people click. When I explain the outsized impact EA has had on factory farming, or lead elimination, or AI safety because those issues didn't have so much attention before, people really get it.

Noah Birnbaum @ 2025-03-05T22:00 (+1) in response to History of diversity efforts and trends in EA

I think the Encompass link is expired. 

Julia_Wise🔸 @ 2025-03-06T03:44 (+2)

Thanks, I've changed it to an article about them.

Davidmanheim @ 2025-03-06T02:44 (+2) in response to Has anyone done research on why Covid-19 didn't become a bigger warning shot for future pandemic prevention?

In my personal view, there was a tremendous failure to capitalize on the crisis by global health security organizations, which were focused on stopping spread, but waited until around mid 2021 to start looking past COVID. This was largely a capacity issue, but it was also a strategic failure, and by the time anyone was seriously looking at things like the pandemic treaty, the window had closed.

Gideon Futerman @ 2025-03-02T14:26 (+18) in response to Gideon Futerman's Quick takes

I wish more work focused on digital minds really focused on answering the following questions, rather than merely investigating how plausible it is that digital minds similar to current-day AIs could be sentient:

  1. What do good sets of scenarios for post-AGI governance need to look like to create good/avoid terrible (or whatever normative focus we want) futures, assuming digital minds are the dominant moral patients going into the future? 1a) How does this differ depending on what sorts of things can be digital minds, e.g. whether sentient AIs are likely to happen 'by accident' by creating useful AIs (including ASI systems or sub-systems) vs whether sentient AIs have to be deliberately built? How do we deal with this trade-off?

  2. Which of these good sets of scenarios need certain actions to be taken pre-ASI development (actions beyond simply ensuring we don't all die)? Therefore, what actions would we ideally take now to help bring about such good futures? This includes, in my view, the question of what thicker concept of alignment than 'intent alignment', if any, we ought to use.

  3. Given the strategic, political, geopolitical and technological situation we are in, how, if at all, can we make concrete progress towards this? We obviously can't just 'do research' and hope this solves everything. Rather, we ought to use this to guide specific actions that can have impact. I guess this step feels rather hard to do without 1 and 2, but also, as far as I can tell, no one is really doing this?

I'm sure someone has expressed this same set of questions elsewhere, but I've not seen them yet, and at least to me, they seem pretty neglected and important.

Bradford Saad @ 2025-03-06T01:58 (+3)

I'd also like to see more work on digital minds macrostrategy questions such as 1-3. To that end, I'll take this opportunity to mention that the Future Impact Group is accepting applications for projects on digital minds (among other topics) through EoD on March 8 for its part-time fellowship program. I'm set to be a project lead for the upcoming cohort and would welcome applications from people who'd want to work with me on a digital minds macrostrategy project. (I suggest some possible projects here but am open to others.) 

I think the other project leads listed for AI sentience are all great and would highly recommend applying to work with any of them on a digital minds project (though I'm unsure if any of them are open to macrostrategy projects).

kungfuhobbit @ 2025-03-06T01:35 (+1) in response to The catastrophic situation with USAID just got worse - why the EA community should care

Where can I find facts on the impact on providing contraceptives?

Arepo @ 2025-03-06T01:02 (+2) in response to Announcing: Existential Choices Debate Week (March 17-24)

I'm going to struggle to cast a meaningful vote on this, since I find the 'existential risk' terminology as used in the OP more confusing than helpful: e.g. it includes non-existential considerations and, in practice, excludes non-extinction catastrophes from a discussion they should very much be in, in favour of work that focuses, on heuristic-but-insufficient grounds, on the events with maximal extinction probability (i.e. AI). 

I've argued here that non-extinction catastrophes could be as or more valuable to work on than immediate extinction events, even if all we care about is the probability of very long-term survival. For this reason I actually find Scott's linked post extremely misleading, since it frames his priorities as 'existential' risk, then pushes people entirely towards working on extinction risk - and gives reasons that would apply as well to non-extinction GCRs. I gave some alternate terminology here, and while I don't want to insist on my own clunky suggestions, I wish serious discussions would be more precise.

tlevin @ 2025-03-05T20:09 (+34) in response to levin's Quick takes

I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.

Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause.

That brings me to another problem: assessing neglectedness might sound easier than abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions? I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn't already being done within that cause. And so on.

What I mean by "disaster for EA" (despite the wins/exceptions in the previous paragraph) is that I often encounter "but that's not neglected" as a reason not to do something, whether at a personal or organizational or movement-strategy level, and it seems again like a decent initial heuristic but easily overridden by taking a closer look. Sure, maybe other people are doing that thing, and fewer or zero people are doing your alternative. But can't you just look at the existing projects and ask whether you might be able to improve on their work, or whether there still seems to be low-hanging fruit that they're not taking, or whether you could be a force multiplier rather than just an input with diminishing returns? (Plus, the fact that a bunch of other people/orgs/etc are working on that thing is also some evidence, albeit noisy evidence, that the thing is tractable/important.) It seems like the neglectedness heuristic often leads to more confusion than clarity on decisions like these, and people should basically just use importance * tractability (call it "the IT framework") instead.

MichaelDickens @ 2025-03-06T00:06 (+8)

Upvoted and disagree-voted. I still think neglectedness is a strong heuristic. I cannot think of any good (in my evaluation) interventions that aren't neglected.

Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion

I wouldn't think about it that way because "scientific research" is so broad. That feels kind of like saying shrimp welfare isn't neglected because a lot of money goes to animal shelters, and those both fall under the "animals" umbrella.

US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn't already being done within that cause.

If you're talking about polling on AI safety, that wasn't being done at all IIRC, so it was indeed highly neglected.



Comments on 2025-03-05

Brooke O'Connell 🔸 @ 2025-03-01T20:18 (+2) in response to What comes after the Introduction Fellowship? A tool and approach for community organizers

Thanks for your comment Nina! I've definitely kept tabs on and been inspired by what you all are doing at HIP. In fact, we might be using some small pieces as inspiration for a revamp of our career fellowship later this year (more so tailored for current students or recent grads). I'll definitely add a link to the IAP in the 'Activities' tab in the template. 

Nina Friedrich🔸 @ 2025-03-05T23:39 (+1)

Aw, that's fantastic to hear!

Feel free to get in touch when the time comes - always happy to share some of our learnings!

Noah Birnbaum @ 2025-03-05T22:00 (+1) in response to History of diversity efforts and trends in EA

I think the Encompass link is expired. 

GFRA @ 2025-03-05T20:43 (+1) in response to Retrospective: Letter-writing campaign to MPs on the Victorian Pig Welfare Inquiry

How often do EA advocacy orgs just hire public affairs consultants to do this? Good ones have efficient workflows for basic outreach like this. 

If this doesn’t exist within EA it might be a good general-purpose consultancy service someone could provide. 

These also seem like common problems that could’ve been anticipated in many cases. 

I don’t have much contextual knowledge so may have missed basic facts about this situation. 

tlevin @ 2025-03-05T20:09 (+34) in response to levin's Quick takes

I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.

Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be impactful at the margin, because more resources mean it's more likely that the most cost-effective solutions have already been tried or implemented. But these resources are often deployed ineffectively, such that it's often easier to just directly assess the impact of resources at the margin than to do what the formal ITN framework suggests, which is to break this hard question into two hard ones: you have to assess something like the abstract overall solvability of a cause (namely, "percent of the problem solved for each percent increase in resources," as if this is likely to be a constant!) and the neglectedness of the cause.

That brings me to another problem: assessing neglectedness might sound easier than abstract tractability, but how do you weigh up the resources in question, especially if many of them are going to inefficient solutions? I think EAs have indeed found lots of surprisingly neglected (and important, and tractable) sub-areas within extremely crowded overall fields when they've gone looking. Open Phil has an entire program area for scientific research, on which the world spends >$2 trillion, and that program has supported Nobel Prize-winning work on computational design of proteins. US politics is a frequently cited example of a non-neglected cause area, and yet EAs have been able to start or fund work in polling and message-testing that has outcompeted incumbent orgs by looking for the highest-value work that wasn't already being done within that cause. And so on.

What I mean by "disaster for EA" (despite the wins/exceptions in the previous paragraph) is that I often encounter "but that's not neglected" as a reason not to do something, whether at a personal or organizational or movement-strategy level, and it seems again like a decent initial heuristic but easily overridden by taking a closer look. Sure, maybe other people are doing that thing, and fewer or zero people are doing your alternative. But can't you just look at the existing projects and ask whether you might be able to improve on their work, or whether there still seems to be low-hanging fruit that they're not taking, or whether you could be a force multiplier rather than just an input with diminishing returns? (Plus, the fact that a bunch of other people/orgs/etc are working on that thing is also some evidence, albeit noisy evidence, that the thing is tractable/important.) It seems like the neglectedness heuristic often leads to more confusion than clarity on decisions like these, and people should basically just use importance * tractability (call it "the IT framework") instead.

WillPearson @ 2025-03-05T20:05 (+1) in response to Fractal Governance: A Tractable, Neglected Approach to Existential Risk Reduction

Here is a blog post, also written with Claude's help, that I hope to use to engage home-scale experimenters.

Ryan Greenblatt @ 2025-03-05T19:35 (+4) in response to Announcing: Existential Choices Debate Week (March 17-24)

I think reducing the risk of misaligned AI takeover looks like a pretty good usage of people on the margin. My guess is that misaligned AI takeover typically doesn't result in extinction in the normal definition of the term (killing basically all humans within 100 years). (Maybe I think the chance of extinction-defined-normally given AI takeover is 1/3.)

Thus, for me, the bottom line of the debate statement comes down to whether misaligned AI takeover which doesn't result in extinction-defined-normally actually counts as extinction in the definition used in the post.

I don't feel like I understand how the definition you give of "a future with 0 value" handles cases like:

"Misaligned AIs takeover and have preferences that on their own have ~0 value from our perspective. However, these AIs keep most humans alive out of a small amount of kindness and due to acausal trade. Additionally, lots of stuff happens in our lightcone which is good due to acausal trade (but this was paid for by some entity that shared our preferences). Despite this, misaligned AI takeover is actually somewhat worse (from a pure longtermist perspective) than life on earth being wiped about prior to this point, because aliens were about 50% likely to be able to colonize most of our lightcone (or misaligned AIs they create would do this colonization) and they share our preferences substantially more than the AIs do."

More generally, my current overall guess at a preference ordering is something like: control by a relatively enlightened human society that shares my moral perspectives (and has relatively distributed power) > human control where power is roughly as democratic as now > human dictator > humans are driven extinct but primates aren't (so probably other primates develop an intelligent civilization in like 10-100 million years) > earth is wiped out totally (no AIs and no chance for intelligent civilization to re-evolve) > misaligned AI takeover > earth is wiped out and there aren't aliens so nothing ever happens with resources in our lightcone > various s-risk scenarios.

What line here counts as "extinction"? Does moving from misaligned AI takeover to "human control where power is roughly as democratic as now" count as an anti extinction scenario?

Peter Wildeford @ 2025-03-05T19:07 (+13) in response to Peter Wildeford's Quick takes

If you've liked my writing in the past, I wanted to share that I've started a Substack: https://peterwildeford.substack.com/

Ever wanted a top forecaster to help you navigate the news? Want to know the latest in AI? I'm doing all that in my Substack -- forecast-driven analysis about AI, national security, innovation, and emerging technology!

Toby Tremlett🔹 @ 2025-03-05T16:30 (+2) in response to Announcing: Existential Choices Debate Week (March 17-24)

PS- would it still be worth sharing the thesis, or some thoughts from it? You could claim late draft amnesty if you'd like to post it without editing it :)

SiebeRozendal @ 2025-03-05T18:47 (+2)

It's here!

Toby Tremlett🔹 @ 2025-03-05T16:29 (+2) in response to Announcing: Existential Choices Debate Week (March 17-24)

Does footnote #2 on the debate statement cover this? "Our" and "We" are supposed to refer to "earth-originating intelligent life", so "happy sentient AI spreading via VNM probes" would be included. 

[edit- I'm very open to rephrasing it if not]

SiebeRozendal @ 2025-03-05T18:45 (+2)

Ah yeah that seems fine then! "Life" is an imprecise term and I'd prefer "sentience" or "sentient beings" but maybe I'm overdoing it

SiebeRozendal @ 2025-03-05T18:38 (+4) in response to Are US EAs Prepared for Democratic Backsliding?

For forecasts, here's Manifold's US Democracy questions, which I suggest sorting by total traders (and unfortunately, anything n<30 traders becomes quite unreliable) and I also have a Manifold dashboard compiled where questions are grouped a bit more by theme here.

Main questions are:

  • "If Trump is elected, will the US still be a liberal democracy at the end of his term? (58%, n = 191)" - criticism of the V-DEM benchmark here
  • "Will the United States experience a constitutional crisis before 2030? (73%, n = 123)"
  • "Will Donald Trump arrest his political opponents [before 2026]? (41%, n = 76)"
  • "Will a sitting US President refuse to follow or ignore a Supreme Court ruling by 2032? (55%, n = 68)"
  • "Will Donald Trump remain de facto leader of the United States beyond the end of his second term? (7%, n = 44)"
David Mathers🔸 @ 2025-03-05T11:32 (+3) in response to Are US EAs Prepared for Democratic Backsliding?

It's relevant because if people's opposition to woke is driven by racism or dislike of leftist-coded things or groups, that will currently also drive  opposition to foreign aid, which is meant to help Black people and is broadly (centre) left coded*. (There are of course old-style Bush II type conservatives who both hate the left and like foreign aid, so this sort of polarization is not inevitable at the individual level, but it does happen.) 


*Obviously there are lots of aid critics as you go further left who think it is just a instrument of US imperialism etc. And some centrists and centre-left people are aid critics too of course. 

River @ 2025-03-05T18:35 (+1)

Again you are not making the connection, or maybe not seeing my basic point. Even if someone dislikes leftist-coded things, and this causes them both to oppose wokism and to oppose foreign aid, this still does not make opposition to foreign aid about anti-wokism. The original post suggested there was a causal arrow running between foreign aid and wokism, not that both have a causal arrow coming from the same source.

titotal @ 2025-03-05T09:29 (+1) in response to Are US EAs Prepared for Democratic Backsliding?

This again seems like another "bubble" thing. The vast majority of conservatives do not draw a distinction between USAID and foreign aid in general. And I would guess they do associate foreign aid with "woke", because "woke" is a word that is usually assigned based on vibes alone, for the things perceived as taking away from the average American to give to some other minority. Foreign aid involves spending American money to help foreigners, so it's absolutely perceived as "woke". 

Look, I wish we lived in a world where people were rational and actually defined their terms and made their decisions accordingly, but that's not the world we live in. 

River @ 2025-03-05T18:13 (+3)

EA is an offshoot of the rationalist movement! The whole point of EA's existence is to try to have better conversations, not to accept that most conversations suck and speak in vibes!

I also don't think it's true that conservatives don't draw the distinction between foreign aid and USAID. Spend five minutes listening to any conservative talk about the decision to shut down USAID. They're not talking about foreign aid being bad in general. They are talking about things USAID has done that do not look like what people expect foreign aid to look like. They seem to enjoy harping on the claim that USAID was buying condoms for Gaza. Now, whether or not that claim is true, and whether or not you think it is good to give Gazans condoms, you have to admit that condoms are not what anybody thinks of when they think of foreign aid.

Gideon Futerman @ 2025-03-02T14:26 (+18) in response to Gideon Futerman's Quick takes

I wish more work focused on digital minds really focused on answering the following questions, rather than merely investigating how plausible it is that digital minds similar to current-day AIs could be sentient:

  1. What do good sets of scenarios for post-AGI governance need to look like to create good/avoid terrible (or whatever normative focus we want) futures, assuming digital minds are the dominant moral patients going into the future? 1a) How does this differ depending on what sorts of things can be digital minds, e.g. whether sentient AIs are likely to happen 'by accident' by creating useful AIs (including ASI systems or sub-systems) vs whether sentient AIs have to be deliberately built? How do we deal with this trade-off?

  2. Which of these good sets of scenarios need certain actions to be taken pre-ASI development (actions beyond simply ensuring we don't all die)? Therefore, what actions would we ideally take now to help bring about such good futures? This includes, in my view, the question of what thicker concept of alignment than 'intent alignment', if any, we ought to use.

  3. Given the strategic, political, geopolitical and technological situation we are in, how, if at all, can we make concrete progress towards this? We obviously can't just 'do research' and hope this solves everything. Rather, we ought to use this to guide specific actions that can have impact. I guess this step feels rather hard to do without 1 and 2, but also, as far as I can tell, no one is really doing this?

I'm sure someone has expressed this same set of questions elsewhere, but I've not seen them yet, and at least to me, they seem pretty neglected and important.

Ryan Greenblatt @ 2025-03-05T18:08 (+4)

I think work of the sort you're discussing isn't typically called digital minds work. I would just describe this as "trying to ensure better futures (from a scope-sensitive longtermist perspective) other than via avoiding AI takeover, human power grabs, or extinction (from some other source)".

This just incidentally ends up being about digital entities/beings/value because that's where the vast majority of the value probably lives.


The way you phrase (1) seems to imply that you think large fractions of expected moral value (in the long run) will be in the minds of laborers (AIs we created to be useful) rather than things intentionally created to provide value/disvalue. I'm skeptical.

Maxime Riché @ 2025-03-05T17:46 (+8) in response to Announcing: Existential Choices Debate Week (March 17-24)

Great news! 

> If there are other posts you think more people should read, please comment them below. I might highlight them during the debate week, or before. 

I am in the process of publishing a series of posts ("Evaluating the Existence Neutrality Hypothesis") related to the theme of the debate ("Extinction risks" VS "Alignment risks / Future value"). The series is about evaluating how to update on those questions given our best knowledge about potential space-faring civilizations in the universe. 

I will aim to publish several of the remaining posts during the debate week.

NickLaing @ 2025-03-04T10:12 (+4) in response to The catastrophic situation with USAID just got worse - why the EA community should care

Thanks Dorothy those are good points.

I'm a huge fan of grassroots organising, although it's so hard to quantify the benefits that I'm not sure it falls in the realm of what effective altruism can easily get behind. Also, grassroots organisation shouldn't need very much money - I've been involved in the past and we didn't need much!

I'm only a fan of leveraging existing orgs if they can really show they are achieving what they say they are. How do we know current advocates are effective? I can't see a reason why EA orgs shouldn't be able to start up and work on this over time - I would take a few EA orgs failing at advocacy as some signal that it might not be a super cost-effective approach.

I'm skeptical of the idea that there are magical orgs out there doing things that new orgs couldn't replicate at all. Could you suggest some existing orgs that you think are very good that EA could support?

EA does support some too. One example that EA does support with stacks of cash in this space is the Center for Global Development. I've had one terrible interaction with them that lowered my confidence in whether they really are a good faith org and also their general competency, but that doesn't mean they aren't doing impactful stuff.

 

Dorothy M. @ 2025-03-05T17:24 (+1)

Great points Nick. Can I message you/are you open to moving this offline? I want to keep the conversation going and have a lot of thoughts but some I'm not comfortable sharing publicly yet (not 100% convinced of my position) and I need to be sensitive given my job and relationships with other orgs in the space. 

Teresa Jakovlev @ 2025-03-05T12:02 (+2) in response to Can Monitoring and Evaluation Drive Organizational Change? Lessons from The Mission Motor

That's amazing news! Thank you for sharing your insights, they are very valuable to us :)

Nicoll Peracha @ 2025-03-05T17:13 (+1)

You're welcome. Let us know please if you have any questions :)

OllieBase @ 2025-02-22T23:51 (+9) in response to More virtual/hybrid EAG(x) participation options?

Hi Eevee,

As you know, the EA Global team are currently running the event in Oakland, but we've seen this and will share some thoughts after the event (and some time off).

FYI this was briefly discussed a few years back.

OllieBase @ 2025-03-05T17:02 (+2)

 @Eevee🔹 

In our survey data from EAG London 2021, where we tried this, we see that the virtual participants had a lower likelihood to recommend (8.1 vs. 9.1) and made ~4x fewer connections than in-person attendees (10.2 vs. 2.4).

I think Lizka expressed the main case against well (as does Neel)

lots of in-person attendees or speakers who would want to interact with people who are attending virtually are too busy with the in-person conference, the organizers are split between the two sides (and largely focus on the more involved in-person side), and there's a bit more confusion about how everything works.

I expect that this effect will be even stronger now that there are regular virtual events (i.e. fewer virtual attendees would attend hybrid events). If the main benefit comes from watching content, that's usually posted on Youtube shortly after the event (though not livestreamed).
 

SummaryBot @ 2025-03-05T16:57 (+1) in response to History of diversity efforts and trends in EA

Executive summary: This post provides a historical overview of diversity, equity, and inclusion (DEI) efforts in the Effective Altruism (EA) community, detailing key organizational initiatives, hiring practices, community discussions, and demographic trends over time.

Key points:

  1. Organizational efforts (2015-2024): EA institutions have launched various initiatives to support underrepresented groups, such as mentorship programs (e.g., Magnify Mentoring), identity-based meetups, travel grants, hiring policies, and demographic-focused workshops at EA conferences.
  2. Hiring and staffing strategies: EA organizations have tested strategies to improve diversity, including outreach to underrepresented candidates, anonymized applications, and emphasis on trial tasks over credentials, with mixed success in increasing representation.
  3. Community discussions and research: There have been numerous EA Forum posts, studies, and internal discussions on diversity, particularly regarding gender balance, racial representation, and inclusivity in EA spaces. Some debates have been contentious, especially around racial justice and epistemics.
  4. Demographic trends in EA (2014-2024): The EA community remains predominantly male, white, and left-leaning, but recent EA survey data indicates increasing gender and racial diversity, particularly among newer cohorts.
  5. Challenges and impact: While diversity efforts have led to some progress, issues remain in retention, inclusivity, and balancing DEI initiatives with EA's broader goals. Some initiatives have had limited impact or unclear long-term effects.
  6. Future directions: Further research and community feedback may help refine DEI strategies, particularly around geographic diversity, retention of underrepresented groups, and inclusivity at EA events.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Toby Tremlett🔹 @ 2025-03-05T16:29 (+2) in response to Announcing: Existential Choices Debate Week (March 17-24)

Does footnote #2 on the debate statement cover this? "Our" and "We" are supposed to refer to "earth-originating intelligent life", so "happy sentient AI spreading via VNM probes" would be included. 

[edit- I'm very open to rephrasing it if not]

Toby Tremlett🔹 @ 2025-03-05T16:30 (+2)

PS- would it still be worth sharing the thesis, or some thoughts from it? You could claim late draft amnesty if you'd like to post it without editing it :)

SiebeRozendal @ 2025-03-05T16:02 (+6) in response to Announcing: Existential Choices Debate Week (March 17-24)

I really like that you've chosen this topic and think it's an important one! I wrote my MA Philosophy thesis on this (in 2019, now outdated).

On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive

I want to flag that I disagree with this framing, as it's very anthropocentric. There are futures in which we go extinct but that are nevertheless highly valuable (happy sentient AI spreading via VNM probes). Perhaps more empirically relevant, I expect almost all effects to go via making the transition to superintelligence go well, and the most distinct action is focusing on digital sentience (which has little effect on extinction risk and much effect on the value of the future). 

Toby Tremlett🔹 @ 2025-03-05T16:29 (+2)

Does footnote #2 on the debate statement cover this? "Our" and "We" are supposed to refer to "earth-originating intelligent life", so "happy sentient AI spreading via VNM probes" would be included. 

[edit- I'm very open to rephrasing it if not]

Will Aldred @ 2025-01-22T13:29 (+13) in response to Preparing Effective Altruism for an AI-Transformed World

+1. I appreciated @RobertM’s articulation of this problem for animal welfare in particular:

I think the interventions for ensuring that animal welfare is good after we hit transformative AI probably look very different from interventions in the pretty small slice of worlds where the world looks very boring in a few decades.

If we achieve transformative AI and then don’t all die (because we solved alignment), then I don’t think the world will continue to have an “agricultural industry” in any meaningful sense (or, really, any other traditional industry; strong nanotech seems like it ought to let you solve for nearly everything else). Even if the economics and sociology work out such that some people will want to continue farming real animals instead of enjoying the much cheaper cultured meat of vastly superior quality, there will be approximately nobody interested in ensuring those animals are suffering, and the cost for ensuring that they don’t suffer will be trivial.

[...] if you think it’s at all plausible that we achieve TAI in a way that locks in reflectively-unendorsed values which lead to huge quantities of animal suffering, that seems like it ought to dominate effectively all other considerations in terms of interventions w.r.t. future animal welfare.

I’ve actually tried asking/questioning a few animal welfare folks for their takes here, but I’ve yet to hear back anything that sounded compelling (to me). (If anyone reading this has an argument for why ‘standard’ animal welfare interventions are robust to the above, then I’d love to hear it!)

Tristan Katz @ 2025-03-05T16:21 (+1)

Animal welfare guy tuning in. My own take is that the majority of the world actually is almost entirely indifferent about animal suffering, so if AI tries to reflect global values (not just the values of the progressive, elite silicon valley bubble) there is a real risk that it will be indifferent to animal suffering. Consider how Foie Gras is still legal in most countries, or bullfighting, both of which are totally unnecessary. And those are just examples from western countries.

I think it's very likely that TAI will lock in only a very mild concern for animal welfare. Or perhaps, concern for animal welfare in certain contexts (e.g. pets), and none in others (e.g. chicken). Maybe that will lead to a future without factory farming, but it will lead to a future with unnecessary animal suffering nonetheless. 

What I'm not sure about is: how do we ensure that TAI locks in a strong valuation of animal welfare? One route is to try to change how much society cares about animal welfare, and hope that TAI then reflects that. I guess this is the hope of many animal advocates. But I admit that seems too slow to work at this stage, so I agree that animal advocates should probably prioritize trying to influence those developing AI right now.

Melanie Brennan @ 2025-03-05T15:36 (+2) in response to Explaining EA: What I learned from giving a TEDx talk about EA

Thanks a lot for sharing this refreshing take on explaining EA concepts to a broader audience (without necessarily mentioning "EA" itself) - appreciate it! 

Yuval Shapira @ 2025-03-05T16:19 (+1)

Thank you kindly for your comment! :) 

KMF @ 2025-03-05T00:58 (+6) in response to History of diversity efforts and trends in EA

"2023: Magnify Mentoring expands to serve people from underrepresented groups generally. “It includes, but is not limited to, people from low to middle income countries, people of color, people from low-income households, etc.” - The intention here was to pilot a round for people from underrepresented groups not captured by gender. We haven't reached consensus as to whether we will continue. It depends mostly on the impact asssessment of the round (which concludes this month). While it is accurate to say Magnify initially focused on improving diversity and inclusivity in the EA community, the organization's strategy is now focused on supporting the careers and wellbeing broadly of people who are working in evidence-based initiaves with or without an EA link. I mention this mostly because I don't want people to self-select out of applying for mentorship or mentoring with us. 

Julia_Wise🔸 @ 2025-03-05T16:15 (+2)

Thanks for the correction! I've adjusted the entries, do let me know if there's anything still not right.

SiebeRozendal @ 2025-03-05T16:02 (+6) in response to Announcing: Existential Choices Debate Week (March 17-24)

I really like that you've chosen this topic and think it's an important one! I wrote my MA Philosophy thesis on this (in 2019, now outdated).

On the margin[1], it is better to work on reducing the chance of our[2] extinction than increasing the value of futures where we survive

I want to flag that I disagree with this framing, as it's very anthropocentric. There are futures in which we go extinct but that are nevertheless highly valuable (happy sentient AI spreading via VNM probes). Perhaps more empirically relevant, I expect almost all effects to go via making the transition to superintelligence go well, and the most distinct action is focusing on digital sentience (which has little effect on extinction risk and much effect on the value of the future). 

Seth Ariel Green 🔸 @ 2025-03-05T15:34 (+3) in response to Seth Ariel Green 's Quick takes

Anyone else get a pig butchering scam attempt lately via DM on the forum? 

I just got the following message 

> Happy day to you, I am [X] i saw your profile today and i like it very much,which makes me to write to you to let you know that i am interested in you,therefore i will like you to write me back so that i will tell you further about myself and send you also my picture for you to know me physically. 

[EMAIL]

I reported the user on their profile and opened a support request but just FYI


 

Toby Tremlett🔹 @ 2025-03-05T15:46 (+2)

We've got 'em. Apologies to anyone else who got this message. 

Seth Ariel Green 🔸 @ 2025-03-05T15:34 (+3) in response to Seth Ariel Green 's Quick takes

Anyone else get a pig butchering scam attempt lately via DM on the forum? 

I just got the following message 

> Happy day to you, I am [X] i saw your profile today and i like it very much,which makes me to write to you to let you know that i am interested in you,therefore i will like you to write me back so that i will tell you further about myself and send you also my picture for you to know me physically. 

[EMAIL]

I reported the user on their profile and opened a support request but just FYI


 

Toby Tremlett🔹 @ 2025-03-05T15:39 (+4)

Thanks for sharing Seth. Would you mind DMing me their name? I'll ban the account, and mods will look into this. 

Melanie Brennan @ 2025-03-05T15:36 (+2) in response to Explaining EA: What I learned from giving a TEDx talk about EA

Thanks a lot for sharing this refreshing take on explaining EA concepts to a broader audience (without necessarily mentioning "EA" itself) - appreciate it! 

Seth Ariel Green 🔸 @ 2025-03-05T15:34 (+3) in response to Seth Ariel Green 's Quick takes

Anyone else get a pig butchering scam attempt lately via DM on the forum? 

I just got the following message 

> Happy day to you, I am [X] i saw your profile today and i like it very much,which makes me to write to you to let you know that i am interested in you,therefore i will like you to write me back so that i will tell you further about myself and send you also my picture for you to know me physically. 

[EMAIL]

I reported the user on their profile and opened a support request but just FYI


 

NickLaing @ 2025-03-04T19:34 (+9) in response to We need a new Artesunate - the miracle drug fades


Hi James, and thanks for posting here on the forum - I appreciate someone who is obviously a malaria expert weighing in with this useful feedback. I will say I was aware of much of the factual information in your feedback, but chose to leave it out for storytelling purposes - for better or worse. 

Overall I was going for a short, simplified narrative article which briefly walked through 5 "acts" in the story of artesunate, while highlighting the incredible discovery story, pointing out that we may have unnecessarily delayed the mass roll-out of artemisinin treatment, and flagging the emerging resistance issues. I'm writing for a general audience so I didn't focus on scientific details or get into the weeds, while doing my best not to be misleading. Of course I compromised at times to bring the story out more vividly. I would argue that I make few factual errors; rather, I missed out some aspects of the story that could be considered important. I'm interested if you disagree with any of my major points in the article, besides the important scientific information that I omitted. You write that I "confuse the main issues" and I'm interested in what you think those main issues are.

Thanks to your great feedback I've made a few changes to be more precise - I suspect you won't be completely satisfied as I still leave out much detail, but I hope it helps.

  1. I didn’t want to get into the nitty gritty of treatment during the war (simplicity again), but you’re right that for much of the war chloroquine was dominant. I’ve changed the wording to “Chloroquine and quinine took too long to cure malaria and side effects could be rough”. I wonder what you make of the US army’s reports of both apparent chloroquine and quinine resistance during the war? My main point here was that the Vietnam war stimulated the development of artemisinin derivatives.
     
  2. I’ve added in those 1996 studies to the timeline, thanks! This only further demonstrates, in my mind, how slow we were to figure out how much better artemisinin derivatives could be than the status quo. I found these 2 studies super interesting as the signal in both of them was leaning towards artemether being better than quinine (although of course not statistically significant). Unfortunately, here in Northern Uganda we still use artemether injections for some of our patients who can’t afford artesunate (aware that it is not as good), as artemether is less than half the price of artesunate. Fortunately, in only the last 2 years, artesunate prices have reduced by about 30%, which is great, so we are using less artemether than ever.
     
  3. I chose to leave out the combination treatment part of the story (see below for reasons), but for accuracy’s sake I have added the combination therapy point in the final paragraph and changed it to “In combination with other medication, artesunate cures over 99%...” 

A little pushback

  1. You're right that I didn't get into differentiating between the different artemisinin derivatives. Although this is a little imprecise, I don’t think explaining the nature of these different derivatives is a critically important piece of information for the story. Again, I’m simplifying for storytelling reasons. 
     
  2. I considered describing the combination therapy part of the story, which I agree is important, but decided to leave it out because the injectable artesunate seems to have had a far more important mortality impact than co-artem, and it would have meant telling a slightly confusing parallel story ;). Feel free to push back here and I agree there is a good argument for adding them to the story - if I wrote a longer article I would have.

    - This Rethink Priorities research here estimates that injectable artesunate saved about 1.7 million lives by 2022 while co-artem saved about 300,000. This would mean artesunate is responsible for 85% of the lives saved by artemisinin derivatives and co-artem only 15%. This makes sense to me as artesunate provides a large mortality benefit in severe malaria, while other medications, if taken properly, cure malaria almost as effectively as co-artem. I agree there are many other benefits from co-artem in uncomplicated malaria (side effects, shorter course, faster clearance) and there is a mortality benefit vs alternatives, but far less extreme than for artesunate.

  • Much of the co-artem development story happens in Southeast Asia, which while important accounts for under 5% of malaria deaths.

3) I don’t really understand your disagreement with my statement here: "adding a 3rd drug to the treatment cocktail – a blunt and expensive instrument but one that could buy us another 10 years." You write "No this is wrong. New drugs currently in development are being developed as "triple" therapy. There are good pharmacological reasons for this around prevention of the emergence of resistance.”

What's "wrong" about my statement here exactly? The Lancet article I quoted discusses the idea of adding a third drug to artemether-lumefantrane as you say, and yes its to avoid the emergence of resistance. It will be expensive to add a third drug and I consider a 3 drug combination a bit of a sledgehammer/blunt tool. Perhaps we largely agree here?

Thanks again for the feedback and I hope to hear more from you here on the forum :).

jwatowatson @ 2025-03-05T15:29 (+3)

I disagree with point 2. ACTs have had a huge impact on malaria mortality and morbidity, primarily because they are so effective, well tolerated, and replaced a completely failing drug (chloroquine). ACTs have lasted in Africa >20 years before starting to succumb to resistance. They have had an enormous impact. 

The Rethink Priorities estimate concerns coartem dispersible only, compared to a counterfactual of receiving crushed tablet formulations of Coartem. Two problems: Coartem is the Novartis brand name for artemether lumefantrine (AL), and the dispersible is only a proportion (kids under 3? Not sure about this point) of all AL treatments. And the counterfactual is still an ACT! Novartis only supplies around 10% of all AL. AL is about 70% if all ACTs. 

The correct counterfactual questions are: how many kids would die from malaria if quinine was still first-line treatment for severe malaria; and how many kids would die from malaria if ACTs did not exist (e.g. if treatment for uncomplicated malaria only used existing non-artemisinin drugs). The second counterfactual is really hard to estimate with any confidence because ACTs were such a massive revolution in the treatment of malaria. 

To put severe malaria versus uncomplicated malaria in perspective: donor funded procurement of ACTs in 2022 was 257M (Chai estimates). For injectable & rectal artesunate this was 45M (almost 6x difference). The fact that AL (or ACTs in general) were primarily developed and tested in Asia is irrelevant: their use today is in Africa.
 

Regarding point 3: the future of antimalarial treatment for uncomplicated malaria will be triple combination therapy. For the next 5 to 10 years this will likely be with existing drugs, possibly in combination with new drugs (e.g. ganaplacide). Triple therapy is not a blunt tool, it is what is needed to prevent the emergence of resistance. 
 

Vasco Grilo🔸 @ 2025-03-05T14:37 (+2) in response to Nuance in Proxies

Assuming we know of both organizations above that their contributions were necessary, both can claim having helped 600,000 chickens, without needing to help 1,200,000 chickens in total.

This problem cannot be mitigated by thinking probabilistically. If there is probability p_s_A (p_s_B) of organisation A (B) being successful acting alone, p_s of organisations A and B being successful acting together, p_A (p_B) of organisation A (B) acting, and impact N given success, the expected counterfactual value of:

  • A acting is CV_A = ((1 - p_B)*p_s_A + p_B*p_s - p_B*p_s_B)*N.
  • B acting is CV_B = ((1 - p_A)*p_s_B + p_A*p_s - p_A*p_s_A)*N.

The sum of the expected counterfactual values of A and B is CV = CV_A + CV_B = ((1 - p_A - p_B)*(p_s_A + p_s_B) + (p_A + p_B)*p_s)*N. This can be as large as 2*N when A and B can never succeed alone (p_s_A, p_s_B = 0), A and B always succeed acting together (p_s = 1), and A and B are certain to act (p_A, p_B = 1).

The problem is solved using Shapley values. The expected Shapley value of:

  • A acting is SV_A = ((1 - p_B)*p_s_A + p_B*p_s/2 - p_B*p_s_B)*N.
  • B acting is SV_B = ((1 - p_A)*p_s_B + p_A*p_s/2 - p_A*p_s_A)*N.

The sum of the expected Shapley values of A and B is SV = SV_A + SV_B = ((1 - p_A - p_B)*(p_s_A + p_s_B) + (p_A + p_B)/2*p_s)*N. This can only be as large as N when A and B can never succeed alone (p_s_A, p_s_B = 0), A and B always succeed acting together (p_s = 1), and A and B are certain to act (p_A, p_B = 1).
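
A minimal numerical sketch of these formulas (plain Python; the function name and the example figure of 600,000 chickens are illustrative, chosen to match the quoted scenario) shows the counterfactual-value sum double-counting the joint impact while the Shapley-value sum does not:

```python
def expected_values(p_s_A, p_s_B, p_s, p_A, p_B, N):
    """Expected counterfactual (CV) and Shapley (SV) values, per the formulas above."""
    # Counterfactual value: each org compares success probability with vs. without itself.
    cv_A = ((1 - p_B) * p_s_A + p_B * p_s - p_B * p_s_B) * N
    cv_B = ((1 - p_A) * p_s_B + p_A * p_s - p_A * p_s_A) * N
    # Shapley value: the success probability from acting together, p_s, is split equally.
    sv_A = ((1 - p_B) * p_s_A + p_B * p_s / 2 - p_B * p_s_B) * N
    sv_B = ((1 - p_A) * p_s_B + p_A * p_s / 2 - p_A * p_s_A) * N
    return cv_A, cv_B, sv_A, sv_B

# Extreme case: neither org can succeed alone, joint action always succeeds, both always act.
cv_A, cv_B, sv_A, sv_B = expected_values(p_s_A=0, p_s_B=0, p_s=1, p_A=1, p_B=1, N=600_000)
print(cv_A + cv_B)  # 1200000: counterfactual credit sums to 2*N
print(sv_A + sv_B)  # 600000.0: Shapley credit sums to the actual impact N
```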

OscarD🔸 @ 2025-03-03T14:32 (+2) in response to Published report: Pathways to short TAI timelines

I think I would have found this more interesting/informative if the scenarios (or other key parts of the analysis) came with quantitative forecasts. I realise of course this is hard, but without this I feel like we are left with many things being 'plausible'. And then do seven "plausible"s sum to make a "likely"? Hard to say! That said, I think this could be a useful intro to arguments for short timelines to people without much familiarity with this discourse.

Zershaaneh Qureshi @ 2025-03-05T14:26 (+3)

Thanks for the feedback! I essentially agree that quantitative forecasts would be a valuable addition to this work. Actually, I'd previously planned to include a basic model as an appendix alongside the qualitative analysis of the full report, but just didn't end up having the capacity at the time to pull together forecasts that felt genuinely meaningful. (My first attempt resulted in a model which was wildly sensitive to small changes in the values of each parameter, and just didn't feel useful.) I welcome any efforts to build on the ideas/scenarios in my report in this way, and would be keen to chat to anyone doing work in this rough direction!

For what it's worth (and I indicate this in the 'Purpose' section at the start of the report), I think the main utility of the report as it stands -- as a largely qualitative piece of work -- is less in its establishing a conclusion about the plausibility/likelihood of short timelines and more in its use as a resource for understanding and engaging with the timelines debate. With respect to the former goal, it would definitely benefit from more quantification than I currently provide. But my hope is more that this report will be something people can use to familiarise themselves with the debate, refer to in their own work, build upon, etc. 

geoffrey @ 2025-03-02T23:44 (+4) in response to Got two postgrad offers - which should I choose?

Echoing what Eva said, I think you should consider waiting a year then apply for IDE / applied econ masters. An IDE program is probably the right fit given your goals, but I don't know any beyond Yale's IDE which expects you to already have worked in development first.

For Applied Econ, I like University of Maryland's Applied Economics Master's program. The program only requires Calc I and is very transparent about what it can do. Dev / global health placements, content, and networking will take a huge hit compared to IDE programs though.

You can use the year in the workforce to save money and take online classes on the side. Believe me, you'll want the savings. Development and global health can be financially tough in early career.

In general, the econ and math background required isn't too high for these type of real-world Master's degrees. Working + getting good grades in first-semester calculus, first-semester probability and stats, intermediate micro, and intermediate macro may be enough for admission.

rl1004 @ 2025-03-05T13:05 (+1)

Understood, thanks so much for the advice!

Trym Braathen🔸 @ 2025-03-05T13:05 (+2) in response to Test Your Knowledge of the World's Biggest Problems

Cool quiz! 

VettedCauses @ 2025-03-03T21:17 (+1) in response to Should every strategy be publicly intelligible?

Sorry if it came across like this post was intended to be feedback for you specifically -- your post was just one of the most recent examples that came to mind of charity evaluation being done publicly. I have no idea if anything I'm saying here applies in the case of Sinergia. 

No worries! The reason we thought you were saying this applies to Sinergia (and our review of them) is because your post says:

people have criticized charities for having publicly facing messages that don't always align with what the critic thinks is right. This group Vetted Causes (example) is just one recent example of this sort of pattern.

Could you clarify what you meant when you said Vetted Causes is an example of this pattern?

If you think it is always unacceptable to provide false information to the public, even if that's a part of an organization's theory of change, I do disagree. 

What we asked was if you think it is acceptable for Sinergia to provide the false information that they did about Alibem's surgical castration practices. Could you please clarify this specific point before we move on to broader points?

conflictaverse @ 2025-03-05T12:53 (+1)

To be clear, I think my post could apply to your review, as my post reflects a general concern that when doing charity evaluations people often don't have sufficient context to know if they're accurately assessing cost effectiveness or general purpose. But I haven't followed Sinergia closely so I have no idea the extent to which it is in fact applicable to Sinergia -- I'd need to be an insider to know that. 

So what I meant when I said Vetted Causes' review could be an example of this pattern is that it appears you are doing reviews of strategies and organizations without being either extremely experienced at that strategy or extremely familiar with the organization's potentially private intentions. I have no idea if in fact Sinergia or the other organizations you have reviewed have private intentions that are different from their publicly stated goals -- I'm raising that as a possibility for any charity, as a factor that makes evaluating strategy difficult. Of course, if you in fact have an enormous amount of campaigning experience and access to Sinergia's private strategy documents, please do correct my misapprehension!

While I think your last question is reasonable given the direction of the conversation, I'm nevertheless not going to answer it, because it takes us off the topic of this post and into criticism/discussion of the content of your review in particular (as opposed to the general principle I am trying to focus on with this post: that sometimes organizations have non-publicly shareable strategies, and that makes accurately evaluating them challenging or impossible). 

Animal Charity Evaluators @ 2025-03-05T09:02 (+4) in response to Announcing our 2024 Charity Recommendations

Thanks for the questions!

As noted in GWWC's report, our reasoning for recommending ÇHKD is that we think they're very plausibly competitive with our other recommended charities, such as Sinergia. Sinergia's CEA rested on more high uncertainty assumptions than ÇHKD’s did, and their CEA covered a smaller percentage of their work. We think it's reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia). We also think ÇHKD may have more potential to have increased cost-effectiveness in the future, based on their recent shift to focus attention on winning commitments from larger retailers.

There are a few things we'd like to note when it comes to SWP and ALI:

  1. They were evaluated in different years (SWP in 2023 and ALI in 2024) with different methodologies for assessing cost-effectiveness. In 2023, we assessed cost-effectiveness using weighted factor models that consider achievement quantity and quality, whereas in 2024 we switched to back-of-the-envelope calculations of impact per dollar. Because of this, there was no direct comparison between the shrimp stunning programs at SWP and ALI. However, the next time we evaluate SWP we expect to create an impact per dollar estimate, in which case the estimates you’ve created (including differentiating slaughter via ice slurry vs asphyxiation) will come in handy.
  2. ALI's shrimp work only accounts for ~38% of their overall expenditure, and we had strong reasons to recommend them for their other work (policy outreach, the Aquatic Animal Alliance, etc.).

While ACE values plurality, we don't take a "best-in-class" approach and wouldn't rule out recommending multiple charities doing similar work.

Thanks, Vince

Vasco Grilo🔸 @ 2025-03-05T12:23 (+2)

Thanks, Vince!

their [Sinergia's] CEA covered a smaller percentage of their work.

I think this can indeed be important. I estimated Sinergia Animal's meal replacement program in 2023 was 0.107 % as cost-effective as their cage-free campaigns. So I would say that x % of their marginal funding going towards their meal replacement program would decrease their marginal cost-effectiveness by around x %. I think your CEAs should ideally refer to the expected additional funding caused by ACE's recommendations, not a fraction of the organisations' past work. GWWC's evaluation argued for this too if I recall correctly.

We think it's reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia).

Even if the organisation whose cost-effectiveness is more certain is way less cost-effective in expectation? If so, I encourage you to disclaim your recommendations are risk averse (as GiveWell does with respect to their Top Charities Fund), and clarify how much.

While ACE values plurality, we don't take a "best-in-class" approach and wouldn't rule out recommending multiple charities doing similar work.

Would you still recommend many organisations doing similar work if you thought their cost-effectiveness differed significantly? I would drop a recommendation whenever the reduction in impact linked to the recommended organisation receiving less funding was exceeded by the increase in impact linked to other organisations receiving more funds. For example, if you thought recommendation A was 10 % as cost-effective at the margin as recommendation B, and that dropping recommendation A would decrease the funds of A by 100 k$, increase the funds of B by 50 k$, increase the funds of roughly neutral (non-recommended) organisations by 40 k$, increase donations to your movement grants' fund by 10 k$, and believed this fund was 2 times as cost-effective at the margin as recommendation B, dropping recommendation A would be as good as directing 60 k$ (= (-100*0.1 + 50 + 40*0 + 10*2)*10^3) to B. In this case, it would be worth dropping recommendation A. Have you considered reasoning along these lines to decide on whether to make a recommendation or not? I understand there is lots of uncertainty about comparisons between the marginal cost-effectiveness of organisations, and how dropping or adding a recommendation would influence the funding of your recommendations. However, you are already making judgements about these implicitly. I think being explicit about your assumptions would help you clarify them, and improve them in the future, thus eventually leading to better decisions.
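To make the arithmetic above easier to follow, here is a minimal sketch in Python. The funding shifts and relative cost-effectiveness values are the illustrative figures from the paragraph above, not ACE's actual numbers.

```python
# Minimal sketch of the "should we drop recommendation A?" arithmetic above.
# All figures are the illustrative ones from the comment, not real ACE data.
# Each entry: (change in funds in $, cost-effectiveness relative to recommendation B).
funding_shifts = [
    (-100_000, 0.1),  # recommendation A loses funds; A is 10 % as cost-effective as B
    (50_000, 1.0),    # recommendation B gains funds
    (40_000, 0.0),    # roughly neutral (non-recommended) organisations gain funds
    (10_000, 2.0),    # movement grants fund gains funds; 2 times as cost-effective as B
]

# Net effect of dropping A, expressed in "dollars directed to B" equivalents.
net_effect_in_b_dollars = sum(delta * relative_ce for delta, relative_ce in funding_shifts)
print(f"Dropping A is as good as directing ${net_effect_in_b_dollars:,.0f} to B")
# Prints $60,000, matching (-100*0.1 + 50 + 40*0 + 10*2)*10^3.
```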

Linch @ 2025-03-05T02:03 (+5) in response to Linch's Quick takes

(x-posted from LW)

Single examples almost never provide overwhelming evidence. They can provide strong evidence, but not overwhelming.

Imagine someone arguing the following:
 

1. You make a superficially compelling argument for invading Iraq

2. A similar argument, if you squint, can be used to support invading Vietnam

3. It was wrong to invade Vietnam

4. Therefore, your argument can be ignored, and it provides ~0 evidence for the invasion of Iraq.

In my opinion, 1-4 is not reasonable. I think it's just not a good line of reasoning. Regardless of whether you're for or against the Iraq invasion, and regardless of how bad you think the original argument 1 alluded to is, 4 just does not follow from 1-3.
___
Well, I don't know how Counting Arguments Provide No Evidence for AI Doom is different. In many ways the situation is worse:

a. invading Iraq is more similar to invading Vietnam than overfitting is to scheming. 

b. As I understand it, the actual ML history was mixed. It wasn't just counting arguments; many people also believed in the bias-variance tradeoff as an argument for overfitting. And in many NN models, the actual resolution was double descent, which is a very interesting and confusing interaction where, as the ratio of parameters to data points increases, the test error first falls, then rises, then falls again! So the appropriate analogy to scheming, if you take it very literally, is to imagine first you have goal generalization, then goal misgeneralization, then goal generalization again. But if you don't know which end of the curve you're on, it's scarce comfort. 

Should you take the analogy very literally and directly? Probably not. But the less exact you make the analogy, the fewer bits you should be able to draw from it. 

---

I'm surprised that nobody else pointed out my critique in the full year since the post was published, given that it was both popular and received critical engagement. My criticism seems more elementary than the sophisticated counterarguments other people provided, so perhaps I'm missing something. 

When I made my arguments verbally to friends, a common response was that they thought the original counting arguments were weak to begin with, so they didn't mind weak counterarguments to them. But I think this is invalid. If you previously strongly believed in a theory, a single counterexample should update you massively (but not all the way to 0). If you previously had very little faith in a theory, a single counterexample shouldn't update you much. 

titotal @ 2025-03-05T12:18 (+4)

1-4 is only unreasonable because you've written a strawman version of 4. Here is a version that makes total sense:

1. You make a superficially compelling argument for invading Iraq

2. A similar argument, if you squint, can be used to support invading Vietnam

3. This argument for invading Vietnam was wrong because it made mistakes X, Y, and Z

4. Your argument for invading Iraq also makes mistakes X, Y and Z

5. Therefore, your argument is also wrong. 

Steps 1-3 are not strictly necessary here, but they add supporting evidence to the claims. 

As far as I can tell from the article, they are saying that you can make a counting argument which implies that it's impossible to make a working SGD model. They are using this as a jumping-off point to explain the mistakes that would lead to flawed counting arguments, and then they spend the rest of the article trying to prove that the AI misalignment counting argument is making these same mistakes. 

You can disagree with whether or not they have actually proved that AI misalignment made a comparable mistake, but that's a different problem to the one you claim is going on here. 

Teresa Jakovlev @ 2025-03-05T12:02 (+2) in response to Can Monitoring and Evaluation Drive Organizational Change? Lessons from The Mission Motor

That's amazing news! Thank you for sharing your insights, they are very valuable to us :)

Sören Klinger @ 2025-03-05T11:48 (+2) in response to Announcing: Existential Choices Debate Week (March 17-24)

According to my understanding, the last two posts in the "Further reading" section actually represent disagreement with the proposed debate statement, given their emphasis on s-risks.

Toby Tremlett🔹 @ 2025-03-05T11:49 (+2)

Thanks for the close reading- wish I could say that had been a test. I'll edit it now :)

Sören Klinger @ 2025-03-05T11:48 (+2) in response to Announcing: Existential Choices Debate Week (March 17-24)

According to my understanding, the last two posts in the "Further reading" section actually represent disagreement with the proposed debate statement, given their emphasis on s-risks.

River @ 2025-03-04T21:42 (+5) in response to Are US EAs Prepared for Democratic Backsliding?

I appreciate that you have a pretty nuanced view here. Much of it I agree with, some of it I do not, but I don't want to get into these weeds. I'm not sure how any of it undermines the point that wokism and opposition to foreign aid are basically orthogonal.

David Mathers🔸 @ 2025-03-05T11:32 (+3)

It's relevant because if people's opposition to woke is driven by racism or dislike of leftist-coded things or groups, that will currently also drive opposition to foreign aid, which is meant to help Black people and is broadly (centre) left coded*. (There are of course old-style Bush II type conservatives who both hate the left and like foreign aid, so this sort of polarization is not inevitable at the individual level, but it does happen.) 


*Obviously there are lots of aid critics as you go further left who think it is just a instrument of US imperialism etc. And some centrists and centre-left people are aid critics too of course. 

Vasco Grilo🔸 @ 2025-02-25T18:51 (+2) in response to Expected value of rejecting a job offer when there are other ongoing applications

Thanks for clarifying, Matta!

If I understand your post correctly, you are saying that by being recruited as fund manager for AWF, you will direct $186k to SWP, whereas if you are not recruited, the next candidate will allocate these funds to other interventions whose impact is comparatively negligible, so that the value of your work for 1 year in this position will be 186-0 = $186k.

Yes, that is practically it. Strictly speaking, the 2nd best candidate would also direct funds to interventions as cost-effective as SWP. I assumed I would direct 186 k$ more than whatever they would.

What do you think?

@Mata'i Souchon, I have updated this paragraph. I agree more actors would be responsible for the impact linked to AWF granting more to organisations as cost-effective as SWP (me, AWF, their donors, and the organisations) than to that linked to me donating more to such organisations (me, and the organisations). My counterfactual value, which is what I estimated in my post, is the same in both cases, but my Shapley value, which is what matters, is larger in the latter. In both cases, all the actors I listed are necessary to produce impact, so I think I would be responsible for 25 % (= 1/4) of the impact linked to AWF granting more to organisations as cost-effective as SWP, but 50 % (= 1/2) of the impact linked to me donating more to such organisations. So I believe I should have weighted the former 50 % (= 0.25/0.5) as heavily as I originally did in my post. I have now corrected for this by halving the impact of my direct work I originally estimated. The ratio between the expected value from turning down and accepting the offer from Anonymous Organisation went from 1.20 to 1.07.

Thanks to your comments, I went from a ratio of 1.67 to 1.07. My decision would have been the same based on this, but it is a significant update. Thanks for engaging!

Vasco Grilo🔸 @ 2025-03-05T11:12 (+2)

@Mata'i Souchon, I have updated this paragraph. I agree more actors would be responsible for the impact linked to AWF granting more to organisations as cost-effective as SWP (me, AWF, their donors, and the organisations) than to that linked to me donating more to such organisations (me, and the organisations). My counterfactual value, which is what I estimated in my post, is the same in both cases, but my Shapley value, which is what matters, is larger in the latter.

I have reverted the changes regarding the Shapley value. Thinking more about it, I realised what matters is not the number of necessary actors, but whether their actions are sufficiently independent from my decision about the offer, which I think they are.

Zach Stein-Perlman @ 2025-03-04T19:20 (+8) in response to Announcing: Existential Choices Debate Week (March 17-24)

One key question for the debate is: what can we do / what are the best ways to "increas[e] the value of futures where we survive"?

My guess is it's better to spend most effort on identifying possible best ways to "increas[e] the value of futures where we survive" and arguing about how valuable they are, rather than arguing about "reducing the chance of our extinction [vs] increasing the value of futures where we survive" in the abstract.

Toby Tremlett🔹 @ 2025-03-05T11:00 (+2)

I agree- this is what I mean by my clarification of the tractability point above. One of the biggest considerations for me personally in this debate is whether there are any interventions in the 'increasing the value of the future' field which are as robust in their value as extinction risk reduction. 

Toby_Ord @ 2019-12-02T11:39 (+57) in response to Shapley values: Better than counterfactuals

While I think the Shapley value can be useful, there are clearly cases where the counterfactual value is superior for an agent deciding what to do. Derek Parfit clearly explains this in Five Mistakes in Moral Mathematics. He is arguing against the 'share of the total' view, but at least some of the arguments also apply to the Shapley value (which is basically an improved version of 'share of the total'). In particular, the best points you have listed in favour of the Shapley value, as applied to making a moral decision, correctly apply when you and others are all making the decision 'together'. If the others have already committed to their part in a decision, the counterfactual value approach looks better.

e.g. on your first example, if the other party has already paid their $1000 to P, you face a choice between creating 15 units of value by funding P or 10 units by funding the alternative. Simple application of Shapley value says you should do the action that creates 10 units, predictably making the world worse.
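As a rough illustration, here is a minimal sketch of this comparison. Since the original post's first example isn't reproduced here, the sketch assumes, consistent with the description above, that project P yields 15 units only if both parties each contribute their $1000, and that each party's alternative yields 10 units on its own; these specifics are assumptions, not a quotation of the original example.

```python
# Minimal sketch of the counterfactual vs Shapley comparison above.
# Assumed reconstruction of the example: P yields 15 units only if both
# parties fund it ($1000 each); each party's alternative yields 10 units alone.
V_P_BOTH, V_ALT = 15, 10

# Shapley value of one party's contribution to P (2-player symmetric game):
# average of the marginal contributions over the two join orders.
shapley_p = 0.5 * (0 - 0) + 0.5 * (V_P_BOTH - 0)  # = 7.5 units

# If the other party has already committed their $1000 to P, your
# counterfactual impact of also funding P is the full 15 units.
counterfactual_p_given_commitment = V_P_BOTH  # 15 units

print(f"Shapley value of funding P: {shapley_p} < alternative {V_ALT}")
print(f"Counterfactual value of funding P, given the other's commitment: "
      f"{counterfactual_p_given_commitment} > alternative {V_ALT}")
# A naive Shapley comparison picks the 10-unit alternative; the counterfactual
# comparison correctly picks P once the other party's donation is locked in.
```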

One might be able to get the best of both methods here if you treat cases like this where another agent has already committed to a known choice as part of the environment when calculating Shapley values. But you need to be clear about this. I consider this kind of approach to be a hybrid of the Shapley and counterfactual value approaches, with Shapley only being applied when the other agents' decisions are still 'live'. As another example, consider your first example and add the assumption that the other party hasn't yet decided, but that you know they love charity P and will donate to it for family reasons. In that case, the other party's decision, while not yet made, is not 'live' in the relevant sense and you should support P as well.

If you are going to pursue what the community could gain from considering Shapley values, then look into cases like this and subtleties of applying the Shapley value further — and do read that Parfit piece.

Vasco Grilo🔸 @ 2025-03-05T10:54 (+2)

Great points, Toby!

If the others have already committed to their part in a decision, the counterfactual value approach looks better.

More generally, some actors should maximise their counterfactual value if their actions are sufficiently independent from those of others. These need not have committed to some actions. The key is that the probability distribution of the actions of others is not much affected by those of the actors maximising counterfactual value.

Eva @ 2025-03-02T21:48 (+5) in response to Got two postgrad offers - which should I choose?

I would agree that econ has the potential to have more impact than a development studies degree, but neither program is an econ program. (Maybe that specific MPP has a lot of econ content, but MPP programs in general do not, and if this one does I would not know.)

If you are going to work for an international organization, either the MPhil or MPP would be fine but the MPhil might open more doors through name recognition.

Alternatively, there are a few master's programs out there that really focus on tech-ing people up. For example, USF has an applied economics and an IDE program that are well-regarded. A bunch of master's programs are trying to distinguish themselves on quantitative skills and I don't expect all of them to require an econ background, so maybe it's worth looking around more.

Even working for a year and applying to more quantitative programs saves you a year over doing two masters.

rl1004 @ 2025-03-05T10:42 (+1)

Definitely, there are some degrees here in the UK that act as economics conversion courses, so I might look into those. Thanks for all the advice!

tobycrisford 🔸 @ 2023-03-27T10:16 (+9) in response to Cooperative or Competitive Altruism, and Antisocial Counterfactuals

Point taken, although I think this is analogous to saying: Counterfactual analysis will not leave us predictably worse off if we get the probabilities of others deciding to contribute right.

Vasco Grilo🔸 @ 2025-03-05T10:26 (+2)

Agreed. Great point, Toby!

titotal @ 2025-03-04T22:03 (–39) in response to Habryka [Deactivated]'s Quick takes

The original post is only 700 words, and this is like half that length. Can you not give people the dignity of reading their actual parting words? 

OscarD🔸 @ 2025-03-05T10:21 (+17)

Pablo and I were trying to summarise the top page of Habryka's comments that he linked to (~13k words), not this departure post itself.

River @ 2025-03-04T21:46 (+5) in response to Are US EAs Prepared for Democratic Backsliding?

You missed my point. I agree that foreign aid is charged along partisan lines. My point was that most things that are charged along partisan lines are not charged along woke/anti-woke lines. Foreign aid is not an exception to that rule; USAID is.

titotal @ 2025-03-05T09:29 (+1)

This again seems like another "bubble" thing. The vast majority of conservatives do not draw a distinction between USAID and foreign aid in general. And I would guess they do associate foreign aid with "woke", because "woke" is a word that is usually assigned based on vibes alone, to things perceived as taking away from the average American to give to some other minority. Foreign aid involves spending American money to help foreigners; it's absolutely perceived as "woke". 

Look, I wish we lived in a world where people were rational and actually defined their terms and made their decisions accordingly, but that's not the world we live in. 

Indrek Kivirik @ 2025-03-05T09:21 (+1) in response to undefined

The link to the Rethink Priorities page (https://rethinkpriorities.org/ea-survey) is broken.

Vasco Grilo🔸 @ 2025-02-25T15:52 (+4) in response to Announcing our 2024 Charity Recommendations

Thanks for sharing! Have you considered recommending:

  • Sinergia Animal, but not ÇHKD?
    • You estimate Sinergia Animal's cage-free campaigns are 17.3 (= 935/54.0) times as cost-effective as those from ÇHKD.
  • SWP, but not ALI?
    • I estimate SWP has stunned 15.0 k shrimps per $, which is 3.03 (= 15.0*10^3/(4.95*10^3)) times your estimate for ALI.
    • However, you assumed the shrimps helped by ALI were originally slaughtered via ice slurry without electrical stunning, whereas I think 95 % of those helped by SWP were originally slaughtered via air asphyxiation without electrical stunning. I estimate the benefits per shrimp helped if they are originally slaughtered via air asphyxiation are 21.0 (= 0.0447/0.00213) times as large as if they are originally slaughtered via ice slurry. So I believe the fraction of shrimp ALI helped which were originally slaughtered via air asphyxiation without electrical stunning is quite important.
    • I estimate SWP has been 20.0 (= 0.0426/0.00213) times as impactful as it would have been if all the shrimp they helped were originally slaughtered via ice slurry. So I estimate SWP has been 60.6 (= 3.03*20.0) times as cost-effective as ALI's efforts to stun shrimps if the shrimps they helped were originally slaughtered via ice slurry (see the sketch below).
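For readers who want to check the arithmetic in the bullets above, here is a minimal sketch reproducing the quoted ratios. The underlying cost-effectiveness figures are simply the ones cited in the comment, taken at face value.

```python
# Minimal sketch reproducing the ratios quoted in the bullets above,
# taking the cited cost-effectiveness figures at face value.
sinergia_vs_chkd = 935 / 54.0                    # cage-free cost-effectiveness ratio
swp_vs_ali_shrimp_per_dollar = 15.0e3 / 4.95e3   # shrimps stunned per $
asphyxiation_vs_ice_slurry = 0.0447 / 0.00213    # benefit per shrimp helped
swp_vs_ice_slurry_only = 0.0426 / 0.00213        # SWP impact vs all-ice-slurry baseline
swp_vs_ali_if_ali_ice_slurry = swp_vs_ali_shrimp_per_dollar * swp_vs_ice_slurry_only

print(f"Sinergia vs CHKD cage-free: {sinergia_vs_chkd:.1f}")                        # ~17.3
print(f"SWP vs ALI, shrimps stunned per $: {swp_vs_ali_shrimp_per_dollar:.2f}")     # ~3.03
print(f"Air asphyxiation vs ice slurry benefit: {asphyxiation_vs_ice_slurry:.1f}")  # ~21.0
print(f"SWP vs ice-slurry-only counterfactual: {swp_vs_ice_slurry_only:.1f}")       # ~20.0
print(f"SWP vs ALI if ALI's shrimps were ice-slurry: {swp_vs_ali_if_ali_ice_slurry:.1f}")  # ~60.6
```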

Do you think your cost-effectiveness estimates are far from comparable even if they refer to similar interventions? Thanks to @Kevin Xia 🔸 for noting your cost-effectiveness estimates differed significantly even for similar interventions, which motivated me to make this comment.

@Aidan Whitfield🔸, I think this comment may be relevant to Giving What We Can's (GWWC's) evaluations of evaluators.

Animal Charity Evaluators @ 2025-03-05T09:02 (+4)

Thanks for the questions!

As noted in GWWC's report, our reasoning for recommending ÇHKD is that we think they're very plausibly competitive with our other recommended charities, such as Sinergia. Sinergia's CEA rested on more high uncertainty assumptions than ÇHKD’s did, and their CEA covered a smaller percentage of their work. We think it's reasonable to support both a charity that we are more certain is highly cost-effective (such as ÇHKD) as well as one that we are more uncertain is extremely cost-effective (such as Sinergia). We also think ÇHKD may have more potential to have increased cost-effectiveness in the future, based on their recent shift to focus attention on winning commitments from larger retailers.

There are a few things we'd like to note when it comes to SWP and ALI:

  1. They were evaluated in different years (SWP in 2023 and ALI in 2024) with different methodologies for assessing cost-effectiveness. In 2023, we assessed cost-effectiveness using weighted factor models that consider achievement quantity and quality, whereas in 2024 we switched to back-of-the-envelope calculations of impact per dollar. Because of this, there was no direct comparison between the shrimp stunning programs at SWP and ALI. However, the next time we evaluate SWP we expect to create an impact per dollar estimate, in which case the estimates you’ve created (including differentiating slaughter via ice slurry vs asphyxiation) will come in handy.
  2. ALI's shrimp work only accounts for ~38% of their overall expenditure, and we had strong reasons to recommend them for their other work (policy outreach, the Aquatic Animal Alliance, etc.).

While ACE values plurality, we don't take a "best-in-class" approach and wouldn't rule out recommending multiple charities doing similar work.

Thanks, Vince

gergo @ 2025-03-05T07:57 (+4) in response to Is it too hard to do good through EA?

Thanks for writing this up! 

Scale 1:1 advising with options for peers and buddy cohorts

For those interested in AIS, aisafety.com has a list of advisors to whom you can talk.

gergo @ 2025-03-05T07:53 (+5) in response to From Comfort Zone to Frontiers of Impact: Pursuing A Late-Career Shift to Existential Risk Reduction

It was really amazing to read your post, thank you for writing it up. I will make sure to share it with experienced professionals that I talk to! I think you are doing all the right things, so I hope you land a full-time role or start your own org soon!

andrewpei @ 2025-03-05T05:01 (+1) in response to How AI Takeover Might Happen in Two Years

Thanks for writing this, I think there is a lot of opportunity to create art to make the ideas that EAs often talk about like an AI takeover more concrete.

Linch @ 2025-03-05T02:03 (+5) in response to Linch's Quick takes

(x-posted from LW)

Single examples almost never provide overwhelming evidence. They can provide strong evidence, but not overwhelming.

Imagine someone arguing the following:
 

1. You make a superficially compelling argument for invading Iraq

2. A similar argument, if you squint, can be used to support invading Vietnam

3. It was wrong to invade Vietnam

4. Therefore, your argument can be ignored, and it provides ~0 evidence for the invasion of Iraq.

In my opinion, 1-4 is not reasonable. I think it's just not a good line of reasoning. Regardless of whether you're for or against the Iraq invasion, and regardless of how bad you think the original argument 1 alluded to is, 4 just does not follow from 1-3.
___
Well, I don't know how Counting Arguments Provide No Evidence for AI Doom is different. In many ways the situation is worse:

a. invading Iraq is more similar to invading Vietnam than overfitting is to scheming. 

b. As I understand it, the actual ML history was mixed. It wasn't just counting arguments; many people also believed in the bias-variance tradeoff as an argument for overfitting. And in many NN models, the actual resolution was double descent, which is a very interesting and confusing interaction where, as the ratio of parameters to data points increases, the test error first falls, then rises, then falls again! So the appropriate analogy to scheming, if you take it very literally, is to imagine first you have goal generalization, then goal misgeneralization, then goal generalization again. But if you don't know which end of the curve you're on, it's scarce comfort. 

Should you take the analogy very literally and directly? Probably not. But the less exact you make the analogy, the fewer bits you should be able to draw from it. 

---

I'm surprised that nobody else pointed out my critique in the full year since the post was published, given that it was both popular and received critical engagement. My criticism seems more elementary than the sophisticated counterarguments other people provided, so perhaps I'm missing something. 

When I made my arguments verbally to friends, a common response was that they thought the original counting arguments were weak to begin with, so they didn't mind weak counterarguments to them. But I think this is invalid. If you previously strongly believed in a theory, a single counterexample should update you massively (but not all the way to 0). If you previously had very little faith in a theory, a single counterexample shouldn't update you much. 

Felix_Werdermann 🔸 @ 2025-03-05T01:44 (+1) in response to Shapley values: Better than counterfactuals

Very nice text, thank you for writing it!!
I’m not sure whether this statement is universally true (and I’m also not entirely clear on what exactly it means):
"Agents individually optimizing for cost-effectiveness in terms of Shapley value globally optimize for total cost-effectiveness."

Let’s take Example 2: If the invention of calculus had a very large benefit, then both Newton and Leibniz optimized their cost-effectiveness in terms of Shapley value by working on it. However, the global cost-effectiveness would have been higher if only one of them had made the invention and the other had contributed to something else valuable instead.

One solution would be to first decide for the project with the highest cost-effectiveness (in Shapley value) and then recalculate the Shapley values. In that case, either Newton or Leibniz would work on the invention of calculus (depending on who had lower costs), and the other would not. But there are still situations where this approach does not lead to the highest cost-effectiveness (if the Shapley value is based on coalitions that are unrealistic due to limited available resources):

Let's assume that the three charities A, B, and C can finance a campaign for better chicken welfare with $1m. If only Charity A runs the campaign, it helps 200,000 chickens, and the same applies to Charity B. If Charities A and B launch a joint campaign, it helps 600,000 chickens. Charity C can only work alone and would help 250,000 chickens.

If a donor had $1m available, they would have to choose Charity A or B according to the Shapley value (300,000 chickens), but in reality, they would only help 200,000 chickens (assuming that a joint campaign by Charity A and B with $0.5m each is not possible or would also only help 200,000 chickens). It would be better to give the $1m to Charity C and help 250,000 chickens.
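A minimal sketch of this example follows, computing the two-player Shapley values for the A/B campaign and comparing them with what a budget-constrained donor actually achieves. All figures are the hypothetical ones from the example above.

```python
# Minimal sketch of the chicken-welfare example above (hypothetical figures).
# Characteristic function for the A/B coalition game (chickens helped):
v = {frozenset(): 0,
     frozenset({"A"}): 200_000,
     frozenset({"B"}): 200_000,
     frozenset({"A", "B"}): 600_000}

# Two-player Shapley value for A: average marginal contribution over join orders.
shapley_A = 0.5 * (v[frozenset({"A"})] - v[frozenset()]) \
          + 0.5 * (v[frozenset({"A", "B"})] - v[frozenset({"B"})])
print(f"Shapley value of A (and, by symmetry, B): {shapley_A:,.0f} chickens")  # 300,000

# But a donor with only $1m can fund a single campaign:
chickens_if_fund_A_alone = v[frozenset({"A"})]   # 200,000
chickens_if_fund_C_alone = 250_000               # Charity C works alone
print(f"Funding A alone: {chickens_if_fund_A_alone:,}; funding C: {chickens_if_fund_C_alone:,}")
# Ranking by Shapley value (300k > 250k) points to A or B, but under the
# budget constraint funding C actually helps more chickens (250k > 200k).
```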

KMF @ 2025-03-05T00:58 (+6) in response to History of diversity efforts and trends in EA

"2023: Magnify Mentoring expands to serve people from underrepresented groups generally. “It includes, but is not limited to, people from low to middle income countries, people of color, people from low-income households, etc.” - The intention here was to pilot a round for people from underrepresented groups not captured by gender. We haven't reached consensus as to whether we will continue. It depends mostly on the impact asssessment of the round (which concludes this month). While it is accurate to say Magnify initially focused on improving diversity and inclusivity in the EA community, the organization's strategy is now focused on supporting the careers and wellbeing broadly of people who are working in evidence-based initiaves with or without an EA link. I mention this mostly because I don't want people to self-select out of applying for mentorship or mentoring with us. 

Felix_Werdermann 🔸 @ 2025-03-05T00:50 (+5) in response to Nuance in Proxies

Thank you, @Kevin Xia 🔸 , for the text!
I also find the Shapley value very interesting for attributing impact—I wasn't familiar with it before, so thanks for the hint, @Vasco Grilo🔸 !

I think it depends on what decisions are being guided by the "impact share." If the goal is to determine how a donor should allocate their money, then in your first example, the Shapley value is probably more suitable than simple counterfactuals. However, if Organization A has already decided that it has fulfilled its role in securing a corporate commitment and now Organization B is deciding whether to do the same, then counterfactuals are useful here (which are identical to Shapley values with only one actor).

Even though the Shapley value is a good reference point for donors when distributing funds, I don’t think the best overall strategy is necessarily to donate to the charities with the highest cost-effectiveness in terms of Shapley value. Instead, donations should also be "coordinated." This becomes particularly clear in the second example: If Organization A and Organization B had nearly the same costs for referring to a grant, their cost-effectiveness would also be nearly the same, and a donor would most likely have to support either no charity or both charities for that purpose. It is obviously smarter to fund only one (or none) of them in this case.

One solution would be to first fund the project with the highest cost-effectiveness (in Shapley value) and then recalculate the Shapley values. In the second example, this would mean that first, Organization A or Organization B is funded (whichever has slightly lower costs), and then the other organization is no longer funded for this purpose.

However, in the first example, problems could arise if the total donation budget is insufficient to fund both organizations, meaning that in the end, the money has no effect at all.

Even though this scenario may seem unrealistic (since Organization A’s actions would likely still have a positive impact, even if Organization B does nothing), this problem also appears in a slightly modified model that may be more realistic. Let’s assume that if Organization A or Organization B acts alone, they would help 200,000 chickens. The Shapley value per organization would still be 300,000 chickens, but if the funds are not sufficient to support both organizations, funding one of them would only help 200,000 chickens. In that case, it would be better to fund a third charity, Charity C, which could help 250,000 chickens (in Charity C’s campaign, no other organizations would play a role).

Ryan Kidd @ 2025-03-05T00:00 (+10) in response to Examples of success from city and national groups

Also, Apollo Research and Leap Labs grew out of the MATS London Office (what later became LISA). I realize this was an AI safety office, not an EA office, but it feels significant.



Comments on 2025-03-04

Tobias Häberli @ 2025-03-04T23:48 (+7) in response to Could this be an unusually good time to Earn To Give?

Some simplifying assumptions:

  • £50k starting net worth
  • Only employed for the next 4 years
  • £300k salary, £150k after tax, £110k after personal consumption
  • 10% interest on your savings for 4 years
  • Around £635k at end of 2030

This is only slightly more than the average net worth of UK 55 to 64 year olds.

Overall, if this plan worked out near perfectly, it would place you in around the 92nd percentile of wealth in the UK.

This would put you in a good, but not great position to invest to give. 

Overall it seems to me as if you’re trying to speedrun getting incredibly wealthy in 4 years. This is generally not possible with salaried work (the assumptions above put you around the 99th to 99.5th percentile of salaries), but might be more feasible through entrepreneurship.
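A minimal sketch of the compounding behind the ~£635k figure, using the bullet assumptions above and additionally assuming the £110k of savings is added at the start of each year before earning the 10% return (figures in £k):

```python
# Minimal sketch of the compounding behind the ~£635k figure above.
# Assumes £110k of savings added at the start of each year, then 10% growth.
net_worth = 50.0        # starting net worth, £k
annual_savings = 110.0  # post-tax, post-consumption savings, £k
growth = 1.10           # 10% annual return

for year in range(4):   # employed for the next 4 years
    net_worth = (net_worth + annual_savings) * growth

print(f"Net worth at end of 2030: ~£{net_worth:.0f}k")  # ~£635k
```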

Some other considerations:

  • Working in such a high paying job, even in financial services, will probably not allow you to study and practice investing. You will not be an expert on AI investing and investing in general in 2030, which would be a problem if you believe such expertise was necessary for you to invest to give.
  • Quite a lot of EAs will be richer than this in 2030. My rough guess is more than 500. Your position might be useful but is likely to be far from unique.
  • You might want to think through your uncertainties about how useful money will be to achieve your goals in 2030-2040. If there are no more white-collar jobs in 2030, the world in 2035 might be very weird and confusing.
  • If there is a massive increase of overall wealth in 2030-2040 due to fast technological progress, a lot of problems you might care about will get solved by non-EAs. Charity is a luxury good for the rich; more people will be rich, and charity on average solves many more problems than it creates.
  • Technological progress itself will potentially solve a lot of the problems you care about.
  • (Also agree with Marcus’s point.)

eleanor mcaree @ 2025-03-04T23:28 (+19) in response to On deference to funders

Draft amnesty equivalent of a comment (i.e. I haven't put much thought into it.) I really enjoyed this post and agreed with a lot of what is in there. 

For context, I manage a modest sized grant portfolio in the animal welfare space and think a lot about these (that is, my) shortcomings. 

As an example, I had a meeting with a grant recipient last week where they explained that one of their programs wasn't working out as planned. It was refreshing to hear this kind of honesty from a grant recipient because it is so rare. 

Re experience outside of grant making, I have tended to volunteer as much as possible, but that is a limited substitute for all that you learn in paid campaigning. 

I'd be interested to hear if anyone else has other ideas how to improve the problems Abraham describes here. 

Karen Singleton @ 2025-03-04T23:20 (+1) in response to Retrospective: Letter-writing campaign to MPs on the Victorian Pig Welfare Inquiry

Thanks both for sharing this retrospective - it's a great summary of the campaigns' efforts, challenges and learnings. I appreciate the work that went into this, particularly the manual coordination and outreach—it’s no small task to organise a campaign like this!

I think the decision to focus on two key recommendations was a smart and pragmatic decision, people are easily overwhelmed and we need people to care and act.

Your insights about communication preferences (email over messenger) are useful. 

We run campaigns which involve letter/email templates in the style of your possible alternative #2, but that then faces the issue of MP offices receiving similar messages, which may then be discounted or not counted as individual correspondence. Use of some automation/AI tool may help avoid this, but we've not explored that space a lot.

Thanks again for describing your actions so clearly and sharing the insights.

FYI the UK Voters for Animals link doesn't seem to go where you want it to!

Pablo @ 2025-03-04T15:22 (+4) in response to Habryka [Deactivated]'s Quick takes

Here's another summary. I used Gemini 2.0 Flash (via the API), and this prompt:

The following is a series of comments by Habryka, in which he makes a bunch of criticisms of the effective altruism (EA) movement. Please look at these comments and provide a summary of Habryka’s main criticisms.

  1. Lack of leadership and accountability: He believes EA leadership is causing harm and lacks mechanisms for correcting course.
  2. Emphasis on PR and narrative control: He condemns EA organizations' risk aversion, guardedness, and attempts to control the narrative around FTX, prioritizing public image over transparency.
  3. Inadequate community health: He laments conformity pressures, fears of reprisal for dissent, and insufficient efforts to cultivate a culture of open disagreement.
  4. Entanglement with FTX: He faults EA leadership, particularly Will MacAskill, for endorsing Sam Bankman-Fried and entangling the movement with FTX despite warnings about SBF's character.
  5. Hero worship and lack of respect for intellectual leaders: He criticizes the hero worship of MacAskill, contrasting it with MacAskill's perceived lack of engagement with other intellectual leaders in the community. He sees this as part of a pattern of MacAskill prioritizing popularity and prestige over community health and epistemic integrity.
  6. Misleading communications and lack of transparency: He criticizes CEA for making inaccurate and misleading statements, for omitting crucial context in communications, and for concealing information about funding decisions.
  7. Scaling too quickly and attracting grifters: He worries that EA's rapid growth and increased funding attract deceptive actors and create perverse incentives.
  8. Overreliance on potentially compromised institutions: He expresses concerns about EA's deep ties to institutions like Oxford University, which may stifle intellectual exploration and operational capacity.
  9. Ignoring internal warnings about FTX: He reveals that he and others warned EA leadership about Sam Bankman-Fried's reputation for dishonesty, but those warnings went unheeded. He suggests he personally observed potentially illegal activities by SBF but chose not to share this information more widely.
  10. Flawed due diligence and poor judgment in grantmaking: He feels EA leadership's due diligence on SBF was inadequate and that they made poor judgments in providing him with substantial resources. He extends this criticism to grantmaking practices more generally.
  11. Unfair distribution of resources: He argues that the current distribution of funds within EA doesn't adequately compensate those doing object-level work and undervalues their contributions relative to donors. He argues for a system that recognizes the implicit tradeoff many have made in pursuing lower-paying EA-aligned careers.
  12. Centralized media policy and negative experiences with journalists: While supporting a less centralized media policy, he also cautions against interacting with journalists, as they frequently misrepresent interviewees and create negative experiences.
     
titotal @ 2025-03-04T22:03 (–39)

The original post is only 700 words, and this is like half that length. Can you not give people the dignity of reading their actual parting words? 

titotal @ 2025-03-04T11:28 (+9) in response to Are US EAs Prepared for Democratic Backsliding?

I don't think foreign aid is at risk of being viewed as woke. Even the conservative criticisms of USAID tend to focus on things that look very ideological and very not like traditional foreign aid.

This just isn't true. Yes, exaggerated claims of "wastefulness" are one of the reasons they are against it, but there are many more who are ideologically opposed to foreign aid altogether. 

I can link you to this exchange I had with a conservative, where they explicitly stated that saving the lives of a billion foreigners would not be worth increasing the national deficit by 4%, because they are ideologically opposed to American taxpayer money saving foreign lives, no matter how efficiently it does so. Or see the insanely aggressive responses to this seemingly innocuous Scott Alexander tweet. Or here is a popular right-wing meme specifically mocking liberals for having large moral circles. 

I suspect that you are in a bubble, where the conservatives you know are fine with foreign aid, so you extend that to the rest of conservatives. But in a broader context, 73% of Republicans want to cut foreign aid, while only 33% of Democrats do. 

River @ 2025-03-04T21:46 (+5)

You missed my point. I agree that foreign aid is charged along partisan lines. My point was that most things that are charged along partisan lines are not charged along woke/anti-woke lines. Foreign aid is not an exception to that rule; USAID is.

David Mathers🔸 @ 2025-03-04T13:15 (+9) in response to Are US EAs Prepared for Democratic Backsliding?

"And fundamentally opposition to wokism is motivated by wanting to treat all people equally regardless of race or sex"

I think this is true of a lot of public opposition to wokeism: plenty of liberals, socialists, and libertarians with very universalist cosmopolitan moral views find a lot of woke stuff annoying, plenty of working-class people of colour are not that woke on race, and lots of moderate conservatives believe in equality of this sort. Many people in all these groups genuinely express opposition to various woke ideas based on a genuine belief in colourblindness and its gender equivalent, and even if that sort of view is somehow mistaken, it is very annoying and unfair when very woke people pretend that it is always just a mask for bigotry. 

But it absolutely is not true of all opposition to woke stuff, or all but a tiny minority:

Some people are genuinely openly racist, sexist and homophobic, in the sense that they will admit to being these things. If you go and actually read the infamous "neoreactionaries" you will find them very openly attacking the very idea of "equality". They are a tiny group, but they do have the ear of some powerful people: definitely Peter Thiel, probably J.D. Vance (https://www.nytimes.com/2025/01/18/magazine/curtis-yarvin-interview.html). 

But in addition very many ordinary American Christians believe that men in some sense have authority/leadership over women, but would sincerely (and sometimes accurately) deny feeling hostile to women. For example, the largest Protestant denomination in the United States is the Southern Baptist Convention, and here's the NYT reporting on them making women even more banned from leadership within the organization than they already were, all of 2 years ago: https://www.nytimes.com/2023/06/14/us/southern-baptist-women-pastors-ouster.html There are 13 million Southern Baptists, which isn't a huge share of the US population, but many other conservative Protestant denominations also forbid women to serve in leadership positions and there are a lot of conservative Protestants overall, and some Catholics, and officially the Catholic Church itself shares this view. Of course, unlike the previous group, almost all of these people will claim that men and women in some sense have equal value. But almost all woke people who openly hate on white men will also claim to believe everyone has equal value, and develop elaborate theories about why their seemingly anti-white-male views are actually totally compatible with that. If you don't believe the latter, I wouldn't believe this group either when they say that men being "the head of the household" is somehow compatible with the good, proper kind of equality. (Note that it's not primarily the sincerity of that belief I am skeptical of, just its accuracy.) 

As for sexuality, around 29% of Americans still oppose same-sex marriage: https://news.gallup.com/poll/1651/gay-lesbian-rights.aspx Around a quarter think having gay sex/being gay is immoral: https://www.statista.com/statistics/225968/americans-moral-stance-towards-gay-or-lesbian-relations/

More generally, outgroup bias is a ubiquitous feature of human cognition. People can have various groups that wokeness presents itself as protecting as their outgroup, and some of those people will then oppose wokeness as a result of that bias. This is actually a pretty weak claim, compatible with the idea that woke or liberal people have equal or even greater levels of outgroup bias than conservatives. And it means that even a lot of people who sincerely claim to hold egalitarian views are motivated to oppose wokeness at least partially because of outgroup bias. (Just as some American liberals who are not white men and claim to be in some sense egalitarian in fact have dislike of white men as a significant motivation behind their political views: https://www.bbc.com/news/world-us-canada-45052534 There are obviously people like Jeong on the right. Not a random sample, but go on Twitter and you'll see dozens of them.) 

Literally all of these factions/types of person on the right have reasons to oppose wokeness that are not a preference for colourblindness and equality of opportunity (the last group may of course also genuinely be aggravated by open woke attacks on those things, yes; it's not an either/or). Since there are lots of these people, and they are generally interested enough in politics to care about wokeness in the first place, there is no reason whatsoever to think they are not well represented in the population of "people who oppose wokeness". The idea that no one really opposes wokeness except because they believe in a particular centre-right version of colourblind equality of opportunity both fails to take account of what the official, publicly stated beliefs of many people on the right actually are, and also fails to apply very normal levels of everyday skepticism to the stated motivations of (other) anti-woke people who endorse colourblindness. 

River @ 2025-03-04T21:42 (+5)

I appreciate that you have a pretty nuanced view here. Much of it I agree with, some of it I do not, but I don't want to get into these weeds. I'm not sure how any of it undermines the point that wokism and opposition to foreign aid are basically orthogonal.

Arthur Malone🔸 @ 2025-03-04T20:42 (+1) in response to There are a lot of upcoming retreats/conferences between March and July (2025)

There's also the EA Student Summit: London happening on April 5, with applications open now.

The event is structured to benefit students but we'd love established EA professionals to attend as mentors or representatives for their orgs; there will be time for professional networking and a reception for mentors in addition to the time spent speaking with students.

Quinn McHugh (he/him) @ 2025-03-04T20:13 (+3) in response to EA Survey 2024: Demographics

Thanks for taking the time to run and analyze this survey.

Are there plans to include questions about income and/or financial stability in next year’s survey?

Rationale: I believe this data would be valuable in providing individuals with a clearer understanding of the financial security of others in the EA community and could help newcomers assess whether the advice they receive is relevant to their own financial situation. Many recommendations and norms within EA - such as unconventional career choices, significant donation pledges, or risk-taking in pursuit of impact - can have vastly different implications depending on who's making the recommendation or reinforcing the norms.

If a significant portion of the community has financial security, it’s possible that commonly shared advice assumes a level of stability that not all newcomers have. Understanding the financial realities of EA members could help provide more contextually appropriate guidance and ensure that discussions around impact, risk, and career planning are inclusive of people from diverse economic backgrounds.

Would love to hear your thoughts on this!

David_Moss @ 2025-03-04T20:39 (+2)

Thanks for the comment!

I agree these would be interesting things to include.

  • We used to ask about income as part of our donation data section. But we've not included the donation questions since EAS 2020. If you like, I can provide further analyses of the income data from prior surveys. On the face of it, EA income and donations have been surprisingly low historically. But this is likely explained by the high percentage of student/very early career people in the community.
  • Also in 2020, we were asked to include a question about experiences of financial or employment instability as a child, which found relatively low levels. As noted in that post, we would generally recommend using a different measure if you want a proxy for SES, but this is the one we were asked to include. I do think that SES is a relatively neglected demographic (the EA community seems to skew towards strikingly high-SES backgrounds).

Nicoll Peracha @ 2025-03-04T16:55 (+4) in response to Can Monitoring and Evaluation Drive Organizational Change? Lessons from The Mission Motor

Hi Vasco, Great question we've been looking into for a while now. We indeed use cost-effectiveness as one factor to decide which organizations to support more intensively. We also look at other factors.

The research base (and practice of MEL) is not yet very well developed for animal interventions. Also, interventions that are cost-effective now might not be a few years from now if, e.g., the context changes. Besides the evidence base not being robust (yet), it is also more difficult to assess the longer-term effects of interventions.

MEL can contribute to building an evidence base for interventions and to know when to pivot or scale. It is therefore important for The Mission Motor to not only support interventions that are assessed as being cost-effective and impactful now, but also to help collect data on existing, or novel interventions without a firm evidence base yet, that have the potential to be impactful.

What we've landed on is to use a set of proxies primarily for organizations we support longer-term. As said, (potential) cost-effectiveness is a factor, next to other factors such as the ability to grow in MEL capacity (can we still contribute?) and organizational characteristics such as learning attitude, capacity to work on MEL, and a certain stability.

Let me know if you have input or questions, please! We'll be evaluating this system, and probably updating it regularly.

Vasco Grilo🔸 @ 2025-03-04T20:35 (–4)

Thanks for the context, Nicoll!

MEL can contribute to building an evidence base for interventions and to know when to pivot or scale. It is therefore important for The Mission Motor to not only support interventions that are assessed as being cost-effective and impactful now, but also to help collect data on existing, or novel interventions without a firm evidence base yet, that have the potential to be impactful.

I very much agree with the 1st sentence above. On the other hand, I think the vast majority of animal welfare organisations lacks the potential to become 10 % as cost-effective as SWP. So I believe being highly selective about which organisations to work with would still be good.

SiobhanB @ 2025-03-04T20:22 (+2) in response to Funding Diversification for Mid-Large EA Organizations is Nearly Impossible in the Short-Medium Term

I suspect you could cut salary expenditure quite a bit without noticeably reducing the output of many EA organisations, especially in animal advocacy. I think EA needs a Doge. 

Quinn McHugh (he/him) @ 2025-03-04T20:13 (+3) in response to EA Survey 2024: Demographics

Thanks for taking the time to run and analyze this survey.

Are there plans to include questions about income and/or financial stability in next year’s survey?

Rationale: I believe this data would be valuable in providing individuals with a clearer understanding of the financial security of others in the EA community and could help newcomers assess whether the advice they receive is relevant to their own financial situation. Many recommendations and norms within EA - such as unconventional career choices, significant donation pledges, or risk-taking in pursuit of impact - can have vastly different implications depending on who's making the recommendation or reinforcing the norms.

If a significant portion of the community has financial security, it’s possible that commonly shared advice assumes a level of stability that not all newcomers have. Understanding the financial realities of EA members could help provide more contextually appropriate guidance and ensure that discussions around impact, risk, and career planning are inclusive of people from diverse economic backgrounds.

Would love to hear your thoughts on this!

Theodore Ehrenborg @ 2025-03-04T15:04 (+4) in response to Could this be an unusually good time to Earn To Give?

(Thanks for providing lots of details in the post. Standard disclaimer that you know the most about your strengths/weaknesses, likes/dislikes, core values, etc)

I recommend going for the job. It sounds like you have a uniquely good chance at getting it, and otherwise I'd assume it'd go to someone who wasn't going to donate a lot of the salary.

After you get the job, I'd recommend thinking/reading/discussing a lot about the best way and time to give.

Regarding:
> This may not be a claim that I would stand by upon reflection.

> my reason for making them is largely a deferral to people better informed on the subject than I


You say you're not currently an expert, but I'd guess it wouldn't take so long (100 hours, so maybe a few months of weekends) for you to become an expert in the specific questions that determine when and how you should donate. Questions like:
- When will we develop superintelligence?
- Given that we do, how likely are humans to stay in control?
- Given that we stay in control, what would the economy look like?
- Given that the future economy looks like [something], what's the most impactful time and way to donate?
 - Wild guess that I haven't thought about much: even if you'd be much richer in the future because the stock market will go up a lot, maybe it's still better to donate all you can to AMF now. Reasoning: you can't help someone in the future if they died of malaria before the AI makes the perfect malaria vaccine

Whatever your final beliefs are, having the high-paying job allows you to have a large impact.

It looks like the other path you're considering is "mid to senior operations management roles at EA". I would guess you could give enough money to EA orgs so they could hire enough ops people to do more work than you could have done directly (but maybe there's some kind of EA ops work where you have a special hard-to-buy talent?)
 

Tom Gardiner @ 2025-03-04T20:07 (+3)

Thanks for the input, Theodore!

I agree that my chances of getting a trader role are higher than average and whoever would get the job instead is almost certainly not going to donate appreciable sums. Naturally, I would devote a very large amount of time and energy to the decision of how to give away this money. 

I'm very sceptical about my ability to become an "expert" on these questions surrounding AI. This is largely based on my belief that my most crippling flaw is a lack of curiosity, but I also doubt that anyone could come up with robust predictions on these questions through casual research inside a year.

My intuition is strongly in the other direction regarding donating to AMF now (with the caveat that I have been donating to GiveWell's top charity portfolio for years). I don't have strong credence on how the cost of a DALY will change in the future, but I am confident it won't increase by a greater percentage than tactful investments. It is a tragedy that anyone dies before medicine advances to the point of saving them but we must triage our giving opportunities. 

I'd never been convinced that Earning To Give in the conventional sense would be a more impactful career for me than operations management work. My social network (which could be biased) consistently implies the EA community has a shortage of management talent. A large amount of money is already being thrown at solving this problem, particularly in the Bay Area and London. 

Zach Stein-Perlman @ 2025-03-04T19:15 (+12) in response to Announcing: Existential Choices Debate Week (March 17-24)

I want to make salient these propositions, which I consider very likely:

  1. In expectation, almost all of the resources our successors will use/affect come via von Neumann probes (or maybe acausal trade or affecting the simulators).
  2. If 1, the key question for evaluating a possible future from scope-sensitive perspectives is will the von Neumann probes be launched, and what is it that they will tile the universe with? (modulo acausal trade and simulation stuff)
  3. [controversial] The best possible thing to tile the universe with (maybe call it "optimonium") is wildly better than what you get if you're not really optimizing for goodness,[1] so given 2, the key question is will the von Neumann probes tile the universe with ~the best possible thing (or ~the worst possible thing) or something else?

Considerations about just our solar system or value realized this century miss the point, by my lights. (Even if you reject 3.)

  1. ^

    Related:

    Call computronium optimized to produce maximum pleasure per unit of energy "hedonium," and that optimized to produce maximum pain per unit of energy "dolorium," as in "hedonistic" and "dolorous." Civilizations that colonized the galaxy and expended a nontrivial portion of their resources on the production of hedonium or dolorium would have immense impact on the hedonistic utilitarian calculus. Human and other animal life on Earth (or any terraformed planets) would be negligible in the calculation of the total. Even computronium optimized for other tasks would seem to be orders of magnitude less important.

    So hedonistic utilitarians could approximate the net pleasure generated in our galaxy by colonization as the expected production of hedonium, multiplied by the "hedons per joule" or "hedons per computation" of hedonium (call this H), minus the expected production of dolorium, multiplied by "dolors per joule" or "dolors per computation" (call this D).
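A compact restatement of the approximation described in the quoted passage, using the same H and D defined there (just a rewording, not a new claim):

    $\text{net pleasure from colonization} \approx \mathbb{E}[\text{hedonium produced}] \cdot H \;-\; \mathbb{E}[\text{dolorium produced}] \cdot D$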

Zach Stein-Perlman @ 2025-03-04T19:55 (+4)

Given 3, a key question is what can we do to increase P(optimonium | ¬ AI doom)?

For example:

  • Averting AI-enabled human-power-grabs might increase P(optimonium | ¬ AI doom)
  • Averting premature lock-in and ensuring the von Neumann probes are launched deliberately would increase P(optimonium | ¬ AI doom), but what can we do about that?
  • Some people seem to think that having norms of being nice to LLMs is valuable for increasing P(optimonium | ¬ AI doom), but I'm skeptical and I haven't seen this written up.

(More precisely we should talk about expected fraction of resources that are optimonium rather than probability of optimonium but probability might be a fine approximation.)

Scott Smith 🔸 @ 2025-02-27T00:08 (+5) in response to We need a new Artesunate - the miracle drug fades

Thank you Nick. A lot of interesting information for an accessible 5-minute read. 

As we speak, artesunate still cures over 99.9% of malaria cases - if a little slower than before. But we don’t know when that will become 99%, then only 90% at which point we’ll need alternatives. 

I'm trying to get my head around the meaning of these numbers. The questions below are largely academic (no goal in mind), surely difficult, and likely dumb, so please feel no pressure to devote valuable time :)

(1) Are these cure rates in reference to the outcome of Therapeutic Efficacy Studies (TES) as described in this WHO document (Section 4.1) and where the outcome is an "adequate clinical and parasitological response (ACPR)" (p. 11 of that document)?

If it is:

(2) What is your best estimate for how often ACPR would be "achieved" without treatment?* I believe this would be difficult to answer. My estimate right now, based on close to zero rationale: 4% (95% CI: 0.25% - 60%).

(3) What would you estimate the effect on saving a child's life to be of a drop from 100% to 90%?* Maybe it can be quantified as (but you likely have a better way): For every 1000 children who come into a clinic for malaria treatment, how many would survive if (a) artesunate is administered versus if (b) no treatment was provided, given (i) 100% efficacy versus (ii) 90% efficacy.

My suspicion is that the ratio of p(ACPR | artesunate) : p(ACPR | no treatment) underestimates the effect on mortality (and severe malaria) due to the binary outcome measure not capturing benefits from artesunate reducing without eliminating the degree of parasitaemia(?)

Thanks again.

*in high transmission areas where inclusion criteria is "patients with fever, aged 6–59 months, with an asexual parasitaemia ranging between 2000 and 200 000 parasites/μL"

NickLaing @ 2025-03-04T19:44 (+5)

Thanks Scott, interesting questions.

1) To answer this: I'm just saying that an artemisinin derivative plus another medication (co-artem) will still cure malaria completely almost all of the time, even if it takes longer.

I don't have the answer to 2 or 3 exactly and don't have the time to look into it, but you're thinking along the right lines. For every 1,000 children who came into a clinic for malaria, at least 950 would survive with no treatment, but even those that survive are likely to encounter a range of problems such as anemia, low energy, recurrent fevers etc. Also, like you say, people would be more prone to dying from other diseases after being weakened by malaria, as is well established in the case of diarrheal disease. Malaria actually weakens immunity directly as well. If I recall correctly, somewhere between 1 in 10 and 1 in 20 severe malaria cases have co-infection with a bacterial infection.
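To make the arithmetic behind question (3) explicit, here is a rough back-of-envelope sketch; the 950 figure is Nick's lower bound above, and the rest are illustrative assumptions rather than study results:

    # Illustrative only: per 1,000 children presenting at a clinic with malaria
    children = 1000
    survive_untreated = 950  # Nick's lower bound: "at least 950 would survive with no treatment"
    max_deaths_avertable = children - survive_untreated  # at most ~50 deaths, even with a perfect drug
    # If (hypothetically) a 90%-effective treatment averted deaths proportionally:
    deaths_averted_at_90pct = 0.9 * max_deaths_avertable  # ~45
    print(max_deaths_avertable, deaths_averted_at_90pct)

As Scott suspects, a simple proportional mapping from cure rates to mortality is probably too crude, since partially clearing parasitaemia can still reduce deaths and downstream harms.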

jwatowatson @ 2025-03-01T05:49 (+10) in response to We need a new Artesunate - the miracle drug fades

The story of artemisinin resistance is important and worth telling. Artemisinin resistance is probably one of the major malaria related public health emergencies in Africa right now.

However, this story entirely confuses the main issues and is riddled with factual errors. I would suggest the author retracts it and consults with experts to write an accurate version.

Some of the major problems/factual errors:

  • There is no distinction made between the different uses of the artemisinin derivatives (artesunate, artemether, arteether, dihydroartemisinin being the main ones). Artesunate is primarily used for the treatment of severe malaria (injectable or rectal). But artemisinin derivatives are also used in combination with slowly eliminated partner drugs (lumefantrine, amodiaquine, mefloquine) to treat uncomplicated malaria (oral treatment). Injectable (and rectal) artesunate are life-saving drugs for severe malaria, but ACTs (artemisinin-based combination therapies) have had a huge impact on malaria-attributable mortality because they replaced chloroquine for the treatment of uncomplicated malaria. See the letter by Attaran et al from 2004. ACTs are not even mentioned in the timeline! They were being used in Southeast Asia by the late 1990s following pivotal trials.
  • The paragraph on the Vietnam war makes it sound like quinine was the main antimalarial drug in use during the Vietnam war. Chloroquine had been discovered in 1934, and was developed after WW2. It was very widely used (even put in table salt in Cambodia). Chloroquine was a remarkably effective drug, but the first reports of resistance were in 1957 on the Thailand/Cambodia border. Piperaquine, mefloquine, and sulphadoxine-pyrimethamine were developed in the 1950s/1960s.
  • "2003 – Small RCT shows artesunate “as effective as quinine” for severe malaria". Not only does this completely ignore the development of ACTs (recommended by WHO as first line treatment in 2006 after the Attaran letter), but this was not the first trial of artemisinin derivatives in severe malaria. The development of artemisinin derivatives given parenterally for severe malaria was much more complex. Initially artemether was chosen as the candidate drug. Two large trials (over 500 patients in each) published in 1996 (one in Africa, one in Asia) showed that artemether was as good as quinine. However, because of its pharmacology (artemether is variably absorbed) it wasn't clearly better. A trial done between 1996-2003 in Vietnam showed that artesunate cleared parasites faster due to better absorption, this then led to the SEAQUAMAT study. So the history as presented is wrong and misleading.
  • "artesunate still cures over 99.9% of malaria cases": this is meaningless. Artesunate or any artemisinin derivative is not given on its own to treat malaria. The therapeutic objective in severe malaria is to save life (around 5-20% of patients still die following IV artesunate, depending on their severity at presentation). The therapeutic objective in the treatment of uncomplicated malaria is to stop progression to severe illness and clear all parasites from the body. The artesunate component on its own does not clear all parasites when given over 3 days (even without resistance). The brilliant idea of ACTs is that by combining a fast acting but rapidly cleared artemisinin derivative with a slow acting but slowly cleared partner drug, you leave only a few parasites for the partner drug to clear up. This means that even when the partner drug doesn't work very well, in combination with an artemisinin derivative it can treat the infection. Artemisinin resistance means that there are more parasites that the partner drug needs to clear and so emergence of partner drug resistance occurs. Then the ACT fails to treat the infection. This is what has happened in Southeast Asia. Lumefantrine is the partner drug in around 70% of all treatments used in Africa (artemether/lumefantrine, known by the brand name coartem). If lumefantrine resistance emerged, this would be a major disaster.
  • "adding a 3rd drug to the treatment cocktail – a blunt and expensive instrument but one that could buy us another 10 years." No this is wrong. New drugs currently in development are being developed as "triple" therapy. There are good pharmacological reasons for this around prevention of the emergence of resistance.

In summary, this article is highly misleading on the history of the development of the artemisinins into usable treatments for malaria.

NickLaing @ 2025-03-04T19:34 (+9)


Hi James and thanks for posting here on the forum, appreciate someone who is obviously a malaria expert weighing in here with this useful feedback. I will say I was aware of much of the factual information in your feedback, but chose to leave it out for storytelling purposes - for better or worse. 

Overall I was going for a short, simplified narrative article which briefly walked through 5 "acts" in the story of artesunate, while highlighting the incredible discovery story, pointing out that we may have unnecessarily delayed the mass roll-out of artemisinin treatment, and the emerging resistance issues. I'm writing for a general audience so I didn't focus on scientific details or get into the weeds, while doing my best not to be misleading. Of course I compromised at times to bring the story out more vividly. I would argue that I make few factual errors; rather, I missed out some aspects of the story that could be considered important. I'm interested if you disagree with any of my major points in the article, besides the important scientific information that I omitted? You write that I "confuse the main issues" and I'm interested in what you think those main issues are?

Thanks to your great feedback I've made a few changes to be more precise - I suspect you won't be completely satisfied as I still leave out much detail, but I hope it helps.

  1. I didn't want to get into the nitty gritty of treatment during the war (simplicity again), but you're right that for much of the war chloroquine was dominant. I've changed the wording to "Chloroquine and quinine took too long to cure malaria and side effects could be rough". I wonder what you make of the US army's reports of both apparent chloroquine and quinine resistance during the war? My main point here was that the Vietnam war stimulated the development of artemisinin derivatives.
     
  2. I've added those 1996 studies to the timeline, thanks! This only further demonstrates, in my mind, how slow we were to figure out how much better artemisinin derivatives could be than the status quo. I found these 2 studies super interesting as the signal in both of them was leaning towards artemether being better than quinine (although of course not statistically significant). Unfortunately here in Northern Uganda we still use artemether injections for some of our patients that can't afford artesunate (aware that it is not as good), as artemether is less than half the price of artesunate. Fortunately, in only the last 2 years artesunate prices have reduced by about 30%, which is great, so we are using less artemether than ever.
     
  3. I chose to leave out the combination treatment part of the story (see below for reasons), but for accuracy’s sake in the final paragraph have added the combination therapy point and changed to “In combination with other medication, artesunate cures over 99%...” 

A little pushback

  1. You're right that I didn't get into differentiating between the different artemisinin derivatives. Although this is a little imprecise, I don't think explaining the nature of these different derivatives is a critically important piece of information for the story. Again, I'm simplifying for storytelling reasons. 
     
  2. I considered describing the combination therapy part of the story, which I agree is important, but decided to leave it out because the injectable artesunate seems to have had a far more important mortality impact than co-artem, and it would have meant telling a slightly confusing parallel story ;). Feel free to push back here and I agree there is a good argument for adding them to the story - if I wrote a longer article I would have.

    - This Rethink Priorities research here estimates that injectable artesunate saved about 1.7 million lives by 2022 while co-artem saved about 300,000. This would mean artesunate is responsible for 85% of the lives saved by artemisinin derivatives and co-artem only 15%. This makes sense to me as artesunate provides a large mortality benefit in severe malaria, while other medications, if taken properly, cure malaria almost as effectively as co-artem. I agree there are many other benefits from co-artem in uncomplicated malaria (side effects, shorter course, faster clearance) and there is a mortality benefit vs alternatives, but far less extreme than for artesunate.

  • Much of the co-artem development story happens in Southeast Asia, which, while important, accounts for under 5% of malaria deaths.

3) I don't really understand your disagreement with my statement here: "adding a 3rd drug to the treatment cocktail – a blunt and expensive instrument but one that could buy us another 10 years." You write: "No this is wrong. New drugs currently in development are being developed as "triple" therapy. There are good pharmacological reasons for this around prevention of the emergence of resistance."

What's "wrong" about my statement here exactly? The Lancet article I quoted discusses the idea of adding a third drug to artemether-lumefantrine as you say, and yes it's to avoid the emergence of resistance. It will be expensive to add a third drug, and I consider a 3-drug combination a bit of a sledgehammer/blunt tool. Perhaps we largely agree here?

Thanks again for the feedback and I hope to hear more from you here on the forum :).

Zach Stein-Perlman @ 2025-03-04T19:20 (+8) in response to Announcing: Existential Choices Debate Week (March 17-24)

One key question for the debate is: what can we do / what are the best ways to "increas[e] the value of futures where we survive"?

My guess is it's better to spend most effort on identifying possible best ways to "increas[e] the value of futures where we survive" and arguing about how valuable they are, rather than arguing about "reducing the chance of our extinction [vs] increasing the value of futures where we survive" in the abstract.

Karen Singleton @ 2025-03-04T04:47 (+3) in response to Is it too hard to do good through EA?

Thanks for sharing your post on what I think is an important topic. 

I like how you have clearly stated the problem and you make a great point about the complexity of engaging with EA, especially for mid-career professionals.  I agree the HIP programme seems to be an effective way to navigate this space more smoothly, and I can sympathise with the challenges you’ve highlighted.

Building on your ideas regarding career stage-based filtering and personalised recommendations, making it easier for newcomers to find relevant pathways without cognitive overload, could there be an interactive "guided exploration" tool—perhaps an AI-driven assistant or decision-tree quiz—that helps users quickly assess their best starting point based on their background, interests and constraints? I wonder if anyone has experimented with similar tools before or are there existing resources that could be adapted for this? 

Subha @ 2025-03-04T19:10 (+2)

Thanks for your response Karen.  

Interestingly - the idea of building an AI-led solution was the starting point of this journey for me :-) I'm not pursuing it for a number of reasons (especially that it's sub-optimal to add another organization to what I see as an already crowded ecosystem which is tough to navigate). However, for those interested in pursuing this, this article has a useful overview of the current usage of AI tools in supporting career counselling in colleges - Navigation & Guidance in the Age of AI: 5 Trends to Watch.

Also, at a broader level in Analytics modeling -- this falls under the broad purview of matching-problems -- which form the core of personalized recommendation engines in most platform-model businesses.  I'm not tech-enough to detail this further -- but can help interested folks connect with experts if they'd like to learn more.

Habryka [Deactivated] @ 2025-03-02T19:07 (+106) in response to Habryka [Deactivated]'s Quick takes

So long and thanks for all the fish. 

I am deactivating my account.[1] My unfortunate best guess is that at this point there is little point and at least a bit of harm caused by me commenting more on the EA Forum. I am sad to leave behind so much that I have helped build and create, and even sadder to see my own actions indirectly contribute to much harm.

I think many people on the forum are great, and at many points in time this forum was one of the best places for thinking and talking and learning about many of the world's most important topics. Particular shoutouts to @Jason, @Linch, @Larks, @Neel Nanda and @Lizka for overall being great commenters. It is rare that I had conversations with any of you that I did not substantially benefit from. 

Also great thanks to @JP Addison🔸 for being the steward of the forum through many difficult years. It's been good working with you. I hope @Sarah Cheng can turn the ship around as she takes over responsibilities. I still encourage you to spin out of CEA. I think you could fundraise. Of course the forum is responsible for more than 3% of CEA's impact by I think most people's lights, and all you need is 3% of CEA's budget to make a great team.

I have many reasons for leaving, as I have been trying to put more distance between me and the EA community. I won't go into all of them, but I do encourage people to read my comments over the last 2 years to get a sense of them, I think there is some good writing in there. 

The reason I think I would be most amiss to not mention here is the increasing sense of disconnect I have been feeling between what once was a thriving and independent intellectual community, open to ideas and leadership from any internet weirdo that wants to do as much good as they can, and the present EA community whose identity, branding and structure is largely determined by a closed-off set of leaders with little history of intellectual contributions, and with little connection to what attracted me to this philosophy and community in the first place. The community feels very leaderless and headless these days, and in the future I only see candidates for leadership that are worse than none. Almost everyone who has historically been involved in a leadership position has stepped back and abdicated that role.

I no longer really see a way for arguments, or data, or perspectives explained on this forum to affect change in what actually happens with the extended EA community, especially in domains like AI Safety Research, AGI Policy, internal community governance, or more broadly steering humanity's development of technology in positive directions. I think while shallow criticism often gets valorized, the actual life of someone who tries to make things better by trying to reward and fund good work and hold people accountable, is one of misery and adversarial relationship, accompanied by censure, gaslighting and overall a deep sense of loneliness.

To be clear, there has always been an undercurrent of this in the community. When I was at CEA back in 2015 we frequently and routinely deployed highly adversarial strategies to ensure we maintained more control over what people understood EA to mean, and who would get to shape it, and the internet weirdos were often a central target of our efforts to make others less influential. But it is more true now. The EA Forum was not run by CEA at the time, and maybe that was good, and funding was not so extremely centralized in a single large foundation, and that foundation still had a lot more freedom and integrity back then.

It's been a good run. Thanks to many of you, and ill wishes to many others. When the future is safe, and my time is less sparse, I hope we can take the time and figure out who was right in things. I certainly don't speak with confidence on many things I have disagreed with others on, only with conviction to try to do good even in a world as confusing and uncertain as this and to not let the uncertainty prevent me from saying what I believe. It sure seems like we all made a difference, just unclear what sign.

  1. ^

    I won't use the "deactivate account" feature which would delete my profile. I am just changing my user name and bio to indicate I am no longer active.

Ben_West🔸 @ 2025-03-04T18:47 (+19)

It feels appropriate that this post has a lot of hearts and simultaneously disagree reacts. We will miss you, even (perhaps especially) those of us who often disagreed with you. 

I would love to reflect with you on the other side of the singularity. If we make it through alive, I think there's a decent chance that it will be in part thanks to your work.

calebp @ 2025-03-04T17:49 (+2) in response to Where are all the deepfakes?

Fwiw I think the "deepfakes will be a huge deal" stuff has been pretty overhyped and the main reason we haven't seen huge negative impacts is that society already has reasonable defences against fake images that prevent many people from getting misled by them.

I don't think this applies to many other misuse-style risks that the AI X-risk community cares about. 

For example the main differences in my view between AI-enabled deepfakes and AI-enabled biorisks are:
* marginal people getting access to bioweapons is just a much bigger deal than marginal people being able to make deepfakes
* there is much less room for the price of deepfakes to decrease than the cost of developing a bioweapon (photoshop has existed for a long time and expertise is relatively cheap). 

Jason @ 2025-03-04T15:53 (+10) in response to On deference to funders

So it could make a lot of sense to update your actions towards that funder, more than would be the case if you had all the power.

 

That makes a lot of sense. However, updating actions toward a funder because of their power is one thing; updating beliefs is another. 

So there are several questions lurking for me here -- you mentioned one, whether deference to OP is "explained more by the fact that OP is powerful than that it is respected" (the true cause of deference). But the other question is what people tell themselves (and others) about why they defer to OP's views, and that could even be the more important question from an epistemic standpoint.

If Org A chooses to do X, Y, and Z in significant part because OP is powerful (and it would not have done so otherwise), it's important for Org A to be eagle-eyed about its reasoning (at least internally). Cognitive dissonance reduction is a fairly powerful force, and it's tempting to come around to the view that X, Y, and Z are really important when you're doing them for reasons other than an unbiased evaluation of their merits.

One could argue that we should give ~0 deference to OP's opinions when updating our viewpoints, even if we alter our actions. These opinions already get great weight in terms of what gets done for obvious practical reasons, so updating our own opinions in that direction may (over?)weight them even more. 

Moreover, OP's views probably influence other people's views even if they are not consciously given any weight. As noted above, there's the cognitive dissonance reduction effect. There's also the likelihood that X, Y, and Z are getting extra buzz due to OP's support of those ideas (e.g., they are discussed more, people are influenced by seeing organizations that follow X, Y, and Z achieve results due to their favorable funding posture, etc.). Filtering out these kinds of effects on one's nominally independent thinking is difficult. If people defer to what OP thinks on top of experiencing these indirect effects, then it's reasonable to think they are functionally double-counting OP's opinion.

Ozzie Gooen @ 2025-03-04T17:28 (+6)

That roughly sounds right to me. 

I think that power/incentives often come first, then organizations and ecosystems shape their epistemics to some degree in order to be convenient. This makes it quite difficult to tell what causally led to what. 

At the same time, I'm similarly suspicious of a lot of epistemics. It's obviously not just beliefs that OP likes that will be biased to favor convenience. Arguably a lot of these beliefs just replace other bad beliefs that were biased to favor other potential stakeholders or other bad incentives. 

Generally I'm quite happy for people and institutions to be quite suspicious of their worldviews and beliefs, especially ones that are incentivized by their surroundings. 

(I previously wrote about some of this in my conveniences post here, though that post didn't get much attention.)

Elijah Whipple @ 2025-03-02T23:25 (+3) in response to Cage-free in the US

This is very helpful and interesting, thank you for the information! Would most/all of the follow-up campaigns that THL have done be findable online? For instance, when I search stores like "Trader Joe's cage free" I don't find much besides things from 2016, and I assumed that meant that there weren't follow-up campaigns. Is that impression probably right?

Caroline Mills @ 2025-03-04T17:23 (+3)

I'm glad it was helpful! No, the follow-up would not necessarily be online, unfortunately. It's something we track internally for our own strategic purposes and impact assessment. But a lot of progress is made through behind-the-scenes negotiations—we only launch public campaigns if companies don't make progress during the negotiation phase. And if there is a public campaign, sometimes part of the final negotiation is that we agree to take all our public materials down in exchange for the company publicly reporting its progress. 

Trader Joe's is a bit of a weird example—it seems like they are making progress (they have lots of in-store signage, for example) but they haven't publicly reported their cage-free progress. But since we suspect they're already making progress, they wouldn't be a particularly meaningful campaign target. So your suspicion about Trader Joe's is right, there haven't been follow-up campaigns, but you couldn't generalize that to other companies.

geoffrey @ 2025-03-04T17:20 (+23) in response to History of diversity efforts and trends in EA

This is great stuff. I often find it hard to remember a lot of initiatives have happened (despite having read 80% of this list already) so this timeline is a good reference

As an aside, I think others may benefit from reading about diversity initiatives outside EA to remember this is a hard problem. It's totally consistent for EA to be above-the-curve on this and still not move the needle much (directionally I think those two things are true but not confident on magnitudes), so linking some stuff I've been reading lately:

  • DEI Deconstructed: Your No-Nonsense Guide to Doing the Work and Doing It Right has two neat chapters at the beginning: one on corporate initiatives over the past 10 years and another on US initiatives since World War II. The tidbits on US military diversity initiatives (and their mixed results) were new to me and sadly not something I see talked about much elsewhere. The rest of the book seemed to be a corporate strategy workbook which I didn't find useful but others might.
  • Affirmative Action and the Quality-Fit Trade-Off explores the stronger forms of affirmative action practiced at law schools and elite undergraduate universities. It summarizes the economic theory and evidence on whether these stronger forms can "backfire" for minority students. Ultimately, it's hard to say, but mild forms seem good. The Stanford Encyclopedia of Philosophy takes a legal and philosophical look at how the justification and practice of affirmative action have shifted over time.

Rohin Shah @ 2025-03-04T10:27 (+15) in response to Habryka [Deactivated]'s Quick takes

Of course, it's true that they could ignore serious criticism is they wanted to, but my sense is that people actually quite often feel unable to ignore criticism.

As someone sympathetic to many of Habryka's positions, while also disagreeing with many of Habryka's positions, my immediate reaction to this was "well that seems like a bad thing", c.f.

shallow criticism often gets valorized

I'd feel differently if you had said "people feel obliged to take criticism seriously if it points at a real problem" or something like that, but I agree with you that the mechanism is more like "people are unable to ignore criticism irrespective of its quality" (the popularity of the criticism matters, but sadly that is only weakly correlated with quality).

Sarah Cheng @ 2025-03-04T17:03 (+8)

I appreciate you sharing your views on this! I agree that as a whole, this is suboptimal.

I don't currently feel confident enough about the take that "shallow criticism often gets valorized" to prioritize tackling it, though I am spending some time thinking about moderation and managing user-generated content and I expect that the mod team (including myself) will discuss how we'd like to handle critical comments, so this will probably come up in our discussions.

I'm kind of worried that there's not necessarily an objective truth to how shallow/low-quality any particular criticism is, and I personally would prefer to err on the side of allowing more criticism. So it's possible that not much changes in the public discourse, and any interventions we do may need to be behind the scenes (such as our team spending more time talking with people who get criticized).

Grayden 🔸 @ 2025-02-28T08:46 (+9) in response to Historical examples of stewardship in EA

Your narrative talks about the movement switching from earn-to-give to career-focused. I think that has huge survivorship bias in it. There are now many GWWC pledgers who would not call themselves EA. As the movement became bigger, the career-focused side began to dominate discourse because there's a lot more to say if you are career-focused and trying to coordinate things than if you are head down earning money.

JP Addison🔸 @ 2025-03-04T17:01 (+2)

I think this is a good and useful point. And one that's underappreciated in general.

Vasco Grilo🔸 @ 2025-03-03T17:37 (+2) in response to Can Monitoring and Evaluation Drive Organizational Change? Lessons from The Mission Motor

Thanks for the update, Nicoll and Tom!

From mid-2023 to 2024, we worked with 27 organizations, providing both short-term support (19 organizations, up to 3 months) and longer-term support (8 organizations, 4+ months).

Are you aiming to work with the most cost-effective animal welfare organisations? I think you would have to spend more to increase their cost-effectiveness by 1 % than to increase that of a random organisation helping farmed animals by 1 %. However, I believe the most cost-effective animal welfare organisations are way more cost-effective, such that you would have to spend less to achieve the same absolute increase in cost-effectiveness (which is the product of the initial cost-effectiveness and the relative increase in it that you caused). I would say there is lots of variation in the cost-effectiveness of animal welfare organisations:
 

  • I estimated the Shrimp Welfare Project (SWP) has been 412 and 173 times as cost-effective as broiler welfare and cage-free campaigns.
  • I estimated Veganuary in 2024 and School Plates in 2023 were 1.20 % and 19.4 % as cost-effective as cage-free campaigns.
  • I estimated the Fish Welfare Initiative's (FWI's) farm program from January to September 2024 was 1.55 % as cost-effective as cage-free campaigns.
  • I estimated Sinergia Animal's meal replacement program in 2023 was 0.107 % as cost-effective as their cage-free campaigns.
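To make the "product" point above concrete, here is a tiny sketch with hypothetical numbers (not estimates from this comment):

    # absolute gain in cost-effectiveness = initial cost-effectiveness * relative increase caused
    top_org_ce = 400.0     # hypothetical: an org ~400 times as cost-effective as a baseline campaign
    random_org_ce = 1.0    # hypothetical: a typical org, normalised to 1
    gain_top = top_org_ce * 0.01        # a 1 % relative improvement to the top org  -> +4.0
    gain_random = random_org_ce * 0.50  # a 50 % relative improvement to a random org -> +0.5
    print(gain_top > gain_random)       # True: the smaller relative gain still wins in absolute terms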

Nicoll Peracha @ 2025-03-04T16:55 (+4)

Hi Vasco! Great question, and one we've been looking into for a while now. We indeed use cost-effectiveness as one factor to decide which organizations to support more intensively. We also look at other factors.

The research base (and the practice of MEL) is not yet very well developed for animal interventions. Also, interventions that are cost-effective now might not be a few years from now if e.g. the context changes. Besides the evidence base not being robust (yet), it is also more difficult to assess the longer-term effects of interventions.

MEL can contribute to building an evidence base for interventions and to knowing when to pivot or scale. It is therefore important for The Mission Motor not only to support interventions that are assessed as being cost-effective and impactful now, but also to help collect data on existing or novel interventions that do not yet have a firm evidence base but have the potential to be impactful.

What we've landed on is to use a set of proxies primarily for organizations we support longer-term. As said, (potential) cost-effectiveness is a factor, next to other factors such as the ability to grow in MEL capacity (can we still contribute?) and organizational characteristics such as learning attitude, capacity to work on MEL, and a certain stability.

Let me know if you have input or questions, please! We'll be evaluating this system, and probably updating it regularly.

Nick Corvino @ 2025-03-03T05:14 (+4) in response to Explaining all the US semiconductor export controls

Thank you for writing this! I was looking for a comprehensive outline of the export regulations and this is exactly what I needed. 

ZacRichardson @ 2025-03-04T16:14 (+3)

That's awesome to hear. Thanks Nick!

Ozzie Gooen @ 2025-03-03T22:17 (+21) in response to On deference to funders

I think this is a significant issue, though I imagine a lot of this can be explained more by the fact that OP is powerful than that it is respected. 

If your organization is highly reliant on one funder, then doing things that funder regards as good is a major factor that will determine if you will continue to get funding, even if you might disagree. So it could make a lot of sense to update your actions towards that funder, more than would be the case if you had all the power.

I think that decentralizing funding is good insofar as the nonprofit gets either more power (to the extent that this is good) or better incentives. There are definitely options where one could get more funding, but that funding could come from worse funders, and then incentives decline.

Ultimately, I'd hope that OP and other existing funders can improve, and/or we get other really high-quality funders. 

Jason @ 2025-03-04T15:53 (+10)

So it could make a lot of sense to update your actions towards that funder, more than would be the case if you had all the power.

 

That makes a lot of sense. However, updating actions toward a funder because of their power is one thing; updating beliefs is another. 

So there are several questions lurking for me here -- you mentioned one, whether deference to OP is "explained more by the fact that OP is powerful than that it is respected" (the true cause of deference). But the other question is what people tell themselves (and others) about why they defer to OP's views, and that could even be the more important question from an epistemic standpoint.

If Org A chooses to do X, Y, and Z in significant part because OP is powerful (and it would not have done so otherwise), it's important for Org A to be eagle-eyed about its reasoning (at least internally). Cognitive dissonance reduction is a fairly powerful force, and it's tempting to come around to the view that X, Y, and Z are really important when you're doing them for reasons other than an unbiased evaluation of their merits.

One could argue that we should give ~0 deference to OP's opinions when updating our viewpoints, even if we alter our actions. These opinions already get great weight in terms of what gets done for obvious practical reasons, so updating our own opinions in that direction may (over?)weight them even more. 

Moreover, OP's views probably influence other people's views even if they are not consciously given any weight. As noted above, there's the cognitive dissonance reduction effect. There's also the likelihood that X, Y, and Z are getting extra buzz due to OP's support of those ideas (e.g., they are discussed more, people are influenced by seeing organizations that follow X, Y, and Z achieve results due to their favorable funding posture, etc.). Filtering out these kinds of effects on one's nominally independent thinking is difficult. If people defer to what OP thinks on top of experiencing these indirect effects, then it's reasonable to think they are functionally double-counting OP's opinion.

Davidmanheim @ 2025-03-04T15:47 (+4) in response to From Conflict to Coexistence: Rewriting the Game Between Humans and AGI

This seems great - I'd love to see it completed, polished a bit, and possibly published somewhere. (If you're interested in more feedback on that process, feel free to ping me.)

Toby Tremlett🔹 @ 2025-03-04T13:00 (+4) in response to Habryka [Deactivated]'s Quick takes

FWIW it looks like Claude is only summarising this quick take (all the quotes are from it)

Pablo @ 2025-03-04T15:22 (+4)

Here's another summary. I used Gemini 2.0 Flash (via the API), and this prompt:

The following is a series of comments by Habryka, in which he makes a bunch of criticisms of the effective altruism (EA) movement. Please look at these comments and provide a summary of Habryka’s main criticisms.

  1. Lack of leadership and accountability: He believes EA leadership is causing harm and lacks mechanisms for correcting course.
  2. Emphasis on PR and narrative control: He condemns EA organizations' risk aversion, guardedness, and attempts to control the narrative around FTX, prioritizing public image over transparency.
  3. Inadequate community health: He laments conformity pressures, fears of reprisal for dissent, and insufficient efforts to cultivate a culture of open disagreement.
  4. Entanglement with FTX: He faults EA leadership, particularly Will MacAskill, for endorsing Sam Bankman-Fried and entangling the movement with FTX despite warnings about SBF's character.
  5. Hero worship and lack of respect for intellectual leaders: He criticizes the hero worship of MacAskill, contrasting it with MacAskill's perceived lack of engagement with other intellectual leaders in the community. He sees this as part of a pattern of MacAskill prioritizing popularity and prestige over community health and epistemic integrity.
  6. Misleading communications and lack of transparency: He criticizes CEA for making inaccurate and misleading statements, for omitting crucial context in communications, and for concealing information about funding decisions.
  7. Scaling too quickly and attracting grifters: He worries that EA's rapid growth and increased funding attract deceptive actors and create perverse incentives.
  8. Overreliance on potentially compromised institutions: He expresses concerns about EA's deep ties to institutions like Oxford University, which may stifle intellectual exploration and operational capacity.
  9. Ignoring internal warnings about FTX: He reveals that he and others warned EA leadership about Sam Bankman-Fried's reputation for dishonesty, but those warnings went unheeded. He suggests he personally observed potentially illegal activities by SBF but chose not to share this information more widely.
  10. Flawed due diligence and poor judgment in grantmaking: He feels EA leadership's due diligence on SBF was inadequate and that they made poor judgments in providing him with substantial resources. He extends this criticism to grantmaking practices more generally.
  11. Unfair distribution of resources: He argues that the current distribution of funds within EA doesn't adequately compensate those doing object-level work and undervalues their contributions relative to donors. He argues for a system that recognizes the implicit tradeoff many have made in pursuing lower-paying EA-aligned careers.
  12. Centralized media policy and negative experiences with journalists: While supporting a less centralized media policy, he also cautions against interacting with journalists, as they frequently misrepresent interviewees and create negative experiences.
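(For anyone wanting to reproduce this: Pablo doesn't say which client he used, but a minimal sketch with the google-generativeai Python package might look like the following. The model name and prompt are taken from above; the file name and everything else are assumptions.)

    # Minimal sketch, assuming the google-generativeai package and an API key in the environment
    import os
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash")

    comments = open("habryka_comments.txt").read()  # placeholder file containing the pasted comments
    prompt = (
        "The following is a series of comments by Habryka, in which he makes a bunch of "
        "criticisms of the effective altruism (EA) movement. Please look at these comments "
        "and provide a summary of Habryka's main criticisms.\n\n" + comments
    )
    print(model.generate_content(prompt).text)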
     
NickLaing @ 2025-02-28T19:25 (+3) in response to Cause neutrality doesn't mean all EA causes matter equally

Could you maybe quote an example where orgs list "cause neutrality" as a reason for listing a wide range of causes? I completely agree with your argument; it just seems unlikely these super switched-on orgs would make that argument.

Toby Tremlett🔹 @ 2025-03-04T14:54 (+2)

That would be helpful - but that's also why this is a draft amnesty post. I'm not sure what I read that made me want to write this, and if I did find it, I wouldn't want to call out a particular Forum commenter. The rationale for posting this without a specific target is partially to see if this resonates as a problem or not. 

SummaryBot @ 2025-03-04T14:53 (+1) in response to From Conflict to Coexistence: Rewriting the Game Between Humans and AGI

Executive summary: Instead of relying solely on internal alignment of AGI, this paper explores how structuring external incentives and interdependencies could encourage cooperation and coexistence between humans and misaligned AGIs, building on recent game-theoretic analyses of AGI-human conflict.

Key points:

  1. Traditional AGI safety approaches focus on internal alignment, but this may be uncertain or unachievable, necessitating alternative strategies.
  2. Game-theoretic models suggest that unaligned AGIs and humans could default to a destructive Prisoner’s Dilemma dynamic, where mutual aggression is the rational choice absent external incentives for cooperation.
  3. Extending existing models, this paper explores scenarios where AGI dependence on economic, political, and infrastructural systems could promote cooperation rather than conflict.
  4. Early-stage AGIs, especially those dependent on specific AI labs, may have stronger incentives for cooperation, but these incentives erode as AGIs become more autonomous.
  5. When AGIs integrate deeply into national security structures, the strategic landscape shifts from a zero-sum game to an assurance game, where cooperation is feasible but fragile.
  6. Effective governance strategies should focus on creating structured dependencies and institutional incentives that make peaceful coexistence the rational strategy for AGIs and human actors.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2025-03-04T14:51 (+1) in response to Could this be an unusually good time to Earn To Give?

Executive summary: Given the potential for AI-driven economic upheaval and locked-in wealth inequality, now may be an unusually good time to prioritize Earning To Give—especially for those with lucrative career prospects—so they can later redistribute wealth in a way that mitigates future harms.

Key points:

  1. AI is likely to significantly reduce white-collar job availability by 2030 while also driving enormous GDP growth, leading to unprecedented and entrenched wealth inequality.
  2. Those who accumulate wealth before their labor becomes replaceable may have a unique opportunity to do significant good, as future redistribution mechanisms could be limited.
  3. If AI-induced economic concentration leads to a "technocratic feudal hierarchy," wealthy altruists could become rare actors capable of steering resources toward helping the destitute.
  4. The geopolitical implications of AI-driven economic shifts may further restrict wealth distribution, particularly under nationalistic policies that prioritize domestic citizens over global needs.
  5. While directly working on AI alignment or governance remains a higher priority, individuals without a clear path in those areas might do more good by aggressively pursuing wealth now to give later.
  6. The author personally considers shifting from a military career to high-earning finance roles, weighing whether Earning To Give would be more impactful than working in longtermist EA organizations.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

OscarD🔸 @ 2025-03-04T11:56 (+5) in response to Habryka [Deactivated]'s Quick takes

fyi for anyone like me who doesn't have lots of the backstory here and doesn't want to read through Habryka's extensive corpus of EAF writings, here is Claude 3.7 Sonnet's summary based on the first page of comments Habryka links to.

Based on Habryka's posts, I can provide a summary of his key disagreements with EA leadership and forum administrators that ultimately led to his decision to leave the community.

Key Disagreements

  1. Leadership and Accountability: Habryka repeatedly expresses concern about what he sees as a "leaderless" EA community. He believes the community has shifted from being driven by independent intellectual contributors to being determined by "a closed-off set of leaders with little history of intellectual contributions." He argues that almost everyone who was historically in leadership positions has stepped back and abdicated their roles.
  2. Institutional Integrity: He criticizes EA organizations, particularly CEA (Centre for Effective Altruism), for prioritizing growth, prestige, and public image over intellectual integrity. In his posts, he describes personal experiences at CEA where they "deployed highly adversarial strategies" to maintain control over EA's public image and meaning.
  3. FTX Situation: Habryka was particularly critical of how EA leadership handled Sam Bankman-Fried (SBF) and FTX. He claims to have warned people about SBF's reputation for dishonesty, but these warnings were not heeded. He criticizes Will MacAskill and others for their continued endorsement of SBF despite red flags, and was frustrated by the lack of transparency and open discussion after FTX's collapse.
  4. Risk-Aversion and PR Focus: He repeatedly criticizes what he perceives as excessive risk-aversion and PR-mindedness among EA organizations. He argues this approach prevents honest discussion of important issues and contributes to a culture of conformity.
  5. Funding Centralization: Habryka expresses concern about EA funding being increasingly centralized through a single large foundation (likely referring to Open Philanthropy), arguing this concentration of resources creates unhealthy power dynamics.
  6. Community Culture: He criticizes the shift in EA culture away from what he describes as "a thriving and independent intellectual community, open to ideas and leadership from any internet weirdo" toward something more institutional and conformist.
  7. Failure to Create Change: Habryka states that he no longer sees "a way for arguments, or data, or perspectives explained on this forum to affect change in what actually happens with the extended EA community," particularly in domains like AI safety research and community governance.

His departure post suggests a deep disillusionment with the direction of the EA community, expressing that while many of the principles of EA remain important, he believes "EA at large is causing large harm for the world" with "no leadership or accountability in-place to fix it." He recommends others avoid posting on the EA Forum as well, directing them to alternatives like LessWrong.

Will Aldred @ 2025-03-04T14:49 (+42)

Hmm, I’m not a fan of this Claude summary (though I appreciate your trying). Below, I’ve made a (play)list of Habryka’s greatest hits,[1] ordered by theme,[2][3] which might be another way for readers to get up to speed on his main points:

Leadership

Reputation[5]

Funding

Impact

  1. ^

    ‘Greatest hits’ may be a bit misleading. I’m only including comments from the post-FTX era, and then, only comments that touch on the core themes. (This is one example of a great comment I haven’t included.)

  2. ^

    although the themes overlap a fair amount

  3. ^

    My ordering is quite different to the karma ordering given on the GreaterWrong page Habryka links to. I think mine does a better job of concisely covering Habryka’s beliefs on the key topics. But I’d be happy to take my list down if @Habryka disagrees (just DM me).

  4. ^

    For context, Zachary is CEA’s CEO.

  5. ^

SummaryBot @ 2025-03-04T14:49 (+1) in response to On deference to funders

Executive summary: The EA community exhibits an unusual degree of deference to funders, leading to strategic shifts based on minimal feedback, distorted information flows, and misaligned incentives, which could be mitigated by diversifying grantmaking structures and reducing automatic deference to funders' opinions.

Key points:

  1. Unusual deference to funders – Unlike other charitable communities, EA organizations often treat funders’ opinions as highly authoritative, even when they lack direct expertise in the work being funded.
  2. Funders lack critical information – They often receive incomplete or distorted data, particularly regarding negative aspects of projects, due to incentives for grantees to present overly positive narratives.
  3. Misalignment of values – Major EA funders, such as Open Philanthropy, do not always align with EA consensus, yet their funding choices often set de facto strategic priorities for the movement.
  4. Grantmaking differs from direct work – Funders typically specialize in evaluating grants rather than executing projects, leading to potential misjudgments in funding decisions.
  5. Potential solutions – Reducing deference to funders, increasing the number of funders and evaluators, and distributing grantmaking decisions more widely could improve funding quality and ecosystem resilience.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.