Latest comments on the EA Forum

Comments on 2025-01-23

John Salter @ 2025-01-21T19:47 (+10) in response to We don't want to post again "This might be the last AI Safety Camp"

Two questions I imagine prospective funders would have:

  1. Can you give some indication as to the value of stipends? It's not clear how the benefits trade off against that cost. It's tempting to think that stipends are responsible for >80% of the costs but bring <20% of the benefit.
  2. What would your attendees have been doing otherwise?

     

Linda Linsefors @ 2025-01-23T13:14 (+2)

If I calculated correctly, in the fully funded version, stipends would be 76% of the cost. Not quite >80% but close. I think I agree that stipends are not much more than 20% of the value. 

Basically I agree with you that stipends are the least cost-effective part of AISC. This is why stipends are lowest in our funding priorities. 

However, it is possible for stipends to be less necessary than the rest but still worth paying. They are in the budget because, if someone wants to fund it, we would like to hand out stipends. 

I think giving stipends to participants from low-income countries is probably cost-effective, but it's probably better to prioritise runway for future camps rather than stipends for everyone else. If you know any donors who would like to earmark their donation this way, or any other way, tell them to contact us. 

Karthik Tadepalli @ 2025-01-23T05:32 (+4) in response to Should EAs help employees in the developing world move to the West?

I think Jason is saying that the "support to emigrate" was limited to recommendations.

Jason @ 2025-01-23T13:05 (+2)

Yes, that's correct.

Aleks_K @ 2025-01-23T12:30 (+3) in response to What are typical payment terms for a grant-funded project contract?

I don't really understand the question here: if an organisation contracts someone to do work for them, they usually agree on a specific amount, either a fixed price or an hourly/daily rate. What are the specifics of your scenario here? Should the amount be conditional on how much funding the organisation receives for that specific work? That seems like quite a strange approach to me. Or are you expecting that the contractor commits to doing the work but might not get paid if a grant application is unsuccessful? I don't really think anyone would or should agree to that. The right approach would be to wait to actually hire the contractor until the organisation has the money to pay them.

Toby Tremlett🔹 @ 2025-01-23T11:52 (+4) in response to Toby Tremlett's Quick takes

The RSPCA is holding a "big conversation", culminating in a citizens' assembly. If you have opinions about how animals in the UK are treated (which you probably do), you can contribute your takes here.
A lot of the contributions are very low quality, so I think EA voices have a good chance of standing out and having their opinions shared with a broader audience. 

annaleptikon @ 2025-01-23T11:32 (+1) in response to Potential new cause area: Obesity

Solved by Ozempic?

Cullen 🔸 @ 2025-01-23T11:04 (+4) in response to Ways the world is getting better - discussion thread

Same-sex marriage became legal in Thailand this week.

NickLaing @ 2023-05-02T17:26 (+9) in response to Review of The Good It Promises, the Harm It Does

Thanks, this is brilliantly articulated and also fun to read. The sad thing is that strong arguments can be made for the expected value of many forms of smart, even semi-revolutionary activist methods (and have been on this forum), but like you said, they seem not keen to engage with our framework at all.

Perhaps they should have hired you to help them write a more compelling book!

I have one observation about your use of the term "social justice" which surprised me a little. The way I used to use the word at least, I would consider effective altruism's approach to be an effective "social justice" approach, to rectify social injustices like kids dying from malaria and factory farming. But your use (and perhaps the generally accepted use now) is very different. Like here...

"We should fully expect maximizing welfare to generate complaints about “inequitable cause prioritization” (p. 82) from those who care more about social justice."

I would have thought of maximizing welfare as one form of social justice, and they would certainly not be mutually exclusive, but apparently the term now has more sinister connotations. When I think of social justice, I think of MLK, Gandhi and Mandela, all pragmatic activists who were wildly successful at creating lasting change with a cost-effectiveness and scale that should make effective altruists salivate. I find it sad that "social justice" these days seems to carry the baggage of something like a...

Far-left-slightly-unstable-person-ranting-on-the-internet-about-identity-politics

And "social justice warrior" seems to be even worse. Until 15 years ago I would have been proud to identify as a social justice warrior, ie someone who thoughtfully and effectively fights injustice against animals, malaria deaths as well as racial inequity, but I fear the politicisation and pigeonholing of the term may now be irreversible.

Mind you I've lived in Uganda for the last 10 years and so perhaps I'm just behind on a legitimate evolution of the English language!

Would be interested to hear your thoughts.

dominicroser @ 2025-01-23T10:45 (+1)

Just to say that I very much agree with your sadness. What a deplorable turn in language use that "social justice activism" has now become associated with a certain kind of social justice activism!

PS: of course, we can make a theoretical distinction between promoting aggregate welfare and promoting justice (eg when one option benefits millions at the expense of one person -- this might be considered unjust but welfare-increasing). But in practice, promoting justice and aggregate welfare are much more overlapping than is often recognized -- as you and I do seem to recognize...
 

Rohin Shah @ 2025-01-23T09:39 (+2) in response to Notes on risk compensation

What you call the "lab's" utility function isn't really specific to the lab; it could just as well apply to safety researchers. One might assume that the parameters would be set in such a way as to make the lab more C-seeking (e.g. it takes less C to produce 1 util for the lab than for everyone else).

But at least in the case of AI safety, I don't think this is the case. I doubt I could easily distinguish a lab capabilities researcher (or lab leadership, or some "aggregate lab utility function") from an external safety researcher if you just gave me their utility functions over C and S. (AI safety has significant overlap with transhumanism; relative to the rest of humanity they are way more likely to think there are huge benefits to development of safe AGI.) In practice it seems like the issue is more like epistemic disagreement.

You could still recover many of the conclusions in this post by positing that an increase to S leads to a proportional decrease in probability of non-survival, and the proportion is the same between the lab and everyone else, but the absolute numbers aren't. I'd still feel like this was a poor model of the real situation though.

Daniel Abiliba @ 2025-01-23T08:14 (+1) in response to Ways the world is getting better - discussion thread

Dual-AI bed nets prevented 13 million malaria cases in pilot program in 17 countries. 

https://www.statnews.com/2024/04/17/malaria-prevention-next-generation-insectidal-nets-saved-lives/

yanni kyriacos @ 2025-01-23T06:49 (+5) in response to Preparing Effective Altruism for an AI-Transformed World

If I wasn’t working on AI Safety I’d work on near term (< 5 years) animal welfare interventions.

Tobias Häberli @ 2025-01-22T16:02 (+15) in response to Preparing Effective Altruism for an AI-Transformed World

One GHW example: The impact of AI tutoring on educational interventions (via Arjun Panickssery on LessWrong). 

There have been at least 2 studies/impact evaluations of AI tutoring in African countries finding extraordinarily large effects:

Summer 2024 — 15–16-year olds in Nigeria
They had 800 students total. The treatment group studied with GPT-based Microsoft Copilot twice weekly for six weeks, studying English. They were just provided an initial prompt to start chatting—teachers had a minimal “orchestra conductor” role—but they achieved “the equivalent of two years of typical learning in just six weeks.”

 

February–August 2023 — 8–14-year-olds in Ghana
An educational network called Rising Academies tested their WhatsApp-based AI math tutor called Rori with 637 students in Ghana. Students in the treatment group received AI tutors during study hall. After eight months, 25% of the subjects attrited from inconsistent school attendance. Of the remainder, the treatment group increased their scores on a 35-question assessment by 5.13 points versus 2.12 points for the control group. This difference was “approximately equivalent to an extra year of learning” for the treatment group.
 

Should this significantly change how excited EAs are about educational interventions? I don't know, but I've also not seen a discussion of this on the forum (aside from this post about MOOCs & AI tutors, which received ~zero engagement).

Mo Putera @ 2025-01-23T06:47 (+5)

This writeup by Vadim Albinsky at Founders Pledge seems related: Are education interventions as cost effective as the top health interventions? Five separate lines of evidence for the income effects of better education [Founders Pledge] 

The part that seems relevant is the charity Imagine Worldwide's use of the "adaptive software" OneBillion app to teach numeracy and literacy. Despite Vadim's several discounts and general conservatism throughout his CEA he still gets ~11x GD cost-effectiveness. (I'd honestly thought, given the upvotes and engagement on the post, that Vadim had changed some EAs' minds on the promisingness of non-deworming education interventions.) The OneBillion app doesn't seem to use AI, but they already (paraphrasing) use "software to provide a complete, research-based curriculum that adapts to each child’s pace, progress, and cultural and linguistic context", so I'm not sure how much better Copilot / Rori would be?

Quoting some parts that stood out to me (emphasis mine):

This post argues that if we look at a broad enough evidence base for the long term outcomes of education interventions we can conclude that the best ones are as cost effective as top GiveWell grants. ... 

... I will argue that the combined evidence for the income impacts of interventions that boost test scores is much stronger than the evidence GiveWell has used to value the income effects of fighting malaria, deworming, or making vaccines, vitamin A, and iodine more available. Even after applying very conservative discounts to expected effect sizes to account for the applicability of the evidence to potential funding opportunities, we find the best education interventions to be in the same range of cost-effectiveness as GiveWell’s top charities. ...

When we apply the above recommendations to our median recommended education charity, Imagine Worldwide, we estimate that it is 11x as cost effective as GiveDirectly at boosting well-being through higher income. ...

Imagine Worldwide (IW) provides adaptive software to teach numeracy and literacy in Malawi, along with the training, tablets and solar panels required to run it. They plan to fund a six-year scale-up of their currently existing program to cover all 3.5 million children in grades 1-4 by 2028. The Malawi government will provide government employees to help with implementation for the first six years, and will take over the program after 2028. Children from over 250 schools have received instruction through the OneBillion app in Malawi over the past 8 years. Five randomized controlled trials of the program have found learning gains of an average of 0.33 standard deviations.  The OneBillion app has also undergone over five additional RCTs in a broad range of contexts with comparable or better results.

Linda Linsefors @ 2025-01-22T16:23 (+2) in response to Why AI Safety Camp struggles with fundraising (FBB #2)
  • Most of these suggestions are based on speculation. I'd like a bit more evidence that it would actually make a difference before re-structuring. Funders are welcome to reach out to us.

Responding to myself.

There is one thing (that is mentioned in the post) we know is getting in the way of funding, which is Remmelt's image. But there wouldn't be an AISC without Remmelt. 

I don't expect pretending to be two different programs would help much.

However, donating anonymously is an option. We have had anonymous donations in the past from people who don't want to entangle their reputation with ours.

yanni kyriacos @ 2025-01-23T06:22 (+2)

Fwiw www.aisafetyanz.com.au was a pretty easy setup using Wix. Maybe 10 hours of work (initially).

Larks @ 2025-01-23T04:47 (+2) in response to Should EAs help employees in the developing world move to the West?

Yes and no -- the only concrete thing I see @WillieG having done was "sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications." 

Sounds like they did more than this, though the description is vague:

We invested a lot of time and money into training these employees, with the expectation that they (as members of the college-educated elite) would help lead human rights reform in the country long after our project disbanded.

Karthik Tadepalli @ 2025-01-23T05:32 (+4)

I think Jason is saying that the "support to emigrate" was limited to recommendations.

Jason @ 2025-01-22T21:13 (+5) in response to Should EAs help employees in the developing world move to the West?

Yes and no -- the only concrete thing I see @WillieG having done was "sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications." 

I would find refusing to write a letter of recommendation on "brain drain" concerns to go beyond not funding emigration efforts. I'd view this as akin to a professor refusing to write a recommendation letter for a student because they thought the graduate program to which the student wanted to apply was a poor use of resources (e.g., underwater basketweaving). Providing references for employees and students is an implied part of the role, while vetoing the employee or student's preferences based on the employer's/professor's own views is not.

In contrast, I would agree with your frame of reference if the question were whether the EA employer should help fund emigration and legal fees, or so on.

Larks @ 2025-01-23T04:47 (+2)

Yes and no -- the only concrete thing I see @WillieG having done was "sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications." 

Sounds like they did more than this, though the description is vague:

We invested a lot of time and money into training these employees, with the expectation that they (as members of the college-educated elite) would help lead human rights reform in the country long after our project disbanded.

WillieG @ 2025-01-23T03:52 (+4) in response to Should EAs help employees in the developing world move to the West?

Folks, I appreciate that this is an issue a lot of people are emotionally invested in. And I want to thank @NickLaing and @Tym for their substantive and carefully considered comments.

I do want to reiterate the question I asked at the end--Has anyone encountered formal policies (perhaps in HR?) about matters like this? 

NickLaing @ 2025-01-23T04:12 (+4)

I haven't, and I doubt you will, but I'm interested to hear if there are any examples!

ludwigbald @ 2024-07-10T20:46 (+4) in response to Center for Effective Aid Policy has shut down

I feel like effective aid policy is at a similar stage to what animal well-being was at a few decades ago. People would agree that animal well-being is good, but they wouldn't feel it's important.

Maybe we need an org that does targeted public campaigns on how a certain aid organization is wasting money, combining that with pushing them to a commitment to more effectiveness. This approach has worked with some meat-intensive companies, and it might also work for non-profits if it can threaten their donor base.

Tyler Kolota @ 2025-01-23T03:56 (+1)

If you publicize how the government aid org is wasting money, the entire budget may be more likely to get cut than redirected to more effective aid.

It may be better to highlight what effective aid could do.

WillieG @ 2025-01-23T03:52 (+4) in response to Should EAs help employees in the developing world move to the West?

Folks, I appreciate that this is an issue a lot of people are emotionally invested in. And I want to thank @NickLaing and @Tym for their substantive and carefully considered comments.

I do want to reiterate the question I asked at the end--Has anyone encountered formal policies (perhaps in HR?) about matters like this? 

yanni @ 2025-01-23T01:52 (+3) in response to bruce's Quick takes

What did we say about making jokes on the forum, Nick?

NickLaing @ 2025-01-23T03:07 (+2)

It's true we've discussed this already...

titotal @ 2025-01-22T10:02 (+5) in response to titotal's Quick takes

I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that 

  1. AI will be a revolutionary technology that affects nearly every aspect of society.
  2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.

I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big business, pro-regulation ideas. If EA doesn’t embrace this reality, probably some other left-wing anti-AI movement is going to pop up, and it’s going to leave you in the dust. 

huw @ 2025-01-23T02:26 (+2)

I think that the appropriate medium-term fit for the movement will be with organised labour (whether left or right!), as I’ve said before here. The economic impacts are not currently strong enough to have been felt in the unemployment rate, particularly since anti-inflationary policies typically prop up the employment rate a bit. But they will presumably be felt soon, and the natural home for those affected will be in the labour movement, which despite its currently weakened state will always be bigger and more mobile than, say, PauseAI.

(Specifically in tech, where I have more experience in labour organising, the largest political contingent among the workers has always been on the labour left. For example, Bernie Sanders was far and away the most donated-to candidate among big tech employees in 2020: https://www.theguardian.com/us-news/2020/mar/02/election-2020-tech-workers-donations-bernie-sanders.)

In that world, the best thing EAs can do is support that movement. Not necessarily explicitly or directly—I can see a world where Open Phil lobbies to strengthen the U.S. NLRB and overturn key Supreme Court decisions such as Janus. But, such a move will be perceived as highly political, and I wonder if the allergy to labour-left politics within EA precludes it.

GraceAdams🔸 @ 2025-01-23T02:06 (+4) in response to Looking into Project 2025: USAID

Thanks David - really helpful to be able to read about this succinctly!

NickLaing @ 2025-01-22T04:05 (–1) in response to bruce's Quick takes

It's OK man because Sam has promised to donate 500 million a year to EA causes!

yanni @ 2025-01-23T01:52 (+3)

What did we say about making jokes on the forum, Nick?



Comments on 2025-01-22

David Mathers🔸 @ 2025-01-22T09:19 (+2) in response to Why aren't relocated births accounted for in cost-effectiveness analyses of family planning charities?

I'm not sure I subscribe to any form of utilitarianism, and I'm not sure what my view in population ethics is. But I am confident that the mere fact that a life would be below average well-being does not make adding it to the world a bad thing. 

David Hammerle @ 2025-01-22T23:07 (+1)

I see.  And that IS relevant to my original question regarding family planning in settings with high child mortality.

Benjamin M. @ 2025-01-22T21:38 (+6) in response to Benjamin M.'s Quick takes

There's probably something that I'm missing here, but:

  • Given that the dangerous AI capabilities are generally stated to emerge from general-purpose and agentic AI models, why don't people try to shift AI investment into narrower AI systems? Or try to specifically regulate those systems?

Possible reasons: 

  • This is harder than it sounds
  • General-purpose and agentic systems are inevitably going to outcompete other systems
  • People are trying to do this, and I just haven't noticed, because I'm not really an AI person
  • Something else

Which is it?

Milan Weibel🔹 @ 2025-01-22T22:17 (+3)

General-purpose and agentic systems are inevitably going to outcompete other systems

There's some of this: see this Gwern post for the classic argument.

People are trying to do this, and I just haven't noticed

LLMs seem by default less agentic than the previous end-to-end RL paradigm. Maybe the rise of LLMs was an exercise in deliberate differential technological development. I'm not sure about this, it is personal speculation.

titotal @ 2025-01-22T10:02 (+5) in response to titotal's Quick takes

I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that 

  1. AI will be a revolutionary technology that affects nearly every aspect of society.
  2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.

I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big business, pro-regulation ideas. If EA doesn’t embrace this reality, probably some other left-wing anti-AI movement is going to pop up, and it’s going to leave you in the dust. 

Milan Weibel🔹 @ 2025-01-22T22:06 (+6)

Left-progressive online people seem to be consolidating on an anti-AI position; but mostly derived from resistance to the presumed economic impacts from AI art, badness-by-association inherited from the big tech / tech billionaires / 'techbro' cluster, and on the academic side from concern about algorithmic bias and the like. However, they seem to be failing at extrapolation. "AI bad" gets misgeneralized into skepticism about current and future AI capabilities.

Left-marxist people seem to be thinking a bit more clearly about this (ie extrapolating, applying any economic model at all, looking a bit into the tech). See an example here, or a summary here. However, the labs are based in the US, a country where associating with marxists is a very bad idea if you want your policies to get implemented.

These two leftist stances are mostly orthogonal to concerns about AI x-risk and catastrophic misuse. However, a lot of activists believe that the public's attention is zero-sum. I suspect that is the main reason coalition-building with the preceding two groups has not happened much. However, I think it is still possible.

About the American right: some actors have largely succeeded in marrying China-hawkism with AI-boosterism. I expect this association to be very sticky, but it may be counteracted by reactionary impulses coming from spooked cultural conservatives.

Benjamin M. @ 2025-01-22T21:38 (+6) in response to Benjamin M.'s Quick takes

There's probably something that I'm missing here, but:

  • Given that the dangerous AI capabilities are generally stated to emerge from general-purpose and agentic AI models, why don't people try to shift AI investment into narrower AI systems? Or try to specifically regulate those systems?

Possible reasons: 

  • This is harder than it sounds
  • General-purpose and agentic systems are inevitably going to outcompete other systems
  • People are trying to do this, and I just haven't noticed, because I'm not really an AI person
  • Something else

Which is it?

Jason @ 2025-01-22T21:13 (+5) in response to Should EAs help employees in the developing world move to the West?

Yes and no -- the only concrete thing I see @WillieG having done was "sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications." 

I would find refusing to write a letter of recommendation on "brain drain" concerns to go beyond not funding emigration efforts. I'd view this as akin to a professor refusing to write a recommendation letter for a student because they thought the graduate program to which the student wanted to apply was a poor use of resources (e.g., underwater basketweaving). Providing references for employees and students is an implied part of the role, while vetoing the employee or student's preferences based on the employer's/professor's own views is not.

In contrast, I would agree with your frame of reference if the question were whether the EA employer should help fund emigration and legal fees, or so on.

NickLaing @ 2025-01-22T21:23 (+4)

Yep, I completely agree with all that and would always write a letter for anyone! The kinds of things he might be talking about are, I think, a bit more extreme, like:

  • Funding people to masters courses especially at foreign universities.
  • Actively making connections with people in Western countries helping people get jobs and study opportunities there.
  • Helping people write really really good foreign visa and scholarship applications, putting a lot of time and effort into them and even potentially co-writing sections with people.

    I've done all these things to varying extents, and am now less inclined to do so to the same extent given the questions in the OP.

NickLaing @ 2025-01-22T19:32 (+11) in response to Should EAs help employees in the developing world move to the West?

This is not a discussion about anyone forcing anyone to do anything (no one has suggested that), but the original question was about the degree to which we should potentially fund and support the best workers in our orgs to emigrate. This is a hugely important question, because from experience in Uganda, with enough time and resources I could probably help almost any highly qualified and capable person to emigrate, but is that really the best thing for me to do?

As things stand every country in the world has huge restrictions on emigration, which does often "force" people to stay where they were born, something no one in this discussion thread has the power to do.

The most talented people from low-income countries are often much better placed to improve their own country than we are from richer countries, due to cultural knowledge and connections. In saying that, I do agree that far more people from high-income countries could be doing a lot of good living and working in low-income countries.

Jason @ 2025-01-22T21:13 (+5)

Yes and no -- the only concrete thing I see @WillieG having done was "sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications." 

I would find refusing to write a letter of recommendation on "brain drain" concerns to go beyond not funding emigration efforts. I'd view this as akin to a professor refusing to write a recommendation letter for a student because they thought the graduate program to which the student wanted to apply was a poor use of resources (e.g., underwater basketweaving). Providing references for employees and students is an implied part of the role, while vetoing the employee or student's preferences based on the employer's/professor's own views is not.

In contrast, I would agree with your frame of reference if the question were whether the EA employer should help fund emigration and legal fees, or so on.

MatthewDahlhausen @ 2025-01-16T20:06 (+23) in response to Retrospective on the California SB 1308 Campaign

I think it is more likely than not that failure to pass this bill as is was net harmful.

  • Ozone air cleaners are a significant source of indoor air pollution, producing indoor particulate levels just slightly less than second hand smoke. Particulates account for 85%+ of morbidity from indoor air pollution in residences. There is a serious harm in keeping these air cleaners on the market. All major health and air quality organizations oppose them. But there is no ban on their sale, so they remain available to uninformed customers. Killing this bill keeps a major harm on the market.
  • There are many pollution control technologies besides Far-UVC that can reduce infection risk, including UV technologies at longer wavelengths that do not produce ozone. Far-UVC is not a far superior technology, and it's not clear to me that a setback in the Far-UVC industry meaningfully delays adoption of infection control technologies generally.
  • Scrubbers are likely going to be necessary on Far-UVC devices because of how much pollution they produce. As HVAC engineers, we have a duty of care that will likely prohibit using control technologies that worsen indoor air quality. There isn't an easy solution to the problem beyond scrubbers; if you use ventilation or filtration to control it, you could have just gone with a ventilation or filtration solution from the start.
  • The majority of CA buildings are in a mild climate, and energy recovery and/or economizing is likely going to be a cheaper solution overall compared to room air cleaners in new facilities.

Overall, I'm discouraged at the broad EA obsession with Far-UVC instead of coordinating with leading organizations like ASHRAE to promote the uptake of infectious disease control standards and design generally. In this case, that obsession did cause clear harm, with unclear benefit.

Gavriel Kleinwaks @ 2025-01-22T21:05 (+5)

Hi Matthew, thanks for the clear and thoughtful response. I just want to emphasize first that my team really hoped this bill would pass, with our amendment, but the political process didn't allow for that. Regardless of our intentions, it's reasonable for you to still identify harm in the outcome. 

All my arguments were laid out in the post--I'd guess we just have different grounding assumptions about, among other things: the importance of preparing to fight future airborne superspreading-driven pathogens, the potential for far-UV to become a cheaper and more accessible consumer product than longer wavelengths, the potential relative impact of far-UV vs alternatives like filtration, the impact that far-UV could have on pathogens in an already reasonably-ventilated room, and the value of investing in far-UV equipped with scrubbers.

Of course, I just said "potential" and "could" a lot above. You're right that the benefit was uncertain. As I wrote, I had serious concerns during this effort, but we couldn't avoid acting under uncertainty. 

I also want to emphasize that far-UV is in a particularly vulnerable development stage relative to its potential value, but we're fighting to improve indoor air quality broadly, not just focusing on far-UV. 1Day Sooner's current IAQ project is more focused on filter implementation.

Larks @ 2025-01-22T20:32 (+5) in response to Looking into Project 2025: USAID

Thanks for providing this summary!

Alex (Αλέξανδρος) @ 2025-01-22T20:15 (+2) in response to Google AI Accelerator Open Call

It's a great idea - I sent you my suggestion by DM.

D0TheMath @ 2025-01-22T16:44 (–3) in response to Should EAs help employees in the developing world move to the West?

I think it seems pretty evil & infantilizing to force people to stay in their home country because you think they’ll do more good there. The most you should do is argue they’ll do more good in their home country than a western country, then leave it up to them to decide.

I will furthermore claim that if you find yourself disagreeing, you should live in the lowest quality of living country you can find, since clearly that is the best place to work in your own view.

Maybe I have more faith in the market here than you do, but I do think that technical & scientific & economic advancement do in fact have a tendency to not only make everywhere better, but permanently so. Even if the spread is slower than we’d like. By forcing the very capable to stay in their home country we ultimately deprive the world and the future from the great additions they may make given much better & healthier working conditions.

NickLaing @ 2025-01-22T19:32 (+11)

This is not a discussion about anyone forcing anyone to do anything (no one has suggested that), but the original question was about the degree to which we should potentially fund and support the best workers in our orgs to emigrate. This is a hugely important question, because from experience in Uganda, with enough time and resources I could probably help almost any highly qualified and capable person to emigrate, but is that really the best thing for me to do?

As things stand every country in the world has huge restrictions on emigration, which does often "force" people to stay where they were born, something no one in this discussion thread has the power to do.

The most talented people from low-income countries are often much better placed to improve their own country than we are from richer countries, due to cultural knowledge and connections. In saying that, I do agree that far more people from high-income countries could be doing a lot of good living and working in low-income countries.

Karla Still 🔸 @ 2025-01-22T18:57 (+3) in response to Long-distance development policy

Thanks for writing the post! I'm not an expert in the area and would be interested in learning more about the topic. 

Regarding R&D in the US, it reminds me of the Founders Pledge Climate Change Fund's strategy, which focused on reducing energy poverty (at least some years ago; their strategy might have changed, based on their website). 

In general, why are you focusing mainly on US development policies in the section "What would we need to do to make this work?" I understand it's one of the biggest players, but one could make arguments for policy work in other countries as well, e.g., citizens of small EU countries trying to impact the EU, as the representation and power relative to the population size can be high. 

When it comes to "should EA do this", I think of it as: would I recommend that someone doing EA-focused career planning pursue a career influencing long-distance development policy if they are a good fit for it? Even if this post doesn't result in new "EA orgs" getting founded, I think it is a valuable discussion, as it might be read by people with an EA mindset considering pursuing development policy careers or working in the field. 

SummaryBot @ 2025-01-22T18:21 (+1) in response to Training Data Attribution: Examining Its Adoption & Use Cases

Executive summary: Training Data Attribution (TDA) is a promising but underdeveloped tool for improving AI interpretability, safety, and efficiency, though its public adoption faces significant barriers due to AI labs' reluctance to share training data.

Key points:

  1. TDA identifies influential training data points to understand their impact on model behavior, with gradient-based methods currently the most practical approach.
  2. Running TDA on large-scale models is now feasible but remains untested on frontier models, with efficiency improvements expected within 2-5 years.
  3. Key benefits of TDA for AI research include mitigating hallucinations, improving data selection, enhancing interpretability, and reducing model size.
  4. Public access to TDA tooling is hindered by AI labs’ desire to protect proprietary training data, avoid legal liabilities, and maintain competitive advantages.
  5. Governments are unlikely to mandate public access to training data, but selective TDA inference or alternative data-sharing mechanisms might mitigate privacy concerns.
  6. TDA’s greatest potential lies in improving AI technical safety and alignment, though it may also accelerate capabilities research, potentially increasing large-scale risks.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Tobias Häberli @ 2025-01-22T17:41 (+3) in response to Preparing Effective Altruism for an AI-Transformed World

Thanks for the thoughtful comment!

Re point 1: I agree that the likelihood and expected impact of transformative AI exist on a spectrum. I didn’t mean to imply certainty about timelines, but I chose not to focus on arguing for specific timelines in this post.

Regarding the specific points: they seem plausible but are mostly based on base rates and social dynamics. I think many people’s views, especially those working on AI, have shifted from being shaped primarily by abstract arguments to being informed by observable trends in AI capabilities and investments.


 

Forumite @ 2025-01-22T18:04 (+1)

Cheers, and thanks for the thoughtful post! :)

I'm not sure that the observable trends in current AI capabilities definitely point to an almost-certainty of TAI. I love using the latest LLMs, I find them amazing, and I do find it plausible that next-gen models, plus making them more agent-like, might be amazing (and scary). And I find it very, very plausible to imagine big productivity boosts in knowledge work. But the claim that this will almost-certainly lead to a rapid and complete economic/scientific transformation still feels at least a bit speculative, to me, I think...

Forumite @ 2025-01-22T16:55 (+8) in response to Preparing Effective Altruism for an AI-Transformed World

Point 1: Broad agreement with a version of the original post's argument  

Thanks for this. I think I agree with you that people in the global health and animal spaces should, at the margin, think more about the possibility of Transformative AI (TAI), and short-timeline TAI. 

For animal-focussed people, maybe there’s an argument that because the default path of a non-TAI future is likely so bad for animals (eg persuading people to stop eating animals is really hard, persuading people to intervene to help wild animals is really hard, etc), that we might, actually, want to heavily “bet” on futures *with* TAI, because it’s only those futures which hold out the prospect of a big reduction in animal suffering. So we should optimise our actions for worlds where TAI happens, and try to maximise the chances that these futures go very well for non-human animals. 

I think this is likely less true for global health and wellbeing, where plausibly the global trends look a lot better.

 

Point 2: Some reasons to be sceptical about claims of short-timeline Transformative AI 

Having said that, there’s something about the apparent certainty that “TAI is nigh” in the original post, which prompted me to want to scribble down some push-back-y thoughts. Below are some plausible-sounding-to-me reasons to be sceptical about high-certainty claims that TAI is close. I don’t pretend that these lines of thought in-and-of-themselves demolish the case for short-timeline TAI, but I do think that they are worthy of consideration and discussion, and I’d be curious to hear what others make of them:

  • The prediction of short-timeline TAI is based on speculation about the future. Humans very often get this type of speculation wrong.
  • Global capital markets aren’t predicting short-timeline TAI. A lot of very bright people, who are highly incentivised to make accurate predictions about the future shape of the economy, are not betting that TAI is imminent.
  • Indeed, most people in the world don’t seem to think that TAI is imminent. This includes tonnes of *really* smart people, who have access to a lot of information.
  • There’s a rich history of very clever people making bold predictions about the future, based on reasonable-sounding assumptions and plausible chains of reasoning, which then don’t come true - e.g. Paul Ehrlich’s Population Bomb.
  • Sometimes, even the transhumanist community - where notions of AGI, AI catastrophe risk, etc, started out - get excited about a certain technological risk/trend, but then it turns out not to be such a big deal - e.g. nanotech, “grey goo”, etc in the ‘80s and ‘90s.
  • In the past, many radical predictions about the future, based on speculation and abstract chains of reasoning, have turned out to be wrong.
  • Perhaps there’s a community effect whereby we all hype ourselves up about TAI and short timelines. It’s exciting, scary and adrenaline-inducing to think that we might be about to live through ‘the end of times’.
  • Perhaps the meme of “TAI is just around the corner/it might kill us all” has a quality which is psychologically captivating, particularly for a certain type of mind (eg people who are into computer science, etc); perhaps this biases us. The human mind seems to be really drawn to “the end is nigh” type thinking.
  • Perhaps questioning the assumption of short-timeline TAI has become low-status within EA, and potentially risky in terms of reputation, funding, etc, so people are disincentivised to push back on it. 

To restate: I don’t think any of these points torpedo the case for thinking that TAI is either inevitable, and/or imminent. I just think they are valid considerations when thinking about this topic, and are worthy of consideration/discussion, as we try to decide how to act in the world. 

Tobias Häberli @ 2025-01-22T17:41 (+3)

Thanks for the thoughtful comment!

Re point 1: I agree that the likelihood and expected impact of transformative AI exist on a spectrum. I didn’t mean to imply certainty about timelines, but I chose not to focus on arguing for specific timelines in this post.

Regarding the specific points: they seem plausible but are mostly based on base rates and social dynamics. I think many people’s views, especially those working on AI, have shifted from being shaped primarily by abstract arguments to being informed by observable trends in AI capabilities and investments.


 

Forumite @ 2025-01-22T16:55 (+8) in response to Preparing Effective Altruism for an AI-Transformed World

Point 1: Broad agreement with a version of the original post's argument  

Thanks for this. I think I agree with you that people in the global health and animal spaces should, at the margin, think more about the possibility of Transformative AI (TAI), and short-timeline TAI. 

For animal-focussed people, maybe there’s an argument that because the default path of a non-TAI future is likely so bad for animals (eg persuading people to stop eating animals is really hard, persuading people to intervene to help wild animals is really hard, etc), that we might, actually, want to heavily “bet” on futures *with* TAI, because it’s only those futures which hold out the prospect of a big reduction in animal suffering. So we should optimise our actions for worlds where TAI happens, and try to maximise the chances that these futures go very well for non-human animals. 

I think this is likely less true for global health and wellbeing, where plausibly the global trends look a lot better.

 

Point 2: Some reasons to be sceptical about claims of short-timeline Transformative AI 

Having said that, there’s something about the apparent certainty that “TAI is nigh” in the original post, which prompted me to want to scribble down some push-back-y thoughts. Below are some plausible-sounding-to-me reasons to be sceptical about high-certainty claims that TAI is close. I don’t pretend that these lines of thought in-and-of-themselves demolish the case for short-timeline TAI, but I do think that they are worthy of consideration and discussion, and I’d be curious to hear what others make of them:

  • The prediction of short-timeline TAI is based on speculation about the future. Humans very often get this type of speculation wrong.
  • Global capital markets aren’t predicting short-timeline TAI. A lot of very bright people, who are highly incentivised to make accurate predictions about the future shape of the economy, are not betting that TAI is imminent.
  • Indeed, most people in the world don’t seem to think that TAI is imminent. This includes tonnes of *really* smart people, who have access to a lot of information.
  • There’s a rich history of very clever people making bold predictions about the future, based on reasonable-sounding assumptions and plausible chains of reasoning, which then don’t come true - e.g. Paul Ehrlich’s Population Bomb.
  • Sometimes, even the transhumanist community - where notions of AGI, AI catastrophe risk, etc, started out - get excited about a certain technological risk/trend, but then it turns out not to be such a big deal - e.g. nanotech, “grey goo”, etc in the ‘80s and ‘90s.
  • In the past, many radical predictions about the future, based on speculation and abstract chains of reasoning, have turned out to be wrong.
  • Perhaps there’s a community effect whereby we all hype ourselves up about TAI and short timelines. It’s exciting, scary and adrenaline-inducing to think that we might be about to live through ‘the end of times’.
  • Perhaps the meme of “TAI is just around the corner/it might kill us all” has a quality which is psychologically captivating, particularly for a certain type of mind (eg people who are into computer science, etc); perhaps this biases us. The human mind seems to be really drawn to “the end is nigh” type thinking.
  • Perhaps questioning the assumption of short-timeline TAI has become low-status within EA, and potentially risky in terms of reputation, funding, etc, so people are disincentivised to push back on it. 

To restate: I don’t think any of these points torpedo the case for thinking that TAI is either inevitable, and/or imminent. I just think they are valid considerations when thinking about this topic, and are worthy of consideration/discussion, as we try to decide how to act in the world. 

D0TheMath @ 2025-01-22T16:44 (–3) in response to Should EAs help employees in the developing world move to the West?

I think it seems pretty evil & infantilizing to force people to stay in their home country because you think they’ll do more good there. The most you should do is argue they’ll do more good in their home country than a western country, then leave it up to them to decide.

I will furthermore claim that if you find yourself disagreeing, you should live in the lowest quality of living country you can find, since clearly that is the best place to work in your own view.

Maybe I have more faith in the market here than you do, but I do think that technical & scientific & economic advancement do in fact have a tendency to not only make everywhere better, but permanently so. Even if the spread is slower than we’d like. By forcing the very capable to stay in their home country we ultimately deprive the world and the future from the great additions they may make given much better & healthier working conditions.

Linda Linsefors @ 2025-01-22T16:36 (+6) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

If you read this post and decide that the reasons why AISC is not getting funded are not good reasons for not funding AISC, then you have a donation opportunity! 

Unless donors don’t care about optics at all, paying Remmelt’s salary is a difficult ask.

There is an easy fix to this. You can donate anonymously.

 

Donation link

Linda Linsefors @ 2025-01-22T16:32 (+4) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

Perhaps they could add an appendix to their funding proposal where they answer some common objections they would expect people to have

Correctly guessing what misconceptions others will have is hard. But discussions on earlier drafts of this post did inspire us to start drafting something like that. Thanks.

A colleague of mine said that [if you want to attract high-profile research leads], “you are only as strong as your weakest project” - which I thought was well put.

We're not trying to attract high-profile research leads. We're trying to start worthwhile projects and collaborations that would otherwise not have happened. If a high-profile researcher wants minions/mentees/collaborators, they don't need AISC, and I don't mind if they use some other resource (e.g. SPAR, MATS, posting on LW) to find people.

Linda Linsefors @ 2025-01-22T16:03 (+3) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

Donors want to know that if they donate to keep it alive, you're going to restructure the program towards something more financially viable

  • Most of these suggestions are based on speculation. I'd like a bit more evidence that it would actually make a difference before re-structuring. Funders are welcome to reach out to us.
  • Funding is currently especially bad. It's possible that if AISC can just survive a bit longer, things will get better.
  • AISC has survived each year since the program started in 2017, which means that just doing what we think is the best program has a pretty good track record of being funded. 

 

I think it would be valuable for AI Safety Camp to refresh its website in order to make it look more professional and polished. The easiest way to accomplish this would be to make it a project in the next round.

No it wouldn't. Leading a project is a lot of work, significantly more work than it's worth putting into our website, and we're almost guaranteed to end up with something that takes significantly more work to maintain. We recently moved from WordPress to Google Sites because it's the lowest-effort platform to work with.

Linda Linsefors @ 2025-01-22T16:23 (+2)
  • Most of these suggestions are based on speculation. I'd like a bit more evidence that it would actually make a difference before re-structuring. Funders are welcome to reach out to us.

Responding to myself.

There is one thing (that is mentioned in the post) we know is getting in the way of funding, which is Remmelt's image. But there wouldn't be an AISC without Remmelt. 

I don't expect pretending to be two different programs would help much.

However, donating anonymously is an option. We have had anonymous donations in the past from people who don't want to entangle their reputation with ours.

crunk004 @ 2025-01-22T16:08 (+2) in response to Should EAs help employees in the developing world move to the West?

I think not adopting policies to help people immigrate, or not helping them do so, would be a very tough sell, given (my impression, at least, of) the overwhelmingly strong evidence of immigration's effects on quality of life and economic growth - I was under the impression that the evidence was pretty strong on the "brain drain=good" side, though I could be wrong. An important part of being EA is being evidence-based, and I'd need to see evidence that brain drain is actually bad on net.

This also seems very morally problematic - "US passport for me but not for thee" doesn't seem like something I would be comfortable supporting ethically without very strong evidence otherwise. Forcing someone to work and live somewhere against their will seems really bad. I wouldn't want to be plucked up, moved to a developing country, be forced to work, and told I couldn't leave, and I'd encourage people to not do that to others as well.

Chris Leong @ 2025-01-22T05:46 (+7) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

Thanks for this post. I think it makes some great suggestions about how AI Safety Camp could become a more favorable funding target. One thing I'll add, I think it would be valuable for AI Safety Camp to refresh its website in order to make it look more professional and polished. The easiest way to accomplish this would be to make it a project in the next round.

Regarding research leads, I don't think they should focus too much on prestige as they wouldn't be able to compete on this front, and I think a core part of their value proposition is providing the infrastructure to host "wild and ambitious projects". That said, I'm not suggesting that they should only host projects along these lines. I think it's valuable for AI Safety Camp to also host a bunch of solid and less speculative projects for various reasons (not excessively distorting the ecosystem towards wild ideas, reducing the chance of people bouncing off AI safety completely, providing folks with the potential to be talented research leads the opportunity to build the cred to be a lead for a more prestigious program), but more for balance, rather than this being the core value that they aim to deliver.

Regarding the funding, I suspect that setting the funding goal to $300,000 likely depresses fundraising as it primes people towards thinking their donation wouldn't make a difference. It's very easy for people to overlook that the minimum funding required is only $15,000.

One last point: you can only write "this may be the last AI Safety camp" so many times. Donors want to know that if they donate to keep it alive, you're going to restructure the program towards something more financially viable. So I'd encourage the organizers to take on board some of the suggestions in this post.

Linda Linsefors @ 2025-01-22T16:03 (+3)

Donors want to know that if they donate to keep it alive, you're going to restructure the program towards something more financially viable

  • Most of these suggestions are based on speculation. I'd like a bit more evidence that it would actually make a difference before re-structuring. Funders are welcome to reach out to us.
  • Funding is currently especially bad. It's possible that if AISC can just survive a bit longer, things will get better.
  • AISC has survived each year since the program started in 2017, which means that just doing what we think is the best program has a pretty good track record of being funded. 

 

I think it would be valuable for AI Safety Camp to refresh its website in order to make it look more professional and polished. The easiest way to accomplish this would be to make it a project in the next round.

No it wouldn't. Leading a project is a lot of work, significantly more work than it's worth putting into our website, and we're almost guaranteed to end up with something that takes significantly more work to maintain. We recently moved from WordPress to Google Sites because it's the lowest-effort platform to work with.

Tobias Häberli @ 2025-01-22T16:02 (+15) in response to Preparing Effective Altruism for an AI-Transformed World

One GHW example: The impact of AI tutoring on educational interventions (via Arjun Panickssery on LessWrong). 

There have been at least 2 studies/impact evaluations of AI tutoring in African countries finding extraordinarily large effects:

Summer 2024 — 15–16-year olds in Nigeria
They had 800 students total. The treatment group studied with GPT-based Microsoft Copilot twice weekly for six weeks, studying English. They were just provided an initial prompt to start chatting—teachers had a minimal “orchestra conductor” role—but they achieved “the equivalent of two years of typical learning in just six weeks.”

 

February–August 2023 — 8–14-year-olds in Ghana
An educational network called Rising Academies tested their WhatsApp-based AI math tutor called Rori with 637 students in Ghana. Students in the treatment group received AI tutors during study hall. After eight months, 25% of the subjects attrited from inconsistent school attendance. Of the remainder, the treatment group increased their scores on a 35-question assessment by 5.13 points versus 2.12 points for the control group. This difference was “approximately equivalent to an extra year of learning” for the treatment group.
 

Should this significantly change how excited EAs are about educational interventions? I don't know, but I've also not seen a discussion of this on the forum (aside from this post about MOOCs & AI tutors, which received ~zero engagement).

Afrodite Theochare @ 2025-01-22T15:34 (+1) in response to On Caring

I love this piece. 

my own thoughts:

The mind does attempt to lift the ‘weight of the world’ or the collective of pain and suffering. Though every attempt cannot be accommodated due to a physical capacity or storage limitation. To accommodate will also mean to paralyse, to withdraw, to disfunction or to cease exist, as the pain is tranferable and yet too much, to bear it can really destroy one’s mind. Some stop the attempt to accommodate the pain at an early stage, from the outside it will look like not caring, some will stop it where they can still bear it and function guided by it, end up perhaps here, trying to do something or anything, or the best thing, and some will not be able to stop it, which will leave them be a carcass of a body accommodating a large sum of pain, completely unable to perform, function, and live. 

Consciously or unconsciously we do decide our capacity for pain. 

Feel free to reach out. If there is anything I can do to help. If you are one of the people experiencing the latter do know you are not alone. 
 

 

Anthony DiGiovanni @ 2025-01-22T15:19 (+2) in response to Maximising expected utility follows from self-evident premises?

To be clear, "preferential gap" in the linked article just means incomplete preferences. The property in question is insensitivity to mild sweetening.

If one was exactly indifferent between 2 outcomes, I believe any improvement/worsening of one of them must make one prefer one of the outcomes over the other

But that's exactly the point — incompleteness is not equivalent to indifference, because when you have an incomplete preference between 2 outcomes it's not the case that a mild improvement/worsening makes you have a strict preference. I don't understand what you think doesn't "make sense in principle" about insensitivity to mild sweetening.

I fully endorse expectational total hedonistic utilitarianism (ETHU) in principle

As in you're 100% certain, and wouldn't put weight on other considerations even as a tiebreaker? That seems extreme. (If, say, you became convinced all your options were incomparable from an ETHU perspective because of cluelessness, you would presumably still all-things-considered-prefer not to do something that injures yourself for no reason.)

Vasco Grilo🔸 @ 2025-01-22T15:34 (+2)

As in you're 100% certain, and wouldn't put weight on other considerations even as a tiebreaker?

Yes.

(If, say, you became convinced all your options were incomparable from an ETHU perspective because of cluelessness, you would presumably still all-things-considered-prefer not to do something that injures yourself for no reason.)

Injuring myself can very easily be assessed under ETHU. It directly affects my mental states, and those of others via decreasing my productivity.

Berke @ 2025-01-22T15:24 (+5) in response to Preparing Effective Altruism for an AI-Transformed World

Strong upvote! I want to say some stuff particularly within the context of global development:

The intersection of AI and global development seems surprisingly unsaturated within EA, or to be more specific, I think surprisingly few EAs think about the following questions:

i) How to leverage AI for development (e.g. AI tools for education, healthcare)  
ii) What interventions and strategies should be prioritized within global health and development in the light of AI developments? (basically the question you ask)

There seem to be a lot of people thinking about the first question outside of EA, so maybe that explains this dynamic, but I have the "hunch" that the primary reason people don't focus much on the first question is too much deferring and selection effects, rather than a lack of any high-impact interventions. If you care about TAI, you are very likely to work on AI alignment & governance; if you don't want to work on TAI-related things (due to risk-aversion or any other argument/value), you just don't update that much based on AI developments and forecasts. This may also have to do with EA's ambiguity-averse/risk-averse attitude towards GHD, characterized by exploiting evidence-based interventions rather than exploring new, highly promising interventions. I think if a student/professional were to come to an EA community-builder and ask "How can I pursue a high-impact career in/upskill in global health R&D or AI-for-development?", the number of community-builders who could give a sufficiently helpful answer is likely very few to none; I also likely wouldn't be able to give a good answer or point to communities/resources outside of the EA community. 

(Maybe EAs in London or SF discuss these, but I don't see any discussion of it online, nor do I see any spaces where people who could be discussing these can network/discuss together. If there is anyone who'd like to help create or run an online or in-person AI-for-development or global health R&D fellowship, feel free to shoot a message) 


 

Vasco Grilo🔸 @ 2025-01-21T13:41 (+2) in response to Maximising expected utility follows from self-evident premises?

Thanks, Anthony.

2. Incomplete preferences have at least one qualitatively different property from complete ones, described here, and reality doesn't force you to violate this property.

I read the section you linked, and I understand preferential gaps are the property of incomplete preferences which you are referring to. I do not think preferential gaps make sense in principle. If one was exactly indifferent between 2 outcomes, I believe any improvement/worsening of one of them must make one prefer one of the outcomes over the other. At the same time, if one is roughly indifferent between 2 outcomes, a sufficiently small improvement/worsening of one of them will still lead to one being practically indifferent between them. For example, although I think i) 1 $ plus a 10^-100 chance of an additional 1 $ is clearly better than ii) 1 $, I am practically indifferent between i) and ii), because the value of 10^-100 $ is negligible.

3. Not that you're claiming this directly, but just to flag, because in my experience people often conflate these things: Even if in some sense your all-things-considered preferences need to be complete, this doesn't mean your preferences w.r.t. your first-order axiology need to be complete.

Both are complete for me, as I fully endorse expectational total hedonistic utilitarianism (ETHU) in principle. In practice, I think it is useful to rely on heuristics from other moral theories to make better decisions under ETHU. I believe the categorical imperative is a great one, for example, although it is very central to deontology

Anthony DiGiovanni @ 2025-01-22T15:19 (+2)

To be clear, "preferential gap" in the linked article just means incomplete preferences. The property in question is insensitivity to mild sweetening.

If one was exactly indifferent between 2 outcomes, I believe any improvement/worsening of one of them must make one prefer one of the outcomes over the other

But that's exactly the point — incompleteness is not equivalent to indifference, because when you have an incomplete preference between 2 outcomes it's not the case that a mild improvement/worsening makes you have a strict preference. I don't understand what you think doesn't "make sense in principle" about insensitivity to mild sweetening.

I fully endorse expectational total hedonistic utilitarianism (ETHU) in principle

As in you're 100% certain, and wouldn't put weight on other considerations even as a tiebreaker? That seems extreme. (If, say, you became convinced all your options were incomparable from an ETHU perspective because of cluelessness, you would presumably still all-things-considered-prefer not to do something that injures yourself for no reason.)

Andreas Chrysopoulos @ 2025-01-22T12:02 (+1) in response to Announcing RISE: A Community-Centered Wellbeing & Growth Platform for EAs

It's a great concept, but there's a reason the EA Virtual Programs are live programs. I think something like this in a live program format would be much more successful in having a positive impact in the lives of EAs.

NoamShwartz @ 2025-01-22T15:12 (+7)

Thank you for sharing! I completely agree that live sessions have significant advantages. In fact, I see some of the RISE tools as an excellent way to complement and deepen live processes rather than replace them - the ability to tailor RISE to specific topics can also be used to tailor it to specific programs and contexts.
On the other hand, many people prefer to practice at their own pace or reflect individually. Another important consideration is the limited capacity and higher cost of live programs, which digital tools can help address by making the content more accessible to a wider audience.

David_Moss @ 2025-01-22T13:09 (+8) in response to Preparing Effective Altruism for an AI-Transformed World

Many people believe that AI will be transformative, but choose not to work on it due to factors such as a (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.

There may be various other reasons why people choose to work on other areas, despite believing transformative AI is very likely, e.g. decision-theoretic or normative/meta-normative uncertainty.

Tobias Häberli @ 2025-01-22T14:51 (+4)

Thanks for adding this! I definitely didn’t want to suggest the list of reasons was exhaustive or that the division between the two 'camps' is clear-cut.

Jaime Sevilla @ 2021-09-08T16:12 (+20) in response to When pooling forecasts, use the geometric mean of odds

Let's work this example through together! (but I will change the quantities to 10 and 20 for numerical stability reasons)

One thing we need to be careful with is not mixing the implied beliefs with the object level claims.

In this case, person A's claim that the value is 10 is more accurately a claim that the beliefs of person A can be summed up as some distribution over the positive numbers, eg a log normal with parameters $\mu_A = \ln 10$ and $\sigma_A$. So the density distribution of beliefs of A is $f_A(x) = \frac{1}{x \sigma_A \sqrt{2\pi}} \exp\!\left(-\frac{(\ln x - \mu_A)^2}{2\sigma_A^2}\right)$ (and similar for person B, with $\mu_B = \ln 20$ and $\sigma_B$). The scale parameters $\sigma_A, \sigma_B$ intuitively represent the uncertainty of person A and person B.

Taking $\sigma_A = \sigma_B = 0.1$, these densities look like: [plot of the two densities omitted]

Note that the mean of these distributions is slightly displaced upwards from the median $e^{\mu}$. Concretely, the mean is computed as $e^{\mu + \sigma^2/2}$, and equals 10.05 and 20.10 for person A and person B respectively.

To aggregate the distributions, we can use the generalization of the geometric mean of odds referred to in footnote [1] of the post.

According to that, the aggregated distribution has a density $p(x) \propto \sqrt{f_A(x)\, f_B(x)}$.

The plot of the aggregated density looks like: [plot omitted]

I actually notice that I am very surprised about this - I expected the aggregate distribution to be bimodal, but here it seems to have a single peak.

For this particular example, a numerical approximation of the expected value seems to equal around 14.21 - which exactly equals the geometric mean of the means.

I am not taking away any solid conclusions from this exercise - I notice I am still very confused about what the aggregated distribution looks like, and I encountered serious numerical stability issues when changing the parameters, which make me suspect a bug.

Maybe a Monte Carlo approach for estimating the expected value would solve the stability issues - I'll see if I can get around to that at some point.

Meanwhile, here is my code for the results above.

EDIT: Diego Chicharro has pointed out to me that the expected value can be easily computed analytically in Mathematica.

The resulting expected value of the aggregated distribution is $\exp\!\left(\frac{\mu_A \sigma_B^2 + \mu_B \sigma_A^2}{\sigma_A^2 + \sigma_B^2} + \frac{\sigma_A^2 \sigma_B^2}{\sigma_A^2 + \sigma_B^2}\right)$.

In the case where $\sigma_A = \sigma_B$ we have then that the expected value is $\sqrt{e^{\mu_A + \sigma_A^2/2} \cdot e^{\mu_B + \sigma_B^2/2}}$, which is exactly the geometric mean of the expected values of the individual predictions.
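
(A minimal numerical sketch of the same calculation in Python/NumPy, not the code linked above; the grid bounds are an assumption chosen to cover essentially all of the pooled density's mass.)

```python
import numpy as np

# Forecasters A and B: lognormal beliefs with medians 10 and 20, both with sigma = 0.1.
mu_a, mu_b, sigma = np.log(10), np.log(20), 0.1

def pdf(x, mu):
    # Lognormal density with log-mean mu and log-sd sigma.
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

x = np.linspace(5, 50, 200_001)            # grid covering essentially all of the pooled mass (assumption)
w = np.sqrt(pdf(x, mu_a) * pdf(x, mu_b))   # unnormalised geometric pool of the two densities

pooled_mean = (x * w).sum() / w.sum()      # expectation under the normalised pooled density
geo_mean_of_means = np.sqrt(10 * np.exp(sigma**2 / 2) * 20 * np.exp(sigma**2 / 2))

print(round(pooled_mean, 2), round(geo_mean_of_means, 2))  # both print ~14.21
```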

Vasco Grilo🔸 @ 2025-01-22T14:49 (+2)

Thanks, Jaime!

In the case where $\sigma_A = \sigma_B$ we have then that the expected value is $\sqrt{e^{\mu_A + \sigma_A^2/2} \cdot e^{\mu_B + \sigma_B^2/2}}$, which is exactly the geometric mean of the expected values of the individual predictions.

I have checked this generalises. If all the lognormals have logarithms whose standard deviation is the same, the mean of the aggregated distribution is the geometric mean of the means of the input distributions.
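
(A quick numerical sanity check of this claim, as a sketch with made-up medians and a shared sigma; this is not the exact check referred to above.)

```python
import numpy as np

sigma = 0.3
medians = np.array([5.0, 12.0, 40.0, 90.0])   # hypothetical forecasts, all with the same log-sd
mus = np.log(medians)

def pdf(x, mu):
    # Lognormal density with log-mean mu and log-sd sigma.
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))

x = np.linspace(0.5, 500, 1_000_001)                               # grid covering the pooled mass
w = np.prod([pdf(x, mu) for mu in mus], axis=0) ** (1 / len(mus))  # geometric pool of all densities

pooled_mean = (x * w).sum() / w.sum()
geo_mean_of_means = np.exp(mus.mean() + sigma ** 2 / 2)            # geometric mean of the individual means

print(pooled_mean, geo_mean_of_means)  # the two agree up to grid error
```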

Kevin Xia 🔸 @ 2025-01-22T13:47 (+13) in response to Preparing Effective Altruism for an AI-Transformed World

I think you make a really important point! You/anyone else interested in this may be interested in talking to @Constance Li about her work with @AI for Animals (Website)

Will Aldred @ 2025-01-22T13:29 (+9) in response to Preparing Effective Altruism for an AI-Transformed World

+1. I appreciated @RobertM’s articulation of this problem for animal welfare in particular:

I think the interventions for ensuring that animal welfare is good after we hit transformative AI probably look very different from interventions in the pretty small slice of worlds where the world looks very boring in a few decades.

If we achieve transformative AI and then don’t all die (because we solved alignment), then I don’t think the world will continue to have an “agricultural industry” in any meaningful sense (or, really, any other traditional industry; strong nanotech seems like it ought to let you solve for nearly everything else). Even if the economics and sociology work out such that some people will want to continue farming real animals instead of enjoying the much cheaper cultured meat of vastly superior quality, there will be approximately nobody interested in ensuring those animals are suffering, and the cost for ensuring that they don’t suffer will be trivial.

[...] if you think it’s at all plausible that we achieve TAI in a way that locks in reflectively-unendorsed values which lead to huge quantities of animal suffering, that seems like it ought to dominate effectively all other considerations in terms of interventions w.r.t. future animal welfare.

I’ve actually tried asking/questioning a few animal welfare folks for their takes here, but I’ve yet to hear back anything that sounded compelling (to me). (If anyone reading this has an argument for why ‘standard’ animal welfare interventions are robust to the above, then I’d love to hear it!)

Soemano Zeijlmans @ 2025-01-22T13:28 (+2) in response to Ways the world is getting better - discussion thread

Even though the Trump presidency denies the consensus on and importance of climate change, there could still be ways to make progress: https://effectiveenvironmentalism.substack.com/p/can-we-make-climate-progress-under 

Chris Leong @ 2025-01-22T13:26 (+4) in response to Preparing Effective Altruism for an AI-Transformed World

I gave this a strong upvote because regardless of whether or not you agree with these timelines or Tobias' conclusion, this is a discussion that the community needs to be having. As in, it's hard to argue that the possibility of this is remote enough these days that it makes sense to ignore it.

I would love to see someone running a course focusing on this (something broader than the AI Safety Fundamentals course). Obviously this is speculative, but I wouldn't be surprised if the EA Infrastructure Fund were interested in funding a high-quality proposal to create such a course.

bruce @ 2025-01-19T08:25 (+37) in response to bruce's Quick takes

Reposting from LessWrong, for people who might be less active there:[1]

TL;DR

  • FrontierMath was funded by OpenAI[2]
  • This was not publicly disclosed until December 20th, the date of OpenAI's o3 announcement, including in earlier versions of the arXiv paper where this was eventually made public.
  • There was allegedly no active communication about this funding to the mathematicians contributing to the project before December 20th, due to the NDAs Epoch signed, but also no communication after the 20th, once the NDAs had expired.
  • OP claims that "I have heard second-hand that OpenAI does have access to exercises and answers and that they use them for validation. I am not aware of an agreement between Epoch AI and OpenAI that prohibits using this dataset for training if they wanted to, and have slight evidence against such an agreement existing."

Tamay's response:

  • Seems to have confirmed the OpenAI funding + NDA restrictions
  • Claims OpenAI has "access to a large fraction of FrontierMath problems and solutions, with the exception of a unseen-by-OpenAI hold-out set that enables us to independently verify model capabilities."
    • They also have "a verbal agreement that these materials will not be used in model training."


Edit: Elliot (the project lead) points out that the holdout set does not yet exist (emphasis added): 

As for where the o3 score on FM stands: yes I believe OAI has been accurate with their reporting on it, but Epoch can't vouch for it until we independently evaluate the model using the holdout set we are developing.[3]

============

Some quick uncertainties I had:

  • What does this mean for OpenAI's 25% score on the benchmark?
  • What steps did Epoch take or consider taking to improve transparency between the time they were offered the NDA and the time of signing the NDA?
  • What is Epoch's level of confidence that OpenAI will keep to their verbal agreement to not use these materials in model training, both in some technically true sense, and in a broader interpretation of an agreement? (see e.g. bottom paragraph of Ozzi's comment).
  1. ^

    Epistemic status: quickly summarised + liberally copy pasted with ~0 additional fact checking given Tamay's replies in the comment section

  2. ^

    arXiv v5 (Dec 20th version) "We gratefully acknowledge OpenAI for their support in creating the benchmark."

  3. ^

    See clarification in case you interpreted Tamay's comments (e.g. that OpenAI "do not have access to a separate holdout set that serves as an additional safeguard for independent verification") to mean that the holdout set already exists

NunoSempere @ 2025-01-22T13:10 (+5)

I've known Jaime for about ten years. Seems like he made an arguably wrong call when first dealing with real powaah, but overall I'm confident his heart is in the right place.

David_Moss @ 2025-01-22T13:09 (+8) in response to Preparing Effective Altruism for an AI-Transformed World

Many people believe that AI will be transformative, but choose not to work on it due to factors such as a (perceived) lack of personal fit or opportunity, personal circumstances, or other practical considerations.

There may be various other reasons why people choose to work on other areas, despite believing transformative AI is very likely, e.g. decision-theoretic or normative/meta-normative uncertainty.

Cullen 🔸 @ 2025-01-22T13:01 (+37) in response to Cullen's Quick takes

Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but still seems like a more significant indicator than many people seem to be tracking.

freedomandutility @ 2025-01-22T12:08 (+4) in response to Long-distance development policy

I’d add any low hanging fruit in climate advocacy to this list. The costs of carbon emissions in rich countries are disproportionately borne by poorer countries.


Also policies which shift R&D investment towards infectious diseases, where spillovers to poorer countries are likely to be larger.

Andreas Chrysopoulos @ 2025-01-22T12:02 (+1) in response to Announcing RISE: A Community-Centered Wellbeing & Growth Platform for EAs

It's a great concept, but there's a reason the EA Virtual Programs are live programs. I think something like this in a live program format would be much more successful in having a positive impact in the lives of EAs.

natasha-ahuja @ 2025-01-22T11:14 (+3) in response to Ways the world is getting better - discussion thread

It's great to see a bunch of OWID charts here. For those interested, Nick Kristof does an article reflecting on the year gone by nearly every year. Here is his most recent one!

I really liked the way he ended the article:

I’m a backpacker, and sometimes, on a steep slog uphill through pelting rain or snow, it’s good to rest against a tree for a moment and try to remember that hiking is fun — to recharge myself for the next push uphill. That’s likewise the usefulness of a periodic reminder that the arc of human progress is still evident in metrics that matter most, such as the risk of a child dying, and that we truly can get over the next damn hill.

Toby Tremlett🔹 @ 2025-01-22T11:13 (+2) in response to Ways the world is getting better - discussion thread

EAs continue to approach causes that are new (to us) with beginner's mind, and I'm continually motivated by it. Some examples:
- ARMoR's great work on anti-microbial resistance.
- This group of volunteers approaching screwworms from an animal welfare point of view.
-  (the last example in the three has now slipped my mind; this list is incomplete, you can help by expanding it)
These ideas are new, and they could always fail, or encounter some roadblock which causes those involved to switch to other paths to impact. But I love that EA continues to inspire people to look at the world's problems afresh, and find new ways to solve them. Keep going!

Toby Tremlett🔹 @ 2025-01-22T11:02 (+5) in response to Ways the world is getting better - discussion thread

The cost of transfer fees for remittances (specifically money sent back by migrants to their home country) has fallen from around 8% on average in 2011 to around 6% on average today. That means billions more for people on low incomes around the world. Pretty cool. 
[Chart: "Sending money to the Global South has become cheaper": average remittance fees paid by migrants sending money to Africa, South America, and Asia, 2011–2020. Fees declined in all three regions over the decade but remain above the UN's 3% target for 2030. Source: World Bank (2024).]
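
(As a rough illustration of why a two-percentage-point drop matters: the annual flow below is an assumed round number for illustration only, not a figure from the chart.)

```python
# Back-of-the-envelope: what a 2-percentage-point fee drop is worth per year.
# ASSUMPTION for illustration: roughly $600B/year in remittances to the Global South.
annual_remittances = 600e9
fee_then, fee_now = 0.08, 0.06  # the average fees mentioned above

savings = annual_remittances * (fee_then - fee_now)
print(f"~${savings / 1e9:.0f}B more reaching recipients each year under this assumption")  # ~$12B
```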

titotal @ 2025-01-22T10:02 (+5) in response to titotal's Quick takes

I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that 

  1. AI will be a revolutionary technology that affects nearly every aspect of society.
  2. Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised. 

I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big business, pro-regulation ideas. If EA doesn’t embrace this reality, probably some other left-wing anti-AI movement is going to pop up, and it’s going to leave you in the dust. 

Katrina Loewy @ 2025-01-21T18:35 (+9) in response to Cost-effectiveness of paying farmers to use more humane pesticides to decrease the suffering of wild insects

Thank you for taking a stab at analyzing pest control interventions! 

Pesticides differ widely in their impact on non-target wildlife, including widespread, painful, sub-lethal effects. I think that these impacts should be included when ranking pesticides by welfare footprint. 

Vasco Grilo🔸 @ 2025-01-22T09:57 (+3)

Thanks for the comment, and welcome to the EA Forum, Katrina! Great point. I speculated the effects on target species make cost-effectiveness 50 % as large[1], but I have little idea about how accurate this is, and which pesticides achieve a better combination between effects on target and non-target species. I assume WAI is doing research which can inform this.

  1. ^

    This can be thought of as the mean of a uniform distribution ranging from -0.5 to 1.5.

NickLaing @ 2025-01-22T05:07 (+4) in response to Should EAs help employees in the developing world move to the West?

One quick response I have is that Poland is a bit of a straw man - much smaller numbers go back to very poor countries like Nigeria.

Tym 🔸 @ 2025-01-22T09:51 (+5)

Yeah the quality of life in Poland is ahead of most of the world, and in most comparisons there's no equivalence in circumstances. The Poland vs developing economy GDP per capita differences range from ~10x (Nigeria) to ~30x (Niger, CAR). 

I re-examined my Syria example and I think many of the returnees could feasibly be individuals with very poor economic prospects in their host countries—specifically, those in the bottom quartiles of incomes in Lebanon, Iraq, or Jordan, which collectively host 1.6 million Syrians. Some of these individuals may have also lived in camps, which total 275,000 people (though these two figures overlap). For them and those who have left in the past few months of fighting, returning to Syria could offer an opportunity to start better lives and they are likely to be the bulk of the 1 million returnees in the first 6 months.

My argument for returns was more focused on the idea that if these developing countries experience economic booms, people might choose to return there, even if the countries are still somewhat poorer. But this would be a more long-term consideration; it's hard to predict, and brain drains by definition make this less likely to happen. Nevertheless, this scenario seems particularly relevant to modern South American examples, like Guyana: about 50% of Guyanese people are part of the diaspora. If Guyana's recent economic boom is sustained and well-redistributed, and if the government manages to defend their borders (big asks, I know), it could potentially bring many Guyanese back.

All the numbers are quite rough. 

Chris Leong @ 2025-01-22T05:46 (+7) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

Thanks for this post. I think it makes some great suggestions about how AI Safety Camp could become a more favorable funding target. One thing I'll add: I think it would be valuable for AI Safety Camp to refresh its website in order to make it look more professional and polished. The easiest way to accomplish this would be to make it a project in the next round.

Regarding research leads, I don't think they should focus too much on prestige as they wouldn't be able to compete on this front, and I think a core part of their value proposition is providing the infrastructure to host "wild and ambitious projects". That said, I'm not suggesting that they should only host projects along these lines. I think it's valuable for AI Safety Camp to also host a bunch of solid and less speculative projects for various reasons (not excessively distorting the ecosystem towards wild ideas, reducing the chance of people bouncing off AI safety completely, giving folks who could be talented research leads the opportunity to build the cred to lead a more prestigious program), but more for balance, rather than this being the core value that they aim to deliver.

Regarding the funding, I suspect that setting the funding goal to $300,000 likely depresses fundraising as it primes people towards thinking their donation wouldn't make a difference. It's very easy for people to overlook that the minimum funding required is only $15,000.

One last point: you can only write "this may be the last AI Safety camp" so many times. Donors want to know that if they donate to keep it alive, you're going to restructure the program towards something more financially viable. So I'd encourage the organizers to take on board some of the suggestions in this post.

gergo @ 2025-01-22T09:21 (+4)

Thanks for sharing your thoughts, Chris, I think you made some great additional suggestions. I also agree that AISC shouldn't try to compete on the prestige front too much; it complements SPAR nicely, which takes a more top-down approach and only(?) hosts established researchers as leads.

David Hammerle @ 2025-01-22T02:16 (+1) in response to Why aren't relocated births accounted for in cost-effectiveness analyses of family planning charities?

I read the article you posted a link to, but I still think maximizing average welfare is a good policy goal.  To me it seems like maximizing average welfare is entirely what altruism is about.  The way I think of it is that each person, when the person is created, has an equal chance of being any one person that has ever or will ever live, so we want such possibilities to be as good as possible, on average.

The first issue that the article describes is that if there were only one person enduring a lot of suffering, the world could be improved by adding a bunch more people also enduring a lot of suffering, but a little bit less.  To me that seems correct.  In that world, having a chance to live a life that involves a little bit less suffering would be an improvement.  Also, oddly, those people would exist in a sort of backwards world where the objective is not to live as long as possible, but to live as short a life as possible.

The second issue that the article describes is that adding a bunch of lives with positive welfare, but less than the average, could be worse than adding only one life with a very negative amount of welfare.  Here, again, this makes sense to me.  It's less of a problem to have a very small chance of living the one really bad life than a much larger chance of living a life that is worse than the average, but not as much worse.

But thanks for the reply.  I didn't realize this was so much in contention.  It's good to know.

Regardless, could you possibly tell me which utilitarian theory you ascribe to, and how it would or wouldn't apply to my question regarding family planning charities?  To me it still seems like avoiding that 7.6% chance of dying before the age of 5 years old is a really great advantage of family planning charities in sub-Saharan Africa.

David Mathers🔸 @ 2025-01-22T09:19 (+2)

I'm not sure I subscribe to any form of utilitarianism, and I'm not sure what my view in population ethics is. But I am confident that the mere fact that a life would be below average well-being does not make adding it to the world a bad thing. 

Manuel Allgaier @ 2025-01-22T08:31 (+4) in response to Overcome: Growth and Marginal Cost-Effectiveness Data

FYI: Their website is www.overcome.org.uk 

(Sharing it as I had a bit of trouble finding it, it's not linked in the post and not super easy to Google as there are other therapy services with the same name. I derived it from the email you linked.)

This seems valuable and cost-effective, I hope you reached your funding goals! 

huw @ 2025-01-21T22:17 (+9) in response to Mo Putera's Quick takes

Someone noted that at the rate of US GHD spending, this would cost ~12,000 counterfactual lives. A tremendous tragedy.

Mo Putera @ 2025-01-22T06:56 (+3)

That's heartbreaking. Thanks for the pointer.

Jamie Huang @ 2025-01-22T06:28 (+4) in response to Ways the world is getting better - discussion thread

Games have gotten cheaper over time. Real prices for console video games declined approximately 40% between 1990 and now.

jojo_lee @ 2025-01-17T13:30 (+1) in response to Bad omens for US farmed animal policy work?

I think you put the wrong link in the link to JamesOz's comment at the start of the post!

Tyler Johnston @ 2025-01-22T05:53 (+2)

I did indeed. Thanks for noticing, fixed!

Chris Leong @ 2025-01-22T05:46 (+7) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

Thanks for this post. I think it makes some great suggestions about how AI Safety Camp could become a more favorable funding target. One thing I'll add: I think it would be valuable for AI Safety Camp to refresh its website in order to make it look more professional and polished. The easiest way to accomplish this would be to make it a project in the next round.

Regarding research leads, I don't think they should focus too much on prestige as they wouldn't be able to compete on this front, and I think a core part of their value proposition is providing the infrastructure to host "wild and ambitious projects". That said, I'm not suggesting that they should only host projects along these lines. I think it's valuable for AI Safety Camp to also host a bunch of solid and less speculative projects for various reasons (not excessively distorting the ecosystem towards wild ideas, reducing the chance of people bouncing off AI safety completely, giving folks who could be talented research leads the opportunity to build the cred to lead a more prestigious program), but more for balance, rather than this being the core value that they aim to deliver.

Regarding the funding, I suspect that setting the funding goal to $300,000 likely depresses fundraising as it primes people towards thinking their donation wouldn't make a difference. It's very easy for people to overlook that the minimum funding required is only $15,000.

One last point: you can only write "this may be the last AI Safety camp" so many times. Donors want to know that if they donate to keep it alive, you're going to restructure the program towards something more financially viable. So I'd encourage the organizers to take on board some of the suggestions in this post.

Larks @ 2025-01-22T05:23 (+4) in response to Voluntary Salary Reduction

A possible comparison is to dollar-a-year men, successful business leaders who go to work for the government for basically zero.

Tym 🔸 @ 2025-01-21T22:14 (+4) in response to Should EAs help employees in the developing world move to the West?

It's a very thoughtful set of questions! 

Firstly, I think you would be interested to know about Malengo, a charity which is helping people in impoverished provinces in Uganda to enrol in German universities and eventually settle there. They often seek out volunteers to mentor these prospective students; it's very rewarding. 

Re brain-drain I have 4 thoughts.

TLDR they are 

1) Lots of talent doesn't flourish in their home countries
2) Advocating for specific visa-pathways can give much more win-win opportunities for all involved
3) Many people go back to home countries when they have the choice/credible opportunity to do so
4) Moral argument, my strong passport is awesome, I have a right to have options as to where I live, others deserve it too

Long version:

1) Is it really a brain drain if talent would be counterfactually lost? For every ambitious underemployed dishwasher in the US there are likely many more people who were born in the wrong place, time and/or body/sexuality/religious family to ever have a fair opportunity to grow and make an impact.  Often it's simply 'brain allocation', and the remittances sent home can have a greater impact than the immigrant would have had if they had stayed and not found a good role in the home country. 

2) Advocating for policies of specialised visa pathways can largely be win-win without brain-draining effects. Let's say a hypothetical rich country has a shortage of nurses and a developing country has an over-supply of nursing graduates (rare scenario, I know) / can really up the number of trained nurses in a few years. A specialised visa pathway can heavily benefit both countries. 

3) Often people will come back when given the chance. About half if not most Polish nationals in the UK have left the country after 2018, largely back to Poland, despite the UK still having a much larger GDP per capita. The UN predicts 1 million Syrians will return in the first ~7 months after the end of the civil war; in 14 years 6.7 million left the country, so that's quite a significant fraction for such a short amount of time. Many people want to be in their original home country in the long term (of course the UN prediction may be wrong).  The perceived opportunities/trajectories within countries, as well as the cultural ties, make people come back.  

4) There's a moral argument here. My entire life was determined by my parents' freedom to move freely within European Union borders. Many of their peers did not make the same choice and stayed in Poland. It's excellent that they had the right to make that choice. I believe people in developing states should also have that right, and it's the responsibility of governments to give them reasons to stay.

NickLaing @ 2025-01-22T05:07 (+4)

One quick response I have is that Poland is a bit of a straw man - much smaller numbers go back to very poor countries like Nigeria.

NickLaing @ 2025-01-22T05:04 (+7) in response to Should EAs help employees in the developing world move to the West?

I think this is a huge and under-recognised problem  with migration - that the very best people who could have made the biggest difference transforming their country end up leaving, mostly doing far less transformative and "cruxy" work in western countries. I live in Uganda and have seen the same phenomenon. 

The strongest pro-immigration argument is usually that we should support migration because remittances are so important and the good done by that can overcome the harms of "brain drain". If the very best people leave though, I think the negative effect can be enormous.

Also see my comment here on this article by Lauren, along very similar lines.

"The best people leave, people that could be innovating, inspiring, leading and starting the best businesses that could grow the country. When you skim off the top 1%, you can "replace" them by training others, but you can't replace their natural brilliant traits that could have led them to transform their countries."

https://www.laurenpolicy.com/p/why-brain-drain-isnt-something-we

This issue was also discussed a little in the comments about my wee piece here.
https://forum.effectivealtruism.org/posts/9TnGxtSjjdpeufaqs/is-nigerian-nurse-emigration-really-a-win-win-critique-of-a

yanni @ 2025-01-20T02:08 (+6) in response to bruce's Quick takes

first funding, then talent, then PR, and now this.

how much juice will OpenAI squeeze out of EA?

NickLaing @ 2025-01-22T04:05 (–1)

It's OK man because Sam has promised to donate 500 million a year to EA causes!

David Mathers🔸 @ 2025-01-20T11:09 (+5) in response to Why aren't relocated births accounted for in cost-effectiveness analyses of family planning charities?

Maximizing average welfare is a bad policy goal: https://utilitarianism.net/population-ethics/#:~:text=The%20average%20view%2C%20variable%20value,lives%20with%20positive%20well%2Dbeing.

David Hammerle @ 2025-01-22T02:16 (+1)

I read the article you posted a link to, but I still think maximizing average welfare is a good policy goal.  To me it seems like maximizing average welfare is entirely what altruism is about.  The way I think of it is that each person, when the person is created, has an equal chance of being any one person that has ever or will ever live, so we want such possibilities to be as good as possible, on average.

The first issue that the article describes is that if there were only one person enduring a lot of suffering, the world could be improved by adding a bunch more people also enduring a lot of suffering, but a little bit less.  To me that seems correct.  In that world, having a chance to live a life that involves a little bit less suffering would be an improvement.  Also, oddly, those people would exist in a sort of backwards world where the objective is not to live as long as possible, but to live as short a life as possible.

The second issue that the article describes is that adding a bunch of lives with positive welfare, but less than the average, could be worse than adding only one life with a very negative amount of welfare.  Here, again, this makes sense to me.  It's less of a problem to have a very small chance of living the one really bad life than a much larger chance of living a life that is worse than the average, but not as much worse.

But thanks for the reply.  I didn't realize this was so much in contention.  It's good to know.

Regardless, could you possibly tell me which utilitarian theory you ascribe to, and how it would or wouldn't apply to my question regarding family planning charities?  To me it still seems like avoiding that 7.6% chance of dying before the age of 5 years old is a really great advantage of family planning charities in sub-Saharan Africa.

Eli Rose @ 2025-01-22T01:27 (+10) in response to Upcoming changes to Open Philanthropy's university group funding

I edited this post on January 21, 2025, to reflect that we are continuing funding stipends for graduate student organizers for non-EA groups, while stopping funding stipends for undergraduate student organizers. I think that paying grad students for their time is less unconventional than for undergraduates, and also that their opportunity cost is higher on average. Ignoring this distinction was an oversight in the original post.

Simon Holm @ 2025-01-13T20:53 (+10) in response to What are we doing about the EA Forum? (Jan 2025)

If you view the forum from a UX lens and put it in the context of different categories of online community infrastructures (e.g. Facebook/Twitter feed of short posts, Discord/Slack channel-based, Quora/StackExchange/Reddit upvote/question-based and more traditional forums with defined categories/subcategories and threads), what do you think are the pros and cons of how the Forum is currently structured and how does that facilitate (or not facilitate) what you would like to see happen in online EA community building? Would also be curious to hear how you would compare the Forum to that of the many existing EA Slack channels.

Sarah Cheng @ 2025-01-22T00:57 (+2)

There's a lot I could say here, but I'll try to keep it brief, so this is a bit of a disorganized list. :)

Pros:

  • I think the Forum UX takes bits from other platforms, which enables a bunch of different kinds of interactions (like Question posts, quick takes, longform posts, and reacts).
  • In general we are able to build the UX in such a way that respects users more than most other platforms (like we don't have paid ads and we are not optimizing for clicks or engagement hours).
  • I think separating out karma voting from agree/disagree is important for enabling productive discussions and disagreements.
  • I value openness and accessibility and I appreciate that the Forum is extremely open by default (for example, relative to slack).
  • The fact that discussions here are less transient than say, Twitter, means that we're better able to build common knowledge, and I think it makes discussions feel more like they matter (so people are more willing to put effort into their writing and adhere to high standards).
  • Personally, I think it's good to keep the Forum broadly a unified space (rather than having channels or subreddits for cause areas) because I want the project of EA to be open to new ideas, and I would worry that too much structured separation would encourage silos.

Cons:

  • Not much of the Forum UX updates in realtime, which to me makes it feel a bit old-fashioned or something, but it's hard to say if that matters.
  • I think it's good to keep the Forum broadly a unified space, but this can be confusing for users, and can cause content that has a niche audience to be overlooked.
  • I think optimizing for engagement/fun would potentially build more community here, but at the cost of our actual goals (something like, "being the version of the Forum that most improves the world").
  • There's probably more we can do to make the Forum UX feel delightful and immediately satisfying without harming users.
ben.smith @ 2025-01-16T05:50 (+1) in response to What are we doing about the EA Forum? (Jan 2025)

You could substantially increase your weekly active users, converting monthly active users (MAU) into weekly and even daily users, and increasing MAU as well, by using push notifications to inform users of replies to their posts and comments and other events that are currently only sent as in-forum notifications to most users. Many, many times, I have posted on the forum, sent a comment or reply, and only weeks later seen that there was a response. On the other hand, I will get an email from twitter or bluesky if one person likes my post, and I immediately go on to see who it was. In doing so you will draw people to the forum at the exact time their engagement will encourage others to come back, building up a positive flywheel of engagement.

These features are already built into your forum but are off by default! This surprised me greatly because most online forums--not only feedscrolling websites like X and Facebook, but also forum-style websites like Substack and Wordpress--make it easy or default to get push notifications via email. That builds engagement as I've described. Often when I post on Tyler Cowen's Wordpress-based Marginal Revolution blog, I get a tonne of email notifications of replies and discussions about that topic. It's a bit overwhelming, but it's fun!

Users who just use your notification default (notifications within the website, but no push notifications) probably make up the vast majority of active users and passive users (if not the most active users). If it is possible to identify users who have not deliberately turned off notifications, I strongly suggest that you flip the default for those users who haven't deliberately set a notification policy, so that push notifications are sent. This will get a small hit from people who dislike this, but you could mitigate this by e.g., an email in your next digest to inform people of why you are making the change.

I have long thought this was a missing feature on EA Forum; now I know it exists, but is turned off.

@titotal said that it's not a lot of fun to post here. I agree, and I also think that making it more immediately rewarding to post, by informing people of others' engagement with their content as soon as it happens, would make it a lot more fun. It will make me personally very happy if you do this!

Sarah Cheng @ 2025-01-22T00:07 (+2)

Thanks for the suggestions!

  • I agree that emailing users more often will probably get them to return to the site more often.
  • I'm less confident than you [sound] that this will have a major effect.
  • Since our team has been focused on software/product for a while and haven't noticeably increased MAUs, I am skeptical that further work in this space will be the magic bullet. For example, we made significant improvements in site speed and didn't see metrics improve as much as we expected.
  • Our team has been more willing to email users recently (for example, about Forum events) and I want to be careful about going too far and annoying users/causing unsubscribes.
    • Honestly I'm not totally sure why basically none of the default notifications include an email, which makes me somewhat nervous to significantly change this. My guess is that you are a bit unusual in finding lots of email notifications fun, and probably more people would find that overwhelming or annoying.
  • That said, we do plan to test out making our default notification settings more in line with other sites (for example, making karma notifications realtime by default instead of batched daily) and sending a delayed email to new users explaining how they can customize their notification settings.
    • We'll certainly consider changing other notification default settings, but again I want to be careful with this, not just because some people would dislike it, but also because ultimately our goal is not to increase usage. I want people to have a healthy relationship with the Forum, and only use it to the extent that they think is worthwhile.
  • I feel like changing the notification settings for existing users is probably crossing a line.


Comments on 2025-01-21

Vasco Grilo🔸 @ 2025-01-04T13:16 (+2) in response to Meaningfully reducing meat consumption is an unsolved problem: meta-analysis

Thanks, Seth. I wonder whether you are underestimating your own implicit knowledge. Would you be indifferent between my guess of 1.5 % and alternatives guesses of 0.015 % and 150 % (the value can be higher than 100 % because there could be effects after 2024)? Feel free to provide a range for the expected reduction if that helps.

Seth Ariel Green @ 2025-01-21T23:35 (+4)

My implicit knowledge on the topic of knowledge production (rather than of Veganuary) is that rosy results like the one you are citing often do not stand up to scrutiny. Maya raised one very salient objection to a gap between the headline interpretation and the data of a past iteration of this work: https://forum.effectivealtruism.org/posts/vg3rxwcu7una8nSpr/veganuary-s-impact-has-been-huge-here-are-the-stats-to-prove?commentId=32xKWjRjgDc4cyaDj

I believe that if I dig into it, I’ll find other, similar issues. Another way to phrase this: I have pessimistic beliefs about nonstatistical sources of uncertainty and/or bias whose magnitude is itself a hard estimation problem. Sorry for such a meta answer…

yanni kyriacos @ 2025-01-21T23:28 (+5) in response to Yanni Kyriacos's Quick takes

AI Safety has less money, talent, political capital, tech and time. We have only one distinct advantage: support from the general public. We need to start working that advantage immediately.

Hafizrajab1 @ 2025-01-21T23:22 (+2) in response to Why AI Safety Camp struggles with fundraising (FBB #2)

AI Safety Camp’s fundraising struggles seem to stem from structural and communication challenges rather than a lack of impact. Issues like broad focus, leadership optics, and stipend allocation create hurdles for donors. Improving project quality, transparency, and framing their case for different funders could help. It’s worth supporting if you believe in their mission—don’t let funding hesitancy from others deter you.

jacquesthibs @ 2025-01-21T23:04 (+7) in response to jacquesthibs's Quick takes

Are you or someone you know:

1) great at building (software) companies
2) care deeply about AI safety
3) open to talk about an opportunity to work together on something

If so, please DM with your background. If someone comes to mind, also DM. I am thinking of a way to build companies in a way that funds AI safety work.

Ian Turner @ 2025-01-21T22:27 (+7) in response to Should EAs help employees in the developing world move to the West?

This question was also discussed in this other forum post, and probably in some other posts that I can’t find. Why Brain Drain Isn't Something We Should Worry About

Mo Putera @ 2025-01-21T05:12 (+38) in response to Mo Putera's Quick takes

I just learned that Trump signed an executive order last night withdrawing the US from the WHO; this is his second attempt to do so. 

WHO thankfully weren't caught totally unprepared. Politico reports that last year they "launched an investment round seeking some $7 billion “to mobilize predictable and flexible resources from a broader base of donors” for the WHO’s core work between 2025 and 2028. As of late last year, the WHO said it had received commitments for at least half that amount".

Full text of the executive order below: 

WITHDRAWING THE UNITED STATES FROM THE WORLD HEALTH ORGANIZATION 

By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered: 

Section 1.  Purpose.  The United States noticed its withdrawal from the World Health Organization (WHO) in 2020 due to the organization’s mishandling of the COVID-19 pandemic that arose out of Wuhan, China, and other global health crises, its failure to adopt urgently needed reforms, and its inability to demonstrate independence from the inappropriate political influence of WHO member states.  In addition, the WHO continues to demand unfairly onerous payments from the United States, far out of proportion with other countries’ assessed payments.  China, with a population of 1.4 billion, has 300 percent of the population of the United States, yet contributes nearly 90 percent less to the WHO.  

Sec. 2.  Actions.  (a)  The United States intends to withdraw from the WHO.  The Presidential Letter to the Secretary-General of the United Nations signed on January 20, 2021, that retracted the United States’ July 6, 2020, notification of withdrawal is revoked.

(b)  Executive Order 13987 of January 25, 2021 (Organizing and Mobilizing the United States Government to Provide a Unified and Effective Response to Combat COVID–19 and to Provide United States Leadership on Global Health and Security), is revoked.

(c)  The Assistant to the President for National Security Affairs shall establish directorates and coordinating mechanisms within the National Security Council apparatus as he deems necessary and appropriate to safeguard public health and fortify biosecurity.

(d)  The Secretary of State and the Director of the Office of Management and Budget shall take appropriate measures, with all practicable speed, to:

(i)    pause the future transfer of any United States Government funds, support, or resources to the WHO;

(ii)   recall and reassign United States Government personnel or contractors working in any capacity with the WHO; and  

(iii)  identify credible and transparent United States and international partners to assume necessary activities previously undertaken by the WHO.

(e)  The Director of the White House Office of Pandemic Preparedness and Response Policy shall review, rescind, and replace the 2024 U.S. Global Health Security Strategy as soon as practicable. 

Sec. 3.  Notification.  The Secretary of State shall immediately inform the Secretary-General of the United Nations, any other applicable depositary, and the leadership of the WHO of the withdrawal.

Sec. 4.  Global System Negotiations.  While withdrawal is in progress, the Secretary of State will cease negotiations on the WHO Pandemic Agreement and the amendments to the International Health Regulations, and actions taken to effectuate such agreement and amendments will have no binding force on the United States.  

Sec. 5.  General Provisions.  (a)  Nothing in this order shall be construed to impair or otherwise affect: 

(i)   the authority granted by law to an executive department or agency, or the head thereof; or 

(ii)  the functions of the Director of the Office of Management and Budget relating to budgetary, administrative, or legislative proposals. 

(b)  This order shall be implemented consistent with applicable law and subject to the availability of appropriations. 

(c)  This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person. 

THE WHITE HOUSE,

    January 20, 2025.

huw @ 2025-01-21T22:17 (+9)

Someone noted that at the rate of US GHD spending, this would cost ~12,000 counterfactual lives. A tremendous tragedy.

Tym 🔸 @ 2025-01-21T22:14 (+4) in response to Should EAs help employees in the developing world move to the West?

It's a very thoughtful set of questions! 

Firstly, I think you would be interested to know about Malengo, a charity which is helping people in impoverished provinces in Uganda to enrol in German universities and eventually settle there. They often seek out volunteers to mentor these prospective students; it's very rewarding. 

Re brain-drain I have 4 thoughts.

TLDR they are 

1) Lots of talent doesn't flourish in their home countries
2) Advocating for specific visa-pathways can give much more win-win opportunities for all involved
3) Many people go back to home countries when they have the choice/credible opportunity to do so
4) Moral argument, my strong passport is awesome, I have a right to have options as to where I live, others deserve it too

Long version:

1) Is it really a brain drain if talent would be counterfactually lost? For every ambitious underemployed dishwasher in the US there are likely many more people who were born in the wrong place, time and/or body/sexuality/religious family to ever have a fair opportunity to grow and make an impact.  Often it's simply 'brain allocation', and the remittances sent home can have a greater impact than the immigrant would have had if they had stayed and not found a good role in the home country. 

2) Advocating for policies of specialised visa pathways can largely be win-win without brain-draining effects. Let's say a hypothetical rich country has a shortage of nurses and a developing country has an over-supply of nursing graduates (rare scenario, I know) / can really up the number of trained nurses in a few years. A specialised visa pathway can heavily benefit both countries. 

3) Often people will come back when given the chance. About half if not most Polish nationals in the UK have left the country after 2018, largely back to Poland, despite the UK still having a much larger GDP per capita. The UN predicts 1 million Syrians will return in the first ~7 months after the end of the civil war; in 14 years 6.7 million left the country, so that's quite a significant fraction for such a short amount of time. Many people want to be in their original home country in the long term (of course the UN prediction may be wrong).  The perceived opportunities/trajectories within countries, as well as the cultural ties, make people come back.  

4) There's a moral argument here. My entire life was determined by my parents' freedom to move freely within European Union borders. Many of their peers did not make the same choice and stayed in Poland. It's excellent that they had the right to make that choice. I believe people in developing states should also have that right, and it's the responsibility of governments to give them reasons to stay.

GoodHorse413🔸 @ 2025-01-19T21:49 (+5) in response to Do you reject axiological hedonism, and what evidence is there for any alternative view?

I'm a long-time committed axiological hedonist and have never believed that pleasure was objectively commensurable with suffering, and I also strongly suspect (but could be wrong) that pleasures are heterogeneous and therefore not all pleasurable experiences are commensurable with each other (and the same with suffering). I find this makes it easier to explain clear cases of ambiguity in ethics, because I think ambiguity is baked into the axiological ground truth. I do believe that some things are objectively good and some things are objectively bad, but there is no universally accessible objective utility function by which you can rank all things from most to least desirable. Recognizing this clarifies weird edge cases where one form or another of utilitarianism seems to lead to a bad result, like symmetric utilitarianism leading to the repugnant conclusion or negative utilitarianism implying that we should destroy the world. These seem to be examples where maximizing hedonistic utility functions leads to bad things happening, because they are. 

Axiological hedonism follows logically from materialist metaphysics and empiricist epistemology. Good and bad are qualities of experiences rather than external events or objects, which is why reasonable people may disagree about whether a song was good. One person's experience of listening to the song was good, while the other person's experience was bad. Projecting qualities like "good" and "bad" onto things besides experiences is to mistake the map for the territory. And if anyone doubts that pleasure is good, then they just haven't experienced the pleasures I have. 

Nunik @ 2025-01-21T20:35 (+1)

We are mostly in agreement, though I don't quite understand what you meant by:

These seem to be examples where maximizing hedonistic utility functions leads to bad things happening, because they are.

If suffering and pleasure are incommensurable, in what way are such outcomes bad?

I would also be interested in your response to the argument that suffering is inherently urgent, while pleasure does not have this quality. Imagine you are incapable of suffering, and you are currently experiencing pleasure. One could say that you would be indifferent to the pleasure being taken away from you (or being increased to a higher level). Now imagine that you are instead incapable of experiencing pleasure, and you are currently suffering. In this case it would arguably be very clear to you that reducing suffering is important.

John Salter @ 2025-01-21T19:47 (+10) in response to We don't want to post again "This might be the last AI Safety Camp"

Two questions I imagine prospective funders would have:

  1. Can you give some indication as to the value of stipends? It's not clear how the benefits trade off against that cost. It's tempting to think that stipends are responsible for >80% of the costs but bring <20% of the benefit.
  2. What would your attendees have been doing otherwise?

     

David T @ 2025-01-20T22:08 (+1) in response to Do you reject axiological hedonism, and what evidence is there for any alternative view?

I'd add that, to the extent conscious experience can be considered "self-evident", only one's own experience of pain and pleasure can be "self-evident" via conscious experience. 

If Nunik's contention is that only things which achieve that experiential level of validation can be assigned intrinsic value, with intuitions carrying zero evidential weight, it seems we would have to disregard our intuitions that other people or creatures might have similar experiences, and attach zero value to their possible pain/pleasure.

I mean, hedonic egoism is a philosophical position, but perhaps not a well-regarded one on a forum for people trying to be altruistic...

Nunik @ 2025-01-21T19:04 (+2)

What I meant is that the disvalue of suffering becomes evident at the moment of experiencing it. Once you know what disvalue is, the next step is figuring out who can experience this disvalue. Given that, for example, you and I have very similar nervous systems and behave similarly in response to noxious stimuli, my subjective probability that you are capable of suffering will be much higher than the probability that a rock can suffer.

Katrina Loewy @ 2025-01-21T18:35 (+9) in response to Cost-effectiveness of paying farmers to use more humane pesticides to decrease the suffering of wild insects

Thank you for taking a stab at analyzing pest control interventions! 

Pesticides differ widely in their impact on non-target wildlife, including widespread, painful, sub-lethal effects. I think that these impacts should be included when ranking pesticides by welfare footprint. 

MichaelStJules @ 2025-01-19T20:51 (+2) in response to Do you reject axiological hedonism, and what evidence is there for any alternative view?

When you perceive a color, is it not self-evident that the color "looks" a certain way? There is no one doing the looking; it just looks. Color and disvalue are properties of conscious experience, and they are real parts of the world. I would say our subjective experience is in fact the "realest" part of the world because there can be no doubt about its existence, whereas we cannot ever be sure what is really "out there" that we are interpreting.

I'm sympathetic to illusionism about phenomenal properties (illusionism about phenomenal consciousness), i.e. I don't believe consciousness is phenomenal, ineffable, intrinsic, qualitative, etc. People often mean phenomenal properties or qualia when they talk about things just looking a certain way. This might cut against your claims here.

However, I suspect there are ways to interpret your statements that are compatible with illusionism. Maybe something like this: your brain undergoes specific patterns of reactions and discriminations in response to inputs, and these patterns are distinctive for different colours. What it means to "look" or "feel" a certain way is just to undergo particular patterns of reactions. And it's wired in or cognitively impenetrable: you don't have direct introspective access to the processes responsible for these patterns of reactions, only to their effects on you.

Furthermore, everything we respond to and are aware of is filtered through these processes, so "we cannot ever be sure what is really "out there" that we are interpreting".

there can be no doubt about its existence

I'm not sure about this. I'd probably want to see a deductive argument for this.

 

If no property of (dis)value existed and couldn't ever exist, then I think it would make no difference at all which outcome is brought about.

I'm not saying values don't exist, I just think they are projected, rather than intrinsic. It can still matter to whatever's doing the projection.

 

The motivational salience you speak of in one of your posts may be a necessary condition for suffering (in humans), but the disvalue is exclusively in the distinct way the experience feels.

This seems to me to be separating the apparent disvalue from one of the crucial mechanisms responsible for (a large share of) the apparent disvalue. Motivational salience is what gives suffering its apparent urgency, and (I think) a big part of what makes suffering feel the way it does. If you got rid of its motivational salience, it would feel very different.

Nunik @ 2025-01-21T18:23 (+1)

I don't think I properly understand your position. You are not sure that you are currently having an experience? Because if you are having an experience, then the experience necessarily exists, otherwise you can't be having it.

SummaryBot @ 2025-01-21T18:15 (+1) in response to Once More, Without Feeling (Andreas Mogensen)

Executive summary: Andreas Mogensen argues for a pluralist theory of moral standing based on welfare subjectivity and autonomy, challenging the necessity of phenomenal consciousness for moral status.

Key points:

  1. Mogensen introduces a pluralist theory that supports moral standing through either welfare subjectivity or autonomy, independent of each other.
  2. He questions the conventional belief that phenomenal consciousness is necessary for moral standing, introducing autonomy as an alternative ground.
  3. The paper distinguishes between the morality of respect and the morality of humanity, highlighting their relevance to different beings.
  4. It explores the possibility that certain beings could be governed solely by the morality of respect without being welfare subjects.
  5. Mogensen outlines conditions for autonomy that do not require welfare subjectivity, suggesting that autonomy alone can merit moral respect.
  6. The implications of this theory for future ethical considerations of AI systems are discussed, stressing the need to revisit the relationship between consciousness and moral standing.

 

 This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2025-01-21T18:13 (+1) in response to The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating

Executive summary: The paper argues that the strategic dynamics and assumptions driving a race to develop Artificial Superintelligence (ASI) ultimately render such efforts catastrophically dangerous and self-defeating, advocating for international cooperation and restraint instead.

Key points:

  1. A race to develop ASI is driven by assumptions that ASI provides a decisive military advantage and that states are aware of its strategic importance, yet these assumptions also highlight the race's inherent dangers.
  2. The pursuit of ASI risks triggering great power conflicts, particularly between the US and China, as states may perceive adversaries' advancements as existential threats, prompting military interventions.
  3. Racing to develop ASI increases the risk of losing control over the technology, especially given the competitive pressures to prioritize speed over safety and the theoretical high risk of rapid capability escalation.
  4. A successful ASI could disrupt internal power structures within the state that develops it, potentially undermining democratic institutions through an extreme concentration of power.
  5. The existential threats posed by an ASI race include great power conflict, loss of control of ASI, and the internal concentration of power, which collectively form successive barriers that a state must overcome to 'win' the race.
  6. The paper recommends establishing an international verification regime to ensure compliance with agreements to refrain from pursuing ASI projects, as a more strategic and safer alternative to racing.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

John Salter @ 2025-01-19T16:21 (+25) in response to John Salter's Quick takes

It seems that part of the reason communism is so widely discredited is the clear contrast with neighboring countries that pursued more free-market policies. This makes me wonder — practicality aside, what would happen if effective altruists concentrated all their global health and development efforts into a single country, using similar neighboring countries as the comparison group?

Given that EA-driven philanthropy accounts for only about 0.02% of total global aid, perhaps the influence EA's approach could have by definitively proving its impact would be greater than trying to maximise the good it does directly.

Joseph @ 2025-01-21T17:25 (+4)

Superficially, it sounds similar to the idea of charter cities. The idea does seem (at face value) to have some merit, but I suspect that the execution of the idea is where lots of problems occur.

So, practicality aside, it seems like a massive amount of effort/investment/funding would allow a small country to progress rapidly toward less suffering and a better life.

My general impression is that "we don't have a randomized control trial to prove the efficacy of this intervention" isn't the most common reason why people don't get helped. Maybe some combination of lack of resources, politics & entrenched interests, and trade-offs are the big ones? I don't know, but I'm sure some folks around here have research papers and textbooks about it.

NobodyInteresting @ 2025-01-20T22:34 (–75) in response to NobodyInteresting's Quick takes

"wE sHoULd PaNdEr mOrE tO cOnsErvatives"

Not 5 minutes in office, and they are already throwing the Nazi salutes.

Congratulations, Edelweiss was not just a Netflix show, it's reality.

And a great reminder: apart from the Jews, there were Slavic, Roma, gay and disabled people in the camps as well. We can't sit and just scoff at this; we need to fight back.

Jason @ 2025-01-21T16:07 (+3)

Who said we should "PaNdEr" to conservatives? That reads like a caricature of the recent post on the subject. If you're claiming that there is a pro-pandering movement afoot, please provide evidence and citations to support your assertion.

I think the significant majority of people here -- including me! -- are somewhere between unhappy to extremely upset over yesterday's events, but that doesn't justify caricaturing good-faith posts. If you have a concrete, actionable idea about how we should respond to those events, that would make for a more helpful post.

David Mathers🔸 @ 2025-01-21T12:47 (+4) in response to GiveWell raised less than its 10th percentile forecast in 2023

The FTX collapse came late in 2022, but nonetheless 2022 already shows most of the drop in new donors. 

Jason @ 2025-01-21T15:56 (+3)

Good observation -- most of the drop in the number of new donors was seen in 2022, but little of the drop in the amount of donations from new donors happened then [$43.4MM (2021) vs. $41.1MM (2022) vs. $20.5MM (2023)]. Given the small size of their donations, the bulk of the 2021 --> 2022 drop was almost certainly people giving under $1,000, which is somewhat less concerning to me due to the small percentage of GiveWell's revenue that donations under $1K provide (less than 3%). There are a good number in the $1K-$10K range, but they did not show a significant decline overall between 2021 and 2022. 

Presumably, the 2022 --> 2023 drop in revenue involved the loss of new higher-dollar donors. My assumption is that higher-dollar donors act somewhat differently than others (e.g., I expect they engage in more due diligence / research than those donating under $1,000 on average). So it's plausible to me that the 2021 --> 2022 numerical decline and the 2022 --> 2023 volume decline may or may not share very similar causes. I'd guess FTX might hit higher-dollar new donors more because of the extra due diligence. 

The following chart is for all donors, not new ones: