If there were another discussion week, what would you like it to be on and when?

By Nathan Young @ 2024-07-04T12:26 (+37)

What topics do you think the EA community should focus on if we were being our best selves?


Nathan Young @ 2024-07-04T12:37 (+32)

Animal welfare is far more effective per $ than Global Health. 

Edit:

How about "The marginal $100 mn on animal welfare is 10x the impact of the marginal $100 mn on Global Health"

NickLaing @ 2024-07-04T19:50 (+9)

I think this is a good topic, but including the word "far" kind of ruins the debate from the start as it seems like the person positing it may already have made up their mind and it introduces unnecessary bias.

MichaelStJules @ 2024-07-04T22:18 (+6)

Ya, we could just use a more neutral framing: Is animal welfare or global health more cost-effective?

Nathan Young @ 2024-07-05T13:17 (+2)

What do you think is the 50/50 point? Where half of people believe more, half less.

MichaelStJules @ 2024-07-05T14:28 (+2)

Not sure.

We could replace the agree/disagree slider with a cost-effectiveness ratio slider.

One issue could be that animal welfare has more quickly diminishing returns than GHD.

Nathan Young @ 2024-07-05T15:26 (+1)

Maybe but let's not overcomplicate things.

Toby Tremlett🔹 @ 2024-09-10T15:21 (+6)

Late to this conversation, but I like the debate idea. A simple way to get a cost-effectiveness slider might be just to have the statement be "On the current margin $100m should go to:" and the slider go from 100% animal welfare to 100% global health, with a mid-point being 50/50. 

Joseph Lemien @ 2024-07-05T12:41 (+3)

Does this basically just reflect how much people value human lives in relation to animal lives? If Alex values a chicken WALY at 0.00002 that of a human WALY, and Bob values a chicken WALY at 0.5 of a human WALY, then global health either is or isn't more effective.

Vasco Grilo🔸 @ 2024-07-05T11:11 (+3)

Thanks for suggesting that, Nathan! For context:

I arrived at a cost-effectiveness of corporate campaigns for chicken welfare of 15.0 DALY/$ (= 8.20*2.10*0.870), assuming:

  • Campaigns affect 8.20 chicken-years per $ (= 41*1/5), multiplying:
    • Saulius Šimčikas' estimate of 41 chicken-years per $.
    • An adjustment factor of 1/5, since OP [Open Philanthropy] thinks "the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius' analysis [which is linked just above]".
  • An improvement in chicken welfare per time of 2.10 times the intensity of the mean human experience, as I estimated for moving broilers from a conventional to a reformed scenario based on Rethink Priorities' median welfare range for chickens of 0.332.

  • A ratio between humans’ healthy and total life expectancy at birth in 2016 of 87.0 % (= 63.1/72.5).

In light of the above, corporate campaigns for chicken welfare are 1.51 k (= 15.0/0.00994) times as cost-effective as TCF [GiveWell's Top Charities Fund].
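The arithmetic above can be reproduced in a few lines. This is just a sketch that re-runs the figures quoted in the comment; the variable names are mine, not Vasco's:

```python
# Reproduce the cost-effectiveness arithmetic from the comment above.
# All inputs are the figures quoted there; names are illustrative.

chicken_years_per_dollar = 41 * (1 / 5)  # Šimčikas' 41 chicken-years/$ times OP's ~1/5 marginal adjustment
welfare_improvement = 2.10               # welfare improvement per time, relative to the mean human experience
healthy_life_ratio = 63.1 / 72.5         # humans' healthy / total life expectancy at birth in 2016

daly_per_dollar = chicken_years_per_dollar * welfare_improvement * healthy_life_ratio
print(f"{daly_per_dollar:.1f} DALY/$")   # ~15.0 DALY/$

tcf_daly_per_dollar = 0.00994            # GiveWell's Top Charities Fund, as quoted
multiplier = daly_per_dollar / tcf_daly_per_dollar
print(f"{multiplier / 1000:.2f}k times as cost-effective as TCF")  # ~1.51k
```

Note that the headline "15.0" and "1.51 k" figures come out of rounding at the final step; the intermediate products carry more precision.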

JWS @ 2024-07-04T14:58 (+3)

Why just compare to Global Health here? Surely it should be "Animal welfare is far more effective per $ than other cause areas"?

Will Howard @ 2024-07-04T16:55 (+11)

I think they are natural to compare because they both have interventions that cash out in short-term measurable outcomes, and can absorb a lot of funding to churn out these outcomes.

Comparing e.g. AI safety and Global Health brings in a lot more points of contention which I expect would make it harder to make progress in a narrowly scoped debate (in terms of pinning down what the cruxes are, actually changing people's minds etc).

JWS @ 2024-07-04T19:08 (+7)

I think I'd rather talk about the important topic even if it's harder. My concern is, for example, that the debate happens, people agree, and they start to pressure for moving $ from GHD to AW. But this ignores a third option: moving $ from 'longtermist' work to fund both.

Feels like this is a 'looking under the streetlight because it's easier effect' kind of phenomenon.

If Longtermist/AI Safety work can't even begin to cash out measurable outcomes, that should be a strong case against it. This is EA; we want the things we're funding to be effective.

Nathan Young @ 2024-07-04T12:37 (+15)

I would like a discussion week once a month-ish.

Chris Leong @ 2024-07-04T22:33 (+7)

I think we could give that a go, but it might make sense to have a vote after three months about whether it was too much.

Joseph Lemien @ 2024-07-05T12:43 (+4)

I'd like them to be regular, but a little bit less frequent. Maybe once every two months? Once every six weeks?

Ozzie Gooen @ 2024-07-05T17:22 (+12)

How can we best find new EA donors?

I have a lot of respect for OP, but I think it's clear that we could really use a larger funding base. My guess is that there should be a lot more thinking here.

NickLaing @ 2024-07-06T13:33 (+2)

This is a great one

Nathan Young @ 2024-07-04T12:27 (+9)

Should Global Health comprise more than 15% of EA funding? 

Vasco Grilo🔸 @ 2024-07-05T11:29 (+4)

Hi Nathan,

I wonder whether it may be better to frame the discussion around personal donations. Open Philanthropy accounts for the vast majority of what I guess you are calling EA funding, and my impression is that they are not very amenable to changing the allocation across their 3 major areas (global catastrophic risks, farmed animal welfare, and human global health and wellbeing) based on EA Forum discussions.

Chris Leong @ 2024-07-04T22:37 (+2)

Feels like maybe a broader discussion about how much EA should focus on long-termism vs near-termist interventions.

Ozzie Gooen @ 2024-07-05T17:19 (+8)

Where do we want EA to be in ~20 years?

I'd like there to be more envisioning of what sorts of cultures, strengths, and community we want to aim for. I think there's not much attention here now.

Nathan Young @ 2024-07-04T12:38 (+8)

AI Safety Advocates have been responsible for over half of the leading AI companies. We don't take that seriously enough.

Ozzie Gooen @ 2024-07-05T17:18 (+6)

Who, if anyone, should be leaders within Effective Altruism?

I think that OP often actively doesn't want much responsibility. CEA is the more obvious fit, but they often can only do so much, and they arguably represent OP's interests more than those of EA community members (just look at where their funding comes from, or the fact that there's no way for EA community members to vote on their board or anything).

I think that there's a clear responsibility gap and would like to see more understanding here, along with ideally plans of how things can improve.

Ozzie Gooen @ 2024-07-05T17:14 (+4)

Epistemics/forecasting should be an EA cause area

Nathan Young @ 2024-07-05T13:18 (+4)

I'd like a debate week once every 2 months-ish.

Nathan Young @ 2024-07-05T13:19 (+3)

Worldview diversity isn't a coherent concept and mainly exists to manage internal OpenPhil conflict.

JWS @ 2024-07-05T17:09 (+3)

Seems needlessly provocative as a title, and almost purposefully designed to generate more heat than light in the resulting discussion.

Samrin Saleem @ 2024-07-12T09:45 (+1)

Decision making is a personal favorite cause area of mine and I'd like to see a lot more discussion around it than there is right now, especially because it seems to hold immense potential.

Jelle Donders @ 2024-07-11T13:04 (+1)

Sensemaking of AI governance. What do people think is most promising and what are their cruxes.

Besides posts, I would like to see some kind of survey that quantifies and graphs people's beliefs.

Evander H. @ 2024-07-11T08:20 (+1)

I really liked the discussion week on PauseAI. I'd like to see another one on this topic, taking into account new developments in the reasoning and evidence.

When?
There are probably other topics that haven't had a week yet, so those should be prioritized. But I think PauseAI is one of the most important topics, so maybe in the next 3-9 months?

Jordan Arel @ 2024-07-10T19:59 (+1)

While existential risks are widely acknowledged as an important cause area, some EAs like William MacAskill have argued that "Trajectory Change" may be highly contingent even if x-risk is solved, and so may be just as important for the long-term future. I would like to see this debated as a cause area.

Jonas Hallgren @ 2024-07-05T08:49 (+1)

Wild animal welfare and longtermist animal welfare versus farmed animal welfare?