Propose a debate topic - I'll run one

By Toby Tremlett🔹 @ 2026-01-13T09:20 (+53)

Note: You can vote on our next debate week here.

Because the donation election fund raised $15,000, the Forum unlocked the collective prize: "The Forum audience will propose and vote on a debate week topic next year, and I’ll run it during Q1."

This thread is your opportunity to propose the debate topic.

What's an EA Forum Debate Week?

Here are the debate weeks I have run on the EA Forum before:

As you can see, each of them revolved around a statement, which Forum users could signal their level of agreement or disagreement with. The statements were then posted on the banner at the top of the EA Forum, on a debate slider.

Alongside the central debate, which happens in a thread linked to this debate slider, we also elicit posts from interested parties, and once hosted a symposium.

How this works

Write your debate week topic suggestions below, preferably with each suggestion in a separate answer[1]. You can include some rationale for the debate week if you like.

Remember to write your debate topic as a statement, not a question. See the above examples. You don't need to worry about getting the wording absolutely perfect yet - we can work on that together if your topic is chosen.

I'll take this post down a week from when it is posted (though if we need to extend the deadline to get more submissions, I'll let you know).

The week after next (Jan 26–Feb 1) I'll write another post, where you can vote on the top suggestions[2]. The highest-voted suggestion by Feb 2nd will be the topic of a debate week, held in Q1.

A reminder

If your favourite option doesn't win, remember that you can always make a poll yourself. There are so many valuable discussions we could have that I don't have time to run, so I'm always excited when I see people using the poll feature to kick off debates. 

Beyond just making a poll, I'm interested in working with others who would like to run events for the community, so please reach out if you have an idea for a theme week/debate week that you'd like to run.

Writing a good debate topic

I am the foremost global expert in EA Forum debate weeks in that I have run three of them. With that pedigree in mind... to write a good debate week:

Some ideas

There are a lot of good discussions to be had about prioritisation between and within cause areas, and about tactics within causes. For example:

Cause prioritisation

Note that “neglected” and “prioritised” would need a more specific operationalisation for people to vote on them. You don’t need to worry about this now; we can deal with making terms precise when we have a winning topic.

Tactics

That's all. Suggest away! I look forward to reading your ideas.

  1. ^

    Reminder that answers are distinct from comments. Since this is a question post, answers will appear above comments.

  2. ^

    I'm hoping that I'll be able to make the top 10 karma-voted answers the "top suggestions" which this audience votes on. However, there are a few reasons this may not end up being the case:

    • If most karma is assigned near the start of the week, or there are very many answers, good suggestions might be buried. In this case, I reserve the right to more actively curate the top 10.
    • An entry in the top 10 might be a debate topic we can't host for legal or strong comms-related reasons. I think this is fairly unlikely, but it wouldn't be a shock if it happened.
  3. ^

    Relative to the general population


Toby_Ord @ 2026-01-15T14:46 (+36)

"EAs aren't giving enough weight to longer AI timelines"

(The timelines until transformative AI are very uncertain. We should, of course, hedge against it coming early when we are least prepared, but currently that is less of a hedge and more of a full-on bet. I think we are unduly neglecting many opportunities that would pay off only on longer timelines.)

EdoArad @ 2026-01-16T14:03 (+9)

I think that this question will be better if it is framed not in terms of the EA community. This is because 

  1. The reasoning about the object-level question, involving timelines and different intervention strategies, is very interesting in itself, and there's no need to add the layer of understanding what the community is doing and how, practically, it could and should adjust.
  2. It would signal-boost a norm of focusing less on intra-movement prioritization and more on personal or marginal additional prioritization, and on object-level questions.

For example, I like Dylan's reformulation attempt because it is about object-level differences. Another option could be to ask about the next $100K invested in AI safety.

Toby_Ord @ 2026-01-16T14:21 (+4)

Whether it is true or not depends on the community and the point I'm making is primarily for EAs (and EA-adjacent people too). It might also be true for the AI safety and governance communities. I don't think it is true in general though — i.e. most citizens and most politicians are not giving too little regard to long timelines. So I'm not sure the point can be made when removing this reference.

Also, I'm particularly focusing on the set of people who are trying to act rationally and altruistically in response to these dangers, and are doing so in a somewhat coordinated manner. e.g. a key aspect is that the portfolio is currently skewed towards the near-term.

EdoArad @ 2026-01-16T14:49 (+2)

Re the first point, I agree that the context should be related to a person with an EA philosophy.

Re the second point, I think that discussions about the EA portfolio are often interpreted as 0-sum or tribal, and may cause more division in the movement. 

I agree that most of the effects of such a debate are likely about shifting around our portfolio of efforts. However, there are other possible effects (recruiting/onboarding/promoting/aiding existing efforts, or increasing the amount of total resources by getting more readers involved). Also, a shift in the portfolio can happen as a result of object-level discussion, and it is not clear to me which way is better.

I guess my main point is that I'd like people in the community to think less about what the community should think. Err.. oops..

Dylan Richardson @ 2026-01-15T21:47 (+4)

Perhaps "Long timelines suggest significantly different approaches than short timelines" is more direct and under discussed?

I think median EA AI timelines are actually OK, it's more that certain orgs and individuals (like AI 2027) have tended toward extremity in one way or another.

Toby_Ord @ 2026-01-15T22:06 (+3)

The point I'm trying to make is that we should have a probability distribution over timelines with a chance of short, medium or long — then we need to act given this uncertainty, with a portfolio of work based around the different lengths. So even if our median is correct, I think we're failing to do enough work aimed at the 50% of cases that are longer than the median.

Dylan Richardson @ 2026-01-15T22:35 (+1)

I think that is both correct and interesting as a proposition.

But the topic as phrased seems more likely to mire us in more timelines debate than in this proposition, which is a step removed from:

1. What timelines and probability distributions are correct

2. Whether EAs are correctly calibrated

And only then do we get to

3. EAs are "failing to do enough work aimed at longer-than-median cases".

- arguably my topic "Long timelines suggest significantly different approaches than short timelines" sits between 2 & 3

Matrice Jacobine🔸🏳️‍⚧️ @ 2026-01-14T10:34 (+36)

"Countering democratic backsliding is now a more urgent issue than more traditional longtermist concerns."

tylermjohn @ 2026-01-29T15:00 (+2)

@Toby Tremlett🔹 would a better focus be "tractable interventions on democratic backsliding" to focus on concretely useful things and get around many of the content policy issues?

Toby Tremlett🔹 @ 2026-01-30T09:15 (+2)

Maybe- how would that work in a full sentence?
General point - we'd probably want to also change 'urgent' (bit ambiguous) and 'more traditional longtermist concerns' (which?)

Toby Tremlett🔹 @ 2026-01-30T09:29 (+2)

My plan is to make the redrafting public as well - current idea is that I'll write a post with some options, get comments, and then decide how to phrase the statement based on input. 

SiobhanBall @ 2026-01-14T16:02 (+31)

Most, if not all, animal advocacy funding should be directed towards bringing cultivated meat products to market ASAP. 

PabloAMC 🔸 @ 2026-01-19T18:16 (+17)

Perhaps a better framing is: "On the margin, should we devote farmed animal welfare resources to improve the animals being farmed (e.g., via corporate campaigns) or devote resources to substituting farmed animals altogether via alternative proteins?"

Dylan Richardson @ 2026-01-15T02:04 (+3)

Good topic, but I think it would need to be opened up to plant-based as well, and reduced to something like "more than 60%" to split debate adequately.

Becca Rogers @ 2026-01-19T00:15 (+27)

"Farmed animal advocacy currently underinvests in anticipating future policy and industry shifts relative to responding to current harms."

Robi Rahman🔸 @ 2026-01-15T15:32 (+17)

"Individual donors shouldn't diversify their donations"

Arguments in favor:

Arguments against:

Benton 🔸 @ 2026-01-15T18:25 (+1)

Another argument against: moral uncertainty

Robi Rahman🔸 @ 2026-01-19T03:28 (+2)

Moral uncertainty is completely irrelevant at the level of individual donors.

groundsloth @ 2026-01-20T16:29 (+2)

Why would this be? For example, could not an individual donor be uncertain of the moral status of animals and therefore morally uncertain about the relative value of donations to an animal welfare charity compared to a human welfare one?

Robi Rahman🔸 @ 2026-01-20T20:35 (+2)

Of course they might be uncertain of the moral status of animals and therefore uncertain whether donations to an animal welfare vs a human welfare charity is more effective. That is not at all a reason for an individual to split their donations between animal and human charities. You might want the portfolio of all EA donations to be diversified, but if an individual splits their donations in that way, they are reducing the impact of their donations relative to contributing only to one or the other.

groundsloth @ 2026-01-20T21:27 (+4)

You seem to be assuming a maximize-expected-choiceworthiness or a my-favorite-theory rule for dealing with moral uncertainty. There are other plausible rules, such as a moral parliament model, which could endorse splitting.

Robi Rahman🔸 @ 2026-01-21T19:50 (+3)
  1. I'm definitely not assuming the my-favorite-theory rule.
  2. I agree that what I'm describing is favored by the maximize-expected-choiceworthiness approach, though I think you should reach the same conclusion even if you don't use it.
  3. Can you explain how a moral parliament would end up voting to split the donations? That seems impossible to me in the case where two conflicting views disagree on the best charity - I don't see any moral trade the party with less credence/voting power can offer the larger party not to just override them. For parliaments with 3+ views but no outright majority, are you envisioning a spoiler view threatening to vote for the charity favored by the second-place view unless the plurality view allocates it some donation money in the final outcome?

edit: actually, I think the donations might end up split if you choose the allocation by randomly selecting a representative in the parliament and implementing their vote, in which case the dominant party would offer a little bit of donations in cases where it wins in exchange for donations in cases where someone else is selected?

groundsloth @ 2026-01-22T05:14 (+3)

I don't know how philosophically sound they are, but the following rules, taken from the RP moral parliament tool, would end up splitting donations among multiple causes:

  • Maximize Minimum: "Sometimes termed the 'Rawlsian Social Welfare Function', this method maximizes the payoff for the least-satisfied worldview. This method treats utilities for all worldviews as if they fall on the same scale, despite the fact that some worldviews see more avenues for value than others. The number of parliamentarians assigned to each worldview doesn't matter because the least satisfied parliamentarian is decisive."
  • Moral Marketplace: "This method gives each parliamentarian a slice of the budget to allocate as they each see fit, then combines each's chosen allocation into one shared portfolio. This process is relatively insensitive to considerations of decreasing cost-effectiveness. For more formal details, see this paper."

There are a few other voting/bargaining-style views they have that can also lead to splitting.

I don't really have anything intelligent to say about whether or not it makes sense to apply these rules for individual donations, or whether these rules make sense at all, but I thought they were worth mentioning.
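To make the contrast concrete, here is a minimal, hypothetical sketch of how a Moral Marketplace-style rule splits a single donor's budget when worldviews disagree (the credences, charity names, and function below are invented for illustration and are not taken from the RP tool):

```python
# Hypothetical sketch (not the RP tool's code): the Moral Marketplace rule gives
# each worldview a budget slice proportional to its credence, lets it fund its
# own favourite charity, and combines the slices into one portfolio.

def moral_marketplace(credences, favourites, budget):
    """Return a combined portfolio: each worldview allocates its credence-weighted
    share of the budget to the charity it favours."""
    portfolio = {}
    for worldview, credence in credences.items():
        charity = favourites[worldview]
        portfolio[charity] = portfolio.get(charity, 0.0) + credence * budget
    return portfolio

# Toy inputs: two worldviews that disagree about the best charity.
credences = {"animal-inclusive": 0.4, "human-centric": 0.6}
favourites = {"animal-inclusive": "animal charity", "human-centric": "human charity"}

print(moral_marketplace(credences, favourites, budget=1000))
# -> {'animal charity': 400.0, 'human charity': 600.0}  (a split portfolio)
# A maximize-expected-choiceworthiness rule with the same inputs would instead
# put the full $1,000 on whichever single charity scores best in expectation.
```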

Robi Rahman🔸 @ 2026-01-22T07:55 (+2)

Thank you very much, I hadn't seen that the moral parliament calculator had implemented all of those.

Moral Marketplace strikes me as quite dubious in the context of allocating a single person's donations, though I'm not sure it's totally illogical.

Maximize Minimum is a nonsensically stupid choice here. A theory with 80% probability, another with 19%, and another with 0.000001% get equal consideration? I can force someone who believes in this to give all their donations to any arbitrary cause by making up an astronomically improbable theory that will be very dissatisfied if they don't, e.g. "the universe is ruled by a shrimp deity who will torture you and 10^^10 others for eternity unless you donate all your money to shrimp welfare". You can be 99.9999...% sure this isn't true but never 100% sure, so this gets a seat in your parliament.
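As a rough numerical sketch of this failure mode (the worldview names, credences, and utilities below are toy assumptions for illustration, not anything from the RP tool), the Maximize Minimum rule can be written in a few lines:

```python
# Hypothetical sketch of why Maximize Minimum ignores credences: the option chosen
# is whichever one the least-satisfied worldview dislikes least, so an
# astronomically improbable worldview with an extreme downside can be decisive.

# Utility each worldview assigns to donating the full budget to each option (toy numbers).
utilities = {
    "mainstream view (80%)":       {"global health": 100,  "shrimp deity": 0},
    "animal-inclusive view (19%)": {"global health": 60,   "shrimp deity": 0},
    "shrimp-deity view (~0%)":     {"global health": -1e9, "shrimp deity": 100},
}

def maximize_minimum(utilities, options):
    """Pick the option whose worst-off worldview is best off; credences play no role."""
    return max(options, key=lambda option: min(u[option] for u in utilities.values()))

print(maximize_minimum(utilities, ["global health", "shrimp deity"]))
# -> 'shrimp deity': the near-zero-credence worldview's huge downside for
#    'global health' makes it the least-satisfied, and therefore decisive, voice.
```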

Toby Tremlett🔹 @ 2026-01-23T14:51 (+15)

"By default, the world where AI goes well for humans will also go well for other sentient beings" 
Based on this comment from @Kevin Xia 🔸 

Jordan Arel @ 2026-01-24T18:27 (+14)

"Conditional on avoiding existential catastrophes, the vast majority of future value depends on whether humanity implements a comprehensive reflection process (e.g., long reflection or coherent extrapolated volition)"

I made a more extensive argument for why I think this may be the case here.

Essentially, we cannot expect to 'stumble' into a great future. Without a comprehensive reflection process to navigate complex strategic uncertainties (e.g. here and here), we risk surviving but failing to realize the vast majority of our potential. 

Crucially, humanity might not naturally converge on a patient process for determining the optimal use of future resources (e.g. here and here). 

This strategic area is severely neglected; if a comprehensive reflection process is essential for high-value futures, this may have significant implications for strategy and cause prioritization that the community hasn't explicitly addressed.

Dylan Richardson @ 2026-01-14T09:51 (+13)

"Policy or institutional approaches to AI Safety are currently more effective than technical alignment work"

David Goodman @ 2026-01-14T14:56 (+12)

EA focuses too much on AI. 

Matrice Jacobine🔸🏳️‍⚧️ @ 2026-01-14T10:38 (+11)

"Longtermists should primarily concern themselves with the lives/welfare/rights/etc. of future non-human minds, not humans."

Jesper 🔸 @ 2026-01-17T16:57 (+10)

"In expectation, work to prevent the use of weapons of mass destruction (nuclear weapons, bio-engineered viruses, and perhaps new AI-powered weapons) as funded by Longview Philanthropy (Emerging Challenges Fund) and Founders Pledge (Global Catastrophic Risks Fund) is more effective at saving lives than Givewell's top charities."

RedCat @ 2026-01-21T18:52 (+9)

EA over-antagonizes China, increasing the probability of global coordination failure and thereby elevating global catastrophic risk.

Pankaj Jiwrajka @ 2026-01-15T06:16 (+8)

Air pollution is a relatively neglected topic, even though it is among the top five contributing causes of mortality.

Mo Putera @ 2026-01-20T08:01 (+2)

Open Phil / CoeffG has a fund focusing on air quality, so I suppose you're saying "no it's still relatively neglected, more resources should be allocated"?

Pankaj Jiwrajka @ 2026-01-20T08:43 (+4)

You are correct. While funding for air quality improvement has increased in the past few years, it is relatively neglected compared to the attention required and its impact on human lives. The scale of interventions by various stakeholders (including governments) is extremely low compared to the impact of the problem, and one major factor could be low funding. Disclosure: Open Phil has been a supporter of my employer (Air Pollution Action Group (A-PAG)).

James Brobin @ 2026-01-16T17:58 (+7)

"EA animal activism's current interventions neglect creating meaningful change over longer time horizons (10 - 20 years)."

Tristan Katz @ 2026-01-22T22:22 (+3)

I feel like this is true with most (but maybe not all) EAA, and if there's no debate I would really like to see a forum post about it!

Clara Torres Latorre 🔸 @ 2026-01-16T09:22 (+7)

"At high levels of uncertainty, common sense produces better outcomes than explicit modelling"

Arthur Zeuner @ 2026-01-19T21:03 (+1)

I like that this question allows us, to some extent, to incorporate discussions about EA's blind spots in 'messy' processes like politics and institutional change, by asking what to do in such areas.

Mo Putera @ 2026-01-20T08:13 (+3)

I'm skeptical of the blindspot claim, e.g. there's a decade-old 80K article listing the wide variety of efforts by EAs working on systemic change even then.

Arthur Zeuner 🔹 @ 2026-01-20T16:27 (+3)

Thanks for sharing! This actually updates my understanding of the community a bit, am excited to go deeper on this.
I saw a few other folks here also talking about that blindspot. Would be curious to know whether they (like me) were simply unaware, or whether there are still serious reasons to think we're underappreciating the murky areas beyond strict experimental control.

Mo Putera @ 2026-01-21T09:30 (+3)

I think it's more so the latter. Scott Alexander's ACX Grants gives to a ton of systemic change-flavored stuff (see here), Charity Entrepreneurship / AIM has launched a fair number of orgs that aren't RCT-based direct delivery charities (policy, effective giving, evaluators, etc), etc to say nothing of longtermist and meta cause areas for which strict experimental control isn't possible at all.

Andrew Roxby @ 2026-01-15T00:12 (+7)

"Altruistic action(s) should occasionally be adversarial."

[Edit: Folks' downvotes here interest me. I take them to mean 'No, I strongly feel this topic should not be debated', rather than people taking a stance in the debate, as hearing those stances and arguments was my intent in proposing this. If it is indeed the former, would love to know why!]

Robi Rahman🔸 @ 2026-01-15T15:39 (+2)

Can you give examples of "adversarial" altruistic actions? Like protesting against ICE to help immigrants? Getting CEOs fired to improve what their corporations do?

Andrew Roxby @ 2026-01-15T16:33 (+5)

I think I was envisioning the debate as something like 1) Do these sets (the sets of altruistic and adversarial actions) occasionally intersect? 2) Does that have any implications for EA as a movement? 

But to answer your question, I think a paradigmatic example for the purposes of debating the topic would be the military intervention in and defeat of an openly genocidal and expansionist nation-state; i.e., something requiring complex, sophisticated adversarial action, up to and including deadly force, assuming that the primary motivations for the defeat of said nation state were the prevention of catastrophic and unspeakable harm. Exploring what the set of altruistic adversarial actions might look like at various scales and in various instances could potentially be a generative part of the debate. 

NickLaing @ 2026-01-19T05:43 (+3)

Animal welfare corporate campaigns, which are heavily EA-funded, are often adversarial to some extent. AI safety stuff sometimes is too.

Mo Putera @ 2026-01-20T07:57 (+2)

Any interesting "adversarial" actions / interventions in GHD in your view?

RedCat @ 2026-01-21T11:43 (+1)

FTX

stevenhuyn🔸 @ 2026-01-19T01:24 (+6)

"Ozempic will be good for animals"

My forecast predicts that the supply of GLP-1s will increase from eight million patient-years to enough for approximately 23 million Americans by 2030.

https://asteriskmag.com/issues/07/how-long-til-were-all-on-ozempic

Kevin Xia 🔸 @ 2026-01-23T15:03 (+5)

It might allow for more nuanced and actionable discussion to ask "how good" - perhaps something like "Promoting Ozempic will be among the most cost-effective ways to help animals."

PabloAMC 🔸 @ 2026-01-25T19:32 (+4)

On the margin, and within the budget allocated to AI safety, the EA community has underspent on power concentration problems and overspent on AI control.

Toby Tremlett🔹 @ 2026-01-16T14:19 (+4)

Meta, but this thread has some good ideas in it. Feel free to nick them and submit them here!

PabloAMC 🔸 @ 2026-01-19T18:28 (+3)

"It is appropriate for small donors to spend time finding small charities to support"

For:

Against:

Mo Putera @ 2026-01-20T08:08 (+2)

There's an interesting variant of this if you generalise "small charities" to "small giving opportunities", cf. Nadia Asparouhova's Helium Grants. This doesn't so much address your "against" point as sidestep it, by focusing on individuals rather than orgs, which, from having spoken with some meta-funders, is standard.

Vasco Grilo🔸 @ 2026-01-15T06:49 (+3)

There is currently no intervention which robustly increases welfare in expectation, due to potentially dominant, uncertain effects on soil animals and microorganisms. More research on how to compare welfare across species is needed to figure out whether these matter.

fandi-chen @ 2026-01-14T12:52 (+3)

The epistemic dominance of positivism within the Effective Altruism paradigm constrains its truth-seeking potential by marginalizing non-positivist forms of knowledge.

Robi Rahman🔸 @ 2026-01-15T15:35 (+2)

What is positivism and what are some examples of non-positivist forms of knowledge?

fandi-chen @ 2026-01-15T18:23 (+10)

This is probably a simplification but I'll try:

Positivism asks: What is true, measurable, and generalisable?
Within this frame, Effective Altruism privileges phenomena that can be quantified, compared, and optimised. What cannot be measured is not merely sidelined but often treated as epistemically inferior or irrelevant.

German theoretical physicist Werner Heisenberg, Nobel laureate for his foundational work in quantum mechanics, explicitly rejected positivism:

“The positivists have a simple solution: the world must be divided into that which we can say clearly and the rest, which we had better pass over in silence. But can any one conceive of a more pointless philosophy, seeing that what we can say clearly amounts to next to nothing? If we omitted all that is unclear we would probably be left with completely uninteresting and trivial tautologies.”[1]

Heisenberg’s critique points to a basic flaw in positivism: when clarity is achieved by cutting away what cannot be neatly expressed or measured, the result is not deeper truth but a thinner, more trivial understanding of the world.

Non-positivist traditions are plural (anti-positivism, post-positivism, postcritique, etc.) rather than unified. They include interpretivism, hermeneutics, constructivism, critical theory, historical/genealogical analysis, indigenous or situated knowledge, and many more.

What they share is a rejection of the idea that reality becomes fully knowable once it is rendered measurable. Knowledge is understood as partial, situated, historically contingent, and shaped by language, institutions, and power. Measurement is treated as one way of knowing among others, not as a privileged filter that separates “real” knowledge from the "other".

This also helps explain why I think EA tends to shy away from politics and direct activism. These domains are hard to measure cleanly. You can’t easily run counterfactuals on democratic backsliding, elite capture, or institutional decay. So within the EA paradigm, they end up looking messy, speculative, or methodologically unsafe.

But to me, this avoidance is a real loss. If you only optimise within existing systems and never confront how those systems are structured, you risk reinforcing them. It’s hard not to see this as part of the reason democratic institutions, especially in places like the US, have been hollowed out while plutocratic power keeps consolidating.

One text that really shifted how I think about linear things like “technology” is Langdon Winner’s Do Artifacts Have Politics? Winner’s point is simple: technologies are never just technical.

  1. ^

    https://en.wikipedia.org/wiki/Physics_and_Beyond

Mmachukwu @ 2026-01-17T01:53 (+2)

Ending and preventing genocide should be an EA cause area. If factory farming is equivalent to animal genocide, why is human genocide not a top priority?

Tejas Subramaniam @ 2026-01-16T03:06 (+2)

“In general, continued economic growth in low- and middle-income countries is in the interests of nonhuman animals.”

(This seems like a pretty important question to me, and I’m not sure how to weigh effects like increased factory farming against the plausible reduction in invertebrate populations that economic growth comes with.)

Sam Robinson 🔸 @ 2026-01-15T12:26 (+2)

The EA Forum should hold more debate weeks

 

It's hard to know without (a) knowing what the counterfactual is for the Forum team, and (b) some impact stories/BOTECs/indications of what debate weeks have resulted in.

SiobhanBall @ 2026-01-14T09:32 (+2)

A good debate does something campaigns tend to avoid, but ought to do more of: it makes trade-offs explicit. Participants must define assumptions, defend priorities, and confront where values or strategies genuinely diverge. 

For an audience, this can be far more informative - and more convincing - than polished messaging.

The value of debates is diagnostic. They surface where a movement is aligned, and which questions still need answering.

SiobhanBall @ 2026-01-14T10:14 (+4)

Note: This comment was copy-pasted from my recent LinkedIn post for speed. Toby kindly flagged that it read a bit out of context, so just to clarify for other readers: this is not AI slop. It's human-authored LinkedIn slop 🙂

TLDR: I’d love to see more debates. 

Wyatt S. @ 2026-01-29T17:01 (+1)

"Political Polarization and Gridlock is the biggest threat to steering humanity towards a good future."

new_user_5940241937 @ 2026-01-18T17:52 (+1)

The widespread use of commercially developed AI systems for civilian surveillance and battlefield targeting constitutes a moral catastrophe that should be an EA priority to oppose.

quinngrace @ 2026-01-15T11:39 (+1)

The EA Forum should hold more debate weeks

 

A simulation tank for topics is pretty nifty

Whitney Peng @ 2026-01-14T13:14 (+1)

The EA Forum should hold more debate weeks

I personally don't believe in a right or wrong answer to any question... holding some reservation on agreeing 100% to leave room for debate! 

Matrice Jacobine🔸🏳️‍⚧️ @ 2026-01-14T10:37 (+1)

"AI safety advocates should primarily seek an understanding with {AI ethics advocates,AI acceleration advocates}."