What concrete question would you like to see debated? I might organise some debates.
By Nathan Young @ 2023-04-18T16:13 (+35)
I get bored by long-form discussions and gain a lot from seeing people discuss in person. There are lots of contextual cues that we lose if it's just blocks of text going back and forth. What's more, many of these discussions need some time pressure; otherwise they become self-indulgent and even longer.
I want to organise some Zoom debates.
But:
- Standard debate rules suck. They are about winning, not finding truth. If I run debates, each debater will get to lead the discussion in 10-minute chunks, rather than making speeches. (This is my suggested debate format.)
- I am not interested in vague questions. I want actual tangible questions with real answers.
With that in mind, what concrete questions would you like to see two people debate, in real time, in a format that encourages them to understand and engage with one another and that is time-boxed?
NunoSempere @ 2023-04-19T01:24 (+40)
This house believes that current EA/Open Philanthropy leadership is/isn't basically competent.
MichaelPlant @ 2023-04-20T20:56 (+10)
An alternative that could touch on the same topic but is a bit more general:
This house believes effective altruism needs serious reform
Linch @ 2023-04-28T01:36 (+3)
I don't like that framing, personally.
"Serious reform" hides a lot of important details and disagreements among the reformers. Communists and alt-right folks both dislike mainstream American politicians, but this doesn't necessarily mean they have substantive agreements on how American politics can be improved.
MichaelPlant @ 2023-04-28T10:24 (+2)
Yeah, it is vague. My understanding of debate motions is that you want to leave them broad and open to interpretation vs. very narrowly specified.
Nathan Young @ 2023-04-19T08:14 (+4)
I would debate this but I imagine we can do better than me.
Linch @ 2023-04-19T05:25 (+3)
Brutal.
Gideon Futerman @ 2023-04-18T16:31 (+25)
This house believes that wild animals have net negative lives
Linch @ 2023-04-19T05:45 (+7)
Interesting! I'd be interested in arguing the "against" side, but I don't have strong convictions, just some basic arguments and heuristics. Would certainly like to see a more informed argument against the "net negative" case; I think it became received wisdom in EA circles a bit too quickly.
Vasco Grilo @ 2023-05-22T09:48 (+6)
Hi Linch,
Would certainly like to see a more informed argument against the "net negative" case; I basically think it's received wisdom in EA circles a bit too quickly.
There is this preprint from Heather Browning and Walter Veit pushing against the view that wild animal welfare is negative. I think their take (and I agree) is that we simply do not know enough to be at all confident either way (positive or negative).
Linch @ 2023-05-22T10:01 (+4)
Thanks, appreciate the link!
MichaelStJules @ 2023-04-28T05:38 (+2)
See Browning, H., Veit, W. Positive Wild Animal Welfare. Biol Philos 38, 14 (2023). https://doi.org/10.1007/s10539-023-09901-5
The paper mostly argues that existing arguments for net negative lives are unsound, not that wild animals actually have net positive lives.
The authors could also be good candidates to argue against net negative lives in a debate.
Nathan Young @ 2023-04-19T08:17 (+2)
Who do you think would best argue that side?
Linch @ 2023-04-19T10:36 (+2)
If I knew, I'd have already mentioned them lol.
Quadratic Reciprocity @ 2023-04-19T23:28 (+1)
My guess is @RobBensinger would probably hold that view, based on https://www.lesswrong.com/posts/b7Euvy3RCKT7cppDk/animal-welfare-ea-and-personal-dietary-options and it would be fun to see him debate this though unlikely he'd choose to.
Linch @ 2023-04-20T21:04 (+5)
I personally thought that argument was pretty bad.
Quadratic Reciprocity @ 2023-04-20T22:04 (+1)
I don't think it makes any arguments? Also, I expect to be less easily convinced that factory-farmed animals have net-positive lives; that wild animals might have them seems easier to defend.
Linch @ 2023-04-20T23:34 (+10)
- 50%: If factory farmed animals are moral patients, it's more likely that they have net-negative lives (i.e., it would better for them not to exist, than to live such terrible lives).
- 50%: If factory farmed animals are moral patients, it's more likely that they have net-positive lives (i.e., their lives may be terrible, but they aren't so lacking in value that preventing the life altogether is a net improvement).
This seems like a super hard question, and not one that changes the importance of working to promote animal welfare, so naively (absent some argument for a more informative prior) it should have a 50/50 split within animal welfare circles.
Some of the implied claims feel weird to me. I can see a 50/50 split ex ante, but it's hard to justify a 50/50 split ex post.
(Analogously, having a ~0.5 expectation of Heads on a fair coin toss makes sense ex ante, but I wouldn't expect ~50% of observers of the same coin toss to be in the Heads camp and ~50% in the Tails camp.)
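A minimal sketch of this coin-toss point in Python (my own illustration, not from the thread; the observer count and reliability figure are made-up assumptions): ex ante everyone assigns ~0.5 to Heads, but once observers get even moderately reliable evidence about the same toss, their ex-post beliefs cluster in one camp rather than splitting 50/50.

```python
# Hypothetical illustration of ex-ante vs. ex-post beliefs about one shared coin toss.
import random

random.seed(0)
N_OBSERVERS = 1000
RELIABILITY = 0.9  # assumed chance an observer correctly perceives the outcome

outcome_heads = random.random() < 0.5  # the single shared toss

heads_camp = 0
for _ in range(N_OBSERVERS):
    # Each observer sees the true outcome with probability RELIABILITY.
    saw_heads = outcome_heads if random.random() < RELIABILITY else not outcome_heads
    if saw_heads:  # with a uniform prior, belief follows the observation
        heads_camp += 1

majority = max(heads_camp, N_OBSERVERS - heads_camp)
print("Ex ante, each observer's P(Heads) = 0.5")
print(f"Ex post, {majority / N_OBSERVERS:.0%} of observers share the majority view")
# Typically ~90% end up in one camp: ex-post disagreement requires weak evidence.
```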
MichaelStJules @ 2023-04-28T05:17 (+2)
This is interesting, but I think a debate wouldn't be that informative or useful, and we should just support object-level research on wild animal lives like https://welfarefootprint.org/, but extended to positive welfare states. Break up animals into groups by moral weight/welfare range and sentience probability classes, and then pick some representatives for the total population in each class to study. Then we can aggregate the results (see the sketch below).
A debate on intensity tradeoffs might be useful, though, because it could be that wild animals spend more time experiencing pleasure than suffering, but their average suffering is more intense than their average pleasure.
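For concreteness, here is one way the grouping-and-aggregation proposal above could look in Python. This is only a sketch: the class labels, populations, sentience probabilities, welfare-range weights, and welfare values are all invented placeholders, not research findings.

```python
# Toy aggregation over animal classes (all numbers are made up).
# Expected welfare = population * P(sentience) * welfare-range weight * avg welfare.
classes = [
    # (label, population, P(sentience), welfare-range weight, avg welfare in [-1, 1])
    ("small mammals", 1e10, 0.90, 0.30, -0.10),
    ("wild birds",    1e11, 0.80, 0.20,  0.05),
    ("insects",       1e18, 0.10, 0.01, -0.02),
]

total = sum(pop * p_sent * w_range * welfare
            for _, pop, p_sent, w_range, welfare in classes)
print(f"Aggregate expected welfare (arbitrary units): {total:.3g}")
```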
Nathan Young @ 2023-04-18T16:19 (+21)
The EA community should be cause-area hubs rather than a single monolith.
David Nash & [idk, Benjamin Hilton]
David argues his line here: https://forum.effectivealtruism.org/posts/Zm6iaaJhoZsoZ2uMD/effective-altruism-as-coordination-and-field-incubation
Will Howard @ 2023-04-19T13:43 (+15)
An AI pause debate would be interesting (maybe @Matthew_Barnett would be interested in debating someone?)
Quadratic Reciprocity @ 2023-04-19T23:33 (+2)
Ooh that sounds interesting, it was cool to see Matthew argue for his position in this Twitter thread https://twitter.com/MatthewJBar/status/1643775707313741824
Gideon Futerman @ 2023-04-18T16:31 (+15)
Climate Change is not a major contributor to existential risk. Suggested debaters: Luke Kemp and John Halstead
Nathan Young @ 2023-04-19T08:16 (+3)
Who do you think Luke and John would each say is best placed to argue the opposite position?
Gideon Futerman @ 2023-04-19T08:43 (+1)
As in, my suggestion was John in favour and Luke against.
Nathan Young @ 2023-04-18T16:25 (+14)
Are the top 1% more than 10,000x as effective as the median?
(suggested by @Hauke Hillebrandt here)
Suggested debaters: Brian Tomasik and Daniel
Eight years later, I still think this post is basically correct. My argument is more plausible the more one expects a lot of parts of society to play a role in shaping how the future unfolds. If one believes that a small group of people (who can be identified in advance and who aren't already extremely well known) will have dramatically more influence over the future than most other parts of the world, then we might expect somewhat larger differences in cost-effectiveness.
Daniel:
A link I can't access: https://sci-hub.wf/https://onlinelibrary.wiley.com/doi/epdf/10.1111/phpe.12133
MichaelPlant @ 2023-04-20T20:55 (+11)
This house believes we should prioritise the longterm future
MathiasKB @ 2023-04-19T10:15 (+10)
This house believes that the theories of change for most of non-research AI policy work are too underspecified to robustly conclude that it is net-positive in expectation.
Will Aldred @ 2023-04-18T22:08 (+10)
This house believes that if digital minds are built, they will:
- be conscious
- experience valence (i.e., pleasure and/or pain)
I think this is an important debate to have because, as has been pointed out here and here, EA seems to largely ignore prioritization considerations around digital sentience and suffering risk.[1]
To argue against the motion, I suggest David Pearce: see his view explained here. To argue for the motion, maybe—aiming high—David Chalmers: see his position outlined here.
- ^
See the linked posts’ bullet points titled “I think EA ignores digital sentience too much,” and “Suffering-focused longtermism stuff seems weirdly sidelined,” respectively.
Quadratic Reciprocity @ 2023-04-19T23:31 (+4)
I think "digital minds can't be conscious" is an uncommon position among EAs
Nathan Young @ 2023-04-19T08:16 (+2)
Maybe Rob Long?
Chris Leong @ 2023-04-18T21:19 (+10)
I think it would be very topical to have a debate about how much the EA community should focus on AI safety given rapid progress.
Marcel D @ 2023-04-19T15:32 (+9)
Standard debate rules suck. They are about winning, not finding truth.
I really don't like the way people use this meme, even if I agree with some of the sentiment.
- Rules vs. culture: I think a lot of people base this on observing the really terrible practices[1] that have formed in some public school debate leagues (or the generally bad debates on IQ2/IQ2US[2]) and then they assume that the rules are to blame. The reality is that culture matters a lot: policy debate in homeschool leagues (where I debated in high school) is radically different from that in public schools (e.g., it's mostly sane), and the rules aren't significantly different. It's largely due to factors such as coaches telling people not to make dumb arguments, the judging pool drawing more from community (inexperienced) judges, having an example of what not to become, etc. Although I certainly support modifying the rules, I feel fairly confident that the exact same rules used in public school policy debate could produce good debates when used by people in/around EA (or at the very least, they would not produce the same problems of nonsense arguments, speed & spread, and other gamification).
- Back and forth is important: I wasn't particularly fond of the debate format you described, although I'm not sure I fully understood it, and I certainly think that it could still produce a good debate even if the rules are not ideal. I think that one of the most important/beneficial characteristics of debate rounds is that they should incentivize people to go beyond surface-level responses and actually dig deep into evidence and reasoning. Although it might seem like having 40 minutes of (loose) cross-examination is helpful for achieving this goal, I think this is far from optimal: people should have more than 3 minutes to present their arguments; how is the interviewee supposed to present the most compelling arguments for their side if the other side just chooses not to ask "what are the most compelling arguments for your side?" And the interruption aspect is also potentially quite problematic: when does the interviewer decide to cut the interviewee off and move on? Ultimately, I would recommend giving both sides far more than 3 minutes for opening arguments, perhaps ~8 minutes for a constructive speech, perhaps with the option to accept points of information from the opposing speaker, as in American and British parliamentary debate.
- Define the debate well: A major failure mode of debate rounds (both among experienced and inexperienced participants) is that the two sides end up disagreeing about how to even interpret the resolution. This is more reason for giving debaters more than 3 minutes for opening arguments—or perhaps this might be the most valuable way to use the "interview" time: have the debaters figure out what a fair interpretation of the resolution looks like and set up some initial (perhaps fuzzy) criteria for what upholding/rejecting the resolution looks like. (Then I would recommend they each be given ~8 minutes for a constructive speech.)
- Signpost and Delineate: A major failure mode of debate among non-debater participants is that they just treat the topic like a college lecture or speech, weaving together ideas without clear delineation and explicit relationships between points. Please ensure the debaters understand what it means to give taglines/signposts for arguments (as I have done for the points in this comment), and strongly suggest they try not to go back and forth between different arguments (at least, not without clear signposting).
- Note-taking is crucial: Regardless of whether you change any of the rules/format, I would strongly encourage you to have someone who is familiar with taking notes ("flowing") in a debate do so for the audience. As most debaters in my former league would probably tell you, flowing is absolutely crucial, especially for those who are trying to judge what's been said. Unfortunately, too often people think they can just judge based on memory and vibes (and/or they realize traditional note-taking is usually ineffective and don't realize there's an alternative), but this is very unreliable.
- ^
e.g., speed & spread, kritiks, performative cases, excessive/nonsensical nuclear war disadvantages.
- ^
I think one of the major reasons for these debates being bad is that they just choose resolutions that are written for buzz/appeal rather than for setting up a good, clear debate—and then the debaters themselves are often not actually experienced debaters and they do not adequately define terms or interpret the resolution fairly.
ChanaMessinger @ 2023-04-24T13:01 (+8)
THB we should strongly disapprove of working at an AI lab, even on a safety team
Sanjay @ 2023-04-18T17:30 (+6)
There's been a lot of debate on the forum about whether StrongMinds should be considered a high impact charity. I think Ishaan (Principal SoGive Analyst and author of this work on StrongMinds) would have some good contributions to make on this.
Gideon Futerman @ 2023-04-18T16:33 (+6)
Should the existential risk community collaborate with agents of doom? (Or make it more specific, e.g.: OpenAI or Anthropic should be treated as an enemy and not a friend.)
BrownHairedEevee @ 2023-04-19T02:44 (+5)
This house believes that the strategy of developing advanced AI capabilities alongside safety techniques (as OpenAI does) increases AGI x-risk
ChanaMessinger @ 2023-04-24T13:01 (+4)
THB EAs should stop working at major AI labs
BrownHairedEevee @ 2023-04-19T02:43 (+4)
Fundamental questions about AI, like:
- What exactly is AGI / superintelligence?
- Are large language models a precursor to AGI?
TeddyW @ 2023-04-19T15:14 (+3)
AGI is more likely to save us from all-cause existential risk than it is likely to kill us all.
Linch @ 2023-04-28T01:38 (+4)
This question needs a time frame, I think.
TeddyW @ 2023-04-19T15:20 (+2)
Pro: Willard Wells
Geoffrey Miller @ 2023-04-18T21:06 (+3)
Nathan -- good idea.
In my experience, the most interesting and valuable debates happen when the debaters are about 60-70% in agreement about a significant issue, but have significant differences in their prioritization of issues, their values and world-views, their professional backgrounds, and/or their preferred strategies and policies.
And, of course, steel-manning the opponents' arguments should be an important and formalized part of any debate.
utilistrutil @ 2023-04-20T20:35 (+2)
THB that EA-minded college freshmen should study Computer Science over Biology
TeddyW @ 2023-04-19T15:17 (+2)
Highly effective causes saturate, making it impossible to distribute large sums of money especially effectively.
utilistrutil @ 2023-04-20T20:36 (+1)
THW double the size of EA.
Sam Battis @ 2023-04-20T15:28 (+1)
EA should add systems change as a cause area - MacAskill or Ord v. [someone with a view of history that favors systems change more who's been on the 80,000 Hours podcast].
From hazy memory of their episodes it seems like Ian Morris, Mushtaq Khan, Christopher Brown, or Bear Braumoeller might espouse this type of view.
Ward A @ 2023-04-19T08:19 (+1)
Make it an 'undebate'. 10 points for every time you learn something, and 150 points for changing your mind on the central proposition.
Also, I'd like to see RLHF[1] debated. Whether any form of RL on realistic text data will be able to take us to a point where it's "smart enough", either to help us align higher intelligences or just smart enough for what we need.
- ^
Reinforcement Learning from Human Feedback.[2] A strategy for AI alignment.
- ^
I wish the forum had the feature where if you write [[RLHF]], it automatically makes an internal link to the topic page or where RLHF is defined in the wiki. It's standard in personal knowledge management systems like Obsidian, Roam, RemNote, and I think Arbital does it.
Ben Dean @ 2023-04-19T05:27 (+1)
Unsolicited procedural suggestions:
- It might be helpful for a third party to take notes in a real time argument map. There is a technique for this called "flowing" in high school / college debate in the US: https://thedebateguru.weebly.com/flowing.html
- Maybe consider hybrid text / oral debates? Each participant could have a written document of their position to start with. These would be more information dense than speeches. Like meetings at Amazon, where everybody apparently spends the first 10 minutes reading a text memo.
Felix Wolf @ 2023-04-19T07:04 (+2)
Maybe we can use the Debate tool from LessWrong when it is released.
LoveAndPeaceAlways @ 2023-04-19T04:29 (+1)
There are a couple of debate ideas I have, but I would most like to see a debate on whether ontological physicalism is the best view of the universe there is.
I would like to see someone like the theoretical physicist Sean Carroll represent physicalism, and someone like the professor Edward F. Kelly from the Division of Perceptual Studies at the University of Virginia represent anti-physicalism. The researchers at the Division of Perceptual Studies study near-death experiences, claimed past-life memories in children, and other parapsychological phenomena, and Edward F. Kelly has written three long books on why he thinks physicalism is false, relying largely on case studies that he says don't fit well with the physicalist worldview. Based on my understanding, the mainstream scientific community treats the research by the Division of Perceptual Studies as fringe science.
I'm personally agnostic. I have thought about writing an effortpost for LessWrong steelmanning anti-physicalism based on Edward F. Kelly's works, but I have doubted whether there would be any interest in it, because the people at LessWrong seem to be very certain of physicalism and think poorly of other positions. If you think there would be interest, you can say so. Physicalism has very good arguments for it, and the anti-physicalist position relies on non-verifiable case studies being accurate.