What I learned from the criticism contest

By Gavin @ 2022-10-01T13:39 (+170)

I was a judge on the Criticism and Red-teaming Contest, and read 170 entries. It was overall great: hundreds of submissions and dozens of new points. 

Recurring patterns in the critiques

But most people make the same points. Some of them have been made since the beginning, around 2011. You could take that as an indictment of EA's responsiveness to critics, as proof that there's a problem, or merely as proof that critics don't read and that there's a small number of wide basins in criticism space. (We're launching the EA Bug Tracker to try to distinguish these scenarios, and to keep valid criticisms in sight.[1])

Trends in submissions I saw:

(I took out the examples because including them felt mean. I can back them up in DMs.)

 

Fundamental criticism takes time

Karnofsky, describing his former view: "Most EA criticism is - and should be - about the community as it exists today, rather than about the “core ideas.” The core ideas are just solid. Do the most good possible - should we really be arguing about that?" He changed his mind!

Really fundamental challenges to your views don't move you at the time you read them. Instead they set dominoes falling: they alter some weights a little, so that the next time the problem comes up in your real life, you notice it and hold it in your attention for a fraction of a second longer. And then, over about 3 years, you become a different person - and no trace of the original post remains, and no gratitude accrues.

If the winners of the contest don't strike you as fundamental critiques, this is part of why. (The weakness of the judges is another part, but a smaller part than this, I claim. Just wait!)

My favourite example of this is 80k arguing with some Marxists in 2012. We ended up closer than you'd have believed!

My picks

Top for changing my mind

Top 5 for improving EA   

Top for prose

Top for rigour   

Top posts I don't quite understand in a way which I suspect means they're fundamental 

Top posts I disagree with

Process

One minor side-effect of the contest: we accidentally made people frame their mere disagreements or iterative improvements as capital-C Criticisms, more oppositional than they really are. You can do this with anything - the line between a critique and the next iteration is largely a matter of tone, of an expectation of being listened to, and of whether you're playing to a third-party audience.

 

  1. ^

    Here's a teaser I made in an unrelated repo.

  2. ^

    AI (i.e. not AI alignment) only rises above this because, at this point, it's clearly going to have some major impact, even if that impact isn't existential.


Elizabeth @ 2022-10-02T09:02 (+15)

Update on the nutritional tests: 5 tests have been ordered, at least 3 completed, and 2 have results back, 1 of which speaks to the thesis (the other person wasn't vegan but was very motivated). I won't have real results until people have gone through the full test-supplement-retest cycle, but so far it's 1 of 1 vegans having one of the deficiencies you'd expect. This person had put thought into their diet and supplements, and it seems to have worked: they weren't deficient in anything they were supplementing, but they had missed one.

 

I have no more budget for covering tests for people, but if anyone would like to pay their own way ($613 for the initial test) and share data, I'm happy to share the testing instructions and the what-I'd-do supplementation doc (not medical advice, purely skilled-amateur level "things to consider trying").

MichaelDickens @ 2022-10-06T02:07 (+2)

What's the easiest way to do a nutritional test if I want to do one myself?

Elizabeth @ 2022-10-06T02:58 (+3)

Draft instructions here, look for "Testing"

Gavin @ 2022-10-02T09:14 (+2)

I have a few years of data from when I was vegan; any use?

Elizabeth @ 2022-10-02T19:52 (+2)

I probably can't combine it with the trial data since it's not comparable enough, but seems very useful for estimating potential losses from veganism.

iporphyry @ 2022-10-02T13:20 (+7)

I enjoyed this post a lot! 

I'm really curious about your mention of the "schism" pattern, because I both haven't seen it and sort of believe a version of it. What were the schism posts? And why are they bad?

I don't know if what you call "schismatics" want to burn the commons of EA cooperation (which would be bad), or if they just want to stop the tendency in EA (and really, everywhere) of people pushing for everyone to adopt convergent views (the "if you believe X you should also believe Y" framing, which I see and dislike in EA, versus "I don't think X is the most important thing, but if you believe X, here are some ways you can do it more effectively", which I would like to see more of).

Though I can see myself changing my mind on this, I currently like the idea of a more loose EA community with more moving parts that has a larger spectrum of vaguely positive-EV views. I've actually considered writing something about it inspired by this post by Eric Neyman https://ericneyman.wordpress.com/2021/06/05/social-behavior-curves-equilibria-and-radicalism/ which quantifies, among other things, the intuition that people are more likely to change their mind/behavior in a significant way if there is a larger spectrum of points of view rather than a more bimodal distribution.

Gavin @ 2022-10-02T14:45 (+5)

It seems bad in a few ways, including the ones you mentioned. I expect it to make longtermist groupthink worse, if (say) Kirsten stops asking awkward questions under (say) weak AI posts. I expect it to make neartermism more like average NGO work. We need both conceptual bravery and empirical rigour for both near and far work, and schism would hugely sap the pool of complements. And so on.

Yeah the information cascades and naive optimisation are bad. I have a post coming on a solution (or more properly, some vocabulary to understand how people are already solving it).

DMed examples.

ParthThaya @ 2022-10-03T02:30 (+5)

I'm the author of a (reasonably highly upvoted) post that called out some problems I see with all of EA's different cause areas being under the single umbrella of effective altruism. I'm guessing this is one of the schism posts being referred to here, so I'd be interested in reading more fleshed out rebuttals. 

The comments section contained some good discussion with a variety of perspectives - some supporting my arguments, some opposing, some mixed - so it seems to have struck a chord with some at least. I do plan to continue making my case for why I think these problems should be taken seriously, though I'm still unsure what the right solution is. 

Gavin @ 2022-10-03T09:42 (+5)

Good post!

I doubt I have anything original to say. There is already cause-specific non-EA outreach. (Not least a little thing called Lesswrong!) It's great, and there should be more. Xrisk work is at least half altruistic for a lot of people, at least on the conscious level. We have managed the high-pay tension alright so far (not without cost). I don't see an issue with some EA work happening sans the EA name; there are plenty of high-impact roles where it'd be unwise to broadcast any such social movement allegiance. The name is indeed not ideal, but I've never seen a less bad one and the switching costs seem way higher than the mild arrogance and very mild philosophical misconnotations of the current one.

Overall I see schism as solving (at really high expected cost) some social problems we can solve with talking and trade.

mm6 @ 2022-10-01T20:30 (+5)

This might be the best feedback I've ever gotten on a piece of writing (On the Philosophical Foundations of EA). Thanks for reading so many entries and helping make the contest happen!

David Kinney @ 2022-10-02T12:54 (+4)

Even though you disagreed with my post, I was touched to see that it was one of the "top" posts that you disagreed with :). However, I'm really struggling to see the connection between my argument and Deutsch's views on AI and universal explainers. There's nothing in the piece that you link to about complexity classes or efficiency limits on algorithms. 

Gavin @ 2022-10-02T14:26 (+6)

You are totally right, Deutsch's argument is computability, not complexity. Pardon!

Serves me right for trying to recap 1 of 170 posts from memory.

Sharmake @ 2022-10-02T14:30 (+1)

The basic answer is that computational complexity matters less than you think, primarily because the argument makes very strong assumptions, and even one of those assumptions failing weakens its power.

The assumptions are:

  1. Worst case scenarios. In this setting, everything matters, so anything that scales badly will impact the overall problem.

  2. Exactly optimal, deterministic solutions are required.

  3. You have only one shot to solve the problem.

  4. Small advantages do not compound into big advantages.

  5. Linear returns are the best you can do.

This is a conjunctive argument: if even one of the premises is wrong, the entire argument gets weaker.

And given the conjunction fallacy, we should be wary of accepting such a story.

Link to more resources here:

https://www.gwern.net/Complexity-vs-AI#complexity-caveats

Yonatan Cale @ 2022-10-02T07:19 (+4)

Got opinions on this? (how 80k vets jobs and their transparency about it)

It wasn't officially submitted to the contest.

Gavin @ 2022-10-02T09:04 (+14)

Nice work, glad to see it's improving things.

I sympathise with them though - as an outreach org you really don't want to make public judgments like "infiltrate these guys please; they don't do anything good directly!!". And I'm hesitant to screw with the job board too much, cos they're doing something right: the candidates I got through them are a completely different population from Forumites. 

Adding top recommendations is a good compromise.

I guess a "report job [as dodgy]" button would work for your remaining pain point, but this still looks pretty bad to outsiders.

Overall: previous state strikes me as a sad compromise rather than culpable deception. But you still made them move to a slightly less sad compromise, so hooray.

Yonatan Cale @ 2022-10-02T11:50 (+3)

Ah, and regarding "infiltrate these guys please" - I'm not voicing an opinion on whether this makes sense (it might) - but I am saying that if you want person X to infiltrate an org and do something there, at least TELL person X about it.

wdyt?

Gavin @ 2022-10-02T11:57 (+6)

Yeah maybe they could leave this stuff to their coaching calls

Yonatan Cale @ 2022-10-02T11:42 (+3)

Thanks,

How about the solution/tradeoff of having a link saying "discuss this job here"?

Gavin @ 2022-10-02T11:47 (+2)

on the 80k site? seems like a moderation headache

Yonatan Cale @ 2022-10-02T11:51 (+3)

I'd run the discussion in the forum by default

Gavin @ 2022-10-02T11:56 (+4)

ah, cool

Yonatan Cale @ 2022-10-03T15:06 (+2)

So.. would you endorse this? [I'm inviting pushback if you have it]

Gavin @ 2022-10-03T15:35 (+4)

got none

James Herbert @ 2022-10-06T08:07 (+3)

Wait, it's a small thing, but I think I have a different understanding of decoupling (even though my understanding is ultimately drawn from the Nerst post that's linked to in your definitional link); consequently, I'm not 100% sure what you mean when you say a common critique was 'stop decoupling everything'. 

You define the antonym of decoupling as the truism that 'all causes are connected'. This implies that a common critique was that, too often, EA takes causes that are interconnected, separates them and, as a result, undermines its efforts to make progress. 

I can imagine this would be a common critique. However, my definition of the antonym is quite different.

I would describe the antonym of decoupling as a failure to separate an idea from its possible implications.

For example, a low-decoupler is someone who is weirded out by someone who says, 'I don't think we should kill healthy people and harvest their organs, but it is plausible that a survival lottery, where random people are killed and their organs redistributed, could effectively promote longevity and well-being'. A low-decoupler would be like, 'Whoa mate, I don't care how much you say you don't endorse the implications of your logic, the fact you think this way suggests an unhealthy lack of empathy and I don't think I can trust you'.

Are you saying that lots of critiques came from that angle? Or are you saying that lots of critiques were of the flavour, 'Too often, EA takes causes that are interconnected, separates them and, as a result, undermines its efforts to make progress'? 

Like I said, it's a minor thing, but I just wanted to get it clear in my head :) 

Thanks for the post! 

Gavin @ 2022-10-06T08:57 (+3)

Your read makes sense! I meant the lumping together of causes, but there was also a good amount of related things about EA being too weird and not reading the room. 

James Herbert @ 2022-10-06T09:04 (+1)

Thanks for the clarification!

Stephen Clare @ 2022-10-03T11:38 (+2)

Thanks, this was fun to read and highlighted several interesting posts I wouldn't have otherwise found!

esc12a @ 2022-10-03T04:50 (+1)

On the vegan thing:

I'm not actively involved in EA but sometimes I read this forum and try to live a utilitarian lifestyle (or like to tell myself that at least). I hope to become mostly vegan at some point, but given the particular moment I am at in my career/life, it strikes me as a terrible idea for me to try to be vegan right now. I'm working 100+ hours per week with virtually no social life. Eating and porn are basically the only fun things I do. 

If I were to try to go vegan, it would take me a lot longer to eat meals because I'd have to force the food down, and I would probably not get full, so I'd end up hungry and less productive throughout the day. I think I would also lose mental energy and willpower by removing fun from my day, and would be less productive. If I am productive now, I can eventually make a big impact on various things, potentially including animal welfare or other causes.

 Is this just selfish rationalization? I don't think so, though there is some of that.

I try to look for good veggie/vegan dishes and restaurants and have ~2/3 of my meals vegan, but making the remainder vegan just doesn't seem even close to worth it right now. Since I have very little social contact and am not "important" yet, the signaling value is low.

I think it's great that people have made being vegan work for them, but I don't think it's right for everyone at every time in their lives.

Gavin @ 2022-10-03T09:27 (+2)

I struggled a lot with it until I learned how to cook in that particular style (roughly: way more oil, MSG, nutritional yeast, two proteins in every recipe). Good luck!

Sharmake @ 2022-10-02T13:02 (+1)

If I had to make a criticism, it's that EA's ideas about improving morality only hold up if moral realism is true.

Now to define moral realism: I'm going to define it as moral rules that are crucially mind-independent, in the same way that physical laws are mind-independent.

If it isn't true (and I put roughly 50% probability on that), then EA has no special claim to morality - though neither does anyone else. But moral realism is a big crux here, at least for universal EA.

Karthik Tadepalli @ 2022-10-03T01:57 (+6)

I see this criticism a lot, but I don't understand where it cashes out. In the 50% case where moral realism is false, the expected value of all actions is zero. So the expected value of our actions is determined only by what happens in the 50% case where moral realism is true, and shrinking the EV of all actions by 50% doesn't change our ordering of which actions have the highest EV. More generally than EV-based moralities, any morality that proposes an ordering of actions will have that ordering unchanged by a <100% probability that moral realism is false. So why does it matter whether moral realism is false with probability 1% or 50% or 99%?
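A minimal worked sketch of that scaling point (the numbers are made up purely for illustration; the only assumptions are that every action is worth zero if moral realism is false, and that p is the probability it is true):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Hypothetical numbers: conditional on moral realism being true,
% action A is worth 10 units of value and action B is worth 4;
% both are worth 0 if it is false. Let p = P(moral realism).
\[
\mathbb{E}[A] = p \cdot 10 + (1 - p) \cdot 0 = 10p,
\qquad
\mathbb{E}[B] = p \cdot 4 + (1 - p) \cdot 0 = 4p .
\]
% For any p > 0 we still have E[A] > E[B]: the ranking of actions
% is the same whether p is 0.99, 0.5, or 0.01 - only the scale changes.
\end{document}
```

Since every action's conditional value is multiplied by the same positive constant p, the comparison between actions doesn't depend on how likely moral realism is.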

Sharmake @ 2022-10-03T13:49 (+1)

Admittedly that is a good argument against the idea that moral realism matters too much, though I would say that the EV of your actions can be very different depending on your perspective (if moral realism is false).

Also, this is a case where non-consequentialist moralities fail badly at handling probability, because they effectively ask for an infinite amount of evidence before updating away from the ordering, which is equivalent to asking for mathematical proof that you're wrong.