Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong

By Omnizoid @ 2023-08-27T01:07 (+44)

Introduction

Edit 8/27: I think the tone of this post was unnecessarily hostile, so I’ve changed much of it.

“After many years, I came to the conclusion that everything he says is false. . . . Every one of his arguments was tinged and coded with falseness and pretense. It was like playing chess with extra pieces. It was all fake.”

Paul Postal (talking about Chomsky). (Note: this is not exactly how I feel about Yudkowsky, since I don’t think he’s knowingly dishonest, but I thought it was a good quote that partially represents my attitude towards him.)

Crosspost of this on my blog.  

In the days of my youth, about two years ago, I was a big fan of Eliezer Yudkowsky. I read his many, many writings religiously, and thought that he was right about most things. In my final year of high school debate, I read a case that relied crucially on the many worlds interpretation of quantum physics—and that was largely a consequence of reading through Eliezer’s quantum physics sequence. In fact, Eliezer’s memorable phrasing that the many worlds interpretation “wins outright given the current state of evidence,” was responsible for the title of my 44-part series arguing for utilitarianism titled “Utilitarianism Wins Outright.” If you read my early articles, you can find my occasional blathering about reductionism and other features that make it clear that my worldview was at least somewhat influenced by Eliezer.

But as I grew older and learned more, I came to conclude that much of what he said was deeply implausible.

Eliezer sounds good whenever he’s talking about a topic that I don’t know anything about. I know nothing about quantum physics, and he sounds persuasive when talking about quantum physics. But nearly every time he talks about a topic that I do know something about, with perhaps one or two exceptions, what he says is completely unreasonable, at least when it’s not just advice about how to reason better. It is not just that I always end up disagreeing with him; it is that he confidently asserts falsehood after falsehood, frequently making it clear that he is out of his depth. It seems that, with few exceptions, whenever I know anything about a topic that he talks about, it becomes clear that his view is confidently held but very implausible.

Why am I writing a hit piece on Yudkowsky? I certainly don’t hate him. In fact, I’d guess that I agree with him much more than almost all people on earth. Most people believe lots of outrageous falsehoods. And I think that he has probably done more good than harm for the world by sounding the alarm about AI, which is a genuine risk. And I quite enjoy his scrappy, willing-to-be-contrarian personality. So why him?

Part of this is caused by personal irritation. Each time I hear some rationalist blurt out “consciousness is just what an algorithm feels like from the inside,” I lose a year of my life and my blood pressure doubles (some have hypothesized that the explanation for the year of lost life involves the doubling of my blood pressure). And I spend much more time than most people do listening to Yudkowsky’s followers say things that I think are false.

But a lot of it is that Yudkowsky has the ear of many influential people. He is one of the most influential AI ethicists around. Many people, my younger self included, have had their formative years hugely shaped by Yudkowsky’s views—on tons of topics. As Eliezer says:

In spite of how large my mistakes were, those two years of blog posting appeared to help a surprising number of people a surprising amount.

Quadratic Rationality expresses a common sentiment: that the sequences, written by Eliezer, have significantly shaped their worldview and that of many others. Eliezer is a hugely influential thinker, especially among effective altruists, who punch above their weight in terms of influence.

And Eliezer does often offer good advice. He is right that people often reason poorly, and there are ways people can improve their thinking. Humans are riddled with biases, and it’s worth reflecting on how those biases distort our beliefs. I thus feel about him much like I do about Jordan Peterson—he provides helpful advice, but the more you listen, the more he sells you on a variety of deeply implausible, controversial views that have nothing to do with the self-help advice.

And the negative effects of Eliezer’s false beliefs have been significant. I’ve heard lots of people say that they’re not vegan because of Eliezer’s animal consciousness views—views that are utterly nutty, as we’ll see. It is bad that many more people torture sentient beings on account of utterly loony beliefs about consciousness. Many people think that they won’t live to be 40 because they’re almost certain that AI will kill everyone, on account of Eliezer’s reasoning and deference to Eliezer more broadly. Thinking that we all die soon can’t be good for mental health.

Eliezer’s influence is responsible for a narrow, insular way of speaking among effective altruists. It’s common to hear, at EA Globals, peculiar LessWrong-speak, something that is utterly antithetical to the goal of bringing new, normal non-nerds into the effective altruism movement. This is a point that I will assert without argument, just based on my own sense of things: LessWrong-speak masks confusion more than it enables understanding. People feel as though they’ve dissolved the hard problem by simply declaring that consciousness is what an algorithm feels like from the inside.

In addition, Eliezer’s views have undermined widespread trust in experts. They result in people thinking that they know better than David Chalmers about non-physicalism—that clever philosophers of mind are just morons who aren’t smart enough to understand Eliezer’s anti-zombie argument. Eliezer’s confident table pounding about quantum physics leads to people thinking that physicists are morons, incapable of understanding basic arguments. This undermining of trust in genuine authority results in lots of rationalists holding genuinely wacky views—if you think you are smarter than the experts, you are likely to believe crazy things.

Eliezer has swindled many of the smartest people into believing a whole host of wildly implausible things. Some of my favorite writers—e.g. Scott Alexander—seem to revere Eliezer. It’s about time someone pointed out his many false beliefs, the evaluation of which is outside of the normal competency of most people who do not know much about niche philosophical topics. If one of the world’s maybe 1,000 most influential thinkers is just demonstrably wrong about lots of topics, often in ways so egregious that they demonstrate very basic misunderstandings, then that’s quite newsworthy, just as it would be if a presidential candidate supported a slate of terrible policies.

The aim of this article is not to show that Eliezer is some idiot who is never right about anything. Instead, it is to show that on many topics, including ones where he treats agreement with his position as a litmus test for sanity, Eliezer is both immensely overconfident and demonstrably wrong. I think people, when they hear Eliezer express some view about a topic with which they’re unfamiliar, have roughly the following thought process:

Oh jeez, Eliezer thinks that most of the experts who think X are mistaken. I guess I should take seriously the hypothesis that X is wrong and that Eliezer has correctly identified an error in their reasoning. This is especially so given that he sounds convincing when he talks about X.

I think that instead they should have the following thought process:

I’m not an expert about X, but it seems like most of the experts about X think X or are unsure about it. The fact that Eliezer, who often veers sharply off-the-rails, thinks X gives me virtually no evidence about X. Eliezer, while being quite smart, is not rational enough to be worthy of significant deference on any subject, especially those subjects outside his area of expertise. Still though, he has some interesting things to say about AI and consequentialism that are sort of convincing. So it’s not like he’s wrong about everything or is a total crank. But he’s wrong enough, in sufficiently egregious ways, that I don’t really care what he thinks.


Eliezer is ridiculously overconfident and has a mediocre track record

Even the people who like Eliezer think that he’s wildly overconfident about lots of things. This is not without justification. Ben Garfinkel has a nice post on the EA forum running through Eliezer’s many, many mistaken beliefs that he held with very high confidence. Garfinkel suggests:

I think these examples suggest that (a) his track record is at best fairly mixed and (b) he has some tendency toward expressing dramatic views with excessive confidence.

Garfinkel runs through a series of incorrect predictions Eliezer has made. He predicted that nanotech would kill us all by 2010. Now, this was up until about 1999, when he was only about 20. So it’s not as probative as it would be if he made that prediction in 2005, for instance. But . . . still. If a guy has already incorrectly predicted that some technology would probably kill us soon, backed up by a rich array of arguments, and now he is predicting that some technology will kill us soon, backed up by a rich array of arguments, a reasonable inference is that, just like financial speculators who constantly predict recessions, this guy just has a bad habit of overpredicting doom.

I will not spend very much time talking about Eliezer’s views about AI, because they’re outside my area of expertise. But it’s worth noting that lots of people who know a lot about AI seem to think that Eliezer is ridiculously overconfident about AI. Jacob Cannell writes, in a detailed post arguing against Eliezer’s model:

My skill points instead have gone near exclusively towards extensive study of neuroscience, deep learning, and graphics/GPU programming. More than most, I actually have the depth and breadth of technical knowledge necessary to evaluate these claims in detail.

I have evaluated this model in detail and found it substantially incorrect and in fact brazenly naively overconfident.

. . .

Every one of his key assumptions is mostly wrong, as I and others predicted well in advance.

. . .

EY is just completely out of his depth here: he doesn't seem to understand how the Landauer limit actually works, doesn't seem to understand that synapses are analog MACs which minimally require OOMs more energy than simple binary switches, doesn't seem to have a good model of the interconnect requirements, etc.

I am also completely out of my depth here. Not only do I not understand how the Landauer limit works, I don’t even know what it is. But it’s worth noting that a guy who seems to know what he’s talking about thinks that many parts of Eliezer’s model are systematically overconfident, based on relatively egregious error.

Eliezer made many, many more incorrect predictions—let me just run through the list.

In 2001, and possibly later, Eliezer predicted that his team would build superintelligence, probably between 2008 and 2010.

“In the first half of the 2000s, he produced a fair amount of technical and conceptual work related to this goal. It hasn't ultimately had much clear usefulness for AI development, and, partly on the basis, my impression is that it has not held up well - but that he was very confident in the value of this work at the time.”

Eliezer predicted that AI would quickly go from 0 to 100—that potentially over the course of a day, a single team would develop superintelligence. We don’t yet definitively know that that’s false but it almost certainly is.

There are other, more debatable issues that Garfinkel highlights, which are probably also instances of Eliezer’s errors. For most of those, though, I don’t know enough to confidently evaluate them. But the worst part is that he has never acknowledged his mixed forecasting track record, and in fact, frequently acts as though he has a very good forecasting track record. This despite the fact that he often makes relatively nebulous predictions without giving credences, and then just gestures in the direction of having been mostly right about things when pressed about this. For example, he’ll claim that he came out better than Robin Hanson in the AI risk debate they had. Claiming that you were more right than someone, when you had wildly diverging models on a range of topics, is not a precise forecast (and in Eliezer’s case, is quite debatable). As Jotto999 notes:

In other domains, where we have more practice detecting punditry tactics, we would dismiss such an uninformative "track record".  We're used to hearing Tetlock talk about ambiguity in political statements.  We're used to hearing about a financial pundit like Jim Cramer underperforming the market.  But the domain is novel in AI timelines.

Even defenders of Eliezer agree that he’s wildly overconfident. Brian Tomasik, for example, says:

Really smart guy. His writings are “an acquired taste” as one of my friends put it, but I love his writing style, both for fiction and nonfiction. He’s one of the clearest and most enjoyable writers I’ve ever encountered.

My main high-level complaint is that Eliezer is overconfident about many of his beliefs and doesn’t give enough credence to other smart people. But as long as you take him with some salt, it’s fine.

Eliezer is in the top 10 list for people who have changed the way I see the universe.

Scott Alexander in a piece defending Eliezer says:

This is not to say that Eliezer – or anyone on Less Wrong – or anyone in the world – is never wrong or never overconfident. I happen to find Eliezer overconfident as heck a lot of the time.

The First Critical Error: Zombies

The zombie argument is an argument for non-physicalism. It’s hard to give a precise definition of non-physicalism, but the basic idea is that consciousness is non-physical in the sense that it is not reducible to the behavior of fundamental particles. Once you know the way atoms work, you can predict all the facts about chairs, tables, iron, sofas, and plants. Non-physicalists claim that consciousness is not explainable in that traditional way. The consciousness facts are fundamental: just as there are fundamental laws about the ways that particles behave, so too are there fundamental laws governing how subjective experience arises in response to certain physical arrangements.

Let’s illustrate how a physicalist model of reality would work. Note, this is going to be a very simplistic and deeply implausible physicalist model; the idea is just to communicate the basic concept. Suppose that there are a bunch of blocks that move right every second. Assume these blocks are constantly conscious and consciously think “we want to move right.” A physicalist about this reality would think that to fully specify its goings-on, one would have to say the following:

Every second, every block moves right.

A non-physicalist in contrast might think one of the following two sets of rules specifies reality (the bolded thing is the name of the view):

Epiphenomenalism

Every second, every block moves right.

Every second, every block thinks “I’d like to move right.”

Interactionism

Every second, every block thinks “I’d like to move right.”

Every time a block thinks “I’d like to move right,” it moves right.

The physical facts are facts about the way that matter behaves. Physicalists think once you’ve specified the way that matter behaves, that is sufficient to explain consciousness. Consciousness, just like tables and chairs, can be fully explained in terms of the behavior of physical things.

Non-physicalists think that the physicalists are wrong about this. Consciousness is its own separate thing that is not explainable just in terms of the way matter behaves. There are more niche views, like idealism and panpsychism, which say that consciousness is either fundamental to all particles or the only thing that exists; we don’t need to go into them here. The main non-physicalist view is called dualism, according to which consciousness is non-physical and there are psychophysical laws that result in consciousness when there are particular physical arrangements.

There are broadly two kinds of dualism: epiphenomenalism and interactionism. Interactionism says that consciousness is causally efficacious: the psychophysical laws say that particular physical arrangements give rise to particular mental states, and that those mental states in turn cause other physical things. This can be seen in the block case, where the psychophysical laws mean that the blocks give rise to particular conscious states that cause some physical things. Epiphenomenalism says the opposite: consciousness causes nothing. It’s an acausal epiphenomenon; the psychophysical laws go only one way. When there is a certain physical state, consciousness arises, but consciousness doesn’t cause anything further.
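To make the difference in causal structure concrete, here is a minimal sketch (my own illustration, not anything from Eliezer or the original post) of the toy block world under each of the three rule-sets, written as update functions:

```python
# Toy "block world" from the example above, under the three views.
# Physicalism: the physical rule alone fully specifies the world.
# Epiphenomenalism: a thought also occurs, but it causes nothing.
# Interactionism: the thought itself is what causes the movement.

def physicalist_tick(position: int) -> int:
    # Complete specification: every second, every block moves right.
    return position + 1

def epiphenomenalist_tick(position: int) -> int:
    # Physics moves the block; a conscious thought arises alongside,
    # but nothing ever consumes it -- deleting it would change nothing
    # about how the blocks move.
    thought = "I'd like to move right"  # causally inert
    return position + 1

def interactionist_tick(position: int) -> int:
    # Here the thought is a cause: the block moves *because* it thinks.
    thought = "I'd like to move right"
    return position + 1 if thought == "I'd like to move right" else position

# All three rules produce physically identical histories:
pos = 0
for _ in range(3):
    pos = interactionist_tick(pos)
print(pos)  # 3
```

All three rule-sets yield the same physical history of block positions; they differ only in whether a conscious thought exists and whether it does any causal work.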

The zombie argument is an argument for non-physicalism about consciousness. It doesn’t argue for either an epiphenomenalist or interactionist account. Instead, it just argues against physicalism. The basic idea is as follows: imagine any physical arrangement that contains consciousness, for example, the actual world. Surely, we could imagine a world that is physically identical—where all the atoms, quarks, gluons, and such, move the same way—that doesn’t have consciousness. You could imagine an alternative version of me that is the same down to the atom but has no conscious experience at all.

Why think such beings are possible? They sure seem possible. I can quite vividly imagine a version of me that continues through its daily goings-on but that lacks consciousness. It’s very plausible that if something is impossible, there should be some reason that it is impossible—there shouldn’t just be brute impossibilities. The reason that married bachelors are impossible is that they require a contradiction—you can’t be both married and unmarried at the same time. But spelling out a contradiction in the zombie scenario has proved elusive.

I find the zombie argument quite convincing. But there are many smart people who disagree with it who are not off their rocker. Eliezer, however, has views on the zombie argument that demonstrate a basic misunderstanding of it—the type that would be cleared up in an elementary philosophy of mind class. In fact, Eliezer’s position on zombies is utterly bizarre; when describing the motivation for the argument, he writes what amounts to amusing fiction, demonstrating that he has no idea what actually motivates belief in the possibility of zombies. It would be like a Christian writer claiming to eloquently steelman the problem of evil, but summarizing it as “atheists are angry at god because he creates things that they don’t like.”

What Eliezer thinks the zombie argument is (and what it is not)

Eliezer seems to think the zombie argument is roughly the following:

It seems like if you got rid of the world’s consciousness nothing would change because consciousness doesn’t do anything.

Therefore, consciousness doesn’t do anything.

Therefore it’s non-physical.

Eliezer then goes on an extended attack against premise 1. He argues that if it were true that consciousness does something, then you can’t just drain consciousness from the world and not change anything. So the argument for zombies hinges crucially on the assumption that consciousness doesn’t do anything. But he goes on to argue that consciousness does do something. If it didn’t do anything, what are the odds that when we talked about consciousness, our descriptions would match up with our conscious states? This would be a monumental coincidence, like it being the case that there are space aliens who work exactly the way you describe them to work, but your talk is causally unrelated to them—you’re just guessing and they happen to be exactly what you guess. It would be like saying “I believe there is a bridge in San Francisco with such and such dimensions, but the bridge existing has nothing to do with my talk about the bridge.” Eliezer says:

Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism".

(For those unfamiliar with zombies, I emphasize that this is not a strawman.  See, for example, the SEP entry on Zombies.  The "possibility" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)

Eliezer goes out of his way to emphasize that this is not a strawman. Unfortunately, it is a strawman. Not only that, the very source Eliezer links to in order to show that it is not a strawman shows that it is one. Eliezer claims that the believers in zombies think consciousness is causally inefficacious and are called epiphenomenalists. But the SEP page he links to says:

True, the friends of zombies do not seem compelled to be epiphenomenalists or parallelists about the actual world. They may be interactionists, holding that our world is not physically closed, and that as a matter of actual fact nonphysical properties do have physical effects.

In fact, David Chalmers, perhaps the world’s leading philosopher of mind, says the same thing when leaving a comment below Eliezer’s post:

Someone e-mailed me a pointer to these discussions. I'm in the middle of four weeks on the road at conferences, so just a quick comment. It seems to me that although you present your arguments as arguments against the thesis (Z) that zombies are logically possible, they're really arguments against the thesis (E) that consciousness plays no causal role. Of course thesis E, epiphenomenalism, is a much easier target. This would be a legitimate strategy if thesis Z entails thesis E, as you appear to assume, but this is incorrect. I endorse Z, but I don't endorse E: see my discussion in "Consciousness and its Place in Nature", especially the discussion of interactionism (type-D dualism) and Russellian monism (type-F monism). I think that the correct conclusion of zombie-style arguments is the disjunction of the type-D, type-E, and type-F views, and I certainly don't favor the type-E view (epiphenomenalism) over the others. Unlike you, I don't think there are any watertight arguments against it, but if you're right that there are, then that just means that the conclusion of the argument should be narrowed to the other two views. Of course there's a lot more to be said about these issues, and the project of finding good arguments against Z is a worthwhile one, but I think that such an argument requires more than you've given us here.

The zombie argument is an argument for any kind of non-physicalism. Eliezer’s response is to argue that one particular kind of non-physicalism is false. That’s not an adequate response, or a response at all. If I argue that argument P means we have to accept view D, E, F, or I, and the response is “but view E has some problems,” that just means we should adopt view D, F, or I.

But okay, what’s the error here? How does Eliezer’s version of the zombie argument differ from the real version? The crucial error is in his construction of premise 1. Eliezer assumes that, when talking about zombies, we are imagining just subtracting consciousness. He points out (rightly) that if consciousness is causally efficacious then if you only subtract consciousness, you wouldn’t have a physically identical world.

But the zombie argument isn’t about what would actually happen in our world if you just eliminated the consciousness. It’s about a physically identical world to ours lacking consciousness. Imagine you think that consciousness causes atoms 1, 2, and 3 to each move. Well then the zombie world would also involve them moving in the same physical way as they do when consciousness moves them. So it eliminates the experience, but it keeps a world that is physically identical.

This might sound pretty abstract. Let’s make it clearer. Imagine there’s a spirit called Casper. Casper does not have a physical body, does not emit light, and is physically undetectable. However, Casper does have conscious experience and has the ability to affect the world. Every thousand years, Casper can think “I really wish this planet would disappear,” and the planet disappears. Crucially, we could imagine a world physically identical to the world with Casper, that just lacks Casper. This wouldn’t be what you would get if you just eliminated Casper—you’d also need to do something else to copy the physical effects that Casper has. So when writing the laws of nature for the world that copies Casper’s world, you’d also need to specify:

Oh, and also make one planet disappear every thousand years, specifically the same ones Casper would have made disappear.

So the idea is that even if consciousness causes things, we could still imagine a world physically identical to the world where consciousness causes those things. In that world, the same things would happen in the same physical way as they do when consciousness causes them, but there would be no consciousness.

Thus, Eliezer’s argument fails completely. It is an argument against epiphenomenalism rather than an argument against zombieism. Eliezer thinks those are the same thing, but that is an error that no publishing academic philosopher could make. It’s really a basic error.

And when this is pointed out, Eliezer begins to squirm. For example, when responding to Chalmers’ comment, he says:

It seems to me that there is a direct, two-way logical entailment between "consciousness is epiphenomenal" and "zombies are logically possible".

If and only if consciousness is an effect that does not cause further third-party detectable effects, it is possible to describe a "zombie world" that is closed under the causes of third-party detectable effects, but lacks consciousness.

Type-D dualism, or interactionism, or what I've called "substance dualism", makes it impossible - by definition, though I hate to say it - that a zombie world can contain all the causes of a neuron's firing, but not contain consciousness.

You could, I suppose, separate causes into (arbitrary-seeming) classes of "physical causes" and "extraphysical causes", but then a world-description that contains only "physical causes" is incompletely specified, which generally is not what people mean by "ideally conceivable"; i.e., the zombies would be writing papers on consciousness for literally no reason, which sounds more like an incomplete imagination than a coherent state of affairs. If you want to give an experimental account of the observed motion of atoms, on Type-D dualism, you must account for all causes whether labeled "physical" or "extraphysical".

. . .

I understand that you have argued that epiphenomenalism is not equivalent to zombieism, enabling them to be argued separately; but I think this fails. Consciousness can be subtracted from the world without changing anything third-party-observable, if and only if consciousness doesn't cause any third-party-observable differences. Even if philosophers argue these ideas separately, that does not make them ideally separable; it represents (on my view) a failure to see logical implications.

Think back to the Casper example. Some physical effects in that universe are caused by physical things. Other effects in the universe are caused by nonphysical things (just one thing actually, Casper). This is not an arbitrary classification—if you believe that some things are physical and others are non-physical, then the division isn’t arbitrary. On type-D dualism, the consciousness causes things, and so the mirror world would just fill in the causal effects. A world description that contains only physical causes would be completely specified—it specifies all the behavior of the world, all the physical things, and just fails to specify the consciousness.

This is also just such cope! Eliezer spends an entire article saying, without argument, that zombieism = epiphenomenalism, assuming most people will believe him, and then when pressed on it, gives a barely coherent paragraph’s worth of justification for this false claim. It would be like if I argued against deontology by saying it was necessarily Kantian and arguing Kant was wrong, and then, when called out on that by a leading non-Kantian deontologist, concocted some half-hearted justification for why they’re actually equivalent. That’s not being rational.

Even if we pretend, per impossibile, that Eliezer’s extra paragraph refutes interactionist zombieism, it is not responsible to go through an entire article claiming that the only view that accepts zombies is epiphenomenalism, when that’s totally false, and then only later, when pressed, mention that there’s an argument for why believers in other views can’t accept zombies.

In which Eliezer, after getting the basic philosophy of mind wrong, calls others stupid for believing in zombies

I think that the last section conclusively establishes that, at the very least, Eliezer’s views on the zombie argument both fail and evince a fundamental misunderstanding of the argument. But the most infuriating thing about this is Eliezer’s repeated insistence that disagreeing with him about zombies is indicative of fundamental stupidity. When explaining why he ignores philosophers because they don’t come to the right conclusions quickly enough, he says:

And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says, “Too slow!”  It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct.  But philosophy, which hasn't come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn't seem very likely to build complex correct structures of conclusions.

Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence.  Parfit comes to mind; and I haven't read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there's Gary Drescher.  If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading. 

(Eliezer wouldn’t like Parfit if he read more of him and realized he was a zombie-believing, non-physicalist, non-naturalist moral realist.)

There’s something infuriating about this. Making basic errors that show you don’t have the faintest grasp of what people are arguing about, and then acting like the people who take the time to get PhDs and don’t end up agreeing with your half-baked arguments are just too stupid to be worth listening to, is outrageous. And Eliezer repeatedly derides the alleged cognitive deficiency of us zombieists; for example:

I also want to emphasize that the “why so confident?” is a straw misquestion from people who can’t otherwise understand why I could be unconfident of many details yet still not take into account the conflicting opinion of people who eg endorse P-zombies.

It also seems to me that this is not all that inaccessible to a reasonable third party, though the sort of person who maintains some doubt about physicalism, or the sort of philosophers who think it’s still respectable academic debate rather than sheer foolishness to argue about the A-Theory vs. B-Theory of time, or the sort of person who can’t follow the argument for why all our remaining uncertainty should be within different many-worlds interpretations rather than slopping over outside, will not be able to access it.

We zombieists are apparently not reasonable third parties, because we can’t grasp Eliezer’s demonstrably fallacious reply to zombies. Being this confident and wrong is a significant mark against one’s reasoning abilities. If you believe something for terrible reasons, don’t update in response to criticisms over the course of decades, and then act like others who don’t agree with you are too stupid to get it, and in fact use that as one of your go-to examples of “things people stupider than I believe that I shouldn’t update on,” that seriously damages your credibility as a thinker. That evinces dramatic overconfidence, sloppiness, and arrogance.

The Second Critical Error: Decision Theory

Eliezer Yudkowsky has a decision theory called functional decision theory. I will preface this by noting that I know much less about decision theory than I do about non-physicalism and zombies. Nevertheless, I know enough to get why Eliezer’s decision theory fails. In addition, most of this involves quoting people who are much more informed about decision theory than I am.

There are two dominant decision theories, both of which Eliezer rejects. The first is called causal decision theory. It says that when you have multiple actions that you can take, you should take the action that causes the best things. So, for example, if you have three actions, one of which would cause you to get ten dollars, another of which would cause you to get five dollars, and the last of which would cause you to get nothing, you should take the first action because it causes you to be richest at the end.

The next popular decision theory is called evidential decision theory. It says you should take the action where after you take that action you’ll expect to have the highest payouts. So in the earlier case, it would also suggest taking the first action because after you take that action, you’ll expect to be five dollars richer than if you take the second action, and ten dollars richer than if you take the third action.

These sound similar, so you might wonder where they come apart. Let me preface this by saying that I lean towards causal decision theory. Here are some cases where they give diverging suggestions:

Newcomb’s problem: there is a very good predictor who guessed whether you’d take two boxes or one box. If you take only one box, you’d take box A. If the guesser predicted that you’d take box A, they put a million dollars in box A. If they predicted you’d take both boxes, they put nothing into box A. In either case, they put a thousand dollars into box B.

Evidential decision theory would say that you should take only one box. Why? Those who take one box almost always get a million dollars, while those who take two boxes almost always get a thousand dollars. Causal decision theory would say you should take two boxes. On causal decision theory, it doesn’t matter whether people who make decisions like you usually end up worse off—what matters is that, no matter whether there is a million dollars in box A, two-boxing will cause you to have a free thousand dollars, and that is good! The causal decision theorist would note that if you had a benevolent friend who could peek into the boxes and then give you advice about what to do, they’d be guaranteed to suggest that you take both boxes. I used to have the intuition that you should one box, but when I considered this upcoming case, I abandoned that intuition.
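To make the arithmetic behind these two recommendations concrete, here is a minimal sketch (my own, not from the post) of how each theory scores the options in Newcomb’s problem, assuming a hypothetical predictor accuracy of 99%:

```python
# A sketch (not from the post) of how EDT and CDT score one-boxing vs.
# two-boxing in Newcomb's problem. ACCURACY (99%) is a hypothetical figure.

ACCURACY = 0.99                      # P(predictor guessed your actual choice)
MILLION, THOUSAND = 1_000_000, 1_000

def edt_value(action):
    """EDT: treat your action as evidence about what the predictor did."""
    if action == "one-box":
        # Given that you one-box, the predictor probably foresaw it and filled box A.
        return ACCURACY * MILLION + (1 - ACCURACY) * 0
    # Given that you two-box, box A is probably empty.
    return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

def cdt_value(action, p_box_a_full):
    """CDT: the boxes are already filled; your choice can't change that."""
    base = p_box_a_full * MILLION
    return base + (THOUSAND if action == "two-box" else 0)

for a in ("one-box", "two-box"):
    print(a, "EDT expected payoff:", edt_value(a))

# Whatever probability you assign to box A being full, CDT scores two-boxing
# exactly $1,000 higher:
for p in (0.0, 0.5, 1.0):
    print(p, cdt_value("two-box", p) - cdt_value("one-box", p))
```

EDT conditions on the choice as evidence about what the predictor did, so one-boxing comes out far ahead; CDT holds the contents of the boxes fixed, so two-boxing comes out exactly a thousand dollars ahead no matter what you believe about box A.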

Smoker’s lesion: suppose that smoking doesn’t actually cause adverse health outcomes. However, smokers do have much higher rates of cancer than non-smokers. The reason for that is that many people have a lesion on their lung that both causes them to be much more likely to smoke and more likely to get cancer. So if you know that someone smokes, you should think it much more likely that they’ll get cancer even though smoking doesn’t cause cancer. Suppose that smoking is fun and doesn’t cause any harm. Evidential decision theory would say that you shouldn’t smoke because smoking gives you evidence that you’ll have a shorter life. You should, after smoking, expect your life to be shorter because it gives you evidence that you had a lesion on your lung. In contrast, causal decision theory would instruct you to smoke because it benefits you and doesn’t cause any harm.

Eliezer’s preferred view is called functional decision theory. Here’s my summary (phrased in a maximally Eliezer-like way):

Your brain is a cognitive algorithm that outputs decisions in response to external data. Thus, when you take an action like

take one box

that entails that your mental algorithm outputs

take one box

in Newcomb’s problem. You should take actions such that the algorithm that outputs that decision generates higher expected utility than any other cognitive algorithm.

On Eliezer’s view, you should one box, but it’s fine to smoke, because whether your brain outputs “smoke” doesn’t affect whether there is a lesion on your lung. Or, as the impressively named Wolfgang Schwarz summarizes:

In FDT, the agent should not consider what would happen if she were to choose A or B. Instead, she ought to consider what would happen if the right choice according to FDT were A or B.
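Here is a similarly minimal sketch (again my own, under the same hypothetical 99% accuracy figure) of the evaluation Schwarz describes: FDT scores possible outputs of the decision algorithm itself, on the assumption that the predictor ran a copy of that algorithm:

```python
# A sketch (my own) of the evaluation Schwarz describes: score each possible
# output of the decision *algorithm*, assuming the predictor ran a copy of it.
# ACCURACY (99%) is again a hypothetical figure.

ACCURACY = 0.99
MILLION, THOUSAND = 1_000_000, 1_000

def fdt_value(algorithm_output):
    # If my algorithm outputs "one-box", the predictor's copy (almost
    # certainly) output "one-box" too, so box A was filled; likewise
    # for "two-box".
    if algorithm_output == "one-box":
        return ACCURACY * MILLION + (1 - ACCURACY) * 0
    return ACCURACY * THOUSAND + (1 - ACCURACY) * (MILLION + THOUSAND)

print(max(("one-box", "two-box"), key=fdt_value))  # -> one-box
```

In Newcomb’s problem the numbers happen to match EDT’s; the theories come apart in cases like the smoker’s lesion and the blackmail case below.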

You should one box in this case because if FDT told agents to one box, they would get more utility on average than if FDT told agents to two box. Schwarz argues that the first problem with the view is that it gives various totally insane recommendations. One example is a blackmail case. Suppose that a blackmailer will, every year, blackmail one person. There’s a 1 in a googol chance that he’ll blackmail someone who wouldn’t give in to the blackmail and a (googol minus 1) in a googol chance that he’ll blackmail someone who would give in. He has blackmailed you. He threatens that if you don’t give him a dollar, he will share all of your most embarrassing secrets with everyone in the world. Should you give in?

FDT would say no. After all, agents who won’t give in are almost guaranteed to never be blackmailed. But this is totally crazy. You should give up one dollar to prevent all of your worst secrets from being spread to the world. As Schwarz says:

FDT says you should not pay because, if you were the kind of person who doesn't pay, you likely wouldn't have been blackmailed. How is that even relevant? You are being blackmailed. Not being blackmailed isn't on the table. It's not something you can choose.
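To see the disagreement numerically, here is a minimal sketch (my own; the dollar value placed on the secrets leaking is a hypothetical figure) comparing the ex-ante evaluation of policies, which is what FDT cares about, with the ex-post evaluation of acts once you have actually been blackmailed:

```python
# A sketch (my own, not from Schwarz) of the blackmail case. EMBARRASSMENT is
# a hypothetical dollar-equivalent cost of the secrets leaking; the
# 1-in-a-googol figure comes from the example above.

P_BLACKMAILED_IF_REFUSER = 1e-100    # people who wouldn't pay are almost never targeted
P_BLACKMAILED_IF_PAYER = 1.0         # people who would pay are (almost) always targeted
PAYMENT = 1
EMBARRASSMENT = 1_000_000            # hypothetical cost of every secret getting out

# Ex ante, policy-level (the comparison FDT cares about):
ex_ante_refuser = P_BLACKMAILED_IF_REFUSER * EMBARRASSMENT   # ~1e-94: negligible
ex_ante_payer = P_BLACKMAILED_IF_PAYER * PAYMENT             # about $1
print(ex_ante_refuser, ex_ante_payer)

# Ex post, act-level (you HAVE been blackmailed; this is Schwarz's point):
ex_post_refuse = EMBARRASSMENT   # your secrets get spread
ex_post_pay = PAYMENT            # you lose a dollar
print(ex_post_refuse, ex_post_pay)
```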

Schwarz has another even more convincing counterexample:

Moreover, FDT does not in fact consider only consequences of the agent's own dispositions. The supposition that is used to evaluate acts is that FDT in general recommends that act, not just that the agent herself is disposed to choose the act. This leads to even stranger results.

Procreation. I wonder whether to procreate. I know for sure that doing so would make my life miserable. But I also have reason to believe that my father faced the exact same choice, and that he followed FDT. If FDT were to recommend not procreating, there's a significant probability that I wouldn't exist. I highly value existing (even miserably existing). So it would be better if FDT were to recommend procreating. So FDT says I should procreate. (Note that this (incrementally) confirms the hypothesis that my father used FDT in the same choice situation, for I know that he reached the decision to procreate.)

Schwarz’s entire piece is well worth reading. It exposes various parts of Soares and Yudkowsky’s paper that rest on demonstrable errors. Another good piece that takes down FDT is MacAskill’s post on LessWrong. He starts by laying out the following plausible principle:

Guaranteed Payoffs: In conditions of certainty — that is, when the decision-maker has no uncertainty about what state of nature she is in, and no uncertainty about what the utility payoff of each action is — the decision-maker should choose the action that maximises utility.

This is intuitively very obvious. If you know all the relevant facts about how the world is, and one act gives you more rewards than another act, you should take the first action. But MacAskill shows that FDT violates that constraint over and over again.

Bomb

You face two open boxes, Left and Right, and you must take one of them. In the Left box, there is a live bomb; taking this box will set off the bomb, setting you ablaze, and you certainly will burn slowly to death. The Right box is empty, but you have to pay $100 in order to be able to take it. 

A long-dead predictor predicted whether you would choose Left or Right, by running a simulation of you and seeing what that simulation did. If the predictor predicted that you would choose Right, then she put a bomb in Left. If the predictor predicted that you would choose Left, then she did not put a bomb in Left, and the box is empty. 

The predictor has a failure rate of only 1 in a trillion trillion. Helpfully, she left a note, explaining that she predicted that you would take Right, and therefore she put the bomb in Left. 

You are the only person left in the universe. You have a happy life, but you know that you will never meet another agent again, nor face another situation where any of your actions will have been predicted by another agent. What box should you choose?  

The right action, according to FDT, is to take Left, in the full knowledge that as a result you will slowly burn to death. Why? Because, using Y&S’s counterfactuals, if your algorithm were to output ‘Left’, then it would also have outputted ‘Left’ when the predictor made the simulation of you, and there would be no bomb in the box, and you could save yourself $100 by taking Left. In contrast, the right action on CDT or EDT is to take Right.

The recommendation is implausible enough. But if we stipulate that in this decision-situation the decision-maker is certain in the outcome that her actions would bring about, we see that FDT violates Guaranteed Payoffs
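Here, concretely, is the payoff comparison (a minimal sketch of my own; the disutility assigned to burning to death is a hypothetical number, while the error rate and the $100 fee come from the case):

```python
# A sketch (my own, not from MacAskill) of the payoffs in Bomb. BURNING is a
# hypothetical large negative utility for burning to death; the error rate of
# 1 in a trillion trillion and the $100 fee come from the case description.

ERROR_RATE = 1e-24
FEE = 100
BURNING = -1e12           # hypothetical disutility of slowly burning to death

# FDT-style evaluation: "if my algorithm output Left, the long-dead predictor's
# simulation would (almost certainly) have output Left too, so there would
# (almost certainly) be no bomb in Left."
fdt_left = ERROR_RATE * BURNING + (1 - ERROR_RATE) * 0    # roughly -1e-12
fdt_right = -FEE                                          # -100

# Evaluation given what you actually know: the note tells you the bomb IS in Left.
actual_left = BURNING
actual_right = -FEE

print(fdt_left, fdt_right)        # FDT ranks Left above Right
print(actual_left, actual_right)  # but taking Left means burning to death
```

Guaranteed Payoffs says to use the second comparison: you know where the bomb is, so you should pay the $100 and take Right.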

You can read MacAskill’s full post to find even more objections. He shows that Yudkowsky’s view is wildly indeterminate, incapable of telling you what to do, and also involves a broad kind of hypersensitivity, where however one defines “running the same algorithm” becomes hugely relevant, and determines very significant choices in seemingly arbitrary ways. The basic point is that Yudkowsky’s decision theory is totally bankrupt and implausible, in ways that are evident to those who know about decision theory. It is much worse than either evidential or causal decision theory.

The Third Critical Error: Animal Consciousness

(This was already covered here—if you’ve read that article, skip this section and ctrl-F “conclusion.”)

Perhaps the most extreme example of an egregious error backed up by wild overconfidence occurred in this Facebook debate about animal consciousness. Eliezer expressed his view that pigs and almost all animals are almost certainly not conscious. Why is this? Well, as he says:

However, my theory of mind also says that the naive theory of mind is very wrong, and suggests that a pig does not have a more-simplified form of tangible experiences. My model says that certain types of reflectivity are critical to being something it is like something to be. The model of a pig as having pain that is like yours, but simpler, is wrong. The pig does have cognitive algorithms similar to the ones that impinge upon your own self-awareness as emotions, but without the reflective self-awareness that creates someone to listen to it.

Okay, so on this view, one needs to have reflective processes in order to be conscious. One’s brain has to model itself to be conscious. This doesn’t sound plausible to me, but perhaps if there’s overwhelming neuroscientific evidence, it’s worth accepting the view. And this view implies that pigs aren’t conscious, so Yudkowsky infers that they are not conscious.

This seems to me to be the wrong approach. It’s actually incredibly difficult to adjudicate between the different theories of consciousness. It makes sense to gather evidence for and against the consciousness of particular creatures, rather than starting with a general theory and using that to solve the problems. If your model says that pigs aren’t conscious, then that seems to be a problem with your model.

Mammals feel pain

I won’t go too in-depth here, but let’s just briefly review the evidence that mammals, at the very least, feel pain. This evidence is sufficiently strong that, as the SEP page on animal consciousness notes, “the position that all mammals are conscious is widely agreed upon among scientists who express views on the distribution of consciousness." The SEP page references two papers, one by Jaak Panksepp (awesome name!) and the other by Seth, Baars, and Edelman.

Let’s start with the Panksepp paper. They lay out the basic methodology, which involves looking at the parts of the brain that are necessary and sufficient for consciousness. So they see particular brain regions which are active during states when we’re conscious—and which correlate with particular mental states—and aren’t active when we’re not conscious. They then look at the brains of other mammals and notice that these features are ubiquitous in mammals, such that all mammals have, in their brains, the things that we know make us conscious. In addition, they act physically like we do when we’re in pain: they scream, they cry, their heart rate increases in response to stressful stimuli, and they make cost-benefit analyses where they’re willing to risk negative stimuli for greater reward. Sure looks like they’re conscious.

Specifically, they endorse a “psycho-neuro-ethological ‘triangulation’ approach.” The paper is filled with big phrases like that. What that means is that they look at various things that happen in the brain when we feel certain emotions. They observe that in humans, those emotions cause certain things—for example, being happy makes us more playful. They then look at mammal brains and see that they have the same basic brain structure, and this produces the same physical reactions—using the happiness example, this would also make the animals more playful. If they see that animals have the same basic neural structures as we do when we have certain experiences and that those are associated with the same physical states that occur when humans have those conscious states, they infer that the animals are having similar conscious states. If our brain looks like a duck’s brain when we have some experience, and we act like ducks do when they are in a comparable brain state, we should guess that ducks are having a similar experience. (I know we’re talking about mammals here, but I couldn’t resist the “looks like a duck, talks like a duck” joke.)

If a pig has a brain state that resembles ours when we are happy, tries to get things that make it happy, and produces the same neurological responses that we do when we’re happy, we should infer that pigs are not mindless automatons, but are, in fact, happy.

They then note that animals like drugs. Animals, like us, get addicted to opioids and have similar brain responses when they’re on opioids. As the authors note “Indeed, one can predict drugs that will be addictive in humans quite effectively from animal studies of desire.” If animals like the drugs that make us happy and react in similar ways to us, that gives us good reason to think that they are, in fact, happy.

They then note that the parts of the brain responsible for various human emotions are quite ancient—predating humans—and that mammals have them too. So, if the things that cause emotions are also present in animals, we should guess they’re conscious, especially when their behavior is perfectly consistent with being conscious. In fact, by running electricity through certain brain regions that animals share, we can induce conscious states in people—that shows that it is those brain states that are causing the various mental states.

The authors then run through various other mental states and show that those mental states are similar between humans and animals—animals have similar brain regions which provoke similar physical responses, and we know that in humans, those brain regions cause specific mental states.

Now, maybe there’s some magic of the human brain, such that in animal brains, the brain regions that cause qualia instead cause causally identical stuff but no consciousness. But there’s no good evidence for that, and plenty against. You should not posit special features of certain physical systems, for no reason.

Moving on to the Seth, Baars, and Edelman paper: they note that there are various features of consciousness that differentiate conscious states from other things happening in the brain that don’t induce conscious states. They note:

Consciousness involves widespread, relatively fast, low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. Unconscious states are markedly different and much less responsive to sensory input or motor plans.

In other words, there are common patterns among conscious states. We can look at a human brain and see that the things that are associated with consciousness produce different neurological markers from the things that aren’t associated with consciousness. Features associated with consciousness include:

Irregular, low-amplitude brain activity: When we’re awake we have irregular low-amplitude brain activity. When we’re not conscious—e.g. in deep comas or anesthesia-induced unconsciousness—irregular, low-amplitude brain activity isn’t present. Mammal brains possess irregular, low-amplitude brain activity.

Involvement of the thalamocortical system: When you damage the thalamocortical system, that deletes part of one’s consciousness, unlike damage to many other systems. Mammals also have a thalamocortical system—just like us.

Widespread brain activity: Consciousness induces widespread brain activity. We don’t have that when we’re unconscious, for example in a coma. Mammals do.

The authors note, from these three facts:

Together, these first three properties indicate that consciousness involves widespread, relatively fast, low-amplitude interactions in the thalamocortical core of the brain, driven by current tasks and conditions. Unconscious states are markedly different and much less responsive to sensory input or endogenous activity. These properties are directly testable and constitute necessary criteria for consciousness in humans. It is striking that these basic features are conserved among mammals, at least for sensory processes. The developed thalamocortical system that underlies human consciousness first arose with early mammals or mammal-like reptiles, more than 100 million years ago.

More evidence from neuroscience for animal consciousness:

Something else about metastability that I don’t really understand is also present in humans and animals.

Consciousness involves binding—bringing lots of different inputs together. In your consciousness, you can see the entire world at once, while thinking about things at the same time. Lots of different types of information are processed simultaneously, in the same way. Some explanations involving neural synchronicity have received some empirical support—and animals also have neural synchronicity, so they would also have the same kind of binding.

We attribute conscious experiences to ourselves: they happen to us. But mammals have a similar sense of self. Mammals, like us, process information relative to themselves, so they see a wall and represent its location relative to their own position in space.

Consciousness facilitates learning. Humans learn from conscious experiences. In contrast, we do not learn from things that do not impinge on our consciousness. If someone slaps me whenever I scratch my nose (someone does actually—crazy story), I learn not to scratch my nose. In contrast, if someone does a thing that I don’t consciously perceive when I scratch my nose, I won’t learn from it. But animals seem to learn too, and update in response to stimuli, just as humans do when the stimuli affect their consciousness. In fact, even fish learn.

So there’s a veritable wealth of evidence that at least mammals are conscious. The evidence is less strong for organisms that are less intelligent and more distant from us evolutionarily, but it remains relatively strong for at least many fish. Overturning this abundance of evidence, which has been enough to convince the substantial majority of consciousness researchers, requires a lot of counterevidence. Does Yudkowsky have it?

Yudkowsky’s view is crazy, and is decisively refuted over and over again

No. No he does not. In fact, as far as I can tell, throughout the entire protracted Facebook exchange, he never adduced a single piece of evidence for his conclusion. The closest that he provides to an argument is the following:

I consider myself a specialist on reflectivity and on the dissolution of certain types of confusion. I have no compunction about disagreeing with other alleged specialists on authority; any reasonable disagreement on the details will be evaluated as an object-level argument. From my perspective, I’m not seeing any, “No, this is a non-mysterious theory of qualia that says pigs are sentient…” and a lot of “How do you know it doesn’t…?” to which the only answer I can give is, “I may not be certain, but I’m not going to update my remaining ignorance on your claim to be even more ignorant, because you haven’t yet named a new possibility I haven’t considered, nor pointed out what I consider to be a new problem with the best interim theory, so you’re not giving me a new reason to further spread probability density.”

What??? The suggestion seems to be that there is no other good theory of consciousness that implies that animals are conscious. To which I’d reply:

We don’t have any good theory about consciousness yet—the data is just too underdetermined. Just as you can know that apples fall when you drop them before you have a comprehensive theory of gravity, so too can you know some things about consciousness, even absent a comprehensive theory.

There are various theories that predict that animals are conscious. For example, integrated information theory, McFadden’s CEMI field theory, various higher-order theories, and the global workspace model probably imply that animals are conscious. Eliezer has no argument for preferring his view to the others.

Take the integrated information theory, for example. I don’t think it’s a great view. But at least it has something going for it. It has made a series of accurate predictions about the neural correlates of consciousness. Same with McFadden’s theory. It seems Yudkowsky’s theory has literally nothing going for it, beyond it sounding to Eliezer like a good solution. There is no empirical evidence for it, and, as we’ll see, it produces crazy, implausible implications. David Pearce has a nice comment about some of those implications:

Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless or race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.

Yudkowsky’s theory of consciousness would predict that during especially intense experiences, where we’re not reflecting, we’re either not conscious or less conscious. So when people orgasm, they’re not conscious. That’s very implausible. Or, when a person is in unbelievable panic, on this view, they become non-conscious or less conscious. Pearce further notes:

Children with autism have profound deficits of self-modelling as well as social cognition compared to neurotypical folk. So are profoundly autistic humans less intensely conscious than hyper-social people? In extreme cases, do the severely autistic lack consciousness altogether, as Eliezer’s conjecture would suggest? Perhaps compare the accumulating evidence for Henry Markram’s “Intense World” theory of autism.

Francisco Boni Neto adds:

many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Super vivid, hyper conscious experiences, phenomenic rich and deep experiences like lucid dreaming and ‘out-of-body’ experiences happens when higher structures responsible for top-bottom processing are suppressed. They lack a realistic conviction, specially when you wake up, but they do feel intense and raw along the pain-pleasure axis.

Eliezer just bites the bullet:

I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious. This is not where most of my probability mass lies, but it’s on the table.

So when confronted with tons of neurological evidence that shutting down higher processing results in more intense conscious experiences, Eliezer just says that when we think we’re having more intense experiences, we’re actually zombies or something? That’s totally implausible. It’s sufficiently implausible that I think I might be misunderstanding him. When you find out that your view says that people are barely conscious or non-conscious when they orgasm, or that some very autistic people aren’t conscious, it makes sense to give up the damn theory!

And this isn’t the only bullet Eliezer bites. He admits, “It would not surprise me very much to learn that average children develop inner listeners at age six.” I have memories from before age 6—on this view, those would be memories formed before I was conscious.

Rob Wiblin makes a good point:

[Eliezer], it’s possible that what you are referring to as an ‘inner listener’ is necessary for subjective experience, and that this happened to be added by evolution just before the human line. It’s also possible that consciousness is primitive and everything is conscious to some extent. But why have the prior that almost all non-human animals are not conscious and lack those parts until someone brings you evidence to the contrary (i.e. “What I need to hear to be persuaded is,”)? That just cannot be rational.

You should simply say that you are a) uncertain what causes consciousness, because really nobody knows yet, and b) you don’t know if e.g. pigs have the things that are proposed as being necessary for consciousness, because you haven’t really looked into it.

I agree with Rob. We should be pretty uncertain. My credences are maybe the following:

92% that at least almost all mammals are conscious.

80% that almost all reptiles are conscious.

60% that fish are mostly conscious.

30% that insects are conscious.

On these numbers, it’s roughly as likely that reptiles aren’t conscious as that insects are conscious. Because consciousness is private—you only ever know your own—we shouldn’t be very confident about any claims concerning its features or distribution.

Based on these considerations, I conclude that Eliezer’s view is legitimately crazy. There is, quite literally, no good reason to believe it, and lots of evidence against it. Eliezer just dismisses that evidence, for no good reason, bites a million bullets, and acts like that’s the obvious solution.

Absurd overconfidence

The thing that was most infuriating about this exchange was Eliezer’s insistence that those who disagreed with him were stupid, combined with his demonstration that he was unfamiliar with the subject matter. Condescension and error make an unfortunate combination. He says of the position that pigs, for instance, aren’t conscious:

It also seems to me that this is not all that inaccessible to a reasonable third party, though the sort of person who maintains some doubt about physicalism, or the sort of philosophers who think it’s still respectable academic debate rather than sheer foolishness to argue about the A-Theory vs. B-Theory of time, or the sort of person who can’t follow the argument for why all our remaining uncertainty should be within different many-worlds interpretations rather than slopping over outside, will not be able to access it.

Count me in as a person who can’t follow any arguments about quantum physics, much less the arguments for why we should be almost certain of many worlds. But seriously, physicalism? We should have no doubt about physicalism? As I’ve argued before, the case against physicalism is formidable. Eliezer thinks it’s an open-and-shut case, but that’s because he is demonstrably mistaken about the zombie argument against physicalism and the implications of non-physicalism. 

And that’s not the only thing Eliezer expresses insane overconfidence about. In response to his position that most animals other than humans aren’t conscious, David Pearce points out that you shouldn’t be very confident in positions that almost all experts disagree with you about, especially when you have a strong personal interest in their view being false. Eliezer replies:

What do they think they know and how do they think they know it? If they’re saying “Here is how we think an inner listener functions, here is how we identified the associated brain functions, and here is how we found it in animals and that showed that it carries out the same functions” I would be quite impressed. What I expect to see is, “We found this area lights up when humans are sad. Look, pigs have it too.” Emotions are just plain simpler than inner listeners. I’d expect to see analogous brain areas in birds.

When I read this, I almost fell out of my chair. Eliezer admits that he has not so much as read the arguments people give for widespread animal consciousness. He is basing his view on a guess about what they say, combined with an implausible physical theory for which he has no evidence. This would be like concluding that the earth is 6,000 years old, despite near-ubiquitous expert disagreement, providing no evidence for the view, and then admitting that you haven’t even read the arguments that experts in the field give against your position. This is the gravest of epistemic sins.

Conclusion

This has not been anywhere near exhaustive. I haven’t even started talking about Eliezer’s very implausible views about morality (though I might write about that too—stay tuned), reductionism, modality, or many other topics. Eliezer usually has a lot to say on any given topic, and it often takes many thousands of words to refute what he’s saying.

I hope this article has shown that Eliezer frequently expresses near certainty on topics about which he is ignorant, an ignorance profound enough that he should suspend judgment. Then, infuriatingly, he acts like those who disagree with his errors are morons. He acts like he is a better decision theorist than the professional decision theorists, a better physicist than the physicists, a better animal consciousness researcher than the animal consciousness researchers, and a much better philosopher of mind than the leading philosophers of mind.

My goal in this is not to cause people to stop reading Eliezer. It’s to encourage people to refrain from forming views just from reading him, and to take his views with many grains of salt. If you’re reading something by Eliezer on a controversial issue and it seems too obvious, there’s a decent chance you are being duped.

I feel like there are two types of thinkers: the first we might call innovators and the second systematizers. Innovators are the kinds of people who think of wacky, out-of-the-box ideas, but are less likely to be right. They enrich the discourse by being clever and creative and by coming up with new ideas, rather than by being right about everything. A paradigm example is Robin Hanson—no one feels comfortable just deferring to him across the board, but he has some of the most ingenious ideas around.

Systematizers, in contrast, are the kinds of people who reliably generate true beliefs on lots of topics. A good example is Scott Alexander. I didn’t research Ivermectin, but I feel confident that Scott’s post on Ivermectin is at least mostly right.

I think people think of Eliezer as a systematizer. And this is a mistake, because he just makes too many errors. He’s too confident about things he’s totally ignorant about. But he’s still a great innovator. He has lots of interesting, clever ideas that are worth hearing out. In general, however, the fact that Eliezer believes something is not especially probative. Eliezer’s skill lies in good writing and ingenious argumentation, not in reliably forming true beliefs.


sphor @ 2023-08-27T12:06 (+120)

A couple of other examples, both of which have been discussed on LessWrong before:

niplav @ 2023-08-28T13:54 (+15)

I find this comment much more convincing than the top-level post.

Laplace @ 2023-08-31T22:27 (+12)

I do not find the argument against the applicability of the Complete Class theorem in that post convincing. See Charlie Steiner's reply in the comments.

You just have to separate "how the agent internally represents its preferences" from "what it looks like the agent is doing." You describe an agent that dodges the money-pump by simply acting consistently with past choices. Internally this agent has an incomplete representation of preferences, plus a memory. But externally it looks like this agent is acting as if it assigns equal value to whatever indifferent things it thought of choosing between first.

Decision theory is concerned with external behaviour, not internal representations. All of these theorems are talking about whether the agent's actions can be consistently described as maximising a utility function. They are not concerned whatsoever with how the agent actually mechanically represents and thinks about its preferences and actions on the inside. To decision theory, agents are black boxes. Information goes in, decision comes out. Whatever processes may go on in between are beyond the scope of what the theorems are trying to talk about.

So

Money-pump arguments for Completeness (understood as the claim that sufficiently-advanced artificial agents will have complete preferences) assume that such agents will not act in accordance with policies like ‘if I previously turned down some option X, I will not choose any option that I strictly disprefer to X.’ But that assumption is doubtful. Agents with incomplete preferences have good reasons to act in accordance with this kind of policy: (1) it never requires them to change or act against their preferences, and (2) it makes them immune to all possible money-pumps for Completeness. 

As far as decision theory is concerned, this is a complete set of preferences. Whether the agent makes up its mind as it goes along or has everything it wants written up in a database ahead of time matters not a peep to decision theory. The only thing that matters is whether the agent's resulting behaviour can be coherently described as maximising a utility function. If it quacks like a duck, it's a duck.
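
For concreteness, here is a toy sketch of the kind of agent the quoted policy describes: incomplete preferences plus a memory of what it has already given up. The preference relation, option names, and scenario below are invented purely for illustration.

```python
# A toy agent with incomplete preferences plus a memory, following the policy
# quoted above: "if I previously turned down some option X, I will not choose
# any option that I strictly disprefer to X." Everything here is illustrative.

# Strict preferences, deliberately incomplete: A is better than A_minus and
# B is better than B_minus, but A and B are incomparable.
strictly_prefers = {("A", "A_minus"), ("B", "B_minus")}

def prefers(x, y):
    """True if x is strictly preferred to y."""
    return (x, y) in strictly_prefers

class CautiousAgent:
    def __init__(self, holding):
        self.holding = holding
        self.turned_down = set()  # options previously given up or declined

    def offer(self, new_option):
        """Offer to swap the current holding for new_option; return True if accepted."""
        # Policy: refuse anything strictly dispreferred to something previously turned down.
        if any(prefers(old, new_option) for old in self.turned_down):
            return False
        # Also refuse anything strictly worse than the current holding.
        if prefers(self.holding, new_option):
            return False
        # Otherwise (better or incomparable): accept, remembering what was given up.
        self.turned_down.add(self.holding)
        self.holding = new_option
        return True

agent = CautiousAgent("A")
print(agent.offer("B"))        # True: A and B are incomparable, so the trade is allowed
print(agent.offer("A_minus"))  # False: A_minus is strictly worse than the A it gave up
print(agent.holding)           # 'B': the pump back down to A_minus is blocked
```

On this toy setup the agent happily trades to an incomparable option but refuses the final step of the standard money-pump; whether its resulting behaviour should then be counted as "complete" from the outside is exactly what the rest of the thread disputes.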

EJT @ 2023-09-01T11:22 (+6)

The only thing that matters is whether the agent's resulting behaviour can be coherently described as maximising a utility function.

If you're only concerned with externals, all behaviour can be interpreted as maximising a utility function. Consider an example: an agent pays $1 to trade vanilla for strawberry, $1 to trade strawberry for chocolate, and $1 to trade chocolate for vanilla. Considering only externals, can this agent be represented as an expected utility maximiser? Yes. We can say that the agent's preferences are defined over entire histories of the universe, and the history it's enacting is its most-preferred.

If we want expected-utility-maximisation to rule anything out, we need to say something about the objects of the agent's preference. And once we do that, we can observe violations of Completeness.
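
A minimal sketch of that point in code, purely illustrative and just formalising the vanilla/strawberry/chocolate example above:

```python
# Cyclic pairwise choices cannot be rationalised by a utility function over
# flavours, but can trivially be rationalised by one over whole histories.
# The flavours, trades, and numbers are illustrative only.

from itertools import permutations

# Observed pairwise choices: the agent pays $1 each time to trade
# vanilla -> strawberry -> chocolate -> vanilla.
chosen_over = {("strawberry", "vanilla"),
               ("chocolate", "strawberry"),
               ("vanilla", "chocolate")}

flavours = ["vanilla", "strawberry", "chocolate"]

# 1. No ranking of the three flavours is consistent with all three choices,
#    so no utility function over flavours represents this behaviour.
def consistent(ranking):
    utility = {f: -i for i, f in enumerate(ranking)}  # earlier in ranking = higher utility
    return all(utility[a] > utility[b] for a, b in chosen_over)

print(any(consistent(r) for r in permutations(flavours)))  # False

# 2. Over whole *histories*, representation is trivial: assign the history the
#    agent actually enacts utility 1 and every other history utility 0.
def history_utility(history):
    return 1.0 if history == ("vanilla", "strawberry", "chocolate", "vanilla") else 0.0

print(history_utility(("vanilla", "strawberry", "chocolate", "vanilla")))  # 1.0
```

No utility function over the three flavours rationalises the cyclic trades, but a utility function over whole histories does so trivially, which is why expected-utility-maximisation only rules something out once we fix what the preferences are over.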

Laplace @ 2023-09-01T12:03 (+4)

all behaviour can be interpreted as maximising a utility function.

Yes, it indeed can be. However, the less coherent the agent acts, the more cumbersome it will be to describe it as an expected utility maximiser. Once your utility function specifies entire histories of the universe, its description length goes through the roof. If describing a system as a decision theoretic agent is that cumbersome, it's probably better to look for some other model to predict its behaviour. A rock, for example, is not well described as a decision theoretic agent. You can technically specify a utility function that does the job, but it's a ludicrously large one.

The less coherent and smart a system acts, the longer the utility function you need to specify to model its behaviour as a decision theoretic agent will be. In this sense, expected-utility-maximisation does rule things out, though the boundary is not binary. It's telling you what kind of systems you can usefully model as "making decisions" if you want to predict their actions.

If you would prefer math that talks about the actual internal structures agents themselves consist of, decision theory is not the right field to look at. It just does not address questions like this at all. Nowhere in the theorems will you find a requirement that an agent's preferences be somehow explicitly represented in the algorithms it "actually uses" to make decisions, whatever that would mean. It doesn't know what these algorithms are, and doesn't even have the vocabulary to formulate questions about them. It's like saying we can't use theorems for natural numbers to make statements about counting sheep, because sheep are really made of fibre bundles over the complex numbers, rather than natural numbers. The natural numbers are talking about our count of the sheep, not the physics of the sheep themselves, nor the physics of how we move our eyes to find the sheep. And decision theory is talking about our model of systems as agents that make decisions, not the physics of the systems themselves and how some parts of them may or may not correspond to processes that meet some yet unknown embedded-in-physics definition of "making a decision".

keith_wynroe @ 2023-09-02T21:27 (+7)

I think this response misses the woods for the trees here. It's true that you can fit some utility function to behaviour, if you make a more fine-grained outcome-space on which preferences are now coherent etc. But this removes basically all of the predictive content that Eliezer etc. assumes when invoking them.

In particular, the use of these theorems in doomer arguments absolutely does implicitly care about "internal structure" stuff - e.g. one major premise is that non-EU-maximising AIs will reflectively iron out the "wrinkles" in their preferences to better approximate an EU-maximiser, since they will notice that their e.g. incompleteness leads to exploitability. The OP argument shows that an incomplete-preference agent will be inexploitable by its own lights. The fact that there's some completely different way to refactor the outcome-space such that from the outside it looks like an EU-maximiser is just irrelevant.

If describing a system as a decision theoretic agent is that cumbersome, it's probably better to look for some other model to predict its behaviour

This also seems to be begging the question - if I have something I think I can describe as a non-EU-maximising decision-theoretic agent, but which has to be described with an incredibly cumbersome utility function, why do we not just conclude that EU-maximisation is the wrong way to model the agent, rather than throwing out the belief that it should be modelled as an agent? If I have a preferential gap between A and B, and you have to jump through some ridiculous hoops to make this look EU-coherent ( "he prefers [A and Tuesday and feeling slightly hungry and saw some friends yesterday and the price of blueberries is <£1 and....] to [B and Wednesday and full and at a party and blueberries >£1 and...]" ), it seems like the correct conclusion is not to throw away the claim that I'm a decision-theoretic agent, but the claim that I'm well-modelled as an EU-maximiser.

The less coherent and smart a system acts, the longer the utility function you need to specify...

These are two very different concepts? (Equating "coherent" with "smart" is again kinda begging the question). Re: coherence, it's just tautologous that the more complexly you have to partition up outcome-space to make things look coherent, the more complex the resulting utility function will be. Re: smartness, if we're operationalising this as "ability to steer the world towards states of higher utility", then it seems like smartness and utility-function-complexity are by definition independent. Unless you mean more "ability to steer the world in a way that seems legible to us" in which case it's again just tautologous

EJT @ 2023-09-02T12:41 (+4)

That all sounds approximately right but I'm struggling to see how it bears on this point:

If we want expected-utility-maximisation to rule anything out, we need to say something about the objects of the agent's preference. And once we do that, we can observe violations of Completeness.

Can you explain?

keith_wynroe @ 2023-08-29T01:10 (+11)

The coherence theorem part seems particularly egregious to me given how load-bearing it seems to be for a lot of his major claims. A frustration I have personally is that he often claims that no one ever comes to him with good object-level objections to his arguments, but then when they do, as in that thread, he just refuses to engage.

prisonpent @ 2023-08-28T05:27 (+2)

his first attempted refutation of the post 

this link is broken

sphor @ 2023-08-28T11:08 (+1)

Thanks, fixed. 

David Mathers @ 2023-08-27T12:11 (+92)

I appreciate the spirit of this post as I am not a Yudkowsky fan, think he is crazy overconfident about AI, am not very keen on rationalism in general, and think the EA community sometimes gets overconfident in the views of its "star" members. But some of the philosophy stuff here seems not quite right to me, though none of it is egregiously wrong, and on each topic I agree that Yudkowsky is way, way overconfident. (Many professional philosophers are way overconfident too!)

As a philosophy of consciousness PhD: the view that animals lack consciousness is definitely an extreme minority view in the field, but it's not a view that no serious experts hold. Daniel Dennett has denied animal consciousness for roughly Yudkowsky-like reasons, I think. (EDIT: Actually maybe not: see my discussion with Michael St. Jules below. Dennett is hard to interpret on this, and also seems to have changed his mind to fairly definitively accept animal consciousness more recently. But his earlier stuff on this was at the very least opposed to confident assertions that we just know animals are conscious, and that any theory that says otherwise is crazy.) And more definitely Peter Carruthers (https://scholar.google.com/citations?user=2JF8VWYAAAAJ&hl=en&oi=ao) used to defend the view that animals lack consciousness because they lack a capacity for higher-order thought. (He changed his mind in the last few years, but I personally didn't find his explanation as to why made much sense.) Likewise, it's far from obvious that higher-order thought views imply any animals other than humans are conscious. And still less obvious that they imply all mammals are conscious.* Indeed a standard objection to HOT views, mentioned in the Stanford Encyclopedia of Philosophy page on them last time I checked, is that they are incompatible with animal consciousness. Though that does of course illustrate that you are right that most experts take it as obvious that mammals are conscious.

As for the zombies stuff: you are right that Yudkowsky is mistaken and mistaken for the reasons you give, but it's not a "no undergraduate would make this" error. Trust me. I have marked undergrads a little, though I've never been a Prof. Far worse confusion is common. It's not even "if an undergrad made this error in 2nd year I'd assume they didn't have what it takes to become a prof". Philosophy is really hard and the error is quite subtle, plus many philosophers of mind do think you can get from the possibility of zombies to epiphenomenalism given plausible further assumptions, so when Yudkowsky read into the topic he probably encountered lots of people assuming accepting the possibility of zombies commits you to epiphenomenalism. But yes, the general lesson of "Dave Chalmers, not an idiot" is obviously correct.

As for functional decision theory: I read Wolfgang Schwarz's critique when it came out, and for me the major news in it was that a philosopher as qualified as Wolfgang thought it was potentially publishable given revisions. It is incredibly hard to publish in good philosophy journals; at the very top end they have rejection rates of >95%. I have literally never heard of a non-academic doing so without even an academic coauthor. I'd classify it as a genuinely exceptional achievement to write something Wolfgang gave a revise and resubmit verdict to with no formal training in philosophy. I say this not because I think it means anyone should defer to Yudkowsky and Soares--again, I think their confidence on AI doom is genuinely crazy--but just because it feels a bit unfair to me to see what was actually an impressive achievement denigrated.

*My own view is that IF animals are not capable of higher-order thought there isn't even a fact of the matter about whether they are conscious, but that only justifies downweighting their interests to a less than overwhelming degree, and so doesn't really damage arguments for veganism. Though it would affect how much you should prioritise animals v. humans.

MichaelStJules @ 2023-08-29T07:59 (+10)

FWIW, I'm confused about Dennett's current position on animal consciousness. Still, my impression is that he does attribute consciousness to many other animals, but believes that human consciousness is importantly unique because of language and introspection.

In this panel discussion, Dennett seemed confident that chickens and octopuses are conscious, directly answering that they are without reservation, and yes on bees after hesitating, but acknowledging their sophisticated capacities and going back to gradualism and whether what they do "deserves to be called consciousness at all".

 

Some other recent writing by him or about his views:

But Dennett thinks these things are like evolution, essentially gradualist, without hard borders. The obvious answer to the question of whether animals have selves is that they sort of have them. He loves the phrase “sort of.” Picture the brain, he often says, as a collection of subsystems that “sort of” know, think, decide, and feel. These layers build up, incrementally, to the real thing. Animals have fewer mental layers than people—in particular, they lack language, which Dennett believes endows human mental life with its complexity and texture—but this doesn’t make them zombies. It just means that they “sort of” have consciousness, as measured by human standards.

https://www.newyorker.com/magazine/2017/03/27/daniel-dennetts-science-of-the-soul

 

To appreciate what I see to be Chalmers’ second contribution, we first need to distinguish two different illusions: the malignant theorists’ illusion and the benign user illusion. Chalmers almost does that. He asserts: ‘To generate the hard problem of consciousness, all we need is the basic fact that there is something it is like to be us’ (2018, p. 49). No, all we need is the fact that we think there is something it is like to be us. Dogs presumably do not think there is something it is like to be them, even if there is. It is not that a dog thinks there isn’t anything it is like to be a dog; the dog is not a theorist at all, and hence does not suffer from the theorists’ illusion. The hard problem and meta-problem are only problems for us humans, and mainly just for those of us humans who are particularly reflective. In other words, dogs aren’t bothered or botherable by problem intuitions. Dogs — and, for that matter, clams and ticks and bacteria — do enjoy (or at any rate do not suffer from) a sort of user illusion: they are equipped to discriminate and track only some of the properties in their environment.

https://www.ingentaconnect.com/content/imp/jcs/2019/00000026/f0020009/art00004

 

I have long stressed the fact that human consciousness is vastly different from the consciousness of any other species, such as apes, dolphins, and dogs, and this “human exceptionalism” has been met with little favor by my fellow consciousness theorists. Yes, of course, human beings, thanks to language, can do all sorts of things with their consciousness that their language-less cousin species cannot, but still, goes the common complaint, I have pushed my claims into extreme versions that are objectionable, and even offensive. Not wanting to stir up more resistance than necessary to my view, I have on occasion strategically soft-pedaled my claims, allowing animals to be heterophenomenological subjects (of sorts) thanks to their capacity to inform experimenters (if not tell them), but now, my thinking clarified by Rosenthal’s, I want to recant that boundary blurring and re-emphasize the differences, which I think Rosenthal may underestimate as well. “Thoughts are expressible in speech,” he writes (p. 155), but what about the higher-order thoughts of conscious animals? Are they? They are not expressed in speech, and I submit that it is a kind of wishful thinking to fill the minds of our dogs with thoughts of that sophistication. So I express my gratitude to Rosenthal for his clarifying account by paying him back with a challenge: how would he establish that non-speaking animals have higher-order thoughts worthy of the name? Or does he agree with me that the anchoring concept of consciousness, human consciousness, is hugely richer than animal consciousness on just this dimension?

https://davidrosenthal.org/Dennett-on-Seeming-to-Seem.pdf

David Mathers @ 2023-08-29T08:47 (+2)

Maybe that is right. Dennett is often quite slippery (I think he believes that precision actually makes philosophy worse a lot of the time.) 

He also just may have changed his position. The SEP article on Animal Consciousness at one point refers to 'Dennett (who argues that consciousness is unique to humans)', but the reference is to a paper from 1995. Looking at the first page of the paper they cite, I think it was the one I vaguely remembered as "Dennett denies animal consciousness for Yudkowsky-like reasons". But having skimmed some of the paper again, I found it hard to tell this time if the reading of it as flat-out denying that animals are conscious was right. It seemed like Dennett *might* just be saying "we don't know, but it's not obvious, and for some animals, there probably isn't even a fact of the matter". (This is basically my view too, I think, except that unlike Dennett I don't think this much damages the case for animal rights.) But even that is inconsistent with "anyone who thinks mammals aren't conscious is totally out-of-step with experts in the field, I think." And it's possible the stronger reading of Dennett as actually denying animal consciousness is correct: I only skimmed it, and the SEP thinks so. 

Omnizoid @ 2023-08-27T12:46 (+2)

This is I think a really good comment.  The animal consciousness stuff I think is a bit crazy.  If Dennett thinks that as well . . . well, I never gave Dennett much deference.  

I was exaggerating a bit when I said that no undergraduate would make that error.  

I don't think that Schwarz saying he might publish it is much news.  I have a friend who is an undergraduate in his second year and he has 5 or 6 published philosophy papers--I'm also an undergraduate and I have one forthcoming.  

Do we know what journal Eliezer was publishing in?  I'd expect it not to get published in even a relatively mediocre journal, but I might be wrong. 

David Mathers @ 2023-08-27T13:14 (+21)

Thanks!

I don't know the journal Schwarz rejected it for, no. If your friend has 5 or 6 publications as an undergrad then either they are a genius, or they are unusually talented and also very ruthless about identifying small, technical objections to things famous people have said, or they are publishing in extremely mediocre journals. The second and third things are probably not what's going on when Wolfgang gives an R&R to the Yudkowsky/Soares fdt paper. It is an attempt to give a big new fundamental theory, not a nitpick. And regardless of the particular journal Wolfgang was reviewing for, I don't think (could be wrong though!) that the reason why it is easy to get published in the crappiest journals is that really sharp philosophers with multiple publications in top 5-10 journals drop their standards to a trivial level when reviewing for them. No doubt they drop their standards somewhat, but those journals probably have worse reviewers quite a lot of the time. (That's only a guess though.)

More importantly, a bit of googling revealed to me that Soares, though not Yudkowsky, is a coauthor on a paper defending fdt in Journal of Philosophy. (With Ben Levinstein, who is an actual philosophy prof.) That alone takes fdt well out of the crank zone in my view. J Phil is a clear top 10 journal, probably top 5. It probably rejects around 95% of the papers sent to it. Admittedly there's a limit to how much credit Eliezer should get for a paper he didn't write, but insofar as fdt is "his" idea (I don't know how much he developed it versus Soares and other MIRI people), this is the greenest of Philosophy green flags.

Omnizoid @ 2023-08-27T13:18 (+1)

Okay yeah, fair.  Here's my friend's publication record https://philpeople.org/profiles/amos-wollen

Though worth noting that the other reviewer rejected it.  It's not clear how common it is for one reviewer to be willing to accept your paper after heavy revisions.

David Mathers @ 2023-08-27T14:02 (+10)

Fair point that many rejected things probably received one "revise and resubmit".

The link to your friend's philpapers page is broken, but I googled him and I think mediocre journals is probably mostly the right answer, mixed a bit with "your friend is very talented". (Though to be clear even 5 mediocre pubs is impressive for a 2nd year undergrad, and I would predict your friend can go to a good grad school if he wants to.) Philosophia is a generalist journal I never read a single paper in during the 15 or so years I was reading philosophy papers generally, which is a bad sign. I'd never heard of "Journal of Ayn Rand Studies" but I can think of at most 1 possible example of a good journal dedicated to a single philosopher, and my guess is most people competent to review a philosophy paper either hate Rand or have never read her. (This is the one journal of the 4 where even an undergrad pub might not mean much, beyond the selection effect of mostly only fairly talented students trying to publish in the first place.) I'd never heard of Journal of Value Inquiry either. But I did find a Leiter Reports poll ranking it 18th out of moral and political philosophy journals, so publishing in it is probably a non-trivial achievement. Never heard of History and Philosophy of the Life Sciences, nor would I expect to have even if it was good. Your friend's paper there looks like a straightforward historical discussion of what Darwin himself said evolution implied about epistemology rather than a defence of an original philosophical view, though.

Omnizoid @ 2023-08-28T04:25 (+1)

Philosophia has I think a publication rate decently below 50%.  

JoshuaBlake @ 2023-08-27T13:48 (+2)

Fixed link

Ariel Simnegar @ 2023-08-27T06:06 (+71)

Eliezer's perspective on animal consciousness is especially frustrating because of the real harm it's caused to rationalists' openness to caring about animal welfare.

Rationalists are much more likely than highly engaged EAs to either dismiss animal welfare outright, or just not think about it since AI x-risk is "obviously" more important. (For a case study, just look at how this author's post on fish farming was received between the EA Forum and LessWrong.) Eliezer-style arguments about the "implausibility" of animal suffering abound. Discussions of the implications of AI outcomes on farmed or wild animals (i.e. almost all currently existing sentient beings) are few and far between.

Unlike Eliezer's overconfidence in physicalism and FDT, Eliezer's overconfidence in animals not mattering has serious real-world effects. Eliezer's views have huge influence on rationalist culture, which has significant influence on those who could steer future TAI. If the alignment problem will be solved, it'll be really important for those who steer future TAI to care about animals, and be motivated to use TAI to improve animal welfare.

niplav @ 2023-08-27T17:14 (+61)

I would very much prefer it if one didn't appeal to the consequences of the belief about animal moral patienthood, and instead argue whether animals in fact are moral patients or not, or whether the question is well-posed.

For this reason, I have strong-downvoted your comment.

Ariel Simnegar @ 2023-08-27T19:11 (+14)

Thanks for describing your reasons. My criterion for moral patienthood is described by this Brian Tomasik quote:

When I realize that an organism feels happiness and suffering, at that point I realize that the organism matters and deserves care and kindness. In this sense, you could say the only "condition" of my love is sentience.

Many other criteria for moral patienthood which exclude animals have been proposed. These criteria always suffer from some combination of the following:

  1. Arbitrariness. For example, "human DNA is the criterion for moral patienthood" is just as arbitrary as "European DNA is the criterion for moral patienthood".
  2. Exclusion of some humans. For example, "high intelligence is the criterion for moral patienthood" excludes people who have severe mental disabilities.
  3. Exclusion of hypothetical beings. For example, "human DNA is the criterion for moral patienthood" would exclude superintelligent aliens and intelligent conscious AI. Also, if some people you know were unknowingly members of a species which looked/acted much like humans but had very different DNA, they would suddenly become morally valueless.
  4. Collapsing to sociopathy or nihilism. For example, "animals don't have moral patienthood because we have power over them" is just nihilism, and if a person used that justification to act the way we do towards farmed animals towards other humans, they'd be locked up.

The most parsimonious definition of moral patient I've seen proposed is just "a sentient being". I don't see any reason why I should add complexity to that definition in order to exclude nonhuman animals. The only motivation I can think of for doing this would be to compromise on my moral principles for the sake of the pleasure associated with eating meat, which is untenable to a mind wired the way mine is.

seanrson @ 2023-08-27T18:55 (+8)

I think the objection comes from the seeming asymmetry between over-attributing and under-attributing consciousness. It's fine to discuss our independent impressions about some topic, but when one's view is a minority position and the consequences of false beliefs are high, isn't there some obligation of epistemic humility?

niplav @ 2023-08-28T15:16 (+7)

Disagreed, animal moral patienthood competes with all the other possible interventions effective altruists could be doing, and does so symmetrically (the opportunity cost cuts in both directions!).

Max H @ 2023-08-27T20:22 (+16)

It's frustrating to read comments like this because they make me feel like, if I happen to agree with Eliezer about something, my own agency and ability to think critically is being questioned before I've even joined the object-level discussion.

Separately, this comment makes a bunch of mostly-implicit object-level assertions about animal welfare and its importance, and a bunch of mostly-explicit assertions about Eliezer's opinions and influence on rationalists and EAs, as well as the effect of this influence on the impacts of TAI.

None of these claims are directly supported in the comment, which is fine if you don't want to argue for them here, but the way the comment is written might lead readers who agree with the implicit claims about the animal welfare issues to accept the explicit claims about Eliezer's influence and opinions and their effects on TAI with a less critical eye than if these claims were otherwise more clearly separated.

For example, I don't think it's true that a few FB posts / comments have had a "huge influence" on rationalist culture. I also think that worrying about animal welfare specifically when thinking about TAI outcomes is less important than you claim. If we succeed in being able to steer TAI at all (unlikely, in my view), animals will do fine - so will everyone else. At a minimum, there will also be no more global poverty, no more malaria, and no more animal suffering. Even if the specific humans who develop TAI don't care at all about animals themselves (not exactly likely), they are unlikely to completely ignore the concerns of everyone else who does care. But none of these disagreements have much or any bearing on whether I think animal suffering is real (I find this at least plausible) and whether that's a moral horror (I think this is very likely, if the suffering is real).

Linch @ 2023-08-27T21:02 (+29)

If we succeed in being able to steer TAI at all (unlikely, in my view), animals will do fine - so will everyone else

I'm not personally convinced fwiw; this line of reasoning has some plausibility but feels extremely out-of-line with approximately every reasonable reference class TAI could be in.

Ariel Simnegar @ 2023-08-27T22:08 (+9)

I apologize for phrasing my comment in a way that made you feel like that. I certainly didn't mean to insinuate that rationalists lack "agency and ability to think critically" -- I actually think rationalists are better at this than almost any other group! I identify as a rationalist myself, have read much of the sequences, and have been influenced on many subjects by Eliezer's writings.

I think your critique that my writing gave the impression that my claims were all self-evident is quite fair. Even I don't believe that. Please allow me to enumerate my specific claims and their justifications:

  1. Caring about animal welfare is important (99% confidence): Here's the justification I wrote to niplav. Note that this confidence is greater than my confidence that animal suffering is real. This is because I think moral uncertainty means caring about animal welfare is still justified in most worlds where animals turn out not to suffer.
  2. Rationalist culture is less animal-friendly than highly engaged EA culture (85% confidence): I think this claim is pretty evident, and it's corroborated here by many disinterested parties.
  3. Eliezer's views on animal welfare have had significant influence on views of animal welfare in rationalist culture (75% confidence):
    1. A fair critique is that sure, the sequences and HPMOR have had huge influence on rationalist culture, but the claim that Eliezer's views in domains that have nothing do with rationality (like animal welfare) have had outsize influence on rationalist culture is much less clear.
    2. My only pushback is the experience I've had engaging with rationalists and reading LessWrong, where I've just seen rationalists reflecting Eliezer's views on many domains other than "rationality: A-Z" over and over again. This very much includes the view that animals lack consciousness. Sure, Eliezer isn't the only influential EA/rationalist who believes this, and he didn't originate that idea either. But I think that in the possible world where Eliezer was a staunch animal activist, rationalist discourse around animal welfare would look quite different.
  4. Rationalist culture has significant influence on those who could steer future TAI (80% confidence):
    1. NYT: "two of the world’s prominent A.I. labs — organizations that are tackling some of the tech industry’s most ambitious and potentially powerful projects — grew out of the Rationalist movement...Elon Musk — who also worried A.I. could destroy the world and met his partner, Grimes, because they shared an interest in a Rationalist thought experiment — founded OpenAI as a DeepMind competitor. Both labs hired from the Rationalist community."
    2. Sam Altman: "certainly [Eliezer] got many of us interested in AGI, helped deepmind get funded at a time when AGI was extremely outside the overton window, was critical in the decision to start openai, etc".

On whether aligned TAI would create a utopia for humans and animals, I think the arguments for pessimism--especially about the prospects for animals--are serious enough that having TAI steerers care about animals is very important.

Max H @ 2023-08-27T23:18 (+2)

Thank you. I don't have any strong objections to these claims, and I do think pessimism is justified. Though my guess is that a lot of people at places like OpenAI and DeepMind do care about animal welfare pretty strongly already. Separately, I think that it would be much better in expectation (for both humans and animals) if Eliezer's views on pretty much every other topic were more influential, rather than less, inside those places.

My negative reaction to your initial comment was mainly due to the way critiques (such as this post) of Eliezer are often framed, in which the claims "Eliezer's views are overly influential" and "Eliezer's views are incorrect / harmful" are combined into one big attack. I don't object to people making these claims in principle (though I think they're both wrong, in many cases), but when they are combined it requires more effort to separate and refute.

(Your comment wasn't a particularly bad example of this pattern, but it was short and crisp and I didn't have any other major objections to it, so I chose to express the way it made me feel on the expectation that it would be more likely to be heard and understood compared to making the point in more heated disagreements.)

Lorenzo Buonanno @ 2023-08-27T22:54 (+5)

animal consciousness is especially frustrating because of the real harm it's caused to rationalists' openness to caring about animal welfare.


I think you might be greatly overestimating Eliezer's influence on this.

According to Wikipedia: "In a 2014 survey of 406 US philosophy professors, approximately 60% of ethicists and 45% of non-ethicist philosophers said it was at least somewhat "morally bad" to eat meat from mammals. A 2020 survey of 1812 published English-language philosophers found that 48% said it was permissible to eat animals in ordinary circumstances, while 45% said it was not."

It really does not surprise me that people who give great importance to rationality value animals much less than the median EA, given that non-human animals probably lack most kinds of advanced meta-level thinking and might plausibly not be "aware of their own awareness".

Even in EA, there are many great independent thinkers who are uncertain about whether animals should be members of the "moral community"

My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern.

 

Now, I don't think animals matter as much as humans. I think there's a very large chance they don't matter at all, and that there's just no one inside to suffer, but to be safe I'll assume they do.


I think that sometimes in EA we risk forgetting how fringe veganism is, and I don't think Yudkowsky's arguments on the importance of animal suffering influence a lot of the views in the rationalist community on the subject. As for the people at leading AI labs who might steer TAI, they seem to be very independent thinkers and are often critical of Yudkowsky's arguments (otherwise they wouldn't be working at leading AI labs in the first place).

Ariel Simnegar @ 2023-08-28T01:42 (+7)

For what it's worth, both Holden and Jeff express considerable moral uncertainty regarding animals, while Eliezer does not. Continuing Holden's quote:

My own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more. However, I don’t think either my reflections or my intuitions are highly reliable, especially given that many thoughtful people disagree. And if chickens do indeed merit moral concern, the amount and extent of their mistreatment is staggering. With worldview diversification in mind, I don’t want us to pass up the potentially considerable opportunities to improve their welfare.

I think the uncertainty we have on this point warrants putting significant resources into farm animal welfare, as well as working to generally avoid language that implies that only humans are morally relevant.

I agree with you that it's quite difficult to quantify how much Eliezer's views on animals have influenced the rationalist community and those who could steer TAI. However, I think it's significant--if Eliezer were a staunch animal activist, I think the discourse surrounding animal welfare in the rationalist community would be different. I elaborate upon why I think this in my reply to Max H.

EliezerYudkowsky @ 2023-08-27T16:47 (+67)

The first object-level issue the author talks about is whether the brain is close to the Landauer limit.  No particular issue is cited, only that somebody else claimed a lot of authority and claimed I was wrong about something; what exactly is not shown.

The brain obviously cannot be operating near the Landauer limit.  Thousands of neurotransmitter molecules and thousands of ions need to be pumped back to their original places after each synaptic flash.  Each of these is a thermodynamically irreversible operation, and it staggers the imagination that every ion pumped en masse back out of some long axon or dendrite, after ions flooded en masse into it to propagate electrical depolarization, is part of a well-designed informational algorithm that could not be simplified.  Any calculation saying that biology is operating close to the Landauer limit has reached a face-value absurdity.
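
For rough scale, here is a back-of-the-envelope comparison; the inputs are standard order-of-magnitude figures that I am assuming for illustration, not numbers taken from the thread.

```python
# Back-of-the-envelope comparison of brain energy use per synaptic event with
# the Landauer bound. All inputs are rough order-of-magnitude assumptions.

import math

k_B = 1.380649e-23                        # Boltzmann constant, J/K
T = 310.0                                 # body temperature, K
landauer = k_B * T * math.log(2)          # minimum energy to erase one bit (~3e-21 J)

brain_power = 20.0                        # W, rough whole-brain power consumption
synapses = 1e14                           # rough synapse count
avg_rate = 1.0                            # Hz, rough average rate of synaptic events
events_per_second = synapses * avg_rate

energy_per_event = brain_power / events_per_second    # J per synaptic event
bits_at_landauer = energy_per_event / landauer        # bit erasures that energy could buy

print(f"Landauer bound at 310 K: {landauer:.1e} J per bit")
print(f"Energy per synaptic event: {energy_per_event:.1e} J")
print(f"Equivalent Landauer-limit bit erasures per event: {bits_at_landauer:.1e}")
```

On these rough assumptions, each synaptic event dissipates the energy of tens of millions of minimal bit erasures, which is the sort of gap the paragraph above is pointing at.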

Of course, this may not seem to address anything, since OP failed to state what I was putatively wrong about and admits to not understanding it themselves; I can't refute what isn't shown.

The first substantive criticism OP claims to understand themselves is on Zombies.

I say:

Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

The author would have you believe this is a ludicrous straw position.

I invite anyone to simply read the opening paragraphs of the SEP encyclopedia entry on P-zombies:

If zombies are to be counterexamples to physicalism, it is not enough for them to be behaviorally and functionally like normal human beings: plenty of physicalists accept that merely behavioral or functional duplicates of ourselves might lack qualia. Zombies must be like normal human beings in all physical respects, and they must have the physical properties that physicalists suppose we have. This requires them to be subject to the causal closure of the physical, which is why their supposed lack of consciousness is a challenge to physicalism. If instead they were to be conceived of as creatures whose behavior could not be explained physically, physicalists would have no reason to bother with the idea: there is plenty of evidence that, as epiphenomenalists hold, our movements actually are explicable in physical terms (see e.g. Papineau 2002).

This is a debate that has gone on for a very long time in philosophy.  I'd say it's gone on too long.

But whether or not particular thought experiments, by seeming metaphysically possible, license other conclusions about metaphysics, is exactly the entire substance.  The base thought experiment is not or should not be in dispute: it's a being whose physics duplicate the physics of a human being including the causal closure of what is said to be 'physics', i.e., all of the causes of behavior are included into the p-zombie.  Some people go on at fantastic length from this to say that it demonstrates the possibility of an extra consciousness that they call "epiphenomenal", and some say that it demonstrates the possibility of a nonphysical consciousness that they don't call "epiphenomenal", but it's my position that somewhere along the way of a long argument they have dropped the ball on the original thought experiment; whatever they call "consciousness" that isn't in the supposed p-zombie, it can't be among the causes of why we talk about consciousness, or why our verbally reportable stream of thought talks about consciousness, etc, because the zombie behaves outwardly like we do and also includes the minimal closure of the causes of that physical behavior.

The author of the above post has misrepresented what my zombies argument was about.  It's not that I think philosophers openly claim that p-zombies demonstrate epiphenomenalism; it's that I think philosophers are confused about what this thought experiment demonstrates.

The author having been shown to be wrong on the first points addressed, which I chose in order rather than selectively sampling, I hope you accept this as obvious evidence that the rest would be no better if you looked into them in detail or I responded in detail.  For a post claiming to show that I'm often grossly wrong, actual quotes from me, with linked context and dates attached, are remarkably thin on the ground.

You will mark that in this comment I first respond to a substantive point and show it to be mistaken before I make any general criticism of the author; which can then be supported by that previously shown, initial, first-thing, object-level point.  You will find every post of the Less Wrong sequences written the same way.

As the entire post violates basic rules of epistemic conduct by opening with a series of not-yet-supported personal attacks, I will not be responding to the rest in detail.  I'm sad about how anything containing such an egregious violation of basic epistemic conduct got this upvoted, and wonder about sockpuppet accounts or alternatively a downfall of EA.  The relevant principle of epistemic good conduct seems to me straightforward: if you've got to make personal attacks (and sometimes you do), make them after presenting your object-level points that support those personal attacks.  This shouldn't be a difficult rule to follow, or follow much better than this; and violating it this hugely and explicitly is sufficiently bad news that people should've been wary about this post and hesitated to upvote it for that reason alone.

Omnizoid @ 2023-08-27T22:10 (+19)

Hi Eliezer.  I actually do quite appreciate the reply because I think that if one writes a piece explaining why someone else is systematically in error, it's important that the other person can reply. That said . . . 

You are misunderstanding the point about causal closure.  If there were some isomorphic physical law that resulted in the same physical states of affairs that consciousness in fact brings about, the physical would be causally closed.  I didn't say that your description of what a zombie is was the misrepresentation.  The point you misrepresented was when you said "It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism"."

No, the term is non-physicalism.  This does not entail epiphenomenalism.  If you say the standard term for believers in zombies is epiphenomenalists, then even if you have a convincing argument for why believers in zombies must be epiphenomenalists (which you don't), it is still totally misleading to say the standard term is something totally different from what it is.  I think I have a convincing argument for why Objective List Theorists should accept hypersensitivity--the idea that slight changes in well-being supervene on arbitrarily small changes in welfare goods--but it would be misleading to say "the standard term for the belief in objective list theory is belief in hypersensitivity."

The quote you give from the SEP page is "If zombies are to be counterexamples to physicalism, it is not enough for them to be behaviorally and functionally like normal human beings: plenty of physicalists accept that merely behavioral or functional duplicates of ourselves might lack qualia."  But here behavior and function are about the external outputs of the thing--you could have a behavioral and functional duplicate of me made of silicon.  However, it wouldn't be a physical duplicate, because it would be made of different stuff.  That is the point being made.

If I am wrong, why is it that Chalmers and the SEP page both deny that you have to be an epiphenomenalist to be a nonphysicalist?

You said "It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism".

(For those unfamiliar with zombies, I emphasize that this is not a strawman.  See, for example, the SEP entry on Zombies. "

However, the SEP entry, as I note in the article, explicitly says that you do not have to be an epiphenomenalist to be a zombie believer.  

"True, the friends of zombies do not seem compelled to be epiphenomenalists or parallelists about the actual world. They may be interactionists, holding that our world is not physically closed, and that as a matter of actual fact nonphysical properties do have physical effects."

As for the final point about sockpuppets: if a moderator would like to look into whether there are sockpuppet accounts, be my guest.  I'd be willing to bet at 9.5 to .5 odds that if a moderator looked into it, they would not find lots of newly created accounts.

I'd also be happy to bet about whether, if we ask a philosopher of mind like Chalmers, Goff, or Chappell which of us is correct about the zombie argument, they would say me! 

Finally, you suggest that saying bad things about people before addressing the object level is bad conduct.  Why?  You never give a reason for this.  It seems to me that if a post is arguing that some public figure should not be deferred to as much as he currently is, on account of his frequent errors, there is nothing wrong with stating that aim at the outset.

David Mathers @ 2023-08-28T09:48 (+23)

'Chalmers, Goff, or Chappell'  This is stacking the deck against Eliezer rather unfairly; none of these three are physicalists, even though physicalism is the plurality position in the field, and I think still a slight majority: https://survey2020.philpeople.org/survey/results/4874

Omnizoid @ 2023-08-28T14:25 (+5)

We could ask a physicalist too--Frankish, Richard Brown, etc. 

Devin Kalish @ 2023-08-27T23:32 (+10)

Re Chalmers agreeing with you: he would; he said as much in the LessWrong comments, and I recently asked him in person and he confirmed it. In Yudkowsky's defense, it is a very typical move among illusionists to argue that zombieists can't really escape epiphenomenalism, not just some ignorant outsider's move (I think I recall Keith Frankish and Francois Kammerer both making arguments like this). That said, I remain frustrated that the post hasn't been updated to clarify that Chalmers disagrees with this characterization of his position.

Omnizoid @ 2023-08-28T00:18 (+4)

Yes, there are some arguments of questionable efficacy for the conclusion that zombieism entails epiphenomenalism.  But notably:

  1. Eliezer hasn't given any such argument. 
  2. Eliezer said that zombieists are by definition epiphenomenalists.  That's just flatly false. 
Pseudotruth @ 2023-08-30T14:47 (+5)

Eliezer quoted the SEP entry as support for his position, and you, in your response, cut off the part of that quote which contained the support and only responded to the remaining part, which did not contain the supporting point (e.g. the key words: causal closure). This seems bad faith to me, even though I think you're right that Eliezer did not account for interactionist dualism (though I disagree that it is necessarily a critical error; I don't think one should be expected to note every possibility, no matter how low its probability, in the course of an argument).

Omnizoid @ 2023-08-31T00:05 (+5)

He didn't quote it--he linked to it.  I didn't quote the broader section because it was ambiguous and confusing.  The reason not accounting for interactionist dualism matters is that it means he misstates the zombie argument, and his version is utterly unpersuasive.

Sinclair Chen @ 2023-08-30T17:36 (+11)

Unfortunate. I find the author's first two sections weak, but I find the third section, about animal consciousness, interesting, concrete, falsifiable, clearly written, and novel to me.

Linch @ 2023-08-28T09:41 (+8)

The relevant principle of epistemic good conduct seems to me straightforward: if you've got to make personal attacks (and sometimes you do), make them after presenting your object-level points that support those personal attacks.  This shouldn't be a difficult rule to follow, or follow much better than this; and violating it this hugely and explicitly is sufficiently bad news that people should've been wary about this post and hesitated to upvote it for that reason alone.

This might well be a reasonable norm to follow, and it might well even be the type of norm that enlightened rational actors can converge on as good, but I think it's far from settled practice, and I don't think Omnizoid is defecting on established norms at least in this instance (in the way that e.g., doxxing or faking data is widely considered defecting in most internet discussions).

Larks @ 2023-08-27T17:42 (+64)

If you're going to claim he is 'egregiously' wrong I would hope for clearer examples, like that he said the population of China was 100 million, or that the median apartment in Brooklyn cost $100k, or something like that. These three examples seem both cherrypicked - anyone with a long career as a genuine intellectual innovator will make claims on a wide variety of subjects, so three is nothing like what is required to claim 'frequent' - and ambiguous. 

Chris Leong @ 2023-08-29T01:59 (+10)

FDT isn’t cherry-picked, as Eliezer has described himself as a decision theorist and his main contribution is TDT (which later developed into FDT).

Larks @ 2023-08-29T13:30 (+11)

That seems correct to me. Perhaps not by coincidence, I also think the case against FDT is the weakest of his three, with some of the counterexamples being cases where I'm happy to bite the bullet, and the others seeming no worse than the objections to CDT, EDT, TDT, UDT etc.

seanrson @ 2023-08-27T18:47 (+6)

Maybe the examples are ambiguous, but they don't seem cherrypicked to me. Aren't these some of the topics Yudkowsky is most known for discussing? It seems to me that the cherrypicking criticism would apply to opinions about, I don't know, monetary policy, not issues central to AI and cognitive science.

trevor1 @ 2023-08-27T19:48 (+6)

None of these issues are "central" to AI or the cognitive science that's relevant to AI, AI alignment, or human upskilling. The author's area of interest is more about consciousness, animal welfare, and qualia. 

The issues are the sole thing justifying Omnizoid's rather heated indictments against Yudkowsky, such as:

Eliezer has swindled many of the smartest people into believing a whole host of wildly implausible things. Some of my favorite writers—e.g. Scott Alexander—seem to revere Eliezer. It’s about time someone exposed the mountain of falsehoods on which his arguments rest. If one of the world’s most influential thinkers is just demonstrably wrong about lots of topics, often in ways so egregious that they demonstrate very basic misunderstandings, then that’s quite newsworthy, just as it would be if a presidential candidate supported a slate of terrible policies.

Most readers will only read the accusations in the introduction and then bounce off the evidence backing them, because all of them are topics that, like string theory, only a handful of people on earth are capable of engaging with. It just so happens that the author is one of them. Virtually nobody can read the actual arguments behind this post without dedicating >4 hours of their life to it, which makes it pretty well optimized to attract attention and damage Yudkowsky's reputation as much as possible with effectively zero accountability.

Omnizoid @ 2023-08-27T22:15 (+4)

I tried very hard to phrase everything as clearly as possible.  But if people's takeaway is "people who know about philosophy of mind and decision theory find Eliezer's views there deeply implausible and indicative of basic misunderstandings," then I don't think that's the end of the world.  Of course, some would disagree. 

Larks @ 2023-08-27T23:35 (+5)

If I was trying to list central historical claims that Eliezer made which were controversial at the time I would start with things like:

  • AGI is possible.
  • AI alignment is the most important issue in the world.
  • Alignment will not be easy.
  • People will let AGIs out of the box.
Holly_Elmore @ 2023-08-27T10:11 (+60)

I like the general point about recognizing Eliezer’s flaws and breaking through lazy dogmas that have been allowed to take hold just because he said them. I think it’s important for readers to know that Eliezer is arrogant, in case that doesn’t come across in his writing, but I don’t think these examples make the case that he’s frequently or egregiously wrong. Just sometimes wrong.

I am annoyed by the effect of that one Facebook post on the entire rationalist community’s opinions of animals, but I can’t put all the blame on Eliezer for that. He wrote one comment on the Forum that he shared to Facebook, and in it he admitted that eating animals is a sin his society lets him get away with and that he wouldn’t eat animals if he felt he could get adequate nutrition otherwise. He’s not making strong claims about animal consciousness— just giving his take. I think he’s rationalizing in places, and I think a lot of people were grateful for the excuse not to give the matter any more thought, but I don’t think it’s fair to act like he goes around parading this view when he doesn’t. The only text we have is that decade-old comment.

Seems like a low blow to say his strength isn’t in forming true beliefs, the thing he wrote the Sequences about, when most of your complaints are about him being arrogant or not respecting expertise, not about him being wrong especially often.

Arepo @ 2023-08-27T10:20 (+4)

I have a distinct memory, albeit one which could plausibly be false, of Eliezer once stating that he was '100% sure that nonhuman animals aren't conscious' because of his model of consciousness. If he said it, it's now been taken down from whichever site it appeared on. I'm now genuinely curious whether anyone else remembers this (or some actual exchange on which my psyche might have based it).

Holly_Elmore @ 2023-08-31T03:56 (+3)

I would be shocked if Eliezer ever said he was “100% sure” about anything. It would just sound gauche coming from him.

Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.

So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.

https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities
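
In standard notation, the log-odds point in the quoted passage amounts to the following (a minimal sketch, with $H$ for a hypothesis and $E$ for the evidence):

\[
\operatorname{logit}(p) = \log\frac{p}{1-p}, \qquad
\operatorname{logit}\big(P(H\mid E)\big) = \operatorname{logit}\big(P(H)\big) + \log\frac{P(E\mid H)}{P(E\mid \neg H)}.
\]

Each update adds a finite log likelihood ratio, and $\operatorname{logit}(p) \to +\infty$ as $p \to 1$ (and $\to -\infty$ as $p \to 0$), so moving any ordinary prior to a probability of exactly 1 or 0 would require infinitely strong cumulative evidence.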

Omnizoid @ 2023-08-31T04:13 (+3)

Wait, sorry, it’s hard to see the broader context of this comment on account of being on my phone and comment sections being hard to navigate on the EA Forum. I don’t know if I said Eliezer had 100% credence, but if I did, that was wrong.

Arepo @ 2023-08-31T16:35 (+2)

I agree - but that's why it stuck in my mind so strongly. I remember thinking how incongruous it was at the time.

Lizka @ 2023-08-27T21:22 (+55)

I think this post has some good points about overconfidence and over-deferral, but (as some others have pointed out) it seems unnecessarily inflammatory and includes jibes and rhetorical attacks I’d rather not see on the EA Forum. Examples have been pointed out by Max H here:

the language is also unnecessarily emotionally charged and inflammatory in many places. A quick sampling:

> But as I grew older and learned more, I realized it was all bullshit.

> it becomes clear that his view is a house of cards, built entirely on falsehoods and misrepresentations.

> And I spend much more time listening to Yukowsky’s followers spout nonsense than most other people.

> (phrased in a maximally Eliezer like way): ... (condescending chuckle)

I also think that you should retitle the post; I do not think that the contents defend the title to a reasonable extent, and therefore the title both feels misleading and somewhat like clickbait.

The moderators have decided to move the post to Personal Blog — the connection to EA and doing good better is not that clear. I’ll also discuss with the rest of the moderation team to see if there’s anything else we should do about this post.

SeeYouAnon @ 2023-08-28T09:26 (+85)

I have mixed feelings about this mod intervention. On the one hand, I value the way that the moderator team (including Lizka) play a positive role in making the forum a productive place, and I can see how this intervention plays a role of this sort.

On the other hand:

  1. Minor point: I think Eliezer is often condescending and disrespectful, and I think it's unlikely that anyone is going to successfully police his tone. I think there's something a bit unfortunate about an asymmetry here.
  2. More substantially: I think procedurally it's pretty bad that the moderator team acts in ways that discourage criticism of influential figures in EA (and Eliezer is definitely such a figure). I think it's particularly bad to suggest concrete, specific edits to critiques of prominent figures. I think there should probably be quite a high bar set before EA institutions (like forum moderators) discourage criticism of EA leaders (esp. with a post like this that engages in quite a lot of substantive discussion, rather than mere name-calling). (ETA: Likewise with the choice to re-tag this as a personal blogpost, which substantially buries the criticism. Maybe this was the right call, maybe it wasn't, but it certainly seems like a call to be very careful with.)
  3. I personally agree that Eliezer's overconfidence is dangerous, given that many people do take his views quite seriously (note this is purely a comment on his overconfidence; I think Eliezer has other qualities that are praiseworthy). I think that the way EA has helped to boost Eliezer's voice has, in this particular respect, plausibly caused harm. Against that backdrop, I think it's important that there be able to be robust pushback against this aspect of Eliezer.

I don't know what the right balance is here, and maybe the mod team/Lizka have already found it. But this is far from clear to me.

(P.S. While I was typing this, I accidentally refreshed, and I was happy to discover that my text had been autosaved. It's a nice reminder of how much I appreciate the work of the entire forum team, including the moderators, to make using the forum a pleasant experience. So I really do want to emphasise that this isn't a criticism of the team, or Lizka in particular. It's an attempt to raise an issue that I think is worth reflecting on for future mod action.)

Lizka @ 2023-09-06T00:11 (+50)

Thanks for the pushback. Writing some notes, and speaking only for myself (I don’t know what the other moderators think). 

  1. I think my note[1] about Personal Blog-ing this post was unambiguously bad. In practice, the decision was made because I was trying to avoid delaying the comment, someone proposed (in the moderator slack) that this post was only loosely connected to doing good effectively and should be in Personal Blog, and I didn’t question it further. 
  2. I think we probably shouldn’t have moved the post to Personal Blog, but I’m not totally sure. I’ve flip-flopped a bit about this. (I just moved the post back, although I think this doesn’t change anything at this point.) I think the bigger error is that the distinction is so messy — I had written a doc trying to clarify things last year (it was mostly focused on whether productivity-hack-style posts should go on the Frontpage or not), and we thought a bit about it when we added the Community section, but this hasn’t been resolved. I think we probably should have prioritized clearing this up earlier, but I’m once again unsure. 
  3. Relatedly, I don’t think moving the post to “Personal Blog” substantially lowered the post’s visibility (I’m not sure it did anything except put a little “personal blog” icon on it), given that the post is also in the Community section. If it were not a Community post, then I think logged-out users wouldn’t see it on the Frontpage, but I think nothing really changes for Community posts. (Not totally confident in this; I'll check with the rest of the Online Team.)
  4. I agree that in an ideal world, I (or someone else from the moderation team) would have responded sooner to the replies on my comment. But I was traveling, very busy, and didn’t think the visibility of the post was actually lowered (see #3, and see the number of comments on the post), so I didn’t prioritize this issue. (I also suspect that this got ughy, although ughiness mainly pushed back my response time today, when I came back to the thread and saw newer comments.) I don’t know if I endorse the trade-offs I made, but it’s hard for me to tell.
  5. Setting aside Personal Blog — re: the fact that this is criticism of an influential figure in EA, and moderators should avoid responding to posts like that. I think it’s very important to protect criticism, but I also think the moderators are currently over-correcting for this kind of consideration a bit (see e.g. this), and I honestly think that I want to discourage the kinds of rhetorical attacks that I saw in this post. I want to protect whistleblowing, red-teaming, disagreement, serious critical engagement with the quality of someone’s work, etc., but I don’t want to encourage the sense that if you frame your post as criticism, then it will be featured even if it is inflammatory and misleading. 

(I’m still swamped and traveling, so might continue to be slow to respond.)

  1. ^

    “The moderators have decided to move the post to Personal Blog — the connection to EA and doing good better is not that clear”

Lorenzo Buonanno @ 2023-09-08T16:57 (+12)

My personal thoughts, as I was the mod who most pushed to move this to personal blog[1]. I haven't checked this with other mods:

  • My main actionable general takeaway from this incident is that we should try to write longer and more detailed notes when taking any moderation action. We should treat moderation notes as low-context communication, and we should try to expand more on things like "violates norms" or "is not clearly related to doing more good". I'm very guilty of this; e.g., I think this was a core mistake here and here. In particular, we should always try to make it clear that criticism is welcome on the forum.

My less actionable and less general thoughts on this specific case:

  • I strongly believe that this decision was not a blunder, even if it probably was a mistake:
    • About as many people agreed as disagreed with the moderation comment (it was 21 agreed to 18 disagreed as of 3 days ago; after the post edits and recent discussion it's 22 to 23. People might be biased to agree, but I don't think more so than to disagree in this specific case.)
    • The author agreed with the decision
    • People who agree have no reason to comment and are less likely to see the moderation comment in the first place
  • In this case, there were several considerations, which made things messy. From my perspective, this post as posted was somewhat borderline on these axes, and I can see reasonable and contradicting perspectives on: 
    • The post relevance to doing more good
    • The post breaking forum norms (i.e. the insults that have since been edited)
    • Yudkowsky's relationship with EA and whether that raises or lowers the bar for acceptable criticism. As an influential voice, we should allow more criticism of him; as a critic of large parts of EA, like AI labs and animal welfare, we should make sure criticism is kind and doesn't discourage people from criticizing EA.
  • I think, in retrospect, the ideal action might have been to take mod action in the form of writing a comment asking the author to edit the post (as they did) to keep the good parts and reduce the insults (and maybe clarify the practical relevance to doing more good).
  • I think the main reasons why we didn't reply earlier to comments are that:
    • The poster agreed with the decision, so there wasn't much to change
    • Moving the post to personal blog for whatever reason didn't remove it from the frontpage, even for logged out users (idk if this is a bug, but it just showed a little icon next to the post, which didn't seem important to fix)
    • It's obvious to moderators that criticizing anyone is ok (while following norms) so we didn't feel the need to spell it out
    • I weakly wanted to reach more of a consensus in the mod team, and hear the perspectives of all moderators
  • I was wrong in not seeing any relevance to EA. EY is much more relevant to EA for many more users than I would have thought, and social reality is much more important than I thought, and arguably is a core reason for the community section.[2]
  • I feel that the "silent majority" that reads but doesn't write on the forum wants relatively more moderation than people who write lots of comments, so we should weakly keep that in mind when getting feedback in terms of "how much to moderate" (but the feedback in terms of "how to moderate" is very useful)
  • We should probably have replied earlier, even if we didn't reach a consensus on whether it was the right call or not, potentially just to surface that we were not sure it was.
  • Mostly unrelated to the above, but I really liked some of the comments in this thread. I am grateful for the standards that many commenters hold themselves to when posting, and the time they invest in sharing their expertise and thoughtful perspectives even in threads that would naturally have a tendency to devolve into fights.

Apologies for writing this quickly[3], and again I want to emphasize that this is just my personal perspective, and I haven't asked for feedback from other mods or advisors.

  1. ^

    As I (wrongly) didn't see a strong connection between this post and doing good better

  2. ^

    I might have overreacted because I have seen people loving to hate on Yudkowsky for >10 years. There used to be a subreddit dedicated to it. I haven't found comments on either side of those discussions to be particularly true, necessary or kind. I would want this forum to have less of that, but this is my personal view and shouldn't have influenced mod action

  3. ^

    I'm writing this from EAGxBerlin

SeeYouAnon @ 2023-09-07T00:40 (+8)

I appreciate the thoughtful reply. However, I don't agree with 5, which I take to be the most important claim in this reply.

Side comment: my claim isn't that moderators should avoid responding to posts that criticise prominent figures in EA. But my claim is that moderators should be cautious about acting in ways that discourage critique. I think this creates a sort of default presupposition that formal mod action should not be taken against critiques that include substantive discussion, as this one did.

I don't particularly find the comparison to the "modest proposal" post fruitful, because the current post just seems like a very different category of post. I think it's perfectly possible to not take action on substantive criticisms of leaders while taking action on "modest proposal" style posts.

While it might be reasonable to want to discourage the sort of rhetorical attacks seen in this post if all else were equal, I don't think all else was equal in this case. And while I agree that "criticism" of leaders shouldn't permit all sins, the post seemed to me to have enough substantive discussion that it shouldn't be grouped into the general category of "inflammatory and misleading".

Lorenzo Buonanno @ 2023-09-08T15:51 (+2)

Writing only my personal perspective on the moderation team's approach, I haven't checked this with other moderators or advisors

my claim isn't that moderators should avoid responding to posts that criticise prominent figures in EA. But my claim is that moderators should be cautious about acting in ways that discourage critique.

My view is that all moderators agree with this! There are just many reasonable places to draw this line, though, and both different users and different moderators have different preferences and perspectives on what the bar should be and what counts as "prominent figures in EA".

In the past, we have received feedback from some users that we should have intervened in the opposite direction in other threads about prominent figures.

SeeYouAnon @ 2023-09-07T11:11 (+2)

Also, just to say: I think these judgement calls are easy to make in the abstract, but I'm glad I don't have to make them quickly in reality when they actually have implications.

I do think the wrong call was made here, but I also think the mod team acts in good faith and is careful and reflective in their actions. I am discussing things here because I think this is how we can collectively work towards a desirable set of moderation norms. I am not mentioning these things to criticise the mod team as individuals or indeed as a group.

Grumpy Squid @ 2023-09-06T17:26 (+4)

Thanks for sharing your reasoning, openly acknowledging a mistake and explaining how it happened.

Note: the below is an observation of a structural problem, rather than of any individual person. Moderation is not an easy job, and I do believe that the Forum mods are doing their best. 

Overall, it sounds like the Forum team may not have enough capacity to adequately deal with issues like this (from your description, it sounds like, despite traveling and being busy, you were ultimately the person responsible for this). 

This could result in a sub-optimal situation where decisions like this are either delayed or made quickly (with a higher chance of mistakes). I think this is bad because the Forum is actively used by hundreds of community members, and time spent critiquing mod decisions is valuable time that isn't being spent on object-level issues. 

In my opinion, it seems like it should be higher priority for the Forum team to expand the number of dedicated moderators who are "on call" to prevent situations like this in the future.

Some notes on mod capacity:

  • From my understanding, the Forum has hired some paid moderators in the past year or two, but it seems like it may not be sufficient (possibly because of an increase in forum usage over the same time period)
  • I am also aware that the Forum is trying to hire another Content Specialist, although it is unclear whether they are replacing Lizka or adding more capacity. 
Larks @ 2023-09-06T18:35 (+10)

Moderation issues are annoying (and I agree they are too quick to go after disagreeable-but-insightful people), but adding new dedicated paid moderators seems quite expensive. Most of the time there isn't a huge issue, so their time would be wasted, and even when there is an issue you don't get certainty of improved performance - the new people might sometimes have worse ideas than the old guard. My guess (?) is that the EA Forum is already an outlier on the admin-hours / user-hours ratio.

Grumpy Squid @ 2023-09-06T20:03 (+6)

RE outlier - Do you mean an outlier in that there are more admin hours put in than other places? 

I don't think that is true, at least from my impression of a couple other places, but this is a weak impression. 

I would make the case that we probably don't want to compare the Forum to most other online communities, because unlike in other places, people are writing & sharing substantive research and trying to, in some sense, do work. Of course, there is a community / social element to it as well, but I think there is a case to see the Forum as more than just that. As a result, I think it's okay for the mod team to be an outlier. 

I'll also say that, in general, I believe CEA as an organization undervalues / underinvests in infrastructure for the EA movement and community (e.g. the Groups team, Events team, and Community Health team had been chronically understaffed until early 2022. The Forum only had ~3 FTE until 2021 and only had capacity to maintain rather than build new features. I'd argue the Community Health team is still very understaffed relative to their remit.)

Larks @ 2023-09-06T21:08 (+17)

What do you think CEA over-invests in? If you take away Online, Groups, Events and CH as all undervalued there's not much of CEA left.

Grumpy Squid @ 2023-09-06T19:55 (+2)

I think adding something like 1 FTE or 2 x 0.5 FTE moderators wouldn't be that expensive - it would add ~5% to the Forum's overall budget (currently $2M per year, per a recent comment). Onboarding and recruiting would take some time, but the process for hiring moderators (AFAIK) is less time-consuming if they are in a contract role. 

It's true that new moderators could make worse decisions, but they could also be trained by existing moderators, read up on past instances of moderation that worked / didn't, and initially run decisions by more experienced mods to reduce the chance of decreasing quality. It seems like moderators who joined in 2022 did a pretty good job, at least by Forum leadership's standards. 

Lorenzo Buonanno @ 2023-09-08T16:25 (+4)

Writing only my personal perspective. I haven't checked this with other moderators or advisors.

adding something like 1 FTE or 2 x 0.5 FTE moderators wouldn't be that expensive

I think an important cost would be the opportunity cost for what those moderators could be doing.

For me personally, the theory of change for spending more time on moderation is often not that clear. My personal theory of change is that the main value I provide via moderation is to save time/energy for Lizka and JP to focus a bit more on projects that I think are extremely valuable. (Edit: This is just my personal view! I don't work for CEA, and I think they disagree with this!)

seems like moderators who joined in 2022 did a pretty good job

As one of these mods, I think I also made some pretty clear mistakes[1], even one year into this, that I think more experienced mods wouldn't have made. I think the new mods went through a better selection process, though, so I'm optimistic that it will take less time for them to make better decisions.

Tangentially related to this point, I think 99% of the moderation action on this forum comes from users (via voting, commenting, and reporting posts). I think that's how it should be, and I'm really impressed by how well users of this forum moderate discussions, compared to e.g. serious subreddits, Twitter spheres, or Hacker News.

  1. ^

    I was also the moderator who pushed the most to move this to personal blog, as I (wrongly) didn't see a strong connection between this post and doing good better.

Lorenzo Buonanno @ 2023-09-08T16:08 (+5)

Writing only my personal perspective. I haven't checked this with other moderators or advisors.

Overall it sounds like the Forum team may not have enough capacity to adequately deal with issues like this (according to your description it sounds like despite traveling and being busy, you were ultimately the person responsible for this). [...] In my opinion, it seems like it should be higher priority for the Forum team to expand the number of dedicated moderators who are "on call" to prevent situations like this in the future.

You might be happy to hear that this has already happened to a significant extent!

There are now six active moderators, plus advisors, which is ~2x as many as there were at some points. Three of the active moderators joined in August, I think ~three weeks before this post, and the content specialist role you linked to starts with "to work with me (Lizka)", so I don't think that she's looking for a replacement.[1]

  1. ^

    I have no insider info, just going by public posts.

sphor @ 2023-09-08T12:06 (+2)

Thanks for the honesty Lizka. I appreciate it. 

I'm very disappointed about the low priority the mod team assigns to being responsive to and engaging with critical feedback about their decisions from forum users. It's very surprising to me that in this situation, with all the substantive and popular comments and votes pushing back on a potentially consequential decision and multiple people following up publicly a week later about their disappointment that the situation was left unaddressed, you are undecided on whether this situation needed addressing sooner. I find myself less interested in this forum due to this reason. 

Lorenzo Buonanno @ 2023-09-08T17:00 (+3)

Hi sphor,

I'm sorry about this, especially that it worsened your experience on the forum. I quickly wrote some reasons why it took us so long here

Nathan_Barnard @ 2023-08-28T10:26 (+81)

I strongly disagree with the claim that the connection to EA and doing good is unclear. The EA community's beliefs about AI have been, and continue to be, strongly influenced by Eliezer. It's very pertinent if Eliezer is systematically wrong and overconfident because, insofar as there's some level of deferral to Eliezer on AI questions within the EA community (which I think there clearly is), it implies that most EAs should reduce their credence in Eliezer's AI views. 

Lizka @ 2023-09-06T00:12 (+2)

Thanks for commenting — I agree with your main point, and wrote more here

Linch @ 2023-08-28T09:44 (+69)

I agree that much of the language is inflammatory, and this is blameworthy. I disagree that the connection to EA and doing good better is unclear, conditional upon the writer being substantively correct. And historically, the personal blogpost/frontpage distinction has not been contingent on correctness. (But I understand you're operating under pretty difficult tradeoffs, need to move fast, etc, so wording might not be exact). 

Omnizoid @ 2023-08-28T14:23 (+4)

Just want to say, I also agree that much of the original language was inflammatory.  I think I have fixed it to make it less inflammatory, but do let me know if there are other parts that you think are inflammatory.  

Linch @ 2023-08-29T01:25 (+10)

In your shoes, I'd remove "egregiously" from the title, but I'm not great at titles and also occupy a different epistemic status than you (eg I think FDT is better than CDT or EDT).

Lizka @ 2023-09-06T00:13 (+2)

Thanks for this comment — I left a longer reply here

sphor @ 2023-08-28T13:25 (+25)

Can you clarify the basis on which a post about an influential figure in the EA community that, according to you, makes some good points about overconfidence and over-deferral is not clearly connected to EA and doing good better? I genuinely cannot make sense of this decision or its stated justification. 

Your comment only goes into specifics about the tone and rhetoric in parts of the post. Are these factors relevant to which section a post belongs to? If so, can you clarify how? 

Lizka @ 2023-09-06T00:15 (+3)

Thanks for this comment — I left a longer comment here. In brief, I hadn't thought very hard about the decision to move the post to Personal Blog, and was in fact mostly focused on the rhetorical/inflammatory aspects of the post, and only briefly considered the strength of its relevance to EA.

Omnizoid @ 2023-08-27T22:13 (+19)

Yes, sorry I should have had it start in personal blog.  I have now removed the incendiary phrasing that you highlight.

Lizka @ 2023-09-06T00:16 (+4)

Thanks for editing your post. 

I've moved the post back to Frontpage (although I don't think this changes much) — see this comment. We don't generally move posts to Frontpage if the authors mark them as Personal Blog themselves. Do you want us to move this post back? 

SeeYouAnon @ 2023-09-05T12:22 (+10)

I don't feel particularly good that the various concerns about this mod decision were not, as far as I can tell, addressed by mods. I accept that this decision has support from some people, but a number of people have also expressed concern. My own concern got 69 upvotes and 24 agree votes. Nathan, Linch, and Sphor all raise concerns too. I think a high bar should be set for mod action against critiques of EA leaders, but I also think that mods would ideally be willing to engage in discussion about this sort of action (even if only to provide reassurance that they generally support appropriate critique but that they feel this instance wasn't appropriate for X, Y and Z reasons).

ETA: Lizka has now written a thoughtful and reflective response here (and also explained why it took a while for any such response to be written).

Max H @ 2023-08-27T13:06 (+39)

Note for readers: this was also posted on LessWrong, where it received a very different reception and a bunch of good responses. Summary: the author is confidently, egregiously wrong (or at least very confused) about most of the object-level points he accuses Eliezer and others of being mistaken or overconfident about. 

Also, the writing here seems much more like it is deliberately engineered to get you to believe something (that Eliezer is bad) than anything Eliezer has ever actually written. If you initially found such arguments convincing, consider examining whether you have been "duped" by the author. 

JoshuaBlake @ 2023-08-27T13:47 (+46)

I don't think you've summarised the LessWrong comments well. Currently, they don't really engage with the substantive content of the post and/or aren't convincing to me. They spend a lot of time criticising the tone of the post. The comments here by Dr. David Mathers are a far better critique than anything on LessWrong.

I do agree that the post title goes too far compared to what is actually argued.

Also, the writing here seems much more like it is deliberately engineered to get you to believe something (that Eliezer is bad) than anything Eliezer has ever actually written. If you initially found such arguments convincing, consider examining whether you have been "duped" by the author.

This paragraph seems bad faith without substantiation; currently it's just vague rhetoric. What do you mean by "deliberately engineered to get you to believe something"? That sounds to me like a way of framing "making an argument" to sound malicious.

Max H @ 2023-08-27T14:07 (+27)

I personally commented with an object-level objection; plenty of others have done the same.

I mostly take issue with the factual claims in the post, which I think is riddled with errors and misunderstandings (many of which have been pointed out), but the language is also unnecessarily emotionally charged and inflammatory in many places. A quick sampling:


But as I grew older and learned more, I realized it was all bullshit.


it becomes clear that his view is a house of cards, built entirely on falsehoods and misrepresentations.

And I spend much more time listening to Yukowsky’s followers spout nonsense than most other people.

 (phrased in a maximally Eliezer like way): ... (condescending chuckle)


I am frankly pretty surprised to see this so highly-upvoted on the EAF; the tone is rude and condescending, more so than anything I can recall Eliezer writing, and much more so than the usual highly-upvoted posts here.

The OP seems more interested in arguing about whatever "mainstream academics" believe than responding to (or even understanding) object-level objections. But even on that topic, they make a bunch of misstatements and overclaims. From a comment:


But the views I defend here are utterly mainstream.  Virtually no people in academia think either FDT, Eliezer's anti-zombie argument, or animal nonconsciousness are correct.  


(Plenty of people who disagree with the author and agree or partially agree with Eliezer about the object-level topics are in academia. Some of them even post on LessWrong and the EAF!)

Omnizoid @ 2023-08-27T13:21 (+9)

I obviously disagree that this is the conclusion of the LessWrong comments, many of which I think are just totally wrong!  Notably, I haven't replied to many of them because the LessWrong bot makes it impossible for me to post more than once per hour, because I have negative karma on recent posts. 

Jack Malde @ 2023-08-27T13:33 (+15)

Putting aside whether or not what you say is correct, do you think it's possible that you have fallen prey to the overconfidence that you accuse Eliezer of? This post was very strongly written and it seems a fair number of people disagree with your arguments.

Omnizoid @ 2023-08-27T13:36 (+9)

I mean, it's always possible.  But the views I defend here are utterly mainstream.  Virtually no people in academia think either FDT, Eliezer's anti-zombie argument, or animal nonconsciousness are correct.  

aprilsun @ 2023-08-27T20:15 (+33)

I thought the final three paragraphs were the best part of this post and I wish you had led with them!

"Consider two types of thinkers...I think that 'hero worship' is often a problem in this community because people - including myself - have mistaken innovators for systematizers...Sounds plausible, right? Let's take Eliezer as an example (sorry, Eliezer!)..."

This practice of "This generally pretty great person/org in our community [probably] has a FLAW!!" *upvote upvote upvote upvote* doesn't seem very healthy to me[1], whereas "Here's a mistake I think I've made and I think lots of us are making" (and sharing the post in advance with anyone identifiable singled out for criticism) seems very helpful :)

  1. ^

    Although if Eliezer is as condescending as you make out, part of me thinks it's fair play to be somewhat ridiculing in response. Still, an eye for an eye makes the whole world blind, ya know.

titotal @ 2023-08-27T05:24 (+24)

I fully agree with the title of this post, although I do think Yudkowsky can be valuable if you treat him as an "interesting idea generator", as long as you treat said ideas with a very skeptical eye. 

I've only had time to comprehensively debunk one of his overconfident mistakes, but there are more mistakes or flaws I've noticed and haven't gotten around to fleshing out in depth, which I'll just list here:

Yudkowsky treats his case for the “many worlds hypothesis” as a slam-dunk that proves the triumph of Bayes, but in fact it is only half-done. He presents good arguments against “collapse is real”, but fails to argue that this means many worlds is the truth, rather than one of the many other interpretations which do not involve a real collapse. Stating that he's solved the problem is flatly ridiculous. 

The description of Aumann's agreement theorem in “Defy the Data” is false, leaving out important caveats that render his use of it incorrect. 

In general, Yudkowsky talks about Bayes' theorem a lot, but his descriptions of practical Bayesianism are firmly stuck at the 101 level, lacking, for example, any discussion of how to deal with uncertain priors or uncertain likelihood ratios. I don't know if he is unaware of how Bayesian statistics are actually used or if he just thinks it is too complicated to explain, but it has led to a lot of rationalists adopting a form of "pseudo-Bayesianism" that bears little resemblance to how it is used in science. 
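
To make the point about uncertain likelihood ratios concrete, here is a minimal sketch with hypothetical numbers (the model weights and probabilities below are illustrative, not from any source): plugging an averaged likelihood ratio into Bayes' rule is not the same as marginalizing the likelihoods over your uncertainty about how informative the evidence is.

```python
# Minimal sketch: Bayesian updating when the likelihood ratio itself is uncertain.
# All numbers are hypothetical, chosen only to illustrate the gap.

prior = 0.5  # P(H)

# Two candidate models of how reliable the evidence E is, with our credence in each.
# Each model specifies P(E | H) and P(E | not-H).
models = [
    {"weight": 0.5, "p_e_given_h": 0.9, "p_e_given_not_h": 0.09},  # informative test (LR = 10)
    {"weight": 0.5, "p_e_given_h": 0.5, "p_e_given_not_h": 0.5},   # uninformative test (LR = 1)
]

# Proper treatment: marginalize the predictive likelihoods over model uncertainty,
# then apply Bayes' rule with the resulting effective likelihoods.
p_e_given_h = sum(m["weight"] * m["p_e_given_h"] for m in models)
p_e_given_not_h = sum(m["weight"] * m["p_e_given_not_h"] for m in models)
posterior = (p_e_given_h * prior) / (p_e_given_h * prior + p_e_given_not_h * (1 - prior))

# Naive shortcut: average the likelihood ratios and update with that.
avg_lr = sum(m["weight"] * m["p_e_given_h"] / m["p_e_given_not_h"] for m in models)
prior_odds = prior / (1 - prior)
naive_posterior = (avg_lr * prior_odds) / (1 + avg_lr * prior_odds)

print(f"marginalized posterior: {posterior:.3f}")        # ~0.704
print(f"naive posterior:        {naive_posterior:.3f}")  # ~0.846
```

With these numbers the shortcut overstates the update (~0.85 vs ~0.70), which is exactly the kind of gap a purely 101-level treatment never flags.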

Yud talks a lot about “Einstein's arrogance”, in a way that obfuscates the actual evidence behind Einstein's belief, and if I recall he has implied that using Bayes' theorem can justifiably get you to the same level of arrogance. In fact, general relativity was a natural extension of special relativity (which had a ton of empirical evidence in its favour). Einstein's arrogance was justified by the nature of the laws of physics and is in no way comparable to the type of speculative forecasts used by Yud and company. 

The implications of the “AI box experiment” have been severely overstated. It does not at all prove that an AGI cannot be boxed, only that a subset of people are highly persuadable. “rationalists are gullible” fits the evidence provided just as well.  

I haven't even touched his twitter account, which I feel is just low-hanging fruit. 

Arepo @ 2023-08-27T09:53 (+34)

I fully agree with the title of this post, although I do think Yudkowsky can be valuable if you treat him as an "interesting idea generator", as long as you treat said ideas with a very skeptical eye. 

Fwiw I think the 'rule thinkers in, not out' philosophy popular in EA and rat circles has itself been quite harmful. Yeah, there's some variance in how good extremely smart people are at coming up with original takes, but for the demonstrably smart people I think 'interesting idea generation' is more a case of 'we can see them reasoning hypercarefully and scrupulously about their area of competence almost all the time they speak on it, sometimes they also come up with genuinely novel ideas, and when those ideas are outside their realm of expertise maybe they slightly underresearch and overindex on them'. I'm thinking of uncontroversially great thinkers like Feynman, Einstein, and Newton here, as well as more controversially great thinkers like Bryan Caplan and Elon Musk.

There is an opportunity cost to noise, and that cost is higher to a community the louder and more prominently it's broadcast within that community. You, the OP and many others have gone to substantial lengths to debunk almost casually thrown out views by EY that, as others have said, have made their way into rat circles almost unquestioned. Yet the cycle keeps repeating because 'interesting idea generation' gets so much shrift.

Meanwhile, there are many more good ideas than there is bandwidth to look into them. In practice, this means for every bad idea a Yudkowsky or Hanson overconfidently throws out, some reasonable idea generated by someone more scrupulous but less good at self-marketing gets lost.

titotal @ 2023-08-27T10:25 (+8)

Actually, I think a comparison to Musk is pretty apt here. I frequently see Musk saying very incorrect things, and I don't think his object-level knowledge of engineering is very good. But he is good at selling ideas and building hype, which has translated into funding for actual engineers to build rockets and electric cars in a way that probably wouldn't have happened without his hype skills. 

In the same way, Yud's skills at persuasive writing have accelerated both AI research and AI safety research (Altman has credited him with boosting OpenAI). The problem is that he is not actually very good at AI safety research himself (or any subset of the problems), and his beliefs and ideas on the subject are generally flawed. It would be like if you hired Elon Musk directly to build a car in your garage. 

At this point, I think the field of AI safety is big enough that you should stick to spokespeople who are actual experts in AI, and don't make grand incorrect statements on an almost weekly basis. 

Jonas Hallgren @ 2023-08-27T07:20 (+21)

Generally, some good points across the board that I agree with. Talking with some physicist friends helped me debunk the many-worlds thing Yud has going. Similarly, his animal consciousness stuff seems a bit crazy as well. I will also say that I feel you're coming off way too confident and inflammatory when it comes to the general tone. The AI safety argument you provided was just dismissal without much explanation. Also, when it comes to the consciousness stuff, I honestly just get kind of pissed reading it, as I feel you're to some extent hard pandering to dualism.

I totally agree with you that Yudkowsky is way overconfident in the claims that he makes. Ironically enough, it also seems that you are as well, to some extent, in this post, since you're overgeneralizing from insufficient data. As a fellow young person, I recommend some more caution when it comes to making solid claims about stuff where you have little knowledge (you cherry-picked data on multiple occasions in this post).

Overall you made some good points though, so still a thought-provoking read.

Pablo @ 2023-08-27T08:11 (+47)

Talking with some physicist friends helped me debunk the many worlds thing Yud has going.

Yudkowsky may be criticized for being overconfident in the many-worlds interpretation, but to feel that you have “debunked” it after talking to some physicist friends shows excessive confidence in the opposite direction. Have you considered how your views about this question would have changed if e.g. David Wallace had been among the physicists you talked to?

Also, my sense is that “Yud” was a nickname popularized by members of the SneerClub subreddit (one of the most intellectually dishonest communities I have ever encountered). Given its origin, using that nickname seems disrespectful toward Yudkowsky.

Devin Kalish @ 2023-08-27T09:19 (+4)

I don’t have a link because Twitter is very difficult to search now if you don’t have an account (if someone wants to provide one be my guest, there’s one discussion thread involving Zach Weinersmith that says so for instance), but Yudkowsky currently uses and seems to like the nickname at this point.

Pablo @ 2023-08-27T09:36 (+5)

Thanks for the update: I have retracted the relevant part of my previous comment.

Jonas Hallgren @ 2023-08-28T07:31 (+1)

Sorry, Pablo, I meant that I got a lot more epistemically humble; I should have thought more about how I phrased it. It was more that I went from the opinion that many worlds is probably true to: "Oh man, there are some weird answers to the Wigner's friend thought experiment and I should not give major weight to any of them." So I'm more like maybe 20% on many worlds? 

That being said I am overconfident from time to time and it's fair to point that out from me as well. Maybe you were being overconfident in saying that I was overconfident? :D

Omnizoid @ 2023-08-28T04:41 (+4)

I don't think I really overgeneralized from limited data.  Eliezer talks about tons of things, most of which I don't know about.  I know a lot about maybe 6 things that he talks about and expresses strong views on.  He is deeply wrong about at least four of them. 

Jonas Hallgren @ 2023-08-28T07:33 (+1)

I didn't mean it in this sense. I think the lesson you drew from it is fair in general, I was just reacting to the things I felt you pulled under the rug, if that makes sense.

Omnizoid @ 2023-08-27T12:51 (+3)

Eliezer talks about lots of topics that I don't know anything about.  So I can only write about the things that I do know about.  There are maybe five or six examples of that, and I think he has utterly crazy views in perhaps all except one of those cases.  

I can't fact-check him on physics or nanotech, for instance. 

Jonas Hallgren @ 2023-08-27T07:27 (+3)

I will say that I thought the consciousness / p-zombie discussion was very interesting and a good example of overconfidence, as this didn't come across in my previous comment.

MichaelStJules @ 2023-08-27T18:38 (+20)

My understanding from Eliezer's writing is that he's an illusionist (and/or a higher-order theorist) about consciousness. However, illusionism (and higher-order theories) are compatible with mammals and birds, at least, being conscious. It depends on the specifics.

I'm also an illusionist about consciousness and very sympathetic to the idea that some kinds of higher-order processes are required, but I do think mammals and birds, at least, are very probably conscious, and subject to consciousness illusions. My understanding is that Humphrey (Humphrey, 2022; Humphrey, 2023a; Humphrey, 2023b; Humphrey, 2017; Romeo, 2023; Humphrey, 2006; Humphrey, 2011) and Muehlhauser (2017) (a report for Open Phil, but representing his own views) would say the same. Furthermore, I think the standard interpretation of illusionism doesn’t require consciousness illusions or higher-order processes in conscious subjects at all; instead, a system is conscious if connecting a sufficiently sophisticated introspective system to it in the right way would lead to consciousness illusions, and this interpretation would plausibly attribute consciousness more widely, possibly quite widely (Blackmore, 2016 (available submitted draft); Frankish, 2020; Frankish, 2021; Frankish, 2023; Graziano, 2021; Dung, 2022).

If I recall correctly, Eliezer seemed to give substantial weight to relatively sophisticated self- and other-modelling, like cognitive empathy and passing the mirror test. Few animals seem to pass the mirror test, so that would be reason for skepticism.

However, maybe they’re just not smart enough to infer that the reflection is theirs, or they don’t rely enough on sight. Or, they may recognize themselves in other ways, or at least to limited degrees. Dogs can remember what actions they’ve spontaneously taken (Fugazza et al., 2020) and recognize their own bodies as obstacles (Lenkei, 2021), and grey wolves show signs of self-recognition via a scent mirror test (Cazzolla Gatti et al., 2021; layman summary in Mates, 2021). Pigeons can discriminate themselves from conspecifics with mirrors, even if they don’t recognize the reflections as themselves (Wittek et al., 2021; Toda and Watanabe, 2008). Mice are subject to the rubber tail illusion and so probably have a sense of body ownership (Wada et al., 2016).

Furthermore, Carey and Fry (1995) show that pigs generalize the discrimination between non-anxiety states and drug-induced anxiety to non-anxiety and anxiety in general, in this case by pressing one lever repeatedly with anxiety, and alternating between two levers without anxiety (the levers gave food rewards, but only if they pressed them according to the condition). Similar experiments were performed on rodents, as discussed in Sánchez-Suárez, 2016, in section 4.d., starting on p.81. Rats generalized from hangover to morphine withdrawal and jetlag, and from high doses of cocaine to movement restriction, from an anxiety-inducing drug to aggressive defeat and predator cues. Of course, anxiety has physical symptoms, so maybe this is what they're discriminating, not the negative affect.

 

There are also, of course, many non-illusionist theories of consciousness that attribute consciousness more widely and that are defended (although I'm personally not sympathetic, unless they're illusionist-compatible), as well as theory-neutral or theory-light approaches. On theory-neutral and theory-light approaches, see Low, 2012; Sneddon et al., 2014; Le Neindre et al., 2016; Rethink Priorities, 2019; Birch, 2020; Birch et al., 2022; Mason and Lavery, 2022, generally giving more weight to the more recent work.

David Mathers @ 2023-08-28T10:00 (+4)

What do you mean by "illusionism"? I understand "eliminativism", where people say there is no such thing as (phenomenal) consciousness. But that is obviously incompatible with birds, mammals or humans(!) being (phenomenally) conscious. When I hear "consciousness is an illusion" in ordinary English, it sounds like the same claim: there's no such thing. But in fact, people mean something else, and I've never been quite sure what. Sometimes it seems just to be "nothing shows up in perceptual phenomenology except external stuff, but people mistakenly believe that qualia are properties instantiated by the experience and show up in phenomenology", but that makes all phenomenal externalists like Tye, Dretske, Mike Martin (etc.) "illusionists", which is not a way any of them has ever self-identified as far as I know. 

MichaelStJules @ 2023-08-28T16:24 (+2)

Rather than denying consciousness per se, (strong) illusionists would deny that there’s something like phenomenal consciousness, where that's defined (at least in part) in terms of qualitative properties, like the quality of reddishness in experiences of red, classic qualia (private, intrinsic, ineffable, and subjective, etc.), or even nonphysical properties. Humans and other animals can still be conscious, if understood in terms of the illusions of phenomenal/qualitative properties, either directly (actually having such illusions) or indirectly (would have these illusions, with the right additional machinery connected in the right way).

The hard problem of consciousness is typically defined as the problem of explaining why there's phenomenal consciousness or why consciousness has these phenomenal/qualitative properties. Illusionists (strong illusionists) believe this is misguided because there are no such phenomenal/qualitative properties, and we replace the hard problem with the problem of explaining why (many) people believe consciousness has these phenomenal/qualitative properties, despite not having them. I think Frankish, 2016 (preprint) is a standard reference. He also contrasts weak illusionism as denying classic qualia but not phenomenality per se, while strong illusionism also denies phenomenality:

Weak illusionism holds that these properties are, in some sense, genuinely qualitative: there really are phenomenal properties, though it is an illusion to think they are ineffable, intrinsic, and so on. Strong illusionism, by contrast, denies that the properties to which introspection is sensitive are qualitative: it is an illusion to think there are phenomenal properties at all.

I think illusionism about consciousness usually refers to strong illusionism.

I'm not familiar with the writing of Tye, Dretske, Mike Martin, but what you wrote suggests to me that they're weak illusionists and so deny classic qualia, but not strong illusionists, so don't deny phenomenality generally.

FWIW, I've seen Michael Graziano, Walter Veit and Heather Browning each self-describe as an illusionist (or something similar) and say they don't like the term and don't like to use it because it's misleading and confusing.[2] Illusionists are not saying there’s no such thing as consciousness and are frequently misinterpreted that way, among other ways, like a Cartesian theatre. "Consciousness illusion" is also probably a confusing term for similar reasons, and something like "illusion of phenomenality" would be better.

 

I'd also add that being an illusionist doesn't make experiences of red stop seeming to have qualitative features, so it seems to me that some such beliefs are "wired-in" and instinctual or intuitive, or, as Kammerer (2022) puts it, cognitively impenetrable.[1] You can't get rid of these illusions just by understanding that they are illusions or even how they work, just like you can't for the Müller-Lyer illusion, which Kammerer (2022) uses as an illustration.

  1. ^

    See also Dawson, 2017 for cognitive impenetrability in general, not just in this context.

  2. ^

    Graziano, 2016 (ungated) wrote:

    In the target article of this special issue, Frankish describes an approach to consciousness called illusionism that is shared by many theories of consciousness. The attention schema theory has much in common with illusionism. It clearly belongs to the same category of theory, and is especially close to the approach of Dennett (1991). But I confess that I baulk at the term ‘illusionism’ because I think it miscommunicates. To call consciousness an illusion risks confusion and unwarranted backlash. To me, consciousness is not an illusion but a useful caricature of something real and mechanistic. My argument here concerns the rhetorical power of the term, not the underlying concepts.

    It goes on further about resulting misunderstandings the term can cause.

     

    Veit and Browning (2023) (preprint) wrote, responding to some misunderstandings of illusionism:

    While we consider ourselves akin to illusionists, we do not typically use the term, since it invites just these kinds of confusions among those less familiar with the position.

JWS @ 2023-08-28T17:36 (+4)

Apologies if this is derailing the thread by butting in with my thoughts re: illusionism. I'd love to find people to discuss these theories and share thoughts/notes with outside of this thread. Please get in touch if interested :) maybe we could do a review, or an adversarial/collaborative collaboration, or something

I will say, as a former student of philosophy and someone who likes reading philosophy a lot more than the median person (though perhaps less than the median EA!), that I've never been able to get my head around illusionism. Like, I just really don't understand how so many people (at least in the rationalist/EA space) don't seem to grok the Hard Problem. I really think one of the best arguments for 'qualia realism', or the belief that consciousness is a phenomenon demanding an extra-physical explanation (or perhaps a more convincing one than current theories allow), is a 'Moorean' argument:[1]

  1. If illusionism is true then I am not conscious
  2. I am conscious
  3. Therefore, illusionism is false

I could try and throw in references and arguments but, if I'm being honest, like every person I do not have the time to re-evaluate each philosophical tradition and argument from scratch, and I find this form of argument very, very strong against the illusionist school, be it Frankish, Dennett, Hofstadter, or illusionist-inclined rationalists (though I'm actually not sure Yudkowsky is an illusionist here?).

Of course, the illusionists would say that the term is confusing, and that they're not eliminativists. They'd say that there's a difference between consciousness-1 (what we're all experiencing, which they'd agree exists) and consciousness-2 (the non-physical/mysterious/subjective/qualitative what-it-is-likeness). This is one of many things, I think, that cause debates on consciousness to often end with people talking past each other.

Some final thoughts to end (and as I said at the beginning, maybe to discuss on a different place and time):

  1. I'd recommend trying Sam Harris' Waking Up app to explore what introspection tells you. I think it'd be better than many other approaches to meditation which might be a bit 'new-agey' for many EA/LW types. I think the experiences and insight I've had with meditation are much, much more convincing to me than what can come across as very confusing, unintuitive, and esoteric arguments in contemporary philosophy of mind.
  2. Sam Harris particularly mentions Douglas Harding, and how Dennett and Hofstadter completely misunderstand his point. I'm 100% with Harding over Dennett and Hofstadter here. One of Harding's students, Richard Lang, has a 'Headless Way' course on Waking Up and I think it's brilliant.
  3. In his great appearance on the 80k podcast, Chalmers brings up an inconsistent triad for the illusionist-inclined to deal with:

I mean, you better not hold, number one, that consciousness is required for moral status; two, that consciousness is entirely an illusion; and that, three, some beings have moral status.

           Though I suspect that this is again a definitional dispute on what we actually mean by the term 'consciousness'

  1. ^

    See the conclusion section in this recent Chalmers essay. Kammerer has responded here, but I haven't read that yet.

MichaelStJules @ 2023-08-28T18:26 (+2)

I'm not sure how much you wanted to get into the object-level here, but I'll leave a few quick responses to a few points from the (strong) illusionist perspective (or what I understand it to be):

 

  1. If illusionism is true then I am not conscious
  2. I am conscious
  3. Therefore, illusionism is false

I assume this is supposed to refer to phenomenal consciousness specifically, not consciousness in general, because (strong) illusionists don't deny consciousness in general, and consciousness can be understood in different terms. And, it's worth noting that people have other illusions that we find hard to disabuse ourselves of on some level, like the Müller-Lyer illusion, which Kammerer (2022b) uses as an illustration. It's intuitively obvious that one line is longer than the other, but it's also false. The same could be the case for phenomenality (assuming the definition doesn't collapse to one compatible with strong illusionism). Kammerer (2022a) (which you linked to) describes other ways in which we are obviously conscious that are compatible with illusionism: functional and normative.

 

"I mean, you better not hold, number one, that consciousness is required for moral status; two, that consciousness is entirely an illusion; and that, three, some beings have moral status."

Though I suspect that this is again a definitional dispute on what we actually mean by the term 'consciousness'

I would say that this is largely definitional. Consciousness is not entirely an illusion according to strong illusionists; phenomenality (qualitativeness, what-it-is-likeness), classic qualia and dualism are illusions. You can just use the illusionist's conception of consciousness to ground moral status. This is the approach Dung (2022), Frankish and Muehlhauser take, and a 'conservative' approach described in Kammerer, 2019. That's also what I'd do, and I'd imagine the vast majority of strong illusionists would do.

That being said, I think (stance-independent) moral realism is false anyway (and thought so before I became an illusionist), and strong illusionists probably have more reason to be moral antirealists of some kind than most, because similar or even the same debunking arguments would apply to both phenomenality and stance-independent moral claims, e.g. that pain is bad.

prisonpent @ 2023-08-28T20:13 (+5)

And, it's worth noting that people have other illusions that we find hard to disabuse ourselves of on some level ... It's intuitively obvious that one line is longer than the other, but it's also false.

Sure, but that only establishes that "it's intuitively obvious" is not an infinitely strong reason for belief. It remains a strong one. To overcome the Moorean argument you need to provide arguments for illusionism which are stronger.

MichaelStJules @ 2023-08-28T22:24 (+4)

Fair. I think the stronger arguments for (strong) illusionism are of the following form:

  1. Physicalism seems true and dualism (including property dualism and epiphenomenalism) false for various reasons.
  2. No other (physicalist) theory besides strong illusionism seems able to address the meta-problem of consciousness, or even to be on the right path.
  3. No theory has an adequate solution to the hard problem of consciousness and some debates between them seem empirically unresolvable (e.g. where the line is between report(ability)/access and phenomenal consciousness), but every theory other than strong illusionism needs to solve it.
  4. There are specific illusionist explanations of some posited phenomenal or classic qualia properties.
  5. There don't seem to be any strong arguments against illusionism (other than possibly mere intuition that phenomenal consciousness is real).

To be clear, I'm leaving out all of the details, none of the above is obvious, and most or all of it is controversial. I think part of 3 isn't controversial (no full solution yet, and non-illusionist theories need it).

On 4, ineffability and privacy seem easy to explain. First, we don't today know enough of the details of how our brains make the discriminations they do, so we can't fully communicate or compare them in practice yet anyway. Second, even if I understood and could communicate how my brain makes the discriminations it does, this doesn't allow you to put yourself in the same brain states or generally make the same discriminations in the same way. You could potentially build an AI that could, or modify your brain accordingly, but this hasn't been possible yet, and it wouldn't really be "you" making those discriminations. I can't subject you to my illusions just by explanation, so ineffability is true in practice. With a full enough description, we could compare them, and privacy wouldn't hold.

Also, I don't take "intuitively obvious" to be a strong reason for belief, but I am unusually skeptical.

prisonpent @ 2023-08-31T14:06 (+3)

but every theory other than strong illusionism needs to solve [the hard problem].

I agree in the sense that other theories can't simply dissolve it, but that's almost tautological. If you mean that other theories need to solve it in order to justify belief in them, or in other words if you mean that if we were all certain the hard problem would never be adequately resolved we would be forced to accept illusionism, then I don't think that's correct at all. 

Consider what we might call "the hard problem of physics": why this? Why anything? What puts the fire in the equations? Short of some galaxybrained PSR maneuver, which seems more and more dubious by the century, I doubt we're ever going to get an answer. It is completely inexplicable that anything should exist.

And yet it does. It's there, it's obviously there, everything you've ever seen or felt or thought bears witness to it, and someone who claims otherwise on the grounds that it doesn't make any sense has entirely misunderstood the nature of their situation. 


This is also how I feel about illusionism. Phenomenal experience is the only thing we have direct access to: all arguments, all inferences, all sense data, ultimately cash out in some regularity in the phenomenal content of consciousness. Whatever its ontological status, it's the epistemic ground of everything else. You can't justify the claim that phenomenal consciousness doesn't exist by pointing to patterns of phenomena, any more than you can demonstrate the nonexistence of language in an essay or offer a formal disproof of modus ponens.

So these illusionist explanations are, well, not really explanations of consciousness. They're explanations of a coarse world model in terms of a finer one, but the coarse world model wasn't the thing I wanted explained. On the contrary, it was a tiny provisional step towards an explanation: there are many lawlike regularities in the structure of my experiences, so I hypothesize a common cause and call it "my brain". It's a very successful hypothesis, and I'd like to know why - given that the world is more than just its shadow on the mind[1], why should the structure of one reflect the other?

The illusionist response of "actually your hypothesis is the evidence and your data are just hypotheses" misses the point entirely. 

  1. ^

    the dumbest possible solution, but I can't rule it out

MichaelStJules @ 2023-08-31T18:10 (+2)

The analogy to the "hard problem of physics" is interesting, and my stance towards the problem is the same as yours.

 

However, I don't think the analogy really works.

This is also how I feel about illusionism. Phenomenal experience is the only thing we have direct access to: all arguments, all inferences, all sense data, ultimately cash out in some regularity in the phenomenal content of consciousness. Whatever its ontological status, it's the epistemic ground of everything else.

Is phenomenality itself necessary/on the causal path here? Illusionists aren't denying consciousness, that it has contents, that there's regularity in its contents or that it's the only thing we have direct access to. Illusionists are just denying the phenomenal nature of consciousness or phenomenal properties. I would instead say, more neutrally:

Experience (whatever it is) is the only thing we have direct access to: all arguments, all inferences, all sense data, ultimately cash out in some regularity in the content of consciousness (whatever it is). Whatever its ontological status, it's the epistemic ground of everything else.

Note also that the information in or states of a computer (including robots and AIs) also play a similar role for the computer. And, a computer program can't necessarily explain how it does everything it does. "Ineffability" for computers, like us, could just be cognitive impenetrability: some responses and contents are just wired in, and their causes are not accessible to (certain levels of) the program. For "us", everything goes through our access consciousness.
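(As a toy illustration of this analogy only, and nothing drawn from the illusionist literature: below is a minimal, hypothetical Python sketch of a system whose introspection can report the output of its own discriminations but not the mechanism that produced them. All names are invented.)

```python
# Hypothetical sketch: a system whose introspection cannot access the
# mechanism behind its own discriminations (all names are made up).

class Discriminator:
    def __init__(self):
        # "Wired-in" weights that nothing outside this class gets to inspect.
        self._hidden_weights = {"red": 0.9, "green": 0.2}

    def _discriminate(self, stimulus: str) -> bool:
        # The mechanism: opaque to everything outside this method.
        return self._hidden_weights.get(stimulus, 0.0) > 0.5

    def introspect(self, stimulus: str) -> str:
        # Introspection only sees the *result* of the discrimination,
        # never the weights or the comparison that produced it.
        salient = self._discriminate(stimulus)
        if salient:
            return f"'{stimulus}' seems salient to me"
        return f"'{stimulus}' seems unremarkable"

d = Discriminator()
print(d.introspect("red"))    # reports the result, but not why
print(d.introspect("green"))
```

Nothing about a toy program settles the philosophical question, of course; it only illustrates that "can report the result but not the cause" is a mundane property of information-processing systems.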

So, what exactly do you mean by phenomenality, and what's the extra explanatory work phenomenality is doing here? What isn't already explained by the discriminations and responses by our brains, non-phenomenal (quasi-phenomenal) states or just generally physics?

If you define phenomenality just by certain physical states, effects or responses, or functionalist or causal abstractions thereof, say, then I think you'd be defining away phenomenality, i.e. "zero qualia" according to Frankish (paper, video).

prisonpent @ 2023-09-01T17:15 (+3)

Is phenomenality itself necessary/on the causal path here?

I have no idea what the causal path is, or even whether causation is the right conceptual framework here. But it has no bearing on whether phenomenal experiences exist: they're particular things out there in the world (so to speak), not causal roles in a model. 

Note also that the information in or states of a computer (including robots and AIs) also play a similar role for the computer.

It plays a similar role, for very generous values of "similar", in the computer qua physical system, sure. And I am perfectly happy to grant that "I" qua human organism am almost certainly a causally closed physical system like any other. (Or rather, the joint me-environment system is). But that's not what I'm talking about. 

For "us", everything goes through our access consciousness.

I'm not talking about access consciousness either! That's just one particular sort of mental state in a vast landscape. The existence of the landscape - as a really existing thing with really existing contents, not a model  - is the heart of the mystery. 

what's the extra explanatory work phenomenality is doing here?

My whole point is that it doesn't do explanatory work, and expecting it to is a conceptual confusion. The sun's luminosity does not explain its composition, the fact that looking at it causes retinal damage does not explain its luminosity, the firing of sensory nerves does not explain the damage, and the qualia that constitute "hurting to look at" do not explain the brain states which cause them.

Phenomenality is raw data: the thing to be explained. Not what I do, not what I say, not the exact microstate of my brain, not even the structural features of my mind - but the stuff being structured, and the fact there is any.

If you define phenomenality just by certain physical states, effects or responses, or functionalist or causal abstractions thereof

I don't define phenomenality! I point at it. It's that thing, right there, all the time. The stuff in virtue of which all my inferential knowledge is inferential knowledge about something, and not just empty formal structure. The relata which introspective thought relates[1]. The stuff at the bottom of the logical positivists' glass. You know, the thing.

  1. ^

    And again, I am only pointing at particular examples, not defining or characterizing or even trying to offer a conceptual prototype: qualia need not have anything to do with introspection, linguistic thought, inference, or any other sort of higher cognition. In particular, "seeing my computer screen" and "being aware of seeing my computer screen" are not the same quale.

MichaelStJules @ 2023-09-01T19:17 (+2)

But it seems to me that phenomenal aspects themselves aren't the raw data by which we know things. If you accept the causal closure of the physical, then the non-phenomenal aspects of our discriminations and cognitive responses are already enough to explain how we know things, or else the phenomenal aspects just are physical aspects (possibly abstracted to functions or dispositions), which would be consistent with illusionism.

Or, do you mean that knowing itself is not entirely physical?

prisonpent @ 2023-09-02T09:57 (+3)

If you accept the causal closure of the physical

I think the causal closure of the physical is very, very likely, given the evidence. I do not accept it as axiomatic. But if it turns out that it implies illusionism, i.e. that it implies the evidence does not exist, then it is self-defeating and should be rejected.

Or, do you mean that knowing itself is not entirely physical?

I am referring to my phenomenology, not (what I believe to be) the corresponding behavioral dispositions. E.g. so far as I know my visual field can be simultaneously all blue and all dark, but never all blue and all red. We have a clear path towards explaining why that would be true, and vague hints that it might be possible to explain why, given that it's true, I can think the corresponding thoughts and say the corresponding words.  But explaining how I can make that judgement is not an explanation of why I have visual qualia to begin with. 

Whether these are also physical in some broader sense of the word, I can't say.

TAG @ 2023-08-30T12:47 (+1)

The argument is basically saying that if X can't be explained by physicalism, then X is an illusion. That's treating physicalism as unfalsifiable.

MichaelStJules @ 2023-08-30T15:47 (+2)

No, it isn't just saying that. That understates the case for both physicalism and illusionism that I outlined.

We have good independent reasons to believe physicalism and against alternatives, and I mentioned this, but didn't give examples. Here are some:

  1. There's the good empirical track record of physicalism generally and specifically in giving adequate explanations for the seemingly nonphysical.
  2. There are the questions of where, when, how and why nonphysical properties arise, whether that's from or with a collection of particles in a system, over a human's development from conception, or in our evolutionary history, that nonphysicalist theories struggle to give sensible answers to. If the nonphysical is fundamental and there at all levels (panpsychism), then we have the combination problem: how does the nonphysical combine to make minds like ours?
  3. There's the expansion of the physical to include what's empirically reliable and testable to very high precision and for which we have precise fundamental accounts, including interactions with other fundamental physical properties (although not necessarily all such interactions, e.g. we don't yet have a good theory of quantum gravity). For example, gravity, quantum superposition and quantum entanglement might have seemed unphysical before, but they've become part of our physical ontology because of their reliability and our very good (but incomplete) understanding of them and their relationships with other things. Of course, maybe the seemingly nonphysical properties of minds will eventually come to gain the same status, but it’s nowhere close to that now. We shouldn’t be hasty to assume the existence of things that don't meet this bar, because the evidence for them is far weaker.

The illusionist also argues (or would want to, but currently lacks the details to make it very convincing) that there's a specific adequate (physicalist) explanation for the appearance of X that doesn’t require the existence of X. If the appearance of X doesn't depend on its existence, then the appearance of X isn't reliable evidence for its existence. Without any other independent argument for the existence of X (as seems to be the case for phenomenality and classic qualia), then it becomes like any other verified illusion, and our reasons to believe in X become very weak.

JWS @ 2023-08-29T20:04 (+4)

Thanks for your response Michael (and your one below to prisonpent). I'll try to keep it to the point and pre-commit to not responding further as I don't think this is the right place to have a debate about illusionism,[1] but since you presented somewhat of a case for the illusionist I thought I might present the other side.

To me, phenomenal consciousness refers to the first-person perspective, which obviously exists. That first-person perspective can make mistakes about the nature of the world, as in the Müller-Lyer case, but I have the experience nonetheless. In the Kammerer(a) piece, he argues that the anomalousness of phenomenal consciousness is a piece of evidence in favour of illusionism, but I take it as one in favour of non-(reductive) physicalism. One man's modus ponens and all that.

I often find (strong) illusionist writings utterly baffling. I actually re-skimmed Quining Qualia before writing this, and it was really difficult for me to understand[2] even when consulting with GPT-4 in philosopher mode. In Kammerer(a) he refers to a 'quasi-phenomenal state', and I have no idea what that is. Again, viewing phenomenal consciousness as the first-person perspective, that just sounds like saying I have 'a fake first-person perspective'. To me that's the same as saying the first-person perspective doesn't exist, and since it clearly does, there is evidence that illusionist theories fail to explain, and therefore they are bad theories both philosophically and scientifically.

  1. ^

    I'd be happy to pick this up in an alternate forum though :)

  2. ^

    This is partly a me problem, but is also a philosophy problem. Sometimes technical language is needed, but the language of a lot of academic philosophy on all sides often seems to be needlessly obscurantist to me.

MichaelStJules @ 2023-08-30T02:31 (+2)

Also, the commentary here on Nicholas Humphrey's views may be illustrative of definitional issues. Humphrey denies the label illusionism for his theory, but Frankish responds that his theory really is illusionist. Also, Schwitzgebel and Nida-Rümelin attempted to define phenomenality as common features of multiple example mental states (and/or by contrast with unconscious states), but Frankish argues that this doesn't work to define phenomenality (at least not in a way incompatible with illusionism):

 

For, precisely because his definition is so innocent, it is not incompatible with illusionism. As I stressed in the target article, illusionists do not deny the existence of the mental states we describe as phenomenally conscious, nor do they deny that we can introspectively recognize these states when they occur in us. Moreover, they can accept that these states share some unifying feature. But they add that this feature is not possession of phenomenal properties (qualia, what-it’s-like-ness, etc.) in the substantive sense created by the phenomenality language game. Rather, it is possession of introspectable properties that dispose us to judge that the states possess phenomenal properties in that substantive sense (of course, we could call this feature ‘phenomenality’ if we want, but I take it that phenomenal realists will not want to do that). Now, the challenge of the target article was to articulate a concept of phenomenality that is recognizably substantive (and so not compatible with illusionism) yet stripped of all commitments incompatible with physicalism. Schwitzgebel hasn’t done this, since his conception is not substantive.

Nevertheless, Schwitzgebel has succeeded in something perhaps more important. He has defined a neutral explanandum for theories of consciousness, which both realists and illusionists can adopt. (I have referred to this as consciousness in an inclusive sense. We might call it simply consciousness, or, if we need to distinguish it from other forms, putative phenomenal consciousness.) In doing this, Schwitzgebel has performed a valuable service.

 

However, I deny that it is the sort of feature realists think it is. It is not some intrinsic quality, akin to the property characterized by the phenomenality language game. Rather, it is (roughly) the property of having a cluster of introspective representational states and dispositions that create the illusion that one is acquainted with some intrinsic quality. I am sure that this is not what Nida-Rümelin thinks the procedure picks out, but I don’t see how she can rule out the possibility.

MichaelStJules @ 2023-08-29T23:16 (+2)

I'm not sure exactly what you mean by "first-person perspective", but strong illusionists might not deny that it exists, if understood in functionalist terms, say.

Frankish says it is like something to be a bat, in terms of a bat's first-order responses or reactive patterns to things, but a bat can't know what it's like to be a bat, because they don’t have (sufficiently sophisticated) introspection on those first-order responses. Dennett says even bacteria have a kind of "user-illusion", because they can discriminate, but only "particularly reflective" humans are subject to the theorists' illusion and worry about things like the hard problem of consciousness. So, we could define first-person perspective in terms of responses or discriminations, and in a way compatible with strong illusionism. This would attribute first-person perspectives extremely widely, e.g. even to bacteria.

If by first-person perspective, you mean introspection, then illusionists wouldn't deny that humans have it.

If by first-person perspective, you mean classic qualia (private, ineffable, intrinsic, etc.), then an illusionist would deny that this exists.

Strong illusionists would also deny phenomenality, of course, in case that's different from classic qualia, but some attempted definitions of phenomenality (including what specific physicalist theories define consciousness as, e.g. broadcasting to a global workspace) actually could be understood as defining quasi-phenomenal states, and so as compatible with illusionism.

A theory-neutral definition of quasi-phenomenal states could be that they're real things, processes or responses (physical or otherwise) on which introspection (of the right kind) leads to beliefs in phenomenal properties, e.g. these quasi-phenomenal states appear to us, epistemically, to be phenomenal. If introspection is reliable and can access phenomenal states, then these accessed phenomenal states would be quasi-phenomenal states under this definition. Illusionists would claim that introspection is not reliable, no phenomenal states actually exist, and so the beliefs in phenomenality are mistaken, hence illusions.

I think it would be wrong to take phenomenal properties as evidence that must be explained, and doing so begs the question against illusionism. What we have evidence of is the appearance of (our beliefs in) phenomenal properties, and illusionism tries to explain that without requiring the actual existence of phenomenal properties. Sometimes (maybe usually) appearances and beliefs are accurate rather than illusory, and the best explanation is that what they represent actually exists.

LeonardDung @ 2023-08-28T17:27 (+4)

I agree. In case of interest: I have published a paper on exactly this question: https://link.springer.com/article/10.1007/s11229-022-03710-1

There, I argue that if illusionism/eliminativism is true, the question of which animals are conscious can be reconstructed as a question about particular kinds of non-phenomenal properties of experience. For what it's worth, Keith Frankish seems to agree with the argument and, I'd say, Francois Kammerer does agree with the core claim (although we have disagreements about distinct but related issues). 

David Mathers @ 2023-08-28T17:31 (+2)

Thanks. (Pleased to see most of this stuff postdates my DPhil and therefore it's less embarrassing I haven't read it!). I guess I feel I don't really have enough grasp on what phenomenal consciousness is, beyond definition by examples, to feel like I entirely understand what is meant by "there's consciousness, but not phenomenal consciousness". 

MichaelStJules @ 2023-08-28T17:40 (+2)

I think people generally or often have Nagel's what-it-is-likeness in mind as the definition of phenomenal consciousness (or at least without classic qualia or nonphysical properties).

If I recall correctly, Frankish (paper, video) called this 'diet qualia' and argued that attempts to define phenomenality in more specific terms generally reduce to either classic qualia or 'zero qualia' (I think purely functionalist terms, compatible with strong illusionism).

David Mathers @ 2023-08-28T17:50 (+4)

Sure, but even the Nagel thing is kind of a metaphor. I find it easy to class which mental states it does or doesn't apply to, but it's not something I can really characterize in other terms? I don't know, I've become less certain I know what all this terminology means the longer I've thought about it over the years.

prisonpent @ 2023-08-28T19:54 (+3)

but it's not something I can really characterize in other terms?

Well, that's the whole issue, isn't it? Qualia are the things that can't be fully characterized by their relations.

David Mathers @ 2023-08-29T07:27 (+4)

On some definitions of "qualia", yes. I.e. not if you talk in the Tye/Byrne way where "qualia" turn out just to be perceived external properties that show up in the phenomenology, for example. And not necessarily, if qualia just means "property of a conscious experience that shows up in the phenomenology". But some people do think that about qualia in the second sense, and probably some people do endorse the stronger claim that this is part of the definition of "qualia". 

Still, having glanced at the Frankish paper, I think I get what's going on now. Frankish is (I think; I didn't read it, just glanced!) doing something like claiming that standard dualist thought experiments show that ordinary people think there is more to consciousness than what goes on physically and functionally, then arguing that this makes that part of the meaning of "phenomenally conscious", so if there's nothing beyond the physical and the functional, there is no phenomenal consciousness by definition. 

Brad West @ 2023-08-27T13:56 (+17)

Another pernicious aspect of Eliezer's Zombie discussion is his insinuation that holding views that differ from his on the matter implies that one should not take a person's other views seriously. Even if Yudkowsky is right and others are fantastically wrong on zombies, this warrants only a very small update to how accurate we should consider their other views. History is littered with brilliant and useful people who have been famously and impressively wrong on some specific matters.

Jackson Wagner @ 2023-08-28T22:07 (+9)

I agree, and I think your point applies equally well to the original Eliezer Zombie discussion, as to this very post.  In both cases, trying to extrapolate from "I totally disagree with this person on [some metaphysical philosophical questions]" to "these people are idiots who are wrong all the time, even on more practical questions", seems pretty tenuous.

Brad West @ 2023-08-28T22:59 (+5)

To be fair to the OP, I don't think that he was saying you should not consider the views of Yudkowsky; in fact, he admits that Yudkowsky has some great thoughts and that he is an innovator.

OP observes that he himself for a long time reflexively deferred to Yudkowsky. I think his objective with his post was to point out some questions on which he thought Yudkowsky was pretty clearly wrong (although it is not clear that he accomplished this). His goal was not to urge people not to read or consider Yudkowsky, but rather to urge people not to reflexively defer to him.

Omnizoid @ 2023-08-29T04:01 (+5)

Well put!  Though one nitpick: I didn't defer to Eliezer much.  Instead, I concluded that he was honestly summarizing the position.  So I assumed physicalism was true because I assumed, wrongly, that he was correctly summarizing the zombie argument. 

JWS @ 2023-08-28T21:35 (+15)

I've had some time to think about this post and it's reception both here and on LessWrong. There's a lot of discussion about the object-level claims and I don't think I have too much to say about adjudicating them above what's been said already, so I won't. Instead, I want to look at why this post is important at all.

 

1: Why does it matter if someone is wrong, frequently or egregiously?

I think the post takes its thesis to matter because of the reach of Eliezer's influence on the rationalist and EA communities. That certainly seems historically true given Eliezer's position as one of the key founders of the Rationalist movement, but I don't know how strong his influence is now, or how that question could be operationalised in a way where people could change their minds about it.

If you think Eliezer holds some set of beliefs X that are 'egregiously wrong', then it's probably worth writing separate posts about those issues rather than a hit piece. If you think that the issue is dangerous community epistemics surrounding Eliezer, then it'd probably be better if you focused on establishing that before bringing up the object level, or without bringing up the object level at all.

This has been a theme of quite a few posts recently (i.e. last year or so) on the Forum, but I think I'd like to see some more thoughts explaining what people mean by 'deference' or 'epistemic norms', and ideally some more concrete evidence about them being good or bad beyond personal anecdotes/vibes at an EAG.

2: Did it need to be said in this way?

Ironically, a lot of what Omnizoid criticises Eliezer for is stuff I find myself thinking about Omnizoid's takes some of the time! I definitely think this post could have had a better light-to-heat ratio if it was worded and structured differently, and I think it's to your credit Omni that you recognised this, but bad that you posted it in its original state on both Forums.

3: Why is Eliezer so confident?

I've never met Eliezer or interacted in the same social circles, so I don't know to what extent personality figures into it. I think Eliezer most clearly argues for this approach in his book Inadequate Equilibria, where he argues against what he calls 'modest epistemology' (see Chapter 6). I think he'd rather believe strongly and update accordingly[1] if proven wrong than slowly approach the right belief via gradient descent. This would explain why he's confident both when he's right and when he's wrong.

4: Why is the EA/LW reaction so different?

So its reception is definitely more 'positive' on the EA Forum than on LessWrong. But I wouldn't say it's 'positive' (57 karma, 106 votes, 116 comments) at time of writing. That's decidedly 'mixed' at best, so I don't think you can view this post as 'EA Forum says yeah screw Eliezer' and LessWrong says 'boo stupid EA Forum'; I think both communities' views are more nuanced than that.

I do get a sense that while many on LW disagree with Eliezer on a lot, everyone there respects him and his accomplishments, whereas there is an EAF contingent that really doesn't like Eliezer or his positions/vibes, and is happy to see him 'taken down a peg or two'. I think there's a section of LW that doesn't like this section of EA, hence Eliezer's claim about the 'downfall of EA' in his response.[2]

5: An attempted inoculation?

I think this is again related to outsiders' perception of EA. Lots of our critics, fairly or unfairly, home in on Eliezer and his views that are more confident/outside the Overton window, and run with those to tarnish both EA and rationalism. Maybe this post is attempting to show internally and externally that Eliezer isn't/shouldn't be highly respected in the community, to inoculate against this criticism, but I'm not sure it does that well.

 

I think having this meta-level discussion about what discussion we're actually having, or want to have, helps move us in the light-not-heat direction. All in all, I think the better discussion is to try to take the measure of Eliezer's views. I think my main point is 1): there are some good object-level discussions, and it's worth being wary of the confidence the community places in primary figures, but on reflection I don't think this was the right way to go about it.

  1. ^

    Again, not making a claim on whether he does or not

  2. ^

    Epistemic status - reading vibes, very unconfident

Omnizoid @ 2023-08-28T21:42 (+3)

Thanks for this comment.  I agree with 2.  On 3, it seems flatly irrational to have super high credences when experts disagree with you and you do not have any special insights.

If an influential person who is given lots of deference is often wrong, that seems notable.  If people were largely influenced by my blog, and I was often full of shit, expressing confident views on things I didn't know about, that would be noteworthy.  

Agree with 4. 

On 5, I wasn't intending to criticize EA or rationalism.  I'm a bit lukewarm on rationalism, but enthusiastically pro EA, and have, in fact, written lengthy responses to many of the critics of EA.  Really my aim was to show that Eliezer is worthy of much less deference than he is currently given, and to argue the object level--that many of his views, commonly believed in the community, are badly mistaken. 

JWS @ 2023-08-29T15:08 (+3)

I guess on #3, I suggest reading Inadequate Equilibria. I think it's given me more insight into Eliezer's approach to making claims. The Bank of Japan example he uses in the book is probably, ironically, one of the clearest examples of an egregious and overconfident mistake. I think the question of when to trust your own judgement over experts, of how much to incorporate expert views into your own, and of how to identify experts in the first place is an open and unsolved issue (perhaps insoluble?).

Point taken on #5, was definitely my most speculative point.

I think it comes back to Point #1 for me. If your core aim was "to show that Eliezer is worthy of much less deference than he is currently given", then I'd want you to show how much deference is given to him over and above the validity of his ideas, the mechanisms by which they spread in the community, and why that's a potential issue, rather than litigating individual object-level cases. Instead, if your issue is the commonly-believed views in the community that you think are incorrect, then you could have argued against those beliefs without necessarily invoking or focusing on Eliezer. In a way the post suffers from kinda trying to be both of those critiques at once, at least in my opinion. That's at least the feedback I'd give if you wanted to revisit this issue (or a similar one) in the future.

Scott Alexander @ 2023-08-28T19:50 (+12)

I won't comment on the overall advisability of this piece, but I think you're confused about the decision theory (I'm about ten years behind the state of the art here, and only barely understood it ten years ago, so I might be wrong).

The blackmail situation seems analogous to the Counterfactual Mugging, which was created to highlight how Eliezer's decision theories sometimes (my flippant summary) suggest you make locally bad decisions in order to benefit versions of you in different Everett branches. Schwarz objecting "But look how locally bad this decision is!" isn't telling Eliezer anything he doesn't already know, and isn't engaging with the reasoning. I think I would pay Omega in Counterfactual Mugging; I agree Schwarz's case is harder, but provisionally I think it unintentionally adds a layer of Pascal's Wager + torture vs. dust specks by making the numbers so extreme, which are two totally unrelated reasoning vortices.
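(For concreteness, here is a minimal sketch of the standard Counterfactual Mugging arithmetic, using the conventional $10,000/$100 payoffs; the numbers are illustrative and not anything Eliezer or Schwarz commit to. Omega flips a fair coin: on heads it pays the prize only to agents it predicts would pay $100 on tails; on tails it asks for the $100.)

```python
# Minimal sketch of the Counterfactual Mugging payoffs (standard toy numbers).
PRIZE = 10_000   # paid on heads, but only to agents predicted to pay on tails
COST = 100       # what Omega asks for on tails
P_HEADS = 0.5

def expected_value(pays_on_tails: bool) -> float:
    heads_payoff = PRIZE if pays_on_tails else 0
    tails_payoff = -COST if pays_on_tails else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

print(expected_value(True))   # 4950.0 -> the "paying" policy wins before the flip
print(expected_value(False))  # 0.0
# But conditional on the coin having already landed tails, paying is just -100,
# which is exactly the "locally bad decision" the thought experiment highlights.
```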

I think the "should you procreate to make your father procreate?" question only works if your father's cognitive algorithms are perfectly correlated with yours, which no real father's are. To make the example fair, it should be more like "You were created by Omega, a god who transcends time. It resolved to created you if and only if It predicted that you would procreate, and It is able to predict everything perfectly. Now should you procreate?" I would also accept "You were created by a clone of yourself in the exact same situation, down to the atom, that you find yourself in now, including worrying about being created by a clone of yourself and so on. Should you procreate?" In both of these, the question seems much more open than with a normal human father.

If Eliezer's decision theories make no sense and are ignoring easy disproofs, then everyone else who finds them compelling (or at least not obviously wrong) after long study, including people like Wei Dai, Abram Demski, Scott Garrabrant, Benya Fallenstein, etc, is also bizarrely and inexplicably wrong. This is starting to sound less like "Eliezer is a uniquely bad reasoner" and more like "there's something in the water supply here that makes extremely bright people with math PhDs make simple dumb mistakes that any rando can notice."

keith_wynroe @ 2023-08-29T01:00 (+36)

>that makes extremely bright people with math PhDs make simple dumb mistakes that any rando can notice

Bright math PhDs that have already been selected for largely buying into Eliezer's philosophy/worldview, which changes how you should view this evidence. Personally I don't think FDT is wrong as much as just talking past the other theories and being confused about that, and that's a much more subtle mistake that very smart math PhDs could very understandably make

Guy Raveh @ 2023-08-28T20:58 (+12)

This is starting to sound less like "Eliezer is a uniquely bad reasoner" and more like "there's something in the water supply here that makes extremely bright people with math PhDs make simple dumb mistakes that any rando can notice."

Independently of all the wild decision theory stuff, I don't think this is true at all. It's more akin to how for a few good years, people thought Mochizuki might have proven the ABC conjecture. It's not that he was right - just that he wrapped everything in so much new theory and terminology, that it took years for people to understand what he meant well enough to debunk him. He was still entirely wrong.

Scott Alexander @ 2023-08-28T22:07 (+6)

Were there bright people who said they had checked his work, understood it, agreed with him, and were trying to build on it? Or just people who weren't yet sure he was wrong?

David Mathers @ 2023-08-29T07:49 (+14)

'Were there bright people who said they had checked his work, understood it, agreed with him, and were trying to build on it?'

Yes, I think. Though my impression (Guy can make a better guess of this than me, since he has maths background) is that they were an extreme minority in the field, and all socially connected to Mochizuki:  https://www.wired.com/story/math-titans-clash-over-epic-proof-of-the-abc-conjecture/

'Between 12 and 18 mathematicians who have studied the proof in depth believe it is correct, wrote Ivan Fesenko of the University of Nottingham in an email. But only mathematicians in “Mochizuki’s orbit” have vouched for the proof’s correctness, Conrad commented in a blog discussion last December. “There is nobody else out there who has been willing to say even off the record that they are confident the proof is complete.”'

In any case with FDT, it might not really be an either/or of 'people who endorse it are clearly mistaken' v. 'the critiques are clearly mistaken'. Often in philosophy, all known views have significant costs, but it's unclear what that means about what you should accept/reject. In any case, as I've said elsewhere in this comment section, FDT has now been defended in the Journal of Philosophy, so in terms of academic philosophy it is very definitely out of the crank category sociologically (rightly or wrongly): https://philpapers.org/rec/LEVCDI
https://leiterreports.typepad.com/blog/2022/07/best-general-philosophy-journals-2022.html
That makes me fairly confident FDT has something going for it. 

Guy Raveh @ 2023-08-30T21:00 (+5)

I might add that Mochizuki also isn't a crank (as far as I understand, at least) - just someone who made it difficult to realise that he was wrong in his very big claim despite being a smart person.

David Mathers @ 2023-08-30T11:47 (+10)

'The blackmail situation seems analogous to the Counterfactual Mugging, which was created to highlight how Eliezer's decision theories sometimes (my flippant summary) suggest you make locally bad decisions in order to benefit versions of you in different Everett branches. Schwarz objecting "But look how locally bad this decision is!" isn't telling Eliezer anything he doesn't already know, and isn't engaging with the reasoning'

I just control-F searched the paper Schwarz reviewed, for "Everett", "quantum", "many-worlds" and "branch" and found zero hits. Can't really blame Schwarz for ignoring an argument that does not appear in the paper! There's no mention of these in the Soares and Levinstein FDT paper that did get published in J Phil either.

TAG @ 2023-08-30T12:17 (+2)

The remark about Everett branches rather gives the game away. Decision theories rest on assumptions about the nature of the universe and of the decider, so trying to formulate a DT that will work perfectly in any universe is hopeless.

Omnizoid @ 2023-08-28T20:11 (+2)

If your action affects what happens in other Everett branches, such that there are actual, concretely existing people whose well-being is affected by your response to the blackmail, then that is not relevantly like the case given by Schwarz.  That case seems relevantly like the twin case, where I think there might be a way for a causal decision theorist to accommodate the intuition, but I am not sure.  

We can reconstruct the case without torture vs dust specks reasoning, because that's plausibly a confounder.  Suppose a demon is likely to create people who will cut off their legs once they exist.  Suppose being created by the demon is very good.  Once you're created, do you have any reason to cut off your legs, assuming it doesn't benefit anyone else?  No! 
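(To make the structure of this example concrete, here is a sketch with made-up utility numbers, purely for illustration: suppose being created is worth 100, cutting off your legs costs 10, and the demon creates exactly those it predicts will cut. Then "be the kind of agent who cuts" wins ex ante, while "actually cut, now that you exist" is a pure loss ex post, which is the distinction being argued over.)

```python
# Hypothetical payoffs for the leg-cutting demon case (numbers are invented).
CREATION_VALUE = 100   # how good it is to be created by the demon
LEG_COST = 10          # the cost of cutting off your legs

def ex_ante_value(agent_would_cut: bool) -> int:
    # A (near-)infallible demon creates you only if it predicts you'd cut.
    if agent_would_cut:
        return CREATION_VALUE - LEG_COST   # created, then pays the cost
    return 0                               # never created at all

def ex_post_value(already_created: bool, cuts_now: bool) -> int:
    # Once you already exist, the creation value is yours either way;
    # cutting only subtracts from it.
    if not already_created:
        return 0
    return CREATION_VALUE - (LEG_COST if cuts_now else 0)

print(ex_ante_value(True), ex_ante_value(False))              # 90 0
print(ex_post_value(True, True), ex_post_value(True, False))  # 90 100
```

Roughly, both sides of the exchange below agree on these numbers; the disagreement is over whether rationality is evaluated at the policy level (the first function) or at the moment of choice (the second).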

In the twin case, suppose that there are beings named Bob.  Each being named Bob is almost identical to the last one--their choices are 99.9% correlated--and can endure great cost to create another Bob when he dies.  It seems instrumentally irrational not to bear great costs.  

I think it's plausible that most people are just not very good at generating true beliefs about philosophy, just as they're not good at generating true beliefs about physics.  Philosophy is really fricking hard!  So the phenomenon "lots of smart people with a math background rather than a philosophy background hold implausible views about philosophy," isn't news.  However, if someone claims to be the expert on physics, philosophy, decision theory, and AI, and then they turn out to be very confused about philosophy, then that is a mark against their reasoning abilities.  

It's true that there is a separate interesting question about how so many smart people go so wrong about philosophy (note, I'd dispute the characterization that these are errors basic enough that a rando can figure them out--I think it wouldn't have been obvious to me what the errors were if it weren't for MacAskill and Schwarz, who are very much non-randos).  But the thing I intended to convey in this article is that I know a lot about philosophy, such that I think I'm pretty good at assessing when people are totally wrong in the area of philosophy.  This isn't true of many things.  And the substantial majority of the time, when Eliezer has a controversial philosophical view, it turns out to be badly confused. 

TAG @ 2023-09-07T12:29 (+4)

However, if someone claims to be the expert on physics, philosophy, decision theory, and AI, and then they turn out to be very confused about philosophy, then that is a mark against their reasoning abilities.

It's more effective to show they are confused about maths, physics and AI, since it is much easier to establish truth/consensus in those fields.

Scott Alexander @ 2023-08-28T22:06 (+3)

I don't want to get into a long back-and-forth here, but for the record I still think you're misunderstanding what I flippantly described as "other Everett branches" and missing the entire motivation behind Counterfactual Mugging. It is definitely not supposed to directly make sense in the exact situation you're in. I think this is part of why a variant of it is called "updateless", because it makes a principled refusal to update on which world you find yourself in, in order to (more flippant not-quite-right description) program the type of AIs that would win weird games played against omniscient entities.

If the demon would only create me conditional on me cutting off my legs after I existed, and it was the specific class of omniscient entity that FDT is motivated by winning games with, then I would endorse cutting off my legs in that situation. 

(as a not-exactly-right-but-maybe-helpful intuition pump, consider that if the demon isn't omniscient - but simply reads the EA Forum - or more strictly can predict the text that will appear on the EA Forum years in the future - it would now plan to create me but not you, and I with my decision theory would be better off than you with yours. And surely omniscience is a stronger case than just reads-the-EA-Forum!)

If this sounds completely stupid to you and you haven't yet read the LW posts on Counterfactual Mugging, I would recommend starting there; otherwise, consider finding a competent and motivated FDT proponent (i.e. not me) and trying to do some kind of double-crux or debate with them, I'd be interested in seeing the results.

Omnizoid @ 2023-08-29T03:37 (+1)

Oh sorry, yeah I misunderstood what point you were making.  I agree that you want to be the type of agent who cuts off their legs--you become better off in expectation.  But the mere fact that the type of agent who does A rather than B gets more utility on average does not mean that you should necessarily do A rather than B.  If you know you are in a situation where doing A is guaranteed to get you less utility than B, you should do B.  The question of which agent you should want to be is not the same as which agent is acting rationally.  I agree with MacAskill's suggestion that FDT is the result of conflating what type of agent to be with what actions are rational.  FDT is close to the right answer for the second and a crazy answer for the first imo.  

Happy to debate someone about FDT.  I'll make a post on LessWrong about it.  

One other point, I know that this will sound like a cop-out, but I think that the FDT stuff is the weakest example in the post.  I am maybe 95% confident that FDT is wrong, while 99.9% confident that Eliezer's response to zombies fails and 99.9% confident that he's overconfident about animal consciousness.

Scott Alexander @ 2023-08-29T05:34 (+4)

Sorry if I misunderstood your point. I agree this is the strongest objection against FDT. I think there is some sense in which I can become the kind of agent who cuts off their legs (ie by choosing to cut off my legs), but I admit this is poorly specified.

I think there's a stronger case for, right now, having heard about FDT for the first time, deciding I will follow FDT in the future. Various gods and demons can observe this and condition on my decision, so when the actual future comes around, they will treat me as an FDT-following agent rather than a non-FDT-following agent. Even though future-created-me isn't exactly in a position to influence the (long-since gone) demon, current me is in a position to make this decision for future relevant situations, and should decide to follow FDT in general. Part of this decision I've made involves being the kind of person who would take the FDT option in hypothetical scenarios.

Then there's the additional question of whether to defect against the demons/gods later, and say "Haha, back in August 2023 I resolved to become an FDT agent, and I fooled you into believing me, but now that I've been created I'm just going to not cut off my legs after all". I think of this as - suppose every past being created by the demon has cut off its legs, ie the demon has a 100% predictive success rate over millions of cases. So the demon would surely predict if I would do this. That means I should (now) try really hard not to do this. Cf. Parfit's Hitchhiker. Can I bind my future self like this? I think empirically yes - I think I have enough honor that if I tell hypothetical demon gods now that I'm going to do various things, I can actually do them when the time comes. This will be "irrational" in some sense, but I'll still end up with more utility than everyone else. 

Is there some sense in which, if I decide not to cut off my legs, I would wink out of existence? I admit feeling a superstitious temptation to believe this (a non-superstitious justification might be wondering if I'm the real me, or a version of me in the omniscient demon's simulation to predict what I would do). I think the literal answer is no but that it's practically useful to keep my superstitious belief in this to allow myself to do the irrational thing that gets me more utility. But this is a weird enough sidetrack that I'm really not sure I'm still in normal Eliezer-approved-decision-theory-land at all.

I think an easier question is whether you should program an AI to always keep its pre-emptive bargains with gods and demons; here the answer is just straightforwardly yes. You don't have to assume that your actions alter your algorithm, you can just alter the algorithm directly. I think this is what Eliezer is most interested in, though I'm not sure.

Omnizoid @ 2023-08-29T14:14 (+2)

I know you said you didn't want to repeatedly go back and forth, but . . . 

Yes, I agree that if you have some psychological mechanism by which you can guarantee that you'll follow through on future promises--like programming an AI--then that's worth it.  It's better to be the kind of agent who follows FDT (in many cases).  But the way I'd think about this is that this is an example of rational irrationality, where it's rational to try to get yourself to do something irrational in the future because you get rewarded for it.  But remember, decision theories are theories about what's rational, not theories about what kind of agent you should be.  

I think we agree with all of the following claims: 

  1. If you have some way to commit in advance to follow FDT in cases like the demon case or the bomb case, you should do so.  
  2. Once you are in those cases, you have most reason to defect.  
  3. Given that you can predict that you'll have most reason to defect, you can sort of psychologically make a deal with your future self where you say "NO REALLY, DON'T DEFECT, I'M SERIOUS."  

My claim though, is that decision theory is about 2, rather than 1 or 3.  No one disputes that the kinds of agents who two box do worse than the kinds of agents who one box--the question is about what you should do once you're in that situation. 

If an AI is going to encounter Newcomb's problem a lot, everyone agrees you should program it to one box. 
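(For reference, here is the standard arithmetic behind the claim that one-boxing agents end up richer on average, using the usual $1,000/$1,000,000 payoffs and a predictor that is right with probability p; these are the conventional toy numbers, not anything specific to this exchange.)

```python
# Standard Newcomb payoffs: the opaque box holds $1,000,000 iff the predictor
# predicted one-boxing; the transparent box always holds $1,000.
BIG, SMALL = 1_000_000, 1_000

def expected_payoff(one_boxes: bool, p: float = 0.999) -> float:
    if one_boxes:
        # Predictor correct (prob p): the big box is full.
        return p * BIG + (1 - p) * 0
    # Two-boxing: predictor correct (prob p) means the big box is empty.
    return p * SMALL + (1 - p) * (BIG + SMALL)

print(expected_payoff(True))   # ~999000
print(expected_payoff(False))  # ~2000
```

Everyone in this exchange agrees on these expectations; the dispute is whether they settle what it is rational to do once the boxes are already filled.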

Scott Alexander @ 2023-08-29T18:22 (+2)

I guess any omniscient demon reading this to assess my ability to precommit will have learned I can't even precommit effectively to not having long back-and-forth discussions, let alone cutting my legs off. But I'm still interested in where you're coming from here since I don't think I've heard your exact position before.

Have you read https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality ? Do you agree that this is our crux?

Would you endorse the statement "Eliezer, using his decision theory, will usually end out with more utility than me over a long life of encountering the sorts of weird demonic situations decision theorists analyze, I just think he is less formally-rational" ? 

Or do you expect that you will, over the long run, get more utility than him?

Omnizoid @ 2023-08-30T00:19 (+1)

I would agree with the statement "if Eliezer followed his decision theory, and the world was such that one frequently encountered lots of Newcomb's problems and similar, you'd end up with more utility."  I think my position is relatively like MacAskill's in the linked post, where he says that FDT is better as a theory of the agent you should want to be than of what's rational.  

But I think that rationality won't always benefit you.  I think you'd agree with that.  If there's a demon who tortures everyone who believes FDT, then believing FDT, which you'd regard as rational, would make you worse off.  If there's another demon who will secretly torture you if you one box, then one boxing is bad for you!  It's possible to make up contrived scenarios that punish being rational--and Newcomb's problem is a good example of that.

Notably, if we're in the twin scenario or the scenario that tortures FDTists, CDT will dramatically beat FDT.  

I think the example that's most worth focusing on is the demon legs cut off case.  I think it's not crazy at all to one box, and have maybe 35% credence that one boxing is right.  I have maybe 95% credence that you shouldn't cut off your legs in the demon case, and 80% confidence that the position that you can is crazy, in the sense that if you spent years thinking about it while being relatively unbiased you'd almost certainly give it up. 

Scott Alexander @ 2023-08-30T03:24 (+6)

I think rather than say that Eliezer is wrong about decision theory, you should say that Eliezer's goal is to come up with a decision theory that helps him get utility, and your goal is something else, and you have both come up with very nice decision theories for achieving your goal.

(what is your goal?)

My opinion on your response to the demon question is "The demon would never create you in the first place, so who cares what you think?" That is, I think your formulation of the problem includes a paradox - we assume the demon is always right, but also, that you're in a perfect position to betray it and it can't stop you. What would actually happen is the demon would create a bunch of people with amputation fetishes, plus me and Eliezer who it knows wouldn't betray it, and it would never put you in the position of getting to make the choice in real life (as opposed to in an FDT algorithmic way) in the first place. The reason you find the demon example more compelling than the Newcomb example is that it starts by making an assumption that undermines the whole problem - that is, that the demon has failed its omniscience check and created you who are destined to betray it. If your problem setup contains an implicit contradiction, you can prove anything.

I don't think this is as degenerate a case as "a demon will torture everyone who believes FDT". If that were true, and I expected to encounter that demon, I would simply try not to believe FDT (insofar as I can voluntarily change my beliefs). While you can always be screwed over by weird demons, I think decision theory is about what to choose in cases where you have all of the available knowledge and also a choice in the matter, and I think the leg demon fits that situation.

Omnizoid @ 2023-08-31T00:02 (+1)

The demon case shows that there are cases where FDT loses, as is true of all decision theories.  If the question is which decision theory, programmed into an AI, will generate the most utility, then that's an empirical question that depends on facts about the world.  If the question is which actions will get you the most utility once you're already in a given situation, well, that's causal decision theory.  

Decision theories are intended as theories of what it is rational for you to do.  They describe which choices are wise and which are foolish.  I think Eliezer is confused about what a decision theory is, and that is a reason to trust his judgment less.  

In the demon case, we can assume the demon is only almost infallible, so it makes a mistake once every million times.  The demon case is a better example than Newcomb's problem, because I have some credence in EDT, and EDT entails that you should one box.  I am waaaaaaaaaaaay more confident that FDT is crazy than I am that you should two box. 
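
To make the disagreement explicit, here is a rough sketch of that nearly-infallible demon case as I understand it from this thread, with placeholder utilities that are purely my own illustrative assumptions.

```python
# Assumed setup: the demon creates only those it predicts will cut off their
# legs, and errs once in a million predictions. Utilities are illustrative.
error = 1e-6
u_exist = 100.0   # assumed value of getting to exist at all
u_legs = -10.0    # assumed cost of cutting off your legs

# Ex ante: which disposition does better before anyone is created?
ev_complier = (1 - error) * (u_exist + u_legs)  # created, then complies
ev_defector = error * u_exist                   # created only by mistake, keeps legs

# Ex post: you already exist, so keeping your legs gives you more utility.
u_comply_now = u_exist + u_legs  # 90.0
u_defect_now = u_exist           # 100.0

print(ev_complier, ev_defector)    # ~90.0 vs ~0.0001
print(u_comply_now, u_defect_now)  # 90.0 vs 100.0
```

FDT is answering the first comparison; my claim is that decision theory is about the second.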

Scott Alexander @ 2023-09-01T02:35 (+2)

I thought we already agreed the demon case showed that FDT wins in real life, since FDT agents will consistently end up with more utility than other agents.

Eliezer's argument is that you can become the kind of entity that is programmed to do X, by choosing to do X. This is in some ways a claim about demons (they are good enough to predict even the choices you made with "your free will"). But it sounds like we're in fact positing that demons are that good - I don't know how else to explain their 999,999-in-a-million success rate - so I think he is right.

I don't think the demon being wrong one in a million times changes much. 999,999 of the people created by the demon will be some kind of FDT decision theorist with great precommitment skills. If you're the one who isn't, you can observe that you're the demon's rare mistake and avoid cutting off your legs, but this just means you won the lottery - it's not a generally winning strategy.

Decision theories are intended as theories of what it is rational for you to do.  They describe which choices are wise and which are foolish. 

I don't understand why you think that the choices that get you more utility with no drawbacks are foolish, and the choices that cost you utility for no reason are wise.

On the Newcomb's Problem post, Eliezer explicitly said that he doesn't care why other people are doing decision theory, he would like to figure out a way to get more utility. Then he did that. I think if you disagree with his goal, you should be arguing "decision theory should be about looking good, not about getting utility" (so we can all laugh at you) rather than saying "Eliezer is confidently and egregiously wrong" and hiding the fact that one of your main arguments is that he said we should try to get utility instead of failing all the time and then came up with a strategy that successfully does that.

Omnizoid @ 2023-09-02T15:44 (+1)

We all agree that you should get utility.  You are pointing out that FDT agents get more utility.  But once they are already in the situation where they've been created by the demon, FDT agents get less utility.  If you are the type of agent to follow FDT, you will get more utility, just as you'll get more utility if you are the type of agent who follows CDT in a scenario that tortures FDTists.  The question of decision theory is: given the situation you are in, what gets you more utility--what is the rational thing to do?  Eliezer's theory turns you into the type of agent who often gets more utility, but that does not make it the right decision theory.  The fact that you want to be the type of agent who does X doesn't make doing X rational if doing X is bad for you and not doing X is rewarded artificially.  

Again, there is no dispute about whether one boxers or two boxers get more utility on average, or about which kind of AI you should build. 

MichaelStJules @ 2023-08-27T18:58 (+12)

Some other discussion of his views on (animal) consciousness here (and in the comments).

Tiresias @ 2023-09-01T06:29 (+11)

I really appreciate this post, and think you did a great job writing it. This is one of the most comprehensive summaries of animal consciousness research I have seen, and I will likely be referring back to it. If you're interested, I have compiled a few sources that try to demonstrate that "animals are conscious" is the consensus view among people who study it. (I was dating someone who weakly believed that animals weren't conscious, so I sent him a 7 page email on animal consciousness).

I would summarize the errors you're describing as such:

 

The zombie and animal errors feel like fundamental, egregious errors. The decision theory error just feels like a philosophical disagreement? Your critique of it sounds like a lot of philosophical critiques of other philosophical theories. So a disagreement, but not evidence of egregious errors. But I'm not a philosopher and haven't read philosophy in a long, long time. So I may be mistaken about the nature of your disagreement.

Jackson Wagner @ 2023-08-28T22:15 (+8)

I suggest maybe re-titling this post to:
"I strongly disagree with Eliezer Yudkowsky about the philosophy of consciousness and decision theory, and so do lots of other academic philosophers"

or maybe:
"Eliezer Yudkowsky is Frequently, Confidently, Egregiously Wrong, About Metaphysics"

or consider:
"Eliezer's ideas about Zombies, Decision Theory, and Animal Consciousness, seem crazy"

Otherwise it seems pretty misleading / clickbaity (and indeed overconfident) to extrapolate from these beliefs, to other notable beliefs of Eliezer's -- such as cryonics, quantum mechanics, macroeconomics, various political issues, various beliefs about AI of course, etc.  Personally, I clicked on this post really expecting to see a bunch of stuff like "in March 2022 Eliezer confidently claimed that the government of Russia would collapse within 90 days, and it did not", or "Eliezer said for years that X approach to AI couldn't possibly scale, but then it did".

Personally, I feel that beliefs within this narrow slice of philosophy topics are unlikely to correlate with being "egregiously wrong" in other fields.  (Philosophy is famously hard!!  So even though I agree with you that his stance on animal consciousness seems pretty crazy, I don't really hold this kind of philosophical disagreement against people when they make predictions about, eg, current events.)

Jackson Wagner @ 2023-08-29T06:52 (+6)

reposting a reply by Omnizoid from Lesswrong:

"Philosophy is pretty much the only subject that I'm very informed about.  So as a consequence, I can confidently say Eliezer is eggregiously wrong about most of the controversial views I can fact check him on.  That's . . . worrying."

And my reply to that:

Some other potentially controversial views that a philosopher might be able to fact-check Eliezer on, based on skimming through an index of the sequences:

  • Assorted confident statements about the obvious supremacy of Bayesian probability theory and how Frequentists are obviously wrong/crazy/confused/etc.  (IMO he's right about this stuff.  But idk if this counts as controversial enough within academia?)
  • Probably a lot of assorted philosophy-of-science stuff about the nature of evidence, the idea that high-caliber rationality ought to operate "faster than science", etc.  (IMO he's right about the big picture here, although this topic covers a lot of ground so if you looked closely you could probably find some quibbles.)
  • The claim / implication that talk of "emergence" or the study of "complexity science" is basically bunk.  (Not sure but seems like he's probably right?  Good chance the ultimate resolution would probably be "emergence/complexity is a much less helpful concept than its fans think, but more helpful than zero".)
  • A lot of assorted references to cognitive and evolutionary psychology, including probably a number of studies that haven't replicated -- I think Eliezer has expressed regret at some of this and said he would write the sequences differently today.  But there are probably a bunch of somewhat-controversial psychology factoids that Eliezer would still confidently stand by.  (IMO you could probably nail him on some stuff here.)
  • Maybe some assorted claims about the nature of evolution?  What it's optimizing for, what it produces ("adaptation-executors, not fitness-maximizers"), where the logic can & can't be extended (can corporations be said to evolve?  EY says no), whether group selection happens in real life (EY says basically never).  Not sure if any of these claims are controversial though.
  • Lots of confident claims about the idea of "intelligence" -- that it is a coherent concept, an important trait, etc.  (Vs some philosophers who might say there's no one thing that can be called intelligence, or that the word intelligence has no meaning, or generally make the kinds of arguments parodied in "On the Impossibility of Supersized Machines".  Surely there are still plenty of these philosophers going around today, even though I think they're very wrong?)
  • Some pretty pure philosophy about the nature of words/concepts, and "the relationship between cognition and concept formation".  I feel like philosophers have a lot of hot takes about linguistics, and the way we structure concepts inside our minds, and so forth?  (IMO you could at least definitely find some quibbles, even if the big picture looks right.)
  • Eliezer confidently dismissing what he calls a key tenet of "postmodernism" in several places -- the idea that different "truths" can be true for different cultures.  (IMO he's right to dismiss this.)
  • Some pretty confident (all things considered!) claims about moral anti-realism and the proper ethical attitude to take towards life?  (I found his writing helpful and interesting but idk if it's the last word, personally I feel very uncertain about this stuff.)
  • Eliezer's confident rejection of religion at many points.  (Is it too obvious, in academic circles, that all major religions are false?  Or is this still controversial enough, with however many billions of self-identified believers worldwide, that you can get credit for calling it?)
  • It also feels like some of the more abstract AI alignment stuff (about the fundamental nature of "agents", what it means to have a "goal" or "values", etc) might be amenable to philosophical critique.

Maybe you toss out half of those because they aren't seriously disputed by any legit academics.  But, I am pretty sure that at least postmodern philosophers, "complexity scientists", people with bad takes on philosophy-of-science / philosophy-of-probability, and people who make "On the Impossibility of Supersized Machines"-style arguments about intelligence, are really out there!  They at least consider themselves to be legit, even if you and I are skeptical!  So I think EY would come across with a pretty good track record of correct philosophy at the end of the day, if you truly took the entire reference class of "controversial philosophical claims" and somehow graded how correct EY was (in practice, since we haven't yet solved philosophy -- how close he is to your own views?), and compared this to how correct the average philosopher is.

Sylvester Kollin @ 2023-08-27T10:17 (+7)

Wolfgang Schwartz

It's Schwarz.

Evidential decision theory would say that you shouldn’t smoke because smoking gives you evidence that you’ll have a shorter life.

Not so important, but I feel obliged to mention that this has been argued against by e.g. Eells (1982) and Ahmed (2014). In short, smoking will plausibly be preceded by a desire to smoke, and once you have observed your own desire to smoke, smoking or not provides no additional evidence of cancer.
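
A toy Bayesian model makes the point concrete (the probabilities are illustrative placeholders of mine, not numbers from Eells, Ahmed, or Oesterheld): the lesion raises both the desire to smoke and the cancer risk, while the act of smoking is downstream of the desire only, so once you have observed the desire, smoking adds no further evidence.

```python
# Illustrative lesion model: lesion -> desire -> smoking, and lesion -> cancer.
p_lesion = 0.2
p_desire_given_lesion, p_desire_given_no_lesion = 0.9, 0.1
p_cancer_given_lesion, p_cancer_given_no_lesion = 0.8, 0.05

# Posterior probability of the lesion after introspecting the desire to smoke.
num = p_desire_given_lesion * p_lesion
p_lesion_given_desire = num / (num + p_desire_given_no_lesion * (1 - p_lesion))

# Given the desire, the cancer risk is the same whether you then smoke or
# abstain, because in this model smoking carries no information about the
# lesion beyond the desire itself.
p_cancer_given_desire = (p_cancer_given_lesion * p_lesion_given_desire
                         + p_cancer_given_no_lesion * (1 - p_lesion_given_desire))
print(round(p_lesion_given_desire, 3), round(p_cancer_given_desire, 3))
```

Under full introspection of that kind, evidential reasoning no longer recommends abstaining.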

David Mathers @ 2023-08-29T11:18 (+2)

Disagree, or at least it doesn't have to be like that: I think deciding to smoke can give further evidence of the strength of the desire, which could in turn be further evidence that you are genetically predisposed to cancer, if there is a common genetic cause of the desire to smoke and of getting cancer. 

Sylvester Kollin @ 2023-08-30T13:18 (+3)

Seems like you are thinking of a case without full introspection. Both Eells and Ahmed provide convincing tickle defences in this case as well. See Oesterheld (2022) for a review of the arguments (especially sections 6.3 and 6.4). 

David Mathers @ 2023-08-30T14:45 (+2)

At this point I have to admit that we've gotten beyond my knowledge of this stuff, and I can't really follow your comment! Having glanced at the Oesterheld, I think I get what's going on, sort of, though it's not clear to me why decision theory should start with the assumption of perfect introspection. I also suspect you can come up with a case where your X-ing provides evidence that some bad thing Y will happen but does not causally influence Y, and which gets around this while still looking bad for evidential decision theory. But that's only a guess, as again, I have no background on this stuff beyond a generic philosophy education. 

Omnizoid @ 2023-08-27T12:47 (+1)

Yeah, though we can imagine that everyone feels a similar urge to smoke, but it's only the people with the lesion who ultimately decide to smoke. 

Sylvester Kollin @ 2023-08-27T16:52 (+2)

As Ahmed notes (chapter 4.3.1), if the lesion doesn't work through your beliefs and desires, smoking is not a genuine option, and so this is not an argument against evidentialism.

prisonpent @ 2023-08-28T05:01 (+6)

Physicalists think once you’ve specified the way that matter behaves, that is sufficient to explain consciousness. Consciousness, just like tables and chairs, can be fully explained in terms of the behavior of physical things.

Non-physicalists think that the physicalists are wrong about this. Consciousness is its own separate thing that is not explainable just in terms of the way matter behaves. There are more niche views like idealism and panpsychism that we don’t need to go into, which say that consciousness is either fundamental to all particles or the only thing that exists, so let’s ignore them. The main view about consciousness is called dualism, according to which consciousness is non-physical and there are some psychophysical laws, that result in consciousness when there are particular physical arrangements.

 

This sort of framing, which conflates several distinct theses, is ironically an excellent example of LessWrong received wisdom leaking into the water supply. These are of course not unrelated topics, but they're not the same. 

1.

Physicalism is the thesis that only the physical exists. It is an extremely broad class of theories, differentiated in large part (but not exclusively) by disputes over what counts as physical. The main alternatives are dualism and neutral monism (though this is arguably still physicalism). Idealism is deader than dead. 

Physicalism is not, and does not entail, illusionism.

Illusionism, aka eliminativism about consciousness, is very fringe and the vast majority of physicalists reject it. 

2.

Dualism is not "the main view" of consciousness. A slim majority of philosophers are physicalists.  

3.

Panpsychism is a thesis about what sort of physical systems have mental states (namely: all of them), not what mental states are or their causal structure. It is entirely compatible with both physicalism and property dualism. (And I suppose with substance dualism as well, though I'm not sure what would motivate that particular combination.)

4. 

Dualism is not emergentism. On the contrary, emergentism is typically (though not always) a physicalist position - and the claim that emergence entails substance dualism is one of the main lines of argument against it!

David Mathers @ 2023-08-28T11:32 (+2)

This is not the subarea of consciousness research I am most expert in, and I am not a very good philosopher, but I have long had the suspicion that "emergent" doesn't really mean anything precise at all, but is just a term used by scientists who want to (possibly sensibly) avoid thinking about metaphysics. I mean, I'm sure you can find philosophers using it, but if I see a philosopher say it, I don't feel like I immediately know what they mean, whereas I do (at least roughly) with "physicalism", "dualism", "panpsychism" and "eliminativism".
 

prisonpent @ 2023-08-28T19:36 (+1)

but is just a term used by scientists who want to (possibly sensibly) avoid thinking about metaphysics

It's certainly that, but I don't think it's just that. I've seen at least one instance (though I can't remember where) of someone explicitly not-rejecting the possibility of natural laws that switch on, so to speak, above a certain scale.

David Mathers @ 2023-08-29T07:22 (+2)

Yeah, I know it is sometimes used by philosophers with specific precise meanings, it's just I've never been sure that there is a standard precise(ish) meaning. 

david_reinstein @ 2023-08-27T12:54 (+5)

There’s a 1 in a googol chance that he’ll blackmail someone who would give in to the blackmail and a googol-1/googol chance that he’ll blackmail someone who won’t give in to the blackmail.

Did you mean the opposite of this? It sounds like you are saying he would almost never blackmail someone who WOULD give in and almost always blackmail someone who WOULDN'T give in.

Omnizoid @ 2023-08-27T12:57 (+1)

Yes, sorry! 

david_reinstein @ 2023-08-27T13:10 (+2)

Phew. Please fix when you have a moment thanks. (Otherwise people may start to think they are not understanding things and give up reading.)

Omnizoid @ 2023-08-27T13:19 (+3)

Fixed! 

quinn @ 2023-08-27T22:55 (+3)

It does seem like a misjudgment, because the point of "my friends are being sucked in by a charismatic cult leader" doesn't necessarily have a lot to do with object-level conclusions. It's about framing, the way attention is directed. An example of what I mean: "believing true things is hard and evolution's spaghetti code is unusually bad at it" is a frame (a characterization of an open problem), and you don't just throw it away when you say "this particular study was believed very credulously because no one had tried replicating it by the time Thinking, Fast and Slow was published, but you should've smelled/predicted something was wrong back then". If you're worried about overconfidence or overdeference among your friend group, it's pretty unrealistic to expect them to just take the wrong outputs at face value--people correcting someone's mistakes is just the peer review process working as intended! If you really want to press this concern, you should show us that "if you're starting from correcting his object-level mistakes, then you're not being maximally efficient or clear in your own pursuit of answers". I think that would work! 

Apparently some old-school news anchor, from the 1950s or thereabouts, said "we don't tell people what to think; we tell them what to think about". That, to me, is obviously the true source of the fraught cult-leader stuff, if there is any! 

oivavoi @ 2023-08-27T07:12 (+3)

Thanks a lot. This was a very convincing and valuable take-down of Eliezer. I tend to think, like you, that Eliezer's way of reasoning from first principles has done real damage to epistemic practices in EA circles. Just try to follow the actual evidence, for rationality's sake. It isn't more complicated than that.

Jackson Wagner @ 2023-08-28T22:04 (+8)

But all three parts of this "takedown" are about questions of philosophy / metaphysics?  How do you suggest that I "follow the actual evidence" and avoid "first principles reasoning" when we are trying to learn about the nature of consciousness or the optimal way to make decisions??

oivavoi @ 2023-08-29T19:25 (+9)

I realize that my comment was somewhat poorly worded. I do not mean that you can follow the evidence in an absolute and empirical sense when forming a belief about the nature of consciousness. What you can do, however, and what Eliezer doesn't do, is pay attention to what the philosophers who spend their lives working on this question are saying, and take their arguments seriously. The first-principles approach is, roughly, "I have an idea about consciousness which I think is right, so I will not spend too much time looking at what actual philosophers are saying". 

(I did a master's degree in philosophy before turning to a career in social science, so at least I know enough about contemporary analytic philosophy to know what I don't know)

My comment "just follow the actual evidence" was not regarding consciousness or metaphysics, but regarding broader epistemic tendencies in the EA community. This tendency is very much Eliezer-ish in style: An idea that one knows best, because one is smart. If one has a set of "priors" one thinks are reasonably well-founded one doesn't need to look too much at empirical evidence, arguments among researchers or best practices in relevant communities outside of EA. 

A case in point that comes to mind: some time ago EAs debated whether it is a good idea for close colleagues in EA orgs to have sex with each other. Some people pointed out that this is broadly frowned upon in most high-risk or high-responsibility work settings. Eliezer and other EAs thought they knew better, because, hey - first principles, and we know ethics, and we are smart! So the question then becomes: who should we trust on this, Eliezer and some young EAs in their early twenties or thirties, or high-powered financial firms and intelligence agencies that have fine-tuned their organizational practices over decades? Hm, tough one.

There are obviously huge differences between metaphysics, empirical evidence on various social issues, and sexual ethics in organizations. But the similarity is the first-principles style of thought that is common in EA: we have good priors, so there's no need to listen too much to outsiders. 

I broadly agree with what the authors of "Doing EA better" wrote in their essay on this btw. They expressed similar points in a better and more precise way. 

Given that Eliezer has had such a huge influence on epistemic practices in EA, I think takedowns like this are valuable. Eliezer is not that smart, actually, and his style of thinking has led EAs astray epistemically.

Arturo Macias @ 2023-08-27T14:14 (+2)

I am as much a naturalist dualist as you are (see here), and I also find it extremely surprising how confidently you write about fish suffering (even chickens are a doubtful case!). As a naturalistic dualist, you know how hard it is to assess consciousness (the ultimate noumenon).

My intuition is that conscious experience grows far more than linearly (perhaps exponentially!) with the size of the supporting neural network. If so, the vast majority of consciousness is concentrated in the apex taxa, while the aggregate moral value of lower taxa is small (even if the number of individuals is massively bigger). 

Additionally, I made some comments about your overconfidence on foreign policy, where all moral issues depend on historical counterfactuals. It is very easy to show that politicians' arguments in this field are often absurd (the Chomsky method), but, in general, public declarations are purely instrumental.

On the other hand, American hegemony is the basis of a globally integrated economy, and its era has been an age of peace and global income convergence. Could you argue against American foreign policy on a consequentialist basis? What is the alternative you have in mind? 

Samin @ 2023-08-30T13:10 (+1)

I don’t know enough about philosophy to participate meaningfully in the zombies and animal consciousness debate. (It takes me hours to get people who think there’s a 30% chance microns have qualia to start to understand why they likely don’t. And the word “consciousness” is not a good one, as people mean totally different things when they use it. And Yudkowsky eats fish but not octopi and some other seafood, because he thinks there’s a high enough chance octopi have consciousness. But this is not my area; it’s just something that’s fun to talk about.)

But the critique of FDT doesn’t seem valid at all.

If a simulated copy of you gives in to a threat, it makes sense to identically blackmail the real you. If the copy doesn’t give in, it doesn’t make sense for the blackmailer to spend resources on reducing your utility.

If you’re the kind of agent who gives in to blackmail, everyone across the multiverse will extract everything you have from you, and you’ll end up with quite negative utility from all the threats you didn’t have resources left to give in to. If you don’t give in to threats, you’ll get far fewer threats and won’t lose as much.
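
A rough expected-value comparison of the two policies, with numbers that are purely my own illustrative assumptions:

```python
# Assumed: blackmailers preferentially target agents known to pay, and most
# threats against known refusers are bluffs. All numbers are placeholders.
cost_of_paying = 10              # assumed cost of giving in to one threat
cost_if_carried_out = 50         # assumed cost if a threat is executed
p_threatened_if_payer = 0.9      # assumed rate of threats against known payers
p_threatened_if_refuser = 0.01   # assumed rate of threats against known refusers
p_carried_out_vs_refuser = 0.1   # assumed chance a threat on a refuser is real

ev_payer = -p_threatened_if_payer * cost_of_paying
ev_refuser = -(p_threatened_if_refuser * p_carried_out_vs_refuser
               * cost_if_carried_out)
print(ev_payer, ev_refuser)  # -9.0 vs -0.05
```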

If you’re an AI trained with machine learning that does what a logical decision theory says you should do, you get a lower loss than an AI that does something else, and you get selected.