Thread for discussing Bostrom's email and apology

By Lizka @ 2023-01-13T13:33 (+72)

The Forum is getting a bit swamped with discussions about Bostrom's email and apology. We’re making this thread as a place where you can discuss the topic.

All other posts on this topic will be marked as “Personal Blog” — people who opt in or have opted into seeing “Personal Blog” posts will see them on the Frontpage, but others won’t; they’ll see them only in Recent Discussion or in All Posts. (If you want to change your "Personal Blog" setting, you can do that by following the instructions here.)

(Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread, or by reaching out to forum@effectivealtruism.org.)


Please also note that we have received an influx of people creating accounts to cast votes and leave comments over the past week, and we are aware that people who feel strongly about human biodiversity sometimes vote-brigade on sites where the topic is being discussed. Please be aware that voting and discussion about some topics may not be representative of the normal EA Forum user base.


If you choose to participate in this discussion, please remember Forum norms. Chiefly, 

Please try to remember that most people on the Forum are here for collaborative discussions about doing good.


AnonymousCommentator @ 2023-01-13T23:07 (+232)

With apologies, I would like to share some rather lengthy comments on the present controversy. My sense is that they likely express a fairly conventional reaction. However, I have not yet seen any commentary that entirely captures this perspective. Before I begin, I perhaps also ought to apologise for my decision to write anonymously. While none of my comments here are terribly exciting, I would like to think, I hope others can still empathise with my aversion to becoming a minor character in a controversy of this variety.

Q: Was the message in question needlessly offensive and deserving of an apology?

Yes, it certainly was. By describing the message as "needlessly offensive," what I mean to say is that, even if Prof. Bostrom was committed to making the same central point that is made in the message, there was simply no need for the point to be made in such an insensitive manner. To put forward an analogy, it would be needlessly offensive to make a point about free speech by placing a swastika on one’s shirt and wearing it around town. This would be a highly insensitive decision, even if the person wearing the swastika did not hold or intend to express any of the views associated with the symbol. Furthermore, it could also be regarded as a reckless decision, in that the cavalier use of the symbol may, at least in some small way, needlessly contribute to the degradation of the protective taboo around these views. It would therefore be entirely reasonable for others to feel discomforted and disrespected by this decision and regard it as atrocious. Self-reflection, an apology, and a serious commitment to act differently in the future would all be more than warranted. Prof. Bostrom was certainly correct to apologise for causing needless offense with his words. Indeed, in my view, he ought to have apologised for this offense more thoroughly, with a note that more clearly displayed an empathetic understanding of why the rediscovered message was likely to be so upsetting for many. However, as others have noted, apologising for causing needless offense is not the same as apologising for the views that one actually holds.

Q: Were the actual views expressed in the message beyond the pale?

I do not believe this is as clear. As far as I can understand the message, at least on a charitable reading, it expresses the following views:

A. It is praiseworthy to try to acclimate people to engaging with discomforting ideas, even when these ideas are presented in blunt and provocative ways.

B. However, the professional costs of trying to foster this kind of intellectual culture are too high. It is not wise to express discomforting ideas in provocative ways, since people are liable to misunderstand and assume that you believe something worse than you in truth do.

C. (To provide an example of a discomforting idea that the author believes to be true) Black people have lower average IQ scores than white people do. Furthermore, IQ scores serve as a valid measure of intelligence.

D. (To provide an illustration of the communication style and reaction that the author advises against) If you express the above view bluntly, then people will assume that you dislike black people or believe they ought to be treated poorly.

The first claim is, in my own view, wrong and crucially misguided. Although it is often important to discuss upsetting ideas, it is simply unproductive and unkind to rub upsetting ideas in others’ faces rather than taking care to accommodate their reactions and feelings. Even putting aside the implications for one's own reputation, opting for this approach typically does not accomplish much beyond causing hurt. Furthermore, one ought to be cognisant of the fact that provocative presentations of ideas are, in many relevant cases, rather more likely to be misunderstood in ways that support harmful behaviours. This often-greater potential for harm to others is an important additional consideration against presenting discomforting ideas in maximally provocative ways.  Fortunately, Prof. Bostrom’s later comments suggest that he, at least, no longer believes in the first claim to the full extent he once did.

Of course, the third view is the central one that has attracted censure. Some have criticised Prof. Bostrom for not apologising for holding it. From what I understand, however, this view was in fact the mainstream view among psychologists at the time and, if it is fair to judge from the relevant Wikipedia page, likely still is. The same year the e-mail was sent, an American Psychological Association taskforce released a high-profile report on the state of the science of intelligence, with the title “Intelligence: Knowns and Unknowns.” The report claimed:

"African American IQ scores have long averaged about 15 points below those of Whites, with correspondingly lower scores on academic achievement tests. In recent years the achievement-test gap has narrowed appreciably. It is possible that the IQ-score differential is narrowing as well, but this has not been clearly established…. The differential between the mean intelligence test scores of Blacks and Whites (about one standard deviation, although it may be diminishing) does not result from any obvious biases in test construction and administration, nor does it simply reflect differences in socioeconomic status. Explanations based on factors of caste and culture may be appropriate, but so far have little direct empirical support. There is certainly no such support for a genetic interpretation. At present, no one knows what causes this differential."

While it is certainly possible for mainstream academics in a field to be wrong, trusting in an apparent academic consensus should not be treated as beyond the pale or interpreted as a serious indictment of one's character. In my view, the fact that Prof. Bostrom has held this belief does not, in itself, warrant condemnation.

Q: Should Prof. Bostrom face significant professional repercussions for his message?

I do not personally believe so. Individuals are of course free to choose not to associate with Prof. Bostrom. That is each individual’s own choice. However, there is the additional question of whether this kind of past behaviour warrants professional sanctions, such as removing some of the person’s institutional affiliations or professional titles, electing not to invite the person to professional events to which their expertise is relevant, or refraining from citing the person’s work when it is relevant to one’s own. This would seem excessive to me.

If an individual callously makes a needlessly offensive statement twenty-five years prior, apologises at once, and does not to one’s knowledge exhibit any equivalent behaviour in the subsequent twenty-five years, I do not view this as sufficient grounds for professional sanctions. I am conscious, of course, that others may disagree about the appropriate policy for responding to such cases. Some commentators also hold that Prof. Bostrom ought to be sanctioned for his views, not just for being needlessly offensive. However, as explored above, and unless I am mistaken, none of the views expressed in the message appear to have fallen outside of the range of mainstream academic debate. Enforcing sanctions for holding views within this range would seem to violate the traditional norms of an academic community.

Q: Have public statements condemning the message been beneficial?

I do believe it was reasonable for groups to release statements condemning the message. The message was highly insensitive and disrespectful, was sent by a prominent figure who is often held as a role model for junior researchers in his field, and has recently been seen by many people, even if it was sent many years prior. Acknowledging and affirming the message's offensiveness was a signal of concern and respect for anyone who had been affected by seeing it. It is good for institutions that are influential within relevant professional communities to be able to say, "We care a great deal about having a culture where people take their colleagues’ comfort seriously and do not engage in needlessly offensive speech. We want to make sure that everyone understands that it truly is not acceptable to behave this way within our professional community."

However, I found the statement released by the Centre for Effective Altruism to be fairly flawed. It did not describe what Prof. Bostrom’s message said, but, I believe unintentionally, implied that the message contained or stemmed from a view that black people count less than white people. Prof. Bostrom did not express that view, and, to my knowledge, there is no reason to believe he holds it. It is a significant mistake to put out an official statement that wrongly suggests a colleague holds an abhorrent moral view of this kind. The Centre for Effective Altruism should strive to avoid this suggestion in any future communications.

I have also been disappointed by the content of the university’s statement and by a number of individuals’ statements on the matter. Many appear to be condemning Prof. Bostrom either for views he has never expressed or for empirical views that, although they are discomforting and although they were discussed with deep carelessness and insensitivity, did correspond at the time to the mainstream academic consensus. I believe that one also has a professional responsibility to be careful in one’s condemnations. It is possible to care about sensitivity and inclusivity without sacrificing intellectual carefulness, precision, and integrity.

aella @ 2023-01-14T02:49 (+63)

I came here to write something kind of sloppy, but this is a much more measured and clear thing than I could have written, and I agree with basically all of it (though I think I might have more support for point A than you do, depending on some nuance). I'm also pretty disappointed with CEA's response and have some desire to go around semi-emotionally saying something like, "this organization clearly does not have truth/integrity as its primary value, you cannot trust it". I'm pretty sad about this; I'm not personally an EA but have many friends in the community and have supported and defended the movement from the sidelines for a long time. While I intend to keep supporting my friends, I feel much less inclined to support the organized movement now.

Michael_PJ @ 2023-01-14T14:56 (+90)

Let us take a moment of sympathy for the folks at CEA (who are, after all, our allies in the fight to make the world better). Scant weeks ago they were facing harsh criticism for failing to quickly make the conventional statement about the FTX scandal. Now they're facing criticism for doing exactly that. I'm glad I'm not comms director at CEA, for sure.

Arepo @ 2023-01-15T22:35 (+53)

While I do sympathise with them having to handle yet another scandal which most of them had no involvement with, this comment seems to both oversimplify the differences and misrepresent what people were actually asking for post-FTX:

  1. The actions of FTX and SBF were, from three or four days after the news broke, universally condemned. Many people feel Bostrom's apology was reasonable.
  2. CEA had been directly involved in helping establish Alameda/FTX, and had actively worked with them and promoted them ever since. Bostrom is often cited, but nowhere near as closely involved with the organisation, at least formally.
  3. What people were asking for was not for them 'to make the conventional statement', but to show evidence of honest introspection: to admit to responsibility where and if applicable (given their ongoing involvement), and give some kind of reason to believe they'd learned something useful so that nothing like it would happen again.

As far as I know, they've still offered no such public introspection.

Random @ 2023-01-14T12:12 (+16)

Agree with the original comment and Aella. I would add that, should the university or others decide to take action, this matter would be important enough to stand one's ground in favor of Bostrom, including non-anonymously. We cannot "choose" our views, as some comments have asked us to; that seems very PC. Also, we should not be held accountable for what we wrote 25+ years ago unless we repeat it.

However, "standing the ground" is precisely the opposite of what is needed, we need calm, well-intended, and measured discussions and I appreciated the blog post by David Thorstad in depth criticizing Bostrom. It is understandable when some (here, Twitter, or elsewhere) are angry, frustrated, or demand changes. Public statements like the one from CEA, however, are likely not helpful (that they don't support the original mail is presumed without their message). Nor to be fair was the rather sloppy apology by Bostrom.

What is not obvious is the next step. I believe Bostrom when he says he is not interested in continuing this discussion, and I do not see value in forcing him to. Maybe a workshop or red-team white paper taking a close and balanced look at whether EA and the longtermist movement suffer from racism as alleged and, if so, what can be done about it?

Guy Raveh @ 2023-01-15T13:13 (+2)

we should not be held accountable for what we wrote 25+ years ago unless we repeat it

But repeating it is exactly what his "apology" did! None of the people angry about this is thinking "Bostrom was racist 26 years ago, and that's problematic even though he apologized and has changed". The point is that his new letter exemplifies how he is still racist and still supports eugenicist ideas.

Random @ 2023-01-16T08:25 (+2)

I understand where you are coming from and wish your comment was not downvoted so much. We both want EA free of racism and I suggested measures to be taken to ensure this and more should be considered.

While FTX was a once-in-a-decade crime and may have shown a systematic failure of EA, Bostrom's apology is not a crime. Of course it does reflect badly on EA PR-wise.

So I see your points and will read his next publications with a yellow flag in my mind. I do, however, think he should not be "canceled"; he likely is not racist (I don't know him personally), and we should focus in 2023 mostly on alignment work or other global priorities. Should you feel differently, I fully support measures as detailed above.

Nathan Young @ 2023-01-14T16:54 (+8)

As I say in my comment below, I think the question is "given all that we know, is CEA a truth-seeking org" and I still think yes. I don't think the statement changed my mind much.

David Mears @ 2023-01-16T11:22 (+14)

While I think it can make sense to model whole organisations as having traits like 'truth-seeking' or 'having integrity' or 'transparent', particularly when they are small and homogenous, it's always worth remembering that organisations are made up of people, and those people can vary a lot along all those traits. For example, CEA's character could change rapidly after hiring a lot, or if they lose one exceptionally conscientious person, etc.

Sam Elder @ 2023-01-18T10:16 (+2)

This is indeed quite well-written and a helpful summary! One question I have: You write that "Fortunately, Prof. Bostrom’s later comments suggest that he, at least, no longer believes in the first claim to the full extent he once did." Where are you getting that?

The third claim is indeed what has attracted the most attention and censure, but it's the interaction with the first that makes it particularly toxic by ruling out many of the more charitable readings. It's not just that he thinks the sentence "Blacks are more stupid than whites" is factually true but regrettable; he wrote that he "likes" that sentence, particularly because "the more counterintuitive and repugnant a formulation, the more it appeals to me given that it is logically correct."

It would therefore be great if Professor Bostrom explained somewhere why he no longer personally finds repugnant formulations of true statements appealing, but I couldn't find it in the apology linked above.

Geoffrey Miller @ 2023-01-14T17:17 (+166)

Brief note on why EA should be careful to remain inclusive & welcoming to neurodiverse people:

As somebody with Aspergers, I'm getting worried that in this recent 'PR crisis', EA is sending some pretty strong signals of intolerance to those of us with various kinds of neurodiversity that can make it hard for us to be 'socially sensitive', to 'read the room', and to 'avoid giving offense'. (I'm not saying that any particular people involved in recent EA controversies are Aspy;  just that I've seen a general tendency for EAs to be a little Aspier than other people, which is why I like them and feel at home with them.)

There's an ongoing 'trait war' that's easy to confuse with the Culture War. It's not really about right versus left, or reactionary versus woke. It's more about psychological traits: 'shape rotators' versus 'wordcels', 'Aspies' versus 'normies', systematizers versus empathizers, high decouplers versus low decouplers. 

EA has traditionally been an oasis for Aspy systematizers with a high degree of rational compassion, decoupling skills, and quantitative reasoning. One downside of being Aspy is that we occasionally, or even often, say things that normies consider offensive, outrageous, unforgiveable, etc. 

If we impose standard woke cancel culture norms on everybody in EA, we will drive away everybody with the kinds of psychological traits that created EA, that helped it flourish, and that made it successful. Politically correct people love to Aspy-shame. They will seek out the worst things a neurodiverse person has ever said, and weaponize it to destroy their reputation, so that their psychological traits and values are allowed no voice in public discourse. (Systematizing and decoupling are existential threats to political correctness....)

I've seen this happen literally dozens of times in academia over the last decade. High empathizers and low decouplers are taking over the universities from high systematizers and high decouplers. They will do the same to EA, if we're not careful.

I've written in much more depth about this in my 2017 essay 'The neurodiversity case for free speech' (paywalled here on Quillette, free pdf here). IMHO, it's more relevant than ever, in relation to some of EA's recent public relations issues.

Miles_Brundage @ 2023-01-14T21:10 (+32)

FWIW despite having pretty diametrically opposed views on a lot of these things, I agree that there is something to the issue/divide you reference. It seems correlated with the "normie-EA vs. rationalist-EA" divide I mentioned elsewhere on this page, and I think there are potential tradeoffs from naively responding to the (IMO) real issues at stake on the other side of the ledger. How to non-naively navigate all this seems non-obvious.

Geoffrey Miller @ 2023-01-15T01:01 (+19)

Miles. I agree, more or less. It is very tricky to navigate, because EA does include people with different personality traits and cognitive styles. These are almost like different Bayesian priors with respect to 'social cause areas', e.g. the relative importance of being nice vs being empirically accurate. Will ruminate more about all this....

AnonymousQualy @ 2023-01-16T19:36 (+18)

I wholeheartedly agree that EA must remain welcoming to neurodiverse people. Part of how we do that is being graceful and forgiving of people who inadvertently violate social norms in pursuit of EA goals.

But I worry this specific comment overstates its case by (1) leaving out both the "inadvertent" part and the "in pursuit of EA goals" part, which implies that we ought to be fine with gratuitous norm violation, and (2)  incorporating political bias.  You say:

If we impose standard woke cancel culture norms on everybody in EA, we will drive away [neurodiverse people]. Politically correct people love to Aspy-shame.  They will seek out the worst things a neurodiverse person has ever said, and weaponize it to destroy their reputation, so that their psychological traits and values are allowed no voice in public discourse.

I don't want to speak for anyone with autism. However, as best I can tell, this is not at all a universal view. I know multiple people who thrive in lefty spaces despite seeming (to me at least) like high decouplers. So it seems more plausible to me that this isn't narrowly true about high decouplers in "woke" spaces; it's broadly true about high decouplers in communities whose political/ethical beliefs the decoupler does not share.

I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I'm not on the autism spectrum) the really big taboos - like race and intelligence - are usually obvious, as is the fact that you're supposed to be careful when talking about them.  The text of Bostrom's email demonstrates he knows exactly what taboos he's violating.

I also think we should be careful not to mistake correlation for causation, when looking at EA's success and the traits of many of its members.  For example, you say:

[if we punish social norm violation] we will drive away everybody with the kinds of psychological traits that created EA, that helped it flourish, and that made it successful

There are valuable EA founders/popularizers who seem pretty adept at navigating taboos. For example, every interview I've seen with Will MacAskill involves him reframing counterintuitive ethics to fit with the average person's moral intuitions. This seems to have been really effective at popularizing EA!

I agree that there are benefits from decoupling.  But there are clear utilitarian downsides too.  Contextualizing a statement is often necessary to anticipate its social welfare implications.  Contextualizing therefore seems necessary to EA.

Finally, I want to offer a note of sympathy. While I don't think I'm autistic, I do frequently find myself at odds with mainstream social norms. I prefer more direct styles of communication than most people. I'm a hardcore utilitarian. Many of the left-wing shibboleths common among my graduate school classmates I find annoying, wrong, and even harmful. For all these reasons, I share your feeling that EA is an "oasis." In fact, it's the only community I'm a part of that reaffirms my deepest beliefs about ethics in a clear way.

But ultimately, I think EA should not optimize to be that sort of reaffirming space for me.   EA's goal is wellbeing maximization, and anything other than wellbeing maximization will sometimes - even if only rarely - have to be compromised.

Geoffrey Miller @ 2023-01-16T20:51 (+33)

AnonymousQualy - You make some valid & thoughtful points. Let me ruminate further about them....

For the moment, I would just question the generality of your claim that 'the really big taboos - like race and intelligence - are usually obvious'. That might be true within a given culture, generation, social class, and ideological context. However, it is often not true across cultures. (For example, when I taught courses for a mainland Chinese university the last couple of years, the students there really wanted to talk about the evolution of race differences and intelligence -- much more than I was comfortable doing.) 

If EA aspires to be a global movement, we need to consider the fact that some of our strongest current Anglo-American ideological taboos are not shared by other cultures. And if we impose our taboos on other cultures, we're not really being culturally inclusive.

I addressed these issues in this other 2018 article on 'The cultural diversity case for free speech' (paywalled on Quillette here; free pdf here.) 

Timothy Chan @ 2023-01-16T22:31 (+12)

Thanks a lot for raising this, Geoffrey. A while back I mentioned some personal feelings and possible risks related to the current Western political climate, from one non-Westerner's perspective. You've articulated my intuitions very nicely here and in that article.

From a  strategic perspective, it seems to me that if AGI takes longer to develop, the more likely it is that the expected decision-making power would be shared globally. EAs should consider that they might end up in that world and it might not be a good idea to create and enforce easily-violated, non-negotiable demands on issues that we're not prioritizing (e.g. it would be quite bad if a Western EA ended up repeatedly reprimanding a potential Chinese collaborator simply because the latter speaks bluntly from the perspective of the former). To be clear, China has some of this as well (mostly relating to its geopolitical history) and I think feeling less strongly about those issues could be beneficial.

AnonymousQualy @ 2023-01-16T21:12 (+1)

I agree!  Greater leniency across cultural divides is good and necessary.

But I also think that:

(1) That doesn't apply to the Bostrom letter

(2) There are certain areas where we might think our cultural norms are better than many alternatives; in these situations, it would make sense to tell the person from the alternate culture about our norm and try to persuade them to abide by it (including through social pressure).   I'm pretty comfortable with the idea that there's a tradeoff between cultural inclusion and maintaining good norms, and that the optimal balance between the two will be different for different norms.

Geoffrey Miller @ 2023-01-16T22:46 (+21)

Regarding your point (2), I can see both sides of this. 

I agree that some cultural norms are generally better, by most metrics, in terms of human flourishing, social cohesion,  progress, prosperity, freedom, resilience, longevity, etc. -- although there are almost always tradeoffs, exceptions, and complications that warrant considerable epistemic humility and ethical uncertainty.

My heuristic is that members of Anglo-American cultures should usually err on the side of listening more than preaching when interacting with people from other cultures who probably know much more about our culture (e.g. through US/UK movies, TV, social media, global news) than we know about theirs. 

David Mathers @ 2023-01-16T19:57 (+27)

For what it's worth, I am autistic (and a white man, as it happens) and I do not find that free, uncensored discussion of race/IQ stuff specifically makes me feel more welcome and comfortable. Rather, it makes me feel sick, and anxious and worried that I am associating with bad people if I participate in the discussion. (I actually agree with Geoffrey Miller that there is probably some connection between autism and what he's calling "decoupling", though. And that this probably makes EA more welcoming for autistics. But ultimately, the point of EA is meant to be to do good, not to be a social club for autistic people.)

EDIT: I also strongly second AnonymousQualy's point that Bostrom knew he was saying taboo stuff and so it cannot be dismissed as 'autistic people don't know when they are offending others'. 
 

Susan II @ 2023-01-16T23:58 (+15)

I think censorship would be a bad choice here, because the EA forum hasn't discussed these concepts previously in any routine way (I'm sure there is a screed or two that could be dug up from a mound of downvotes) and is unlikely to in the future.

I would agree that race/IQ debates on the EA forum are unlikely to produce anything of value. But it's my experience that if you have free discussion rights and one banned topic, that causes more issues than just letting people say their piece and move on.

I'd also agree that EA isn't meant to be a social club for autists - but from a cynical perspective, the blithely curious and alien-brained are also a strategic resource and snubbing them should be avoided when possible.

If people are still sharing takes on race/IQ two weeks from now, I think that would be a measurable enough detraction from the goal of the forum to support the admins telling them to take it elsewhere. But I would be surprised if it were an issue.

Lumpyproletariat @ 2023-01-19T21:25 (+42)

Uncontroversial take: EA wouldn't exist without the blithely curious and alien-brained. 

More controversially: I've been increasingly feeling like I'm on a forum where people think the autistic/decoupler/rationalist cluster did their part and now should just... go away. Like, 'thanks for pointing us at the moral horrors and the world-ending catastrophe, I'll bear them in mind, now please stop annoying me.'

But it is not obvious to me that the alien-brained have noticed everything useful that they are going to notice, and done all the work that they will do, such that it is safe to discard them.

David Thorstad @ 2023-01-20T09:56 (+11)

Let me say this: autism runs in my family, including two of my first cousins. I think that neurodivergence is not only nothing to be ashamed of, and not an "illness" to be "cured", but in fact a profound gift, and one which allows neurodivergent individuals to see what many of us do not. (Another example: listen to Vikingur Olafsson play the piano! Nobody else hears Mozart like that.)

Neurodivergent individuals and high decouplers should not be chased out of effective altruism or any other movement. Doing this would not only be intrinsically wrong, but would also deprive the movement of profoundly important insights, and would deprive the neurodivergent of one of the few places where they can genuinely belong.

It is very important to recognize that neurodivergent individuals, among others, sometimes have a harder time recognizing violations of social norms, and to exercise some degree of patience and compassion in responding to norm violations.

It is also important for everyone, no matter their tendency towards decoupling, their neurodiversity, or their background, to understand that words can harm, and to be sensitive to the need to stop and reverse course when presented with credible evidence that words have harmed. 

Everyone reading this message, and I mean everyone, is capable of hearing "no, stop. That is wrong" or "racist science has no place in this discussion" and stopping.  It is time for the racism to stop, and the healing to begin. 

The discussion that needs to be had right now is about healing, growth and change. The time for defense is over. The time for debate is over. It is time to learn to do better. I hope that the coming weeks can be used for growth and change. 

Sharmake @ 2023-01-16T19:52 (+8)

I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I'm not on the autism spectrum) the really big taboos - like race and intelligence - are usually obvious, as is the fact that you're supposed to be careful when talking about them. The text of Bostrom's email demonstrates he knows exactly what taboos he's violating.

And honestly, I think this is a great taboo for many reasons. I'd argue this is one of the left's more intelligent taboos.

AnonymousQualy @ 2023-01-16T20:02 (+1)

Agreed.

I'm no cultural conservative, but norms are important social tools we shouldn't expect to entirely discard.  Anthropologist Joe Henrich's writing really opened my eyes to how norms pass down complex knowledge that would be inefficient for an individual to try to learn on their own.

Sharmake @ 2023-01-16T20:05 (+2)

I don't exactly agree with Henrich's case that cultural knowledge is really important, though I do credit cultural knowledge with increasing returns to scale.

Miles_Brundage @ 2023-01-14T03:07 (+144)

Seeing the discussion play out here lately, and in parallel seeing the topic either not be brought up or be totally censored on LessWrong, has made the following more clear to me: 

A huge fraction of the EA community's reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community. 

Generalizing a lot,  it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and "rationalist-EAs" want to debate race and IQ (or think that the issue is so minor/"wokeness-run-amok-y" that it should be ignored or censored). This predictably leads to conflict.

(I am sure many will take issue with this, but I suspect it will ring true/help clarify things for some, and if this isn't the time/place to discuss it, I don't know when/where that would be)

[Edit: I elaborated on various aspects of my views in the comments, though one could potentially agree with this comment/not all the below etc.]

RobertM @ 2023-01-14T05:19 (+53)

There's definitely no censorship of the topic on LessWrong.  Obviously I don't know for sure why discussion is sparse, but my guess is that people mostly (and, in my opinion, correctly) don't think it's a particularly interesting or fruitful topic to discuss on LessWrong, or that the degree to which it's an interesting subject is significantly outweighed by mindkilling effects.

Edit: with respect to the rest of the comment, I disagree that rationalists are especially interested in object-level discussion of the subjects, but probably are much more likely to disapprove of the idea that discussion of the subject should be verboten. 

I think the framing where Bostrom's apology is a subject which has to be deliberately ignored is mistaken.  Your prior for whether something sees active discussion on LessWrong is that it doesn't, because most things don't, unless there's a specific reason you'd expect it to be of interest to the users there.  I admit I haven't seen a compelling argument for there being a teachable moment here, except the obvious "don't do something like that in the first place", and perhaps "have a few people read over your apology with a critical eye before posting it" (assuming that didn't in fact happen).  I'm sure you could find a way to tie those in to the practice of rationality, but it's a bit of a stretch.

Miles_Brundage @ 2023-01-14T05:46 (+23)

Thanks for clarifying on the censorship point!

I do think it's pretty surprising and in-need-of-an-explanation that it isn't being discussed (much?) on LW - LW and EA Forum are often pretty correlated in terms of covering big "[EA/rationalist/longtermist] community news" like developments in AI, controversies related to famous people in one or more of those groups, etc. And it's hard to think of more than 1-2 people who are bigger deals in those communities than Bostrom (at most, arguably it's zero). So him being "cancelled" (something that's being covered in mainstream media) seems like a pretty obvious thing to discuss.

To be clear, I am not suggesting any malicious intent (e.g. "burying" something for reputational purposes), and I probably shouldn't have used the word censorship. If that's not what's going on, then yes, it's probably just that most LWers think it's no big deal. But that does line up with my view that there is a huge rationalist-EA vs. normie-EA divide, which I think people could agree with even if they lean more towards the other side of the divide than me.

Habryka @ 2023-01-14T07:16 (+54)

LessWrong in-general is much less centered around personalities and individuals, and more centered around ideas. Eliezer is a bit of an outlier here, but even then, I don't think personality-drama around Eliezer could even raise to the level of prominence that personality-drama tends to have on the EA Forum.

Arepo @ 2023-01-15T22:45 (+53)

I don't find this explanation convincing fwiw. Eliezer is an incredible case of hero-worship - it's become the norm to just link to jargon he created as though it's enough to settle an argument. The closest thing we have here is Will, and most EAs seem to favour him for his character rather than necessarily agreeing with his views - let alone linking to his posts like they were scripture.

Other than the two of them, I wouldn't say there's much discussion of personalities and individuals on either forum.

Lumpyproletariat @ 2023-01-21T04:15 (+16)

Eliezer is an incredible case of hero-worship - it's become the norm to just link to jargon he created as though it's enough to settle an argument.

I think that you misunderstand why people link to things.

If someone didn't get why I feel morally obligated to help people who live in distant countries, I would likely link them to Singer's drowning child thought experiment. Either during my explanation of how I feel, or in lieu of one if I were busy. 

This is not because I hero-worship Singer. This is not because I think his posts are scripture. This is because I broadly agree with the specific thing he said which I am linking, and he put it well, and he put it first, and there isn't a lot of point of duplicating that effort. If after reading you disagree, that's fine, I can be convinced. The argument can continue as long as it doesn't continue for reasons that are soundly refuted in the thing I just linked.

I link people to things pretty frequently in casual conversation. A lot of the time, I link them to something posted to the EA Forum or LessWrong. A lot of the time, it's something written by Eliezer Yudkowsky. This isn't because I hero-worship him, or that I think linking to something he said settles an argument - it's because I broadly agree with the specific thing I'm linking and don't see the point of duplicating effort. If after reading you disagree, that's fine, I can be convinced. The argument can continue as long as it doesn't continue for reasons that are soundly refuted in the thing I just linked.

There are a ton of people who I'd like to link to as frequently as I do Eliezer. But Eliezer wrote in short easily-digested essays, on the internet instead of as chapters in a paper book or pdf. He's easy to link to, so he gets linked.

Arepo @ 2023-01-22T01:39 (+4)

There's a world of difference between the link-phrases 'here's an argument about why you should do x' and 'do x'. Only Eliezer seems to regularly merit the latter.

Lumpyproletariat @ 2023-01-22T02:52 (+11)

Here are the last four things I remember seeing linked as supporting evidence in casual conversation on the EA forum, in no particular order:

https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=HebnLpj2pqyctd72F - link to Scott Alexander, "We have to stop it with the pointless infighting or it's all we will end up doing," is 'do x'-y if anything is. (It also sounds like a perfectly reasonable thing to say and a perfectly reasonable way to say it.)

https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=SCfBodrdQYZBA6RBy - separate links to Scott Alexander and Eliezer Yudkowsky, neither of which seem very 'do x'-y to me.

https://forum.effectivealtruism.org/posts/irhgjSgvocfrwnzRz/?commentId=NF9YQfrDGPcH6wYCb - link to Scott Alexander, seems somewhat though not extremely 'do x'-y to me. Also seems like a perfectly reasonable thing to say and I stand by saying it. 

https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=x5zqnevWR8MQHqqvd - link to Duncan Sabien, "I care about the lives we can save if we don't rush to conclusions, rush to anger, if we can give each other the benefit of the doubt for five freaking minutes and consider whether it'd make any sense whatsoever for the accusation de jour to be what it looks like," seems pretty darn 'do x'-y. I don't necessarily stand behind how strongly I came on there, I was in a pretty foul mood.

I think that mostly, this is just how people talk.

I am not making the stronger claim that there are zero people who hero-worship Eliezer Yudkowsky. 

AnonymousAccount @ 2023-01-15T23:48 (+11)

Also, I would add at the very least Gwern (which might be relevant to note regarding the current topic) and Scott Alexander as two other clear cases of "personalities" on LW.

Habryka @ 2023-01-16T00:05 (+25)

I agree that there are of course individual people that are trusted and that have a reputation within the community, but conversations around Scott Alexander's personality, or his reputation, or his net effect on the world, are much rarer on LW than they are on the EA Forum, as far as I can tell.

Like, when was actually the last thread on LW about drama caused by a specific organization or individual? In my mind almost all of that tends to congregate on the EA Forum.

Jeff Kaufman @ 2023-01-16T08:05 (+37)

My guess is that you see this more in EA because the stakes are higher for EAs. There's much more of a sense that people here are contributing to the establishment and continuation of a movement, the movement is often core to people's identities (it's why they do what they do, live where they live, etc), and 'drama' can have consequences on the progress of work people care a lot about. Few people are here just for the interesting ideas.

While LW does have a bit of a corresponding rationality movement I think it's weaker or less central on all of these angles.

Habryka @ 2023-01-16T08:24 (+10)

Yep, I agree, that's a big part of it. 

Arepo @ 2023-01-16T12:30 (+20)

I think Jeff is right, but I would go so far as to say the hero worship on LW is so strong that there's also a selection effect - if you don't find Eliezer and co convincing, you won't spend time on a forum that treats them with such reverence (this at least is part of why I've never spent much time there, despite being a cold calculating Vulcan type).

Re drama around organisations, there are way more orgs which one might consider EA than which one might consider rationalist, so there are just more available lightning rods.

Habryka @ 2023-01-16T19:18 (+9)

It's a plausible explanation! I do think even for Eliezer, I really don't remember much discussion of, like, him and his personality in recent years. Do you have any links? (I can maybe remember something from like 7 years ago, but nothing since LW 2.0.)

Overall, I think there are a bunch of other also kind of bad dynamics going on on LW, but I do genuinely think that there isn't that much hero worship, or institution/personality-oriented drama.

Arepo @ 2023-01-17T21:43 (+7)

I'm saying the people who view him negatively just tend to self-select out of LW. Those who remain might not bother to have substantive discussion - it's just that the average mention of him seems ridiculously deferential/overzealous in describing his achievements (for example, I recently went to an EAGx talk which described him, along with Tetlock, as one of the two 'fathers of forecasting').

If you want to see negative discussion of him, that seems to be basically what RationalWiki and r/Sneerclub exist for.

Sharmake @ 2023-01-18T15:21 (+2)

Putting Habryka's claim another way: if Eliezer were involved in a huge scandal right now, like, say, SBF or Will MacAskill was, then I think modern LW would mostly handle it pretty fine. Not perfectly, but I wouldn't expect nearly the amount of drama that EA's getting. (Early LW from the 2000s or early 2010s would probably do worse, IMO.) My suspicion is that LW has way less personal drama over Eliezer than, say, EA would over SBF or Nick Bostrom.

Arepo @ 2023-01-18T21:32 (+3)

I think there are a few things going on here, not sure how many we'd disagree on. I claim:

  • Eliezer has direct influence over far fewer community-relevant organisations than Will does or SBF did (cf comment above that there exist far fewer such orgs for the rationalist community).  Therefore a much smaller proportion of his actions are relevant to the LW community than Will's are and SBF's were to the EA community.
  • I don't think there's been a huge scandal involving Will? Sure, there are questions we'd like to see him openly address about what he could have done differently re FTX - and I personally am concerned about his aforementioned influence because I don't want anyone to have that much - but very few if any people here seem to believe he's done anything in seriously bad faith.
  • I think the a priori chance of a scandal involving Eliezer on LW is much lower than the chance of a scandal on here involving Will because of the selection effect I mentioned - the people on LW are selected more strongly for being willing to overlook his faults. The people who both have an interest in rationality and get scandalised by Bostrom/Eliezer hang out on Sneerclub, pretty much being scandalised by them all the time.
  • The culture on here seems more heterogenous than LW. Inasmuch as we're more drama-prone, I would guess that's the main reason why - there's a broader range of viewpoints and events that will trigger a substantial proportion of the userbase.

So these theories support/explain why there might be more drama on here, but push back against the 'no hero-worship/not personality-oriented' claims, which both ring false to me. Overall, I also don't think the lower drama on LW implies a healthier epistemic climate.

Sharmake @ 2023-01-18T22:54 (+5)

I don't think there's been a huge scandal involving Will? Sure, there are questions we'd like to see him openly address about what he could have done differently re FTX - and I personally am concerned about his aforementioned influence because I don't want anyone to have that much - but very few if any people here seem to believe he's done anything in seriously bad faith.

I was imagining a counterfactual world where William MacAskill did something hugely wrong.

And yeah come to think of it, selection may be quite a bit stronger than I think.

ChristianKleineidam @ 2023-01-22T20:38 (+1)

The bigger discussion from maybe 7 years ago that Habryka refers to was, as far as my memory goes, his April 1st post in 2014 about Dath Ilan. The resulting discussion was critical enough of EY that from that point on most of EY's writing was published on Facebook/Twitter and not LessWrong anymore. On his Facebook feed he can simply ban people who he finds annoying, but on LessWrong he couldn't.

RobBensinger @ 2023-01-23T22:33 (+4)

Izzat true? Aside from edited versions of other posts and cross-posts by the LW admins, I see zero EY posts on LW between mid-September 2013 and Aug 2016, versus 21 real posts earlier in 2013, 29 in 2012, 12 in 2011, 17 in 2010, and ~180 in 2009.

So I see a big drop-off after the Sequences ended in 2009, and a complete halt in Sep 2013. Though I guess if he'd mostly stopped posting to LW anyway and then had a negative experience when he poked his head back in, that could cement a decision to post less to LW.

(This is the first time I'm hearing that the post got deleted, I thought I saw it on LW more recently than that?)

2017 is when LW 2.0 launched, so 2014-2016 was also a nadir in the site's quality and general activity.

ChristianKleineidam @ 2023-01-25T17:08 (+1)

I was active on LessWrong at that time and am mostly going off my memory, and memories of something that happened eight years ago aren't perfect.

https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession was, to my memory, also posted to LessWrong, and the LessWrong version of that post has been deleted.

Doing a Google search of LessWrong for that timeframe doesn't bring up any mention of Dath Ilan.

Is your memory that Dath Ilan was just never talked about on LessWrong when Eliezer wrote that post?

pseudonym @ 2023-01-22T21:38 (+1)

Which post is this? I looked on EY's LW profile but couldn't see which one this was referring to. There's this blog post https://yudkowsky.tumblr.com/post/81447230971/my-april-fools-day-confession but it's not on LW. Also, it looks like there have been a lot of posts from EY on LW since 2014?

ChristianKleineidam @ 2023-01-23T12:38 (+1)

I think that's the post. As far as my memory goes, the criticism led to Eliezer deleting it from LessWrong. 

Sharmake @ 2023-01-16T19:30 (+1)

As a person who came to LW several months ago, I think that Eliezer is a great thinker, but he does get things wrong quite a few times. He is not a perfect thinker or a hero, but he is quite a bit better than most (arguably far better).

I wouldn't idolize him, but nor would I ignore Eliezer's accomplishments.

Peter Singer is very disapointed in you @ 2023-01-15T23:37 (+1)

This seems to fit with the fact that there wasn't much appetite for the consequentialist argument against Bostrom until the term "information hazard" came up.

DPiepgrass @ 2023-01-15T20:43 (+20)

I for one probably wouldn't have brought it up on LessWrong because it seems like a tempest in a teapot. What is there to say? Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely. Normally this would be standard CW stuff, except for the connection to EA. The most notable thing — scary thing — is how some people on this forum seem to be saying something like "Nick is a bad person, his apology is not acceptable, and it's awful that not everyone is on board with my interpretation" ("agreed", whispers the downvote brigade in a long series of -1s against dissenters.) If I bring this up as a metadiscussion on LW, would others understand this sentiment better than me?

I suspect that the neurotypicals most able to explain it to weirdos like me are more likely to be here than there. Since you said that 

normie EAs" (IMO correctly) see glaring problems with Bostrom's statement

I assume you mean the apology, and I would be grateful if you would explain what these glaring problems are. [edit: also, upon reflection, maybe it's not a neurodiverse vs neurotypical divide, but something else such as political thinking or general rules of thought or moral system. I never wanted to vote Republican, so I'm thinking it's more like a Democrat vs Independent divide.]

I am curious, too, whether other people see the same problems or different ones. (a general phenomenon in life is that vague statements get a lot more upvotes than specific ones because people often agree with a conclusion while disagreeing on why that conclusion is true.)

David Mears @ 2023-01-16T11:29 (+25)

Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely.


Registering strong disagreement with this characterisation. Nick has done vanishingly little to apologise, both now and in 1997. In the original emails and the latest apology, he has done less to distance himself from racism than to endorse it.

peterhartree @ 2023-01-16T13:56 (+27)

In the original emails and the latest apology, he has done less to distance himself from racism than to endorse it.

In what ways do you think the 2023 message endorses racism? Is there a particular quote or feature of it that stands out to you?

The apology contains an emphatic condemnation of the use of a racist slur:

I completely repudiate this disgusting email from 26 years ago. It does not accurately represent my views, then or now. The invocation of a racial slur was repulsive. I immediately apologized for writing it at the time, within 24 hours; and I apologize again unreservedly today. I recoil when I read it and reject it utterly.

The 1996 email was part of a discussion of offensive communication styles. It included a heavily contested and controversial claim about group intelligence, which I will not repeat here. [1] Claims like these have been made by racist groups in the past, and an interest in such claims correlates with racist views. But there is not a strict correlation here: expressing or studying such claims does not entail you have racist values or motivations.

In general I see genetic disparity as one of the biggest underlying causes of inequality and injustice. I've no informed views on, or particular interest in, averages between groups of different skin colour. But I do feel terrible for people who find themselves born with a difficult hand in the genetic lottery (e.g. a tendency to severe depression or dementia). And so I endorse research on genetic causes of chronic disadvantage, with the hope that we can improve things.

[1] This comment by Geoffrey Miller provides a bit more context on why Bostrom may have chosen this particular example.

David Mears @ 2023-01-17T18:38 (+7)

One of the main complaints people (including me) have about Bostrom's old_email.pdf is that he focuses on the use of a slur as the thing he is regretful for, and is operating under a very narrow definition of racism where a racist is someone who dislikes people of other races. But the main fault with the 1996 email, for which Bostrom should apologise, the most important harm and the main reason it is racist, was that it propagated the belief that blacks are inherently stupider than whites (it did not comment on the causation, but used language that is conventionally understood to refer to congenital traits, 'blacks have lower IQ than mankind in general'). Under this view, old_email.pdf omits to apologise for the main thing people are upset about in the 1996 email, namely, the racist belief, and the lack of empathy for those reading it; and it clarifies further that, in Bostrom's view, the lower IQ of blacks may in fact be in no small part genetically determined, and moreover, as David Thorstad writes, "Bostrom shows no desire to educate himself on the racist and discredited science driving his original beliefs or on the full extent of the harms done by these beliefs. He does not promise to read any books, have hard conversations, or even to behave better in the future. If Bostrom is not planning to change, then why are we to believe that his behavior will be any better than it was in the 1990s?"

So in my view: in total, in 1996 Nick endorses racist views, and in 2023 he clarifies beyond doubt that the IQ gap between blacks and whites may be genetically determined (and says sorry for using a bad word).

A more detailed viewpoint close to my own from David Thorstad: https://ineffectivealtruismblog.com/2023/01/12/off-series-that-bostrom-email/

Anon Rationalist @ 2023-01-17T19:35 (+9)

Would you prefer Bostrom's apology read:

I am sorry for saying that black people are stupider than whites. I no longer hold that view.

Even if he, with evidence, still believes it to be true? David Thorstad can write all he wants about changing his views, but the evidence of the existence of a racial IQ gap has not changed. It is as ironclad and universally accepted by all researchers as it was in 1996 following the publication of the APA's Intelligence: Knowns and Unknowns.

This may be a difference of opinion, but I don't think that acknowledging observed differences in reality is a racist view. But I am interested to know if you would prefer he make the statement anyway.

David Mears @ 2023-01-17T22:26 (+9)

By the way, the finding of an IQ gap isn’t (or shouldn’t be?) what is under contention/offensive, because that’s a real finding. It’s the idea that it has a significant genetic component.

I think both Bostrom and I claim that he does not believe that idea, but I’ll entertain your hypothetical below.

I think that, in the world where racial IQ gaps are known not to have a significant genetic component, believing so anyway as a layperson makes one very probably a racist (glossed as a person whose thinking is biased by motivated reasoning on the basis of race); and in the world where racial IQ gaps are known to have a significant genetic component, believing so is not strong evidence of being a racist (with the same gloss). There are also worlds in between.

In any of these worlds, and the world where we live, responsible non-experts should defer to the scientific consensus (as Bostrom seems to in 2023), and when they irresponsibly promote beliefs that are extremely harmful and false, through recklessness, they should apologise for that.

I don’t think anyone should apologise for the very act of believing something one still believes, because an apology is by nature a disagreement with one’s past self. But Bostrom in 2023 does not seem to believe any more, if he ever did, that the racial IQ gap is genetically caused, which frees him up to apologise for his 1996 promotion of the belief.

As a reminder, the original description I took issue with was:

Someone who is clearly not racist accidentally said something that sounds pretty racist, decades ago, and then apologized profusely

It ‘sounds pretty racist’ to say “blacks have lower IQ than mankind in general” because that phrasing usually implies it’s congenital. In other words, in 1996, Bostrom (whose status as a racist is ambiguous to me, and I will continue to judge his character based on his actions in the coming weeks and months) said something that communicates a racist belief, and I want to give him the benefit of the doubt that it was an accident — a reckless one, but an accident. However, apart from apologising for the n-word slur, I haven’t seen much that can be interpreted as an apology for the harm caused by this accident.

Now, if Bostrom, as a non-expert, in fact is secretly confident that IQ and race correlate because of genetics, I think that his thinking is probably biased in a racist way (that is to say, he is a racist) and he should be suspicious of his own motives in holding that belief. If he then finds his view was mistaken, he may meaningfully apologise for any racist bias that influenced his thinking. Otherwise, an apology would not make any sense as he would not think he’d done anything wrong.

The lack of an apology for accidentally (or deliberately) promulgating the racist view is wrong if Bostrom does not hold the view (or no longer holds it). He is mistaken when, in 2023, he skates over acknowledging the main harm he contributed to, by focusing mostly on his mention of the n-word (a lesser harm, partly due to the use-mention distinction).

DPiepgrass @ 2023-01-22T15:33 (+5)

I feel like some people are reading "I completely repudiate this disgusting email from 26 years ago" and thinking that he has not repudiated the entire email, just because he also says "The invocation of a racial slur was repulsive". I wonder if you interpreted it that way.

One thing I think Bostrom should have specifically addressed was when he said "I like that sentence". It's not a likeable sentence! It's an ambiguous sentence (one interpretation of which is obviously false) that carries a bad connotation (in the same way that "you did worse than Joe on the test" has a different connotation than "Joe did better than you on the test", making the second sentence probably better). Worst of all, it sounds like the kind of thing racists say. The nicest thing I would say about this sentence is that it's very cringe.

Now I'm a "high-decoupler Independent", and "low-decoupler Democrats" clearly wanted Bostrom to say different things than me. However, I suspect Bostrom is a high-decoupler Independent himself, and on that basis he loses points in my mind for not addressing the sorts of things that I myself notice. On the other hand... apology-crafting is hard and I think he made a genuine attempt.

But there are several things I take issue with in Thorstad's post, just one of which I will highlight here. He said that claims like "I think it is probable that black people have a lower average IQ than mankind in general" are "widely repudiated, are based on a long history of racist pseudoscience and must be rejected" (emphasis mine). In response to this I want to highlight a comment that discusses an anti-Bostrom post on this forum:

This post says both:

> If you believe there are racial differences in intelligence, and your work forces you to work on the hard problems of resource allocation or longtermist societal evolution, nobody will trust you to do the right tradeoffs.

and

> If he'd said, for instance, "hey I was an idiot for thinking and saying that. We still have IQ gaps between races, which doesn't make sense. It's closing, but not fast enough. We should work harder on fixing this." That would be more sensible. Same for the community itself disavowing the explicit racism.

The first quote says believing X (that there exists a racial IQ gap) is harmful and will result in nobody trusting you. The second says X is, in fact, true.

I think that we high-decouplers tend to feel that it is deeply wrong to treat a proposition X as true if it is expressed in one way, but false/offensive if expressed in another way. If it's true, it's true, and it's okay to say so without getting the wording perfect.[1]

The Flynn effect, which I don't believe is controversial, shows that populations vary significantly in IQ depending on when they were born. But if timing of birth is correlated with IQ, then couldn't location of birth be correlated with IQ? Or poverty, or education? And is there not some correlation between poverty and skin color? And are not correlations usually transitive? I'm not trying to prove the case here, just trying to say that people can reasonably believe there is a correlation, and indeed, you can see that even the anti-Bostrom post above implies that a correlation exists.

Thorstad cites no evidence for his implication that the average IQ of blacks is equal to the average IQ of everyone. To the contrary, he completely ignores environmental effects on intelligence and zeroes in on the topic of genetic effects on intelligence. So even if he made an effort to show that there's no genetic IQ gap, there would still be a big loophole for environmental differences. Thorstad also didn't make an effort to show that what he was saying about genetics was true, nor did he link to someone who did make that effort (but I will: here's someone critiquing the most famous version of HBD, and if you know of a work that directly addresses the whole body of scientific evidence rather than being designed as a rebuttal, I'd like to see it). Overall, the piece comes across to me as unnecessarily politicized, unfair, judgemental, and not evidence-based in the places it needs to be.

Plus it tends toward dihydrogen monoxide-style arguments. To illustrate this, consider these arguments supporting the idea of man-made global warming: "denial that humans cause global warming is often funded by fossil-fuel companies with a vested interest in blocking environmental regulations, some of which have a history of unethical behavior. And many of the self-proclaimed experts who purport to show humans don't cause climate change are in fact charlatans. The Great Global Warming Swindle, a denier film, labeled fellow denier Tim Ball as the 'head of climatology' at the University of Winnipeg, which does not, in fact, have a climatology department. As droughts, heat waves and hurricane damage figures increase, it's time to reject denial and affirm that we humans are responsible." As a former writer for SkepticalScience who fought against climate denial for years, I held back my gag reflex as I wrote those sentences, because they were bad arguments. It's not that they are false; it's not that I disagree with them; it's that they are politicized statements that create more heat than light and don't help demonstrate that humans cause global warming. There are ample explainers and scientific evidence out there for man-made global warming, so you don't need to rely on guilt-by-association or negative politically-charged narratives like the one I just wrote. Same thing for Bostrom—there may be good arguments against him, but I haven't seen them.

I also believe that actions speak louder than words, that Bostrom's value therefore seems much higher than his disvalue (I know little about his value, but a quick look at his bio suggests it is high), and that in EA we should employ the principle of charity.

  1. ^

    Also, if someone doesn't know if an idea is true, it's wrong to condemn them just for saying they don't know or for not picking a side, as Thorstad does.

RobertM @ 2023-01-14T05:56 (+11)

Yes, I agree that there's a non-trivial divide in attitude. I don't think the difference in discussion is surprising, at least based on a similar pattern observed with the response to FTX. From a quick search and look at the tag, there were on the order of 10 top-level posts on the subject on LW. There are 151 posts under the FTX collapse tag on the EA Forum, and possibly more untagged.

Random @ 2023-01-14T11:27 (+32)

I very much agree with your analysis, except for the "IMO correctly". Firstly, because I hold the views of a "rationalist-EA", so my reaction is to be expected, following your argument. Secondly, because we should not hold emails/posts against people 25+ years later, unless the views are maintained and/or deeply relevant to their points today. Looking at his recent publications, they do not seem that relevant.

However, I would like to point out that, to me, EA also benefits from the rationality influx. EA to me is "rationality applied to doing good". So the overlap is part of the deal.

Miles_Brundage @ 2023-01-14T19:52 (+3)

(will vaguely follow-up on this in my response to ESRogs's parallel comment) 

RyanCarey @ 2023-01-18T15:41 (+26)

A huge fraction of the EA community's reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.

Generalizing a lot,  it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and "rationalist-EAs" want to debate race and IQ (or think that the issue is so minor/"wokeness-run-amok-y" that it should be ignored or censored). This predictably leads to conflict.

This is inaccurate as stated, but there is an important truth nearby. The apparent negatives you attribute to "rationalist" EAs are also true of non-rationalist old-timers in EA, who trend slightly non-woke while also keeping the rationalists at arm's length. SBF himself was not particularly rationalist, for example. What seems to attract scandals is people being consequentialist, ambitious, and intense, which are possible features of rationalists and non-rationalists alike.

ESRogs @ 2023-01-14T13:11 (+14)

Generalizing a lot,  it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment

  1. As a "rationalist-EA", I would be curious if you could summarize what lessons you think should be drawn from this teachable moment (or link to such a summary that you endorse).
  2. In particular, do you disagree with the current top comment on this post?
    1. (To me, their Q1 seems like it highlights what should be the key lesson, while their Q2 provides important context that mitigates how censorious we should be in our response.)
Miles_Brundage @ 2023-01-14T20:14 (+42)

Happy to comment on this, though I'll add a few caveats first:

- My views on priorities among the below are very unstable
- None of this is intended to imply/attribute malice or to demonize all rationalists ("many of my best friends/colleagues are rationalists"), or to imply that there aren't some upsides to the communities' overlap
- I am not sure what "institutional EA" should be doing about all this
- Since some of these are complex topics and ideally I'd want to cite lots of sources etc. in a detailed positive statement on them, I am using the "things to think about" framing. But hopefully this gives some flavor of my actual perspective while also pointing in fruitful directions for open-ended reflection. 
- I may be able to follow up on specific clarifying Qs though also am not sure how closely I'll follow replies, so try to get in touch with me offline if you're interested in further discussion.
- The upvoted comment is pretty long and I don't really want to get into line-by-line discussion of specific agreements/disagreements, so will focus on sharing my own model.

Those caveats aside, I think some things that EA-rationalists might want to think about in light of recent events are below. 

- Different senses of the word racism (~the "believing/stating that race is a 'real thing'/there are non-trivial differences between them (especially cognitive ones) that anyone should care about" definition, and the "consciously or unconsciously treating people better/worse given their race" definition), why some people think the former is bad/should be treated with extreme levels of skepticism and not just the latter, and whether there might be a finer line between them in practice than some think.
- Why the rationalist community seems to treat race/IQ as an area where one should defer to "the scientific consensus" but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.
- Whether the purported consensus folks often refer to actually exists + what kind of interpretations/takeaways one might draw from specific results/papers other than literal racism in the first sense above (I recommend The Genetic Lottery's section on this).
- What the information value of "more accurate [in the red pill/blackpill sense] views on race" would even be "if true," given that one never interacts with a distribution but with specific people.
- How Black people and other folks underrepresented in EA/rationalist communities, who often face multiple types of racism in the senses above, might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.

ESRogs @ 2023-01-15T15:03 (+27)

I'll limit myself to one (multi-part) follow-up question for now —

Suppose someone in our community decides not to defer to the claimed "scientific consensus" on this issue (which I've seen claimed both ways), and looks into the matter themselves, and, for whatever reason, comes to the opposite conclusion that you do. What advice would you have for this person?

I think this is a relevant question because, based in part on comments and votes, I get the impression that a significant number of people in our community are in this position (maybe more so on the rationalist side?).

Let's assume they try to distinguish between the two senses of "racism" that you mention, and try to treat all people respectfully and fairly. They don't make a point of trumpeting their conclusion, since it's not likely to make people feel good, and is generally not very helpful since we interact with individuals rather than distributions, as you say.

Let's say they also try to examine their own biases and take into account how that might have influenced how they interpreted various claims and pieces of data. But after doing that, their honest assessment is still the same.

Beyond not broadcasting their view, and trying to treat people fairly and respectfully, would you say that they should go further, and pretend not to have reached the conclusion that they did, if it ever comes up?

Would you have any other advice for them, other than maybe something like, "Check your work again. You must have made a mistake. There's an error in your thinking somewhere."?

Miles_Brundage @ 2023-01-15T20:50 (+38)

I would have to think more on this to have a super confident reply. See also my point in response to Geoffrey Miller elsewhere here--there are lots of considerations at play. 

One view I hold, though, is something like "the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you're considering the [personal/community-level] social implications thereof, is non-zero." We can of course disagree on the precise amount/contexts for this, and sometimes it can go too far. And by definition in all such cases you will think you are right and others wrong, so there is a cost. But I don't think it is automatically/definitionally bad for people to do that to some extent, and indeed much of the progress on issues like civil rights, gay rights, etc. in the US has resulted in large part from actions getting ahead of beliefs among people who didn't "get it" yet, with cultural/ideological change gradually following via generational replacement, pop culture changes, etc. Obviously people rarely think that they are in the wrong, but it's hard to be sure, and I don't think we [the world, EA] should be aiming for a culture where there are never repercussions for expressing beliefs that, in the speaker's view, are true. Again, that's consistent with people disagreeing about particular cases, just sharing my general view here.

This shouldn't only work in one ideological "direction" of course, which may be a crux in how people react to the above. Some may see the philosophy above as (exclusively) an endorsement of wokism/cancel culture etc. in its entirety/current form [insofar as that were a coherent thing, which I'm not sure it is]. While I am probably less averse to some of those things than some LW/EAF readers, especially on the rationalist side, I also think that people should remember that restraint can be positive in many contexts. For example, I am, in my effort to engage and in my social media activities lately, trying to be careful to be respectful to people who identify strongly with the communities I am critiquing, and have held back some spicy jokes (e.g. playing on the "I like this statement and think it is true" line which just begs for memes), precisely because I want to avoid alienating people who might be receptive to the object-level points I'm making, and because I don't want to unduly egg on critiques by other folks on social media who I think sometimes go too far in attacking EAs, etc.

DPiepgrass @ 2023-01-15T22:39 (+20)

Is it okay if I give my personal perspective on those questions?

  1. I suppose I should first state that I don't expect that skin color has any effect on IQ whatsoever, and so on. But ... I feel like the controversy in this case (among EAs) isn't about whether one believes that or not [as EAs never express that belief AFAIK], but rather about whether one should do things like (i) reach a firm conclusion based purely on moral reasoning (or something like that), and (ii) attack people who gather evidence on the topic, who merely learn and comment about the topic, or who don't learn much about the topic at all but commit the sin of not reaching the "right" conclusion within their state of ignorance.
  2. My impression is that there is no scientific consensus on this question, so we cannot defer to it. Also, doesn't the rationalist community in general, and EA-rationalists in particular, accept the consensus on most topics such as global warming, vaccine safety, homeopathy, nuclear power, and evolution? I wonder if you are seeing the tolerance of skepticism on LW or the relative tolerance of certain ideas/claims and thinking the tolerance is problematic. But maybe I am mistaken about whether the typical aspiring rationalist agrees with various consensuses.
  3. [Whether the purported consensus folks often refer to is actually existent] The only consensus I think exists is that one's genetic code can, in principle, affect intelligence, e.g. one could theoretically be a genius, an idiot, or an octopus, for genetic reasons (literally, if you have the right genes, you are an octopus, with the intelligence of an octopus, "because of your genes"). I don't know whether or not there is some further consensus that relates somehow to skin color, but I do care about the fact that even the first matter is scarily controversial. There are cases where some information is too dangerous to be widely shared, such as "how to build an AGI" or "how to build a deadly infectious virus with stuff you can order online". Likewise it would be terrible to tell children that their skin color is "linked" to lower intelligence; it's "infohazardous if true" (because it has been observed that children in general may react to negative information by becoming discouraged and end up less skilled). But adults should be mature enough to be able to talk about this like adults. Since they generally aren't that mature, what I wonder is how we should act given that there are confusing taboos and culture wars everywhere. For example, we can try adding various caveats and qualifications, but the Bostrom case demonstrates that these are often insufficient.
  4. [What the information value of "more accurate [...] views on race" would even be "if true,"] I'd say the information value is low (which is why I have little interest in this topic) but that the disvalue of taboos is high. Yes, bad things are bad, but merely discussing bad things (without elaborate paranoid social protocols) isn't.
  5. [How Black people and other folks underrepresented [...] might react to seeing people in these communities speaking casually about all of this, and what implications that has for things like recruitment and retention in AI safety.] That's a great question! I suspect that reactions differ tremendously between individuals. I also suspect that first impressions are key, so whatever appears at the top of this page, for instance, is important, but not nearly as important as whatever page about this topic is most widely circulated. But... am I wrong to think that the average black person would be less outraged by an apology that begins with "I completely repudiate this disgusting email from 26 years ago" than some people on this very forum?
ChristianKleineidam @ 2023-01-22T00:18 (+9)

- Why the rationalist community seems to treat race/IQ as an area where one should defer to "the scientific consensus" but is quick to question the scientific community and attribute biases to it on a range of other topics like ivermectin/COVID generally, AI safety, etc.

With ivermectin we had a time when the best meta-analyses were pro-ivermectin but the scientific establishment was against ivermectin. Trusting those meta-reviews, which were published in reputable peer-reviewed journals, is poorly described as "not deferring to the scientific consensus". Scott also wrote a deep dive on ivermectin and the evidence in the scientific literature for it.

You might ask yourself: "Why doesn't Scott Alexander write a deep dive on the literature of IQ and race?" Why don't other rationalists on LessWrong write deep dives on the literature of IQ and race, and on which hypotheses are supported by the literature and which aren't?

From a truth-seeking perspective it would be nice to have such literature deep dives. From a practical point of view, writing deep dives on the literature of IQ and race and having in-depth discussions about it has a high likelihood of offending people. The effort and risks that come with it are high enough that Scott is very unlikely to write such a post.

One view I hold, though, is something like "the optimal amount of self-censorship, by which I mean not always saying things that you think are true/useful, in part because you're considering the [personal/community-level] social implications thereof, is non-zero." 

I think that there's broad agreement on this and that self-censorship is one of the core reasons why rationalists are not engaging as deeply with the literature around IQ and race as we did with Ivermectin or COVID.

On the other hand, there are situations where there are reasons to actually speak about an issue, and people still express their views even if they would prefer to just avoid talking about the topic.

ESRogs @ 2023-01-14T23:24 (+8)

Thanks, I appreciate the thoughtful response!

freedomandutility @ 2023-01-15T21:54 (+12)

My view is that the rationalist community deeply values the virtues of epistemic integrity at all costs and of accurately expressing your opinion regardless of social acceptability.

The EA community is focused on approximately maximising consequentialist impact.

Rationalist EAs should recognise when these virtues of epistemic integrity and epistemic accuracy are in conflict with maximising consequentialist impact, via direct, unintended consequences of expressing your opinions, or via effects on EA's reputation.

Habryka @ 2023-01-15T23:32 (+30)

For what it's worth, I have my commitment to honesty primarily for consequentialist reasons.

freedomandutility @ 2023-01-16T09:19 (+9)

That makes sense and I would agree with the idea that honesty is usually helpful for consequentialist reasons, but I think it is important to recognise cases where it is not.

Broadly, these cases are ones where the view you're expressing doesn't really help you do more good but brings a lot of harm to your reputation.

So as much as I disagree with Bostrom's object level views on race / IQ, I think he should have lied about his views.

Another example I wrote down elsewhere:

If you were an atheist in a rural, conservative part of Afghanistan today aiming to improve the world by challenging the mistreatment of women and LGBT people, and you told people that you think that God doesn't exist, even if that was you accurately expressing your true beliefs, you would be so far from the Overton Window that you're probably making it more difficult for yourself to improve things for LGBT people and women. Much better to say that you're a Muslim and you think women and LGBT people should be treated better.

Arepo @ 2023-01-15T22:52 (+13)

I would say it's less about rationalists vs non-rationalists and more that people who are inclined to social justice norms (who tend not to be rationalists, though one can be both or neither) think it's a big deal and people who aren't are at least less committal. 

AnonymousQualy @ 2023-01-15T23:01 (+5)

I think there's a decent case to be made that a lot of social justice norms (though certainly not all) can be arrived at by utilitarian reasoning ("normie EA") while a lot of opposition to social justice norms can be arrived at through a sort of truth seeking that actively eschews social norms ("rationalist").

Ives Parr @ 2023-01-18T23:29 (+8)

I think that social justice norms are sometimes harmful from a consequentialist viewpoint. The social justice project largely consists of highlighting disparities between oppressor groups and oppressed groups and attributing those disparities to immoral action on the part of the oppressor groups. I think that most of these beliefs are actually false, and that the proposed solutions are harmful in that they will not actually solve the problem, because the underlying belief is false. I think that they make social relations worse.

More egregious is social justice advocates' propensity for censorship in the name of emotional harm-avoidance, and their willingness to attack the character of people who disagree with their viewpoint as bigots of various types. And most egregious is the small minority who actively cause reputational damage, firings, and ostracism.

I think that various harmful social views and policies persist because many social justice advocates think they're so right they don't need particularly good truth-seeking behavior.

Miles_Brundage @ 2023-01-14T07:43 (+10)

Note that there is now at least one post on the LW front page that is at least indirectly about the Bostrom stuff. I am not sure if it was there before and I missed it, or what.

And others' comments have updated me a bit towards the forum vs. forum difference being less surprising. 

I still think there is something like the above going on, though, as shown by the kinds of views being expressed + who's expressing them on the EA Forum alone, and on social media.

But I probably should have left LW out of my "argument" since I'm less familiar with typical patterns/norms there.

Habryka @ 2023-01-14T08:50 (+10)

The indirectness is also quite relevant to that. On LessWrong it's pretty encouraged to take current events and try to extract generalizable lessons from them, and make statements that are removed from the local political landscape. I am glad that post was written, and would have been happy about it independently of any Bostrom stuff going on.

Ataftoti @ 2023-01-15T04:39 (+3)

Can I ask for a link to this 'indirect post'? I'm interested in the generalized lessons being advertised here, but couldn't find the post after looking on LW.

freedomandutility @ 2023-01-15T21:49 (+8)

Spot on.

The rationalist community celebrates the virtue of epistemic integrity at all costs and celebrates the expression of opinions when they are deeply unpopular.

'Normie EAs' are not willing to sacrifice consequentialist impact for these virtues.

Chris Leong @ 2023-01-14T04:44 (+7)

Is the conversation censored or are people just not discussing it?

Nathan Young @ 2023-01-14T16:28 (+115)

I liked CEA's statement

I think people can dislike it. I really want people to feel free - but currently I don't.

Conflict of interest: I like Shakeel and feel loyalty to him which probably distorts this.

Susan II @ 2023-01-15T21:41 (+93)

The first statement would be viewed positively by most, the second would get a raised eyebrow and an "And what of it?", the third is on thin fucking ice, and the fourth is utterly unspeakable.

2-4 aren't all that different in terms of fact-statements, except that IQ ≠ intelligence, so some accuracy is lost moving to the last. It's just that the first makes it clear which side the speaker is on, the second states an empirical claim, and the next two look like they're... attacking black people, I think?

I would consider the fourth a harmful gloss - but it doesn't state that there is a genetic component to IQ; that's only in the reader's eye. This makes sense in the context of Bostrom posing outrageous but Arguably Technically True statements to provoke the reader.

I think people would be mad at this, because they feel like poor people are being attacked and want to defend them. They would think, 'Oh, you're saying that rich people got there by being so smart and industrious, and if some single mom dies of a heart attack at 30 it's a skill issue.' But no one said that.

And this would be uncontested.

If someone says that, you'd probably assume they were pushing an antivax agenda and raise an eyebrow, even if they can produce a legitimate study showing it. (I don't think there is one; I made up that example.) So I am sympathetic to being worried about agenda-pushing that just consists of saying selectively chosen true statements.

Man, this shit is exhausting. Maybe CEA has the right idea here: they disavow the man's words without disavowing the man and then go back to their day.

Brian_Tomasik @ 2023-01-20T21:16 (+79)

Bostrom was essentially still a kid (age ~23) when he wrote the 1996 email. What effect does it have on kids' psychology to think that any dumb thing they've ever said online can and will be used against them in the court of public opinion for the rest of their lives? Given that Bostrom wasn't currently spreading racist views or trying to harm minorities, it's not as though it was important to stop him from doing ongoing harm. So the main justification for socially punishing him would be to create a chilling effect against people daring to spout off flippantly worded opinions going forward. There are some benefits to intimidating people away from saying dumb things, but there are also serious costs, which I think are probably underestimated by those expressing strong outrage.

Of course, there are also potentially huge costs to flippant and crass discussion of minorities. My point is that the stakes are high in both directions, and it's very non-obvious where the right balance to strike is. Personally I suspect the pendulum is quite a bit too far in the direction of trying to ruin people's lives for idiotic stuff they said as kids, but other smart people seem to disagree.

As some others have noted, probably the best approach we can take to the question of a genetic racial IQ gap is to voluntarily avoid mentioning the idea or at least to portray the question as relatively uninteresting (as Bostrom said in his apology), especially since the policy implications of such a gap wouldn't be that big anyway. However, trying to suppress inquiries by others into the topic seems to me like it may cause more harm than good, via the Streisand effect. If it looks like there's secret knowledge that the scientific community has informally conspired to suppress, the topic suddenly becomes way more interesting. There was also a genuine conspiracy (both in terms of meetings among scientists and tacit peer pressure) to suppress discussion of the lab-leak theory of COVID-19 in 2020, and that fact made the question much juicier.

I admire that Bostrom's apology didn't take the easy way out by lying to pretend he thinks there's no possibility of a genetic IQ gap. I see that as a positive sign about his intellectual honesty. (Actually, it's obvious that between any two groups of humans, there will be some difference in the distribution of genes and therefore some nonzero genetically caused difference in the average of almost any trait you wish to measure. The real question is whether those differences are big or negligible. And even if the differences are non-negligible, I suspect it's usually bad to highlight them anyway, such as due to stereotype threat and the possibility of encouraging violence toward and persecution of outgroups. In medical contexts, such as when people with African ancestry are genetically more at risk from certain diseases than other groups, highlighting these differences seems useful.)

Ives Parr @ 2023-01-21T20:47 (+42)

Many progressive institutions spend a great deal of time highlighting racial differences. I really wish they would not. Even worse, they go on to attribute these gaps to discrimination and nefariousness on the part of oppressor groups. If gaps are not due to discrimination, then it is immoral to place blame on the designated oppressor group for discrimination. In other contexts, this is common sense. It is wrong to attribute Jewish success to coordinated conspiracies and exploitation because their success is largely attributable to higher average cognitive ability and intellectual culture.

There are successful minority groups throughout the world who are resented because their higher socioeconomic status is attributed to exploitation. I think this is an unfortunate situation. If anything, attributing socioeconomic outcomes to exploitation leaves a group open to violence more so than attributing socioeconomic gaps to average cognitive ability differences.

Few people think it is moral to commit acts of violence against less intelligent people. Even fewer probably think it is acceptable to commit acts of violence against a group because they are members of a group with a lower than average level of cognitive ability. I really never see these attitudes. Eventually whatever is true about differences will come to light. The truth cannot be suppressed forever. It is best to argue now that nothing like violence follows from the existence of non-negligible gaps. What does follow is that a certain way of thinking about politics in mostly egalitarian societies, namely as race and class conflict, needs to be less dominant.

We ought to move back to the attitude that it is an ideal to not care about race, sex, gender, sexual orientation etc rather than that we need to always be thinking about these things. It is hard to push back against this narrative without touching on extraordinarily taboo topics, because absolute fairness creates disparity, and mentioning the better explanations will get you regarded as a "bad person" and in some cases fired from your job.

Brian_Tomasik @ 2023-01-22T02:49 (+37)

We ought to move back to the attitude that it is an ideal to not care about race, sex, gender, sexual orientation etc rather than that we need to always be thinking about these things.

I plausibly agree. There are times and places to bring up racism and sexism, their historical contexts, and instances where they still exist today. But I also get the sense that people would generally be happier (plausibly even many minorities(?), though I'm not at all sure about that) if they ruminated on these ideas less often. Rumination can both exacerbate the pain of actual injustices and make one perceive injustices where they may not actually exist or don't exist much (manspreading, Shirtgate, etc). Note that this point can also apply to anti-woke people: focusing a bit less on the perceived wrongs of cancel culture might make them happier.

Believers in genetic racial IQ gaps often say their viewpoint is needed in cases like affirmative action, to show that it's not necessarily discriminatory if the demographic composition of some elite group doesn't match the demographic composition of the whole population. But if we were more race-blind and didn't think much about demographic composition to begin with, then talking about the possibility of genetic racial IQ gaps would also be less relevant. In order for this to work, society would also have to improve on providing better education, nutrition, income, etc to poorer parts of society, because otherwise not thinking much about the demographic composition of elite groups could lead to not noticing the substantial environmental and cultural causes of inequality that definitely still remain and that affirmative action aims to overcome. OTOH, maybe it's naive to expect significant improvement in society's motivation to actually raise the living standards of poor people, in which case the "kludge" of affirmative action might be better than nothing.

I think having a few visible examples of minorities in powerful positions, such as the first black or female president of the USA, can be pretty valuable as inspiring role models. It may also be the case that people would be deterred from entering workplaces where there are too few "of one's own kind" (race, gender, etc), such as because of fearing harassment. Maybe most humans are actually too tribalistic to pull off race-blindness, gender-blindness, etc. IDK.

In any case, I suspect it's generally more fruitful to focus on helping poor people (of all races) economically than to focus on, say, discrimination in hiring, because I think economic and cultural inequalities drive a lot of the inequality in outcome that we observe. In elite settings, my experience is that hiring discrimination is often in the direction of favoring black, female, gay, etc. candidates, though I'm sure discrimination against such groups still happens somewhat too.

Even fewer probably think it is acceptable to commit acts of violence against a group because they are members of a group with a lower than average level of cognitive ability. I really never see these attitudes.

I imagine those attitudes were common during slavery and colonization. For example, the Europeans who arrived in the New World in the centuries after 1492 probably considered the indigenous people inferior and therefore didn't feel as guilty about enslaving or murdering them.

In the contemporary West, I agree that the view you mention seems rarer. Society tends to take less action against violence or hardship endured by very poor people, and poverty correlates with lower cognitive ability, but this isn't an intentional part of society's ideology so much as a byproduct of apathy.

In the case of non-human animals, even most SJWs think it's fine to enslave and murder them, and probably a main reason SJWs would cite for this is that non-human animals have different brains than humans do, though it's unclear how much this reason is about intelligence per se versus sentience.

Brian_Tomasik @ 2023-01-22T22:43 (+61)

Some notes on the last paragraph in my above comment:

When I used the phrase "SJWs", I intended it to have either neutral valence or a valence of friendly teasing. I agree with some amount of the SJW agenda myself. However, Wikipedia says that since 2011, the term is primarily used as an insult and is associated with the alt-right, which was not an implication I had in mind. Like Bostrom's 1996 email and 2023 apology, this example is an illustration that it can be difficult to realize exactly how a given word or statement may be perceived, especially if people are reading it as if it were a dog whistle.

Part of my reason for using the term "SJW" was that I didn't want to say merely "leftist" or "progressive". I was a strong leftist and progressive in the early aughts, and back then, people with that ideology were, in my experience, generally more focused on trying to improve people's welfare via economic and other government-level policy. Progressives didn't spend as much time as they do now on shaming individual people or groups. I think the woke-ward shift of the last decade, while it raises some important issues that were less highlighted in the past, is plausibly overall less useful for improving total human welfare than the earlier economic and policy focus was. So I don't like conflating "woke" with "progressive". (That said, I think some progressive economic policy positions, such as against outsourcing American jobs to developing nations, may be net bad for short-term human welfare.)

A more neutral phrasing than "SJW" could have been just "social-justice activist".

As far as the other part of my phrasing, when I said that most social-justice activists "think it's fine to enslave and murder" non-human animals, I was in part being deliberately provocative to make a point. Bostrom is right when he says in his apology: "I do think that provocative communication styles have a place". If I had instead written that most social-justice activists "think it's acceptable for farmers to raise and slaughter livestock", the use of those conventional euphemisms would have dulled what I was trying to convey, which is that this practice is actually really awful. (BTW, I should also acknowledge that I myself pay for some amount of enslavement and murder of dairy cows, via eating cheese and ice cream. However, I think the total amount of harm this causes is much lower than the harm caused by eating meat from smaller farm animals.)

Provocation can shock people out of their normal way of seeing the world into looking at some fact in a different light. This seems to be roughly what Bostrom was saying in the first paragraph of his 1996 email. However, in the case of that email, it's unclear what socially valuable fact he was trying to shock people into seeing in a new way.

One function of comedy is to do roughly the same thing: stating some true fact in an unconventional way in order to make people see the world through a new lens. However, an important principle in comedy is the distinction between "punching up" and "punching down", and if we interpret Bostrom's 1996 statement as analogous to provocative humor, it would clearly be punching down.

It's fairly common and even celebrated in modern Western society to hear statements like "women are more productive than men" or "girls are smarter at language than boys". Many of these statements are made in fairly blunt language, similar to Bostrom's 1996 statement. I assume most people think these statements about female superiority are pretty harmless, both because they're seen more as "punching up" (given the history of men dominating women in much of the world until the late 20th century) and because the hypothesis of biological gender differences is less taboo and more scientifically established. But I do think the contrast in people's reactions between saying "boys are worse at language than girls" versus Bostrom's 1996 statement is interesting, and it shows that the degree of outrage a statement provokes is often not obvious unless you have a lot of experience with a specific culture's norms.

I do worry a bit that the casual misandry that society often seems to celebrate may be detrimental to the self-esteem of boys, though I'm also not interested in trying to police such language. It's plausible to me that some amount of humorous mocking between different groups is actually helpful, by showing people that we can laugh together, rather than priming ourselves to interpret any offensive statement as an act of aggression.

Pablo @ 2023-01-23T02:41 (+31)

Great comments, Brian. You should spend more time on the Forum!

Brian_Tomasik @ 2023-01-23T05:07 (+35)

Thanks. :) I feel somewhat bad about spending time on this topic rather than my usual focus areas, especially since many of my points were already made by others. Plus, as I mentioned and as Bostrom learned, anything you say about controversial topics online is fodder for political enemies to take out of context. But I have a (maybe non-utilitarian) impulse to stick up for what I think is right even if some people will dislike me for doing so. (For a time, my top-level comment here had a net agreement of -10 or so. Of course, maybe the downvoters were correct and I'm wrong.)

Lumpyproletariat @ 2023-01-23T03:31 (+15)

Provocation can shock people out of their normal way of seeing the world into looking at some fact in a different light. This seems to be roughly what Bostrom was saying in the first paragraph of his 1996 email. However, in the case of that email, it's unclear what socially valuable fact he was trying to shock people into seeing in a new way.


Bostrom's email was in response to someone who made the point you do here about provocation sometimes making people view things in a new light. The person who Bostrom was responding to advocated saying things in a blunt and shocking manner as a general strategy for communication. Bostrom was saying to them that sometimes, saying things in a blunt and shocking manner does nothing but rile people up.

Brian_Tomasik @ 2023-01-23T04:50 (+19)

Interesting! I admit I didn't go and read the original discussion thread, so thanks for that context. To the extent that Bostrom was arguing against being needlessly shocking, he was kind of already making the same point that his critics have been making: don't say needlessly shocking things. He didn't show enough sensitivity/empathy in the process of presenting the example and explaining why it was bad, but he was writing a quick email to friends, not a carefully crafted political announcement intended to be read by thousands of people.

lastmistborn @ 2023-01-23T04:52 (+2)

I assume most people think these statements about female superiority are pretty harmless, both because they're seen more as "punching up" (given the history of men dominating women in much of the world until the late 20th century) and because the hypothesis of biological gender differences is less taboo and more scientifically established.

In my experience, the reason these statements tend to get less pushback is that they are generally explained by gendered socialization and norms rather than intrinsic biological or genetic factors, whereas the race/gender arguments that receive pushback claim that certain groups are genetically (intrinsically) inferior.

Brian_Tomasik @ 2023-01-23T05:21 (+23)

I see. :) I would think people would consider biological differences much more plausible in the gender case than the race case. I've heard several people say that when you're a parent to both a boy and a girl, the differences between them are unmistakeable even in the first ~2 years. I think many American adults at least privately understand that there are big biological differences between the brains of men and women, while most American adults probably expect no non-trivial biological racial brain differences. But yeah, any particular gender difference, such as the language gap, could be mostly or all environmental.

Jason @ 2023-01-21T00:01 (+7)

I don't like calling ~23-year-olds "essentially still a kid." I think that has to cut both ways; if someone is "essentially still a kid" we shouldn't metaphorically let them use matches -- by which I mean have any roles and functions that could cause significant harm if they act badly.

I do think age is mitigating in the context of the 1996 email (on top of the passage of 26 years), but I feel that phrasing goes too far.

Brian_Tomasik @ 2023-01-21T00:10 (+21)

Fair enough. :) Some headlines called the FTX leadership "a gang of kids", which I think isn't unreasonable, even though they were in their late 20s or early 30s. The main thing I wanted to convey is that people at this age often have limited life experience or understanding of how the world works and so often do dumb things. Youth is a time to explore weird ideas and make mistakes. Therefore, I would agree that 23-year-olds generally shouldn't be entrusted to hold important decision-making positions unless they've shown a track record of unusual maturity.

Ives Parr @ 2023-01-21T20:26 (+9)

I think it is bad to deny a person access to a position because of the statistical average of their group. If a 23-year-old is competent, then hold them to the same standard.

It is odd to me that you would comment that highlighting differences in cognitive ability between groups should be taboo and suppressed, and yet openly state that 23-year-olds should have to face different standards in order to be entrusted with decision-making positions. I think you would find it utterly repugnant to say that blacks should have to meet higher standards before we trust them.

Edit: I don't think this is particularly bad, and this attitude is relatively common. I just want to point out that this looks like an odd double standard in my view, although many may disagree. Sorry if this comment comes across as aggressive.

Brian_Tomasik @ 2023-01-21T23:05 (+12)

It's a great point, and not at all aggressive. :)

I said that 23-year-olds should demonstrate "a track record of unusual maturity" in order to have important positions, not that they should always be denied them. In some cases, such as becoming the president of the USA, a minimum age requirement may make sense because the stakes are so high, although one could say that we should just let voters decide if any given person is qualified.

But you're right that I support a strong prior against, say, tasking a 23-year-old to run a major organization -- a prejudice that needs to be overcome with strong enough evidence of maturity and competence -- in a way that it would be abhorrent to do for a member of a particular racial group.

It's interesting to ponder the reasons for different attitudes toward racism vs ageism. My two main guesses are:

  1. Average differences in traits based on age are sometimes quite large, enough that the value of using the prejudice for making predictions can exceed the unfairness downsides of stereotyping people. For example, my impression is that young men are on average much riskier drivers than older men, so there's not a ton of society-level outrage about charging 16-year-old men several times more for car insurance than 55-year-old men, although I imagine that many individual young men who are cautious drivers are rightly annoyed by this situation.

  2. Historical context leads us to treat racial / sexual / etc discrimination more seriously than discrimination based on age, height, extroversion, etc. As far as I know, there hasn't been a lot of genocide against short people of the same race, or enslavement of them, or forcing them to use separate bathrooms, etc. A main argument for caution about racial stuff is a slippery slope concern that there's some small chance that allowing more callousness on these issues could actually lead to a new genocide, so the expected value of worrying about it is nontrivial, even if the risk of the genocide is very low. (That said, excessive mob punishment of people for not following ever-more-demanding requirements regarding proper speech and conduct may itself pose a very small risk of genocidal outcomes, and it's non-obvious whether this risk is smaller or larger than the racial genocide risk in the modern West. It may also be the case that the hostility between the extreme woke and extreme anti-woke camps makes both of them stronger, at the expense of moderate voices, thereby increasing both types of genocide risk at once.)

There are some types of discrimination that receive surprisingly little sympathy despite the lifelong trauma they can cause people, such as favoritism toward attractive people. Even woke Hollywood -- despite extraordinary efforts to introduce diversity of race, gender, sexual orientation, and so on -- rarely casts unattractive actors for leading roles. Maybe this is understandable, because those movies would usually perform poorly at the box office, and for whatever reason, there's not enough social outrage about discrimination against unattractive people to offset that. (To the extent that one point of watching a movie is to see attractive people, maybe one could argue that unattractive people are genuinely less qualified for the job, and no amount of new evidence could overcome that fact. This would make attractiveness discrimination unfortunate but not stereotyping. OTOH, there's some chance that if the unattractive person were given the leading role, s/he would charm audiences to a degree that the movie's creators didn't expect, in which case it could be similar to the case of a surprisingly wise 23-year-old.)

Larks @ 2023-01-22T02:59 (+14)

Historical context leads us to treat racial / sexual / etc discrimination more seriously than discrimination based on age, height, extroversion, etc. As far as I know, there hasn't been a lot of genocide against short people of the same race, or enslavement of them, or forcing them to use separate bathrooms, etc.

I agree with your comment in general, but I'm not quite sure about this point. I think age-based discrimination has been / is quite severe (though perhaps it is also often justified, since age does make a lot of difference to people's abilities):

  • Children are often forced to go sit in a small room all day, subject to the arbitrary whims of a single adult with little oversight, and often have to endure criminal violence from other children with little recourse, in a way that would be unacceptable for older people.
  • Young men have been repeatedly conscripted to fight in wars with high mortality rates.
  • Old people might face compulsory redundancy.
  • Young people have to pay taxes to fund benefits for older people, even if those retirees did not have to pay those taxes when they were young, and these retirement benefits may not be available by the time the young retire.
  • Many facilities do ban children, and people often complain about children being allowed on planes etc.
  • In many places babies can be killed by their parents without legal consequence.
  • Older people are often targeted by younger criminals because they are vulnerable.
  • In some places older people may be pressured to commit suicide to free up resources.
  • Many laws are passed that systematically disadvantage younger people (e.g. NIMBY rules on homebuilding).
Brian_Tomasik @ 2023-01-22T04:50 (+11)

Good list. :)

I think school is vastly less bad than, say, slavery, with some possible exceptions like if there's extreme bullying at the school.

You're right that the violence children endure from each other (and sometimes from their own parents) would be unacceptable if done to adults. If one adult hits another, that's criminal assault/battery. If a kid hits another kid, that's just Tuesday.

Children are also subject to the arbitrary whims of their parents, and are made to do unpaid labor against their will, though usually parents don't treat their own children extremely badly. (Of course, some parents do horrifically abuse or neglect their children.)

In any case, as you said, to some extent the lack of freedom for children is inevitable. (Actually, there is a way to avoid it entirely: don't have children, which is the antinatalist solution. If sentient beings didn't exist, none of the problems we're discussing here would be problems anymore.)

Tsunayoshi @ 2023-01-22T01:52 (+12)

It's definitely right to look at historical and other social context to explain current and past attitudes towards discrimination. A utilitarian framework is probably not the right approach, nor are most other ethics systems. I doubt there was ever a time in the modern era where attitudes were consistent, and there's loads of social conditioning going on. I don't think many women felt angry in the 19th century when their heads of government were (almost?) invariably men, because "that's just how things are" and nobody else was getting angry about it anyway.

My favorite example of current discrimination that totally flies under the radar of the collective ire is height discrimination. Only 6 of 46 US presidents[1] have been of below-average height; a result this extreme or more has a less than 0.005 probability of occurring by chance (i.e. your chances of becoming president are roughly two orders of magnitude lower if you are short; see the rough calculation sketched below). This is not totally unknown (occasionally there's a paper or article about height advantages), but people perceive it as a mere curiosity. Speaking personally as a short guy, this absolutely fails to anger me either.

1: https://www.thoughtco.com/shortest-presidents-4144573
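
(A minimal sketch of the kind of binomial tail calculation behind a claim like this. It assumes, purely for illustration, that about half of plausible candidates are of below-average height and that height has no effect on who becomes president; a back-of-the-envelope check, not a careful model.)

```python
from math import comb

# 46 presidents, 6 of below-average height; p = 0.5 is an assumed base rate for
# being below average height among plausible candidates (a hypothetical figure).
n, k, p = 46, 6, 0.5

# One-sided binomial tail: probability of seeing k or fewer below-average-height
# presidents if height had no bearing on who becomes president.
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
print(f"P(X <= {k}) = {tail:.1e}")  # roughly 1.6e-07, comfortably below 0.005
```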

Matthew_Barnett @ 2023-01-22T03:48 (+9)

Average differences in traits based on age are sometimes quite large, enough that the value of using the prejudice for making predictions can exceed the unfairness downsides of stereotyping people.

Even if there were no average differences in maturity between age groups, it still might be rational to prefer older people for important roles like president or CEO for pure credentialing reasons. The reason is simple. 23 year olds have had less time to prove their maturity. Even if they were highly mature, their track record would be brief, and thus not conclusive.

Jason @ 2023-01-21T23:13 (+7)

Also, we all once were, are, or expect to be 23 years old at some point. That's not a complete justification for many reasons, but it makes me relatively less concerned about age-based classifications than classifications where the burden is not felt close to equally by everyone over time.

Brian_Tomasik @ 2023-01-22T01:40 (+5)

I hadn't thought of that, but it's an excellent point and probably is a big part of the explanation. There are a few cases where it might not apply, such as if a mother stays at home with her kids during her 20s and 30s, enters the workforce in her 40s, and faces ageism because she's not as sharp as the younger people. In that case, she never once was a sharp young person in the workplace. But these kinds of cases also tend to be ones where people feel that ageism is more of a problem.

Matthew_Barnett @ 2023-01-21T05:59 (+6)

Bostrom was essentially still a kid (age ~23) when he wrote the 1996 email.

I agree with Jason. I don't think being 23 years old means that you're "essentially still a kid". 

If we want to judge young adults for their positive achievements, it makes sense to hold a symmetric attitude and judge them for their mistakes as well (though one could take the perspective that we shouldn't judge anyone for making mistakes, but that's a separate argument).

Fluid intelligence is generally considered to peak between one's late teens and mid-20s, and the majority of measured cognitive abilities either decline or only very slightly rise after the age of 23. If we use cognitive ability as the marker of adulthood, rather than life experience, one could even make the case that 23-year-olds are more "adult" than any other age group.

(Though of course I might be biased, because I'm 23 years old right now.)

Brian_Tomasik @ 2023-01-21T06:49 (+38)

Thanks. :) I mainly had in mind something more like wisdom, rather than intelligence. Social norms on particular topics are often not what you would expect by armchair reasoning. In many cases, you have to directly encounter people expressing those norms, or see news stories / hear gossip about people who have run afoul of those norms, to know what they are. Nerds who are very interested in science/math/theoretical things may be less likely to learn about these norms than the average person, despite having high fluid intelligence. (BTW, this is one reason I've updated toward thinking reading some amount of news is important.) I imagine that people told Bostrom that what he said in 1996 wasn't cool, and if so, that was a useful learning experience for him. The only problem was that it was written down for posterity to see.

I think cultural context is also relevant to judging these things. Most young people today (even most nerds) know that what Bostrom said (even though it was in the context of giving an example of what you shouldn't say) would elicit strong negative reactions, given how much media attention these things receive. I assume this was less obvious to nerds in the 1990s (though it was probably fairly predictable even back then).

For what it's worth, my fifth-grade class was assigned to read The Great Gilly Hopkins, which includes a tasteless joke about the N-word (though in the context of suggesting the person making it was being an asshole). And in high school, in 2005, when reading Adventures of Huckleberry Finn, we used the N-word in relation to Jim without any problems, because that's the word Twain used. The degree of sensitivity around these things has changed a lot in the last 10-20 years.

Matthew_Barnett @ 2023-01-21T07:51 (+11)

Most young people today (even most nerds) know that what Bostrom said (even though it was in the context of giving an example of what you shouldn't say) would elicit strong negative reactions, given how much media attention these things receive. I assume this was less obvious to nerds in the 1990s (though it was probably fairly predictable even back then).

It is perhaps important to note that in the original email, Bostrom quite directly says that he is aware of the social norm about not saying what he said. In fact, that was one of the main points of the email: that saying something true in a blunt manner about a controversial topic is likely to be viewed as offensive. If Bostrom learned anything -- and indeed, he apologized within 24 hours -- it was that saying something like that can be inadvisable even among friends.

In general, I don't think old people have a stronger understanding of social norms than younger people. Old people will of course have more experience to draw from, and their mannerisms will have gone through more trial and error. In that sense, I agree: old people are often wiser. But the frontier of cultural norms is generally driven by young people, and old people are often left out of that conversation.

It is not uncommon to hear young people say they're shocked by their older relatives who are ignorant or only superficially aware of social norms that became widespread in the last ten years, e.g. stating one's pronouns while introducing oneself. To the extent that we are judging people on their understanding of current social norms, we should probably hold young adults to the strictest standards of any group.

Brian_Tomasik @ 2023-01-21T08:53 (+34)

Interesting point! I hadn't even heard of "stating one's pronouns while introducing oneself", although maybe that's because I rarely meet anyone in person.

As you said, there's a tension between young people having the cutting edge of norms versus older people knowing a greater quantity of norms, even though some may be stale.

I think the obsession among young people with political correctness increased dramatically in the last 10 years, and it was barely a discussion topic when I was in pre-college school. Usually it seemed to be teachers and administrators trying to inculcate anti-bullying lessons into the students. At the anti-bullying workshops, students often rolled their eyes. So I'm not sure how true it would have been to say that students were at the vanguard of social norms in my school. (I went to a pretty liberal public school in upstate New York.)

I may also be generalizing too much from my own past self, since at 23 (the age Bostrom was then) I was often called "oblivious" and wasn't that well informed about scandals, maybe because I thought they were too gossip-y and not as important as "serious" topics. (Now I realize that gossip is actually very important.)

If Bostrom learned anything -- and indeed, he apologized within 24 hours -- it was that saying something like that can be inadvisable even among friends.

Yeah. He also said he only "recently" began to believe that speaking flippantly is unsuccessful, which I think jibes with my hypothesis of him being fairly oblivious. Many people would consider the ineffectiveness of speaking flippantly so obvious as to not be worth mentioning as any kind of realization.

Nathan Young @ 2023-01-15T00:41 (+74)

An attempt to express the “this is really bad” position. 

These are not my views, but an attempt to describe others.

Imagine I am a person who occasionally experiences racism or who has friends or family for whom that is the case. I want a community I and my friends feel safe in. I want one that shares my values and acts predictably (dare I say, well-aligned). Not one that never challenges me, but one where lines aren’t crossed and if they are, I am safe. Perhaps where people will push back against awful behaviour so I don’t have to feel constantly on guard. 

Bostrom’s email was bad and his apology:

And to add to that, rather than the community saying “yes that was bad” a top response is “I stand with Bostrom”. I understand that people might say “trust us, you know we are good and not racist” but maybe I don’t trust them. Or maybe my friends or family are asking me about if I know this Bostrom guy or if he’s part of my community. 

And maybe I am worried that Bostrom et al don't have the interests of people of colour at heart when they think about the far future. Perhaps, like the first American female astronaut, who was asked whether 100 tampons would be enough for a week in space, I am concerned that the far future is not being built with anything like an understanding of the needs of me or people of colour.

Heck, perhaps Bostrom wouldn't hire someone like me. His apology doesn't inspire confidence. Whenever I meet someone who holds these views, it's only a matter of time before I see those views worked out in their actions. This might seem overblown, but it plays on my mind.

At that point, the endless focus on the object level might make me feel like this isn’t the community for me. I’m not trying to bully you into submission, but I’m signalling that if you don’t take a moment I might leave. Or people like me won’t join.

More than this these are just good values I want in a community I want to be part of. I want a community with empathy. And I want a community with a good reputation.

Imagine we're both in an Uber and the driver takes a turn too fast. "Slow down" I shout. You say "it's fine, it's fine". I say "Look, can we ask the driver to pull into a side road and stop for a bit". The question isn't whether they took the corner too fast, it's about whether you support me when I don't feel safe. And if you don't, I'm gonna get out of the car.

 

This was written after conversations with a few friends. I'm trying to do a good job but it's probably flawed, and I don't claim to be a good spokesperson here; I'm not. I think some people are seeing this as an intellectual discussion while others are trying to feel safe and comfortable. If they don't, those people might leave. While I might not moderate my tone in discourse for an adversary, I often do for my friends.

IanDavidMoss @ 2023-01-16T03:31 (+8)

Several years ago, 12 self-identified women and people of color in EA wrote a collaborative article that directly addresses what it's like to be part of groups and spaces where conversation topics like this come up. It's worth a read: Making discussions in EA groups inclusive.

DPiepgrass @ 2023-01-16T00:20 (+1)

Thank you for that explanation, Nathan. There's one statement I don't understand: you say "Whenever I meet someone who holds these views..." but what view(s) are we talking about, and why do some people think that Nick holds them?

I suspect that people who feel this way get so offended by being asked this kind of question that they simply downvote + disagree without answering.

Nathan Young @ 2023-01-16T06:15 (+14)

That populations vary significantly on IQ. Someone I talked to said that they had noticed a correlation: people who held these views also tended to treat them as if they were less intelligent.

DPiepgrass @ 2023-01-16T12:46 (+19)

So — this person believes IQ cannot vary significantly by population? Or that one mustn't say so?

In the Flynn effect, populations vary significantly on IQ depending on when they were born. So, assuming the Flynn effect isn't controversial, I suppose you meant "populations grouped by skin color". But, I would ask, if timing of birth is correlated with IQ, then couldn't location of birth be correlated with IQ? Or poverty, or education?

I could continue this line of reasoning, but... somehow it doesn't feel useful. Positions people take on this can be arbitrarily extreme, e.g. some people object to any attempt to measure intelligence. If such a person sees the Bostrom "apology", they could be mad that he hasn't denounced these so-called "IQ tests" as illegitimate.

And I guess your point wasn't about logic, after all, but about feelings. So let me share my feeling: I find it extremely threatening and scary when people in/around EA — you know, EA, the concept I am building my whole life around — are vaguely treating someone who (to me) is obviously not racist as if he were a racist. It's like suddenly my neighbors joined a mob and are carrying a city councilman toward the giant tree in the town square. I'm alarmed and I say "whoa, why are you acting this way?! I think you're making a mistake!" and the mob just says "f*** you!" Maybe this is why I'm spending 3AM to 6AM on a Monday writing this.

This makes me want to either shrink away from EA, or take a stand.

Maybe a stand like this: I think if someone is so politicized that they think certain measurements should be ignored and no similar measurements should be done henceforth; or if they can't discuss correlation as distinct from causation; or if an apology is worthless/insincere because it provides too much context or has too much explanation afterward or because of a reason they aren't willing to explain; or if they like downvoting people who question their opinion without defending or explaining their own position; or if they are a conflict theorist; then I actually think maybe EA isn't for them. By all means, donate to help the poor. Donate to prevent biological weapons and so on. But in this neighborhood we like measurements and explanations and charitable interpretations. And we believe in "innocent until, on the balance of probabilities, probably guilty".

EAs decided pretty quickly that an EA-affiliated person, SBF, had acted appallingly badly and ought to be disowned. I trusted that judgement because it had a different flavor to it.  In this link you will see, in tweets two and three, a series of links to evidence that support the position Robert is taking. Evidence! Think what you want about Bostrom, but if you're going to make a case against him here, don't just say he was "mealy mouthed" and conclude that he believes bad and false things, without attempting to demonstrate it. I could and should read Rohit's article more charitably, especially since he says he's not an EA, but when the parade of downvotes rolled in, that became impossible on an emotional level.

Tristan Williams @ 2023-01-13T21:20 (+63)

A good friend sent me an article from Vice on this with the comment "EA is falling apart [laughing emoji]". He's a great friend, but to be honest this comment pissed me off, and more than the FTX coverage, this round feels like a particularly hateful attack on EA.

Think what you want about Bostrom and his comment: digging through someone's previous online engagements to find some dirt in the hope of hurting them and their associated organizations and acquaintances is personally disgusting to me, and I really hope that we don't engage in similar tactics in response. Though I don't think it's a real worry, because the general level of decency from EAs at least seems to be higher than the ever-lowering bar journalists set.

sapphire @ 2023-01-14T12:51 (+47)

(reposting from the 'does it matter' thread)

It is commonly theorized that having friends who hold a viewpoint should make one more charitable to that viewpoint. This has not been the case with HBD. I have a close friend of around a decade who has gotten increasingly obsessed with HBD. In general they are a smart and friendly person. But the things they have started espousing have become really shocking.

Example of their views: If Black people are not heavily under-represented in a cognitively demanding organization, that is strong evidence the organization is racist against White and Asian individuals!

Obviously these points of view are completely at odds with any sort of fair and inclusive community or organization. They have also moved further and further to the right. This resulted in a lot of personal problems when I came out as trans. They don't 'just' have some abstract objections; they were quite toxic to an old and supportive friend when she was having a hard time. They explicitly admit that a huge driving force for their rightward shift is belief in HBD. The logic for why is not hard to see: if you believe in HBD, you can start to feel 'persecuted' by people on the left or center-left, and it's easy to start sympathizing with the right and far right.

I've been in the rationalist community for over a decade and the EA community for a somewhat shorter period. I have seen tons of seemingly kind and reasonable friends become increasingly far-right after they got into HBD. I'm honestly not surprised FLI was considering funding an explicitly far-right, Nazi-adjacent group. The sympathies run deep. Neo-reaction has been close to the EA and rationalist communities for a very large fraction of our history.

It would be extremely hypocritical for me to hold against people views they no longer support. I endorsed HBD in 2015 and 2016. Like many rationalists I was introduced to HBD by reading Slate Star Codex. Promoting HBD in any way, including privately exposing people to the ideas, is one of the biggest regrets of my life. It is a seriously harmful philosophy. I'm very, very sorry for any negative impact my actions may have caused. For obvious reasons I have sympathy for people who have gotten into the racist pipeline. I honestly only got out because the right is so shitty to trans people and is pretty anti-vegan. Like many eggs I had a lot of trans friends. Independently, I was quite convinced veganism was a positive lifestyle. HBD is a very harmful pseudo-science and it is totally unacceptable that people with power in Effective Altruism believe in it.

I truly hope the EA movement can move toward a better future free from this toxicity.

Lukas_Gloor @ 2023-01-15T00:54 (+48)

Thanks for posting this account!

It sounds like your friend ended up being attracted to (or sucked into) quite a bad memetic environment!

It seems really off to me to assign much practical relevance to questions of IQ differences between groups because these priors get screened off once we learn new information about individuals. For instance, if we learn that someone works at a "cognitively demanding organization," that evidence is more direct and more relevant than any prior from group averages.

(Same reasoning: if all we knew about someone were that they don't have a university degree, then we'd be forgiven for having priors that they're somewhat less likely to be extremely intelligent or conscientious. [But it still seems important to keep an open mind because priors are very crude – that's the whole point!] However, once we learn that they work at a cognitively demanding organization, the prior from the missing degree no longer matters at all!)
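
(A minimal sketch of the screening-off point with entirely made-up numbers: two quite different group-level priors are updated on the same strong piece of individual-level evidence, and the gap between the resulting posteriors nearly vanishes. The means and variances below are hypothetical, chosen only to illustrate the mechanism.)

```python
# Toy Bayesian update (normal prior, normal observation); all numbers are hypothetical.
def update(prior_mean, prior_var, obs, obs_var):
    """Return posterior mean and variance after one noisy observation."""
    w = prior_var / (prior_var + obs_var)  # weight placed on the observation
    return prior_mean + w * (obs - prior_mean), 1 / (1 / prior_var + 1 / obs_var)

# Two different group-level priors for some ability score...
for prior_mean in (95, 105):
    # ...but the same strong individual-level evidence (e.g. sustained performance
    # in a demanding role), which is far more informative than either prior.
    post_mean, post_var = update(prior_mean, prior_var=15**2, obs=130, obs_var=5**2)
    print(f"prior mean {prior_mean} -> posterior mean {post_mean:.1f}")
# A 10-point gap between the priors shrinks to about a 1-point gap between the posteriors.
```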

Neo-reaction has been close to the EA and rationalist communities for a very large fraction of our history.

I think a lot of this happened before I became active in EA/rationality, but I remember feeling quite puzzled when I read about neoreaction and saw that some people active in that scene had ties to the rationalists in the Bay area. My impression was that this influence has gotten a lot weaker over time, but it sounds like your experience suggests that it's still a big issue. I find that very unfortunate.


 

sapphire @ 2023-01-15T11:00 (+5)

I think the connection is weaker, but there are still a lot of really wacky social dynamics.

"In one case I mentioned to someone that i was surprised the SSC subreddit had people posting the white surpremacist 14 words openly with others approving. This person then spent a year+ worried that I was infiltrating their group as a woke sjw so I could get people canceled. They never mentioned it to me but was just very jumpy and kind of unfriendly. Like 2 years later they were like oh ok thanks for not doing that and I was like ??? "

There are also still quite a few rationalists discussing and promoting legitimately far-right sources. Multiple people I met at the NYC rationalist meetup were literally into QAnon. Almost all the rationalists I know who got involved with the far right started with HBD.

Nathan Young @ 2023-01-14T16:38 (+25)

I guess I'm not that surprised that FLI funded someone bad - many EA grants are quite light on checks and balances - but I don't think my lack of surprise comes from the same place as yours. I would be surprised and upset if Max Tegmark thought that neo-Nazis [had some things right] or [supporting them is an effective thing to do].

Wouldn't you?

Jason @ 2023-01-14T17:28 (+18)

I think viewing this as a "random" checks-and-balances failure does not adequately account for (1) the report that Tegmark's brother wrote for the news outlet and (2) FLI's stonewalling so far. If there's an innocent or merely negligent explanation, it is really difficult to understand why it hasn't been offered.

Aptdell @ 2023-01-15T05:18 (+22)

Thanks for writing this!

I've seen some posts on this forum discussing HBD as an is/ought issue -- something like: HBD is an "is", racial inegalitarianism is an "ought", and you can't derive an ought from an is.

I used to find this argument really compelling, and I still think it's powerful and underrated. But recently I've become more skeptical of it.

I think the is/ought boundary is not actually that firm. For example, consider the statement: "Most communities would be better off if adulterers received severe social sanction."

  • You could argue this is an "ought" claim. A person who says "adultery is deeply immoral" is essentially saying we should apply severe social sanction to adulterers.

  • You could argue this is an "is" claim which is empirically testable. Define a welfare metric, identify some communities, randomly assign half to a "shame adulterers" condition, see how the welfare metric is affected.

In the same vein, even if you believe you're a "high decoupler", there's a good chance you don't decouple as much as you think. Advertising is a multi-billion dollar industry even though people claim ads don't affect them. Humans are vulnerable to biases like the affect heuristic. We aren't perfect logical reasoners, especially when tribal politics are involved. The "pipeline" you describe may go to show that lots of "high decoupler" types are low-decoupling in practice.

And, even if you believe you're a "high decoupler", you have to acknowledge that the world is full of "low decouplers". I strongly agree with the arguments Coleman Hughes makes in this discussion with Charles Murray, re: negative societal effects of widespread HBD discussion.

I think a reasonable takeaway from the recent SBF tragedy is that on the margin, we should defer more to mainstream elite opinion (in SBF's case, crypto skepticism). And mainstream elite opinion says you don't talk about race & IQ. Maybe that's an adaptive response to an information hazard. Chesterton's Fence comes to mind.

Ives Parr @ 2023-01-19T00:18 (+12)

What follows from a belief about differences depends on a person's moral framework. Most people have reasonable moral frameworks, and so nothing heinous follows. And most people without heinous beliefs do not want to be lumped in with people with heinous beliefs (as is extremely common). So they just do not talk about this stuff, further skewing the public impression of your average believer in differences.

What I think happens is that some people think that since it would be unjust for genetic disparities to exist, then genetic disparities do not exist. That's mixing up is and ought. We have no strong reason to think that the world is just. In tons of other domains, the world is incredibly unjust. 

The current taboos are not really a Chesterton's fence. People used to say all sorts of wildly offensive things not that long ago. Think of how rapidly our culture has come to accept transgenderism and how recently it was acceptable to mock cross-dressing.

Ives Parr @ 2023-01-19T00:04 (+14)

If you have the belief that groups are the same, then disparity in representation points to someone putting their finger on the scale (intentional racial discrimination).

If you have the belief that groups are different, then equality in representation points to someone putting their finger on the scale (intentional racial discrimination). 

Obviously these points of view are completely at odds with any sort of fair and inclusive community or organization 

A person who believes in disparities would expect a fair organization to have disparity. Many would like college admissions to be race-blind, but many suspect doing so would produce disparate representation, seeing as there are different averages for SAT/ACT/GPA.

I highly doubt you're a bad person. In fact, I would suspect that even if you believed in HBD, you would still care about the welfare of others of all races. I don't think you should feel so guilty.

DPiepgrass @ 2023-01-16T00:04 (+7)

I agree with the top part, but what are you referring to when you say "people with power in Effective Altruism believe in it"?

Sharmake @ 2023-01-15T02:21 (+6)

Example of their views: If Black people are not heavily under-represented in a cognitively demanding organization that is very strong evidence the organization is racist against White and Asian individuals!

This is seriously wrong, for some important reasons. A perfectly irrational and racist belief.

Ives Parr @ 2023-01-19T00:25 (+6)

It is not irrational given this person's beliefs:

If you have the belief that groups are the same, then disparity in representation points to someone putting their finger on the scale (intentional racial discrimination).

If you have the belief that groups are different, then equality in representation points to someone putting their finger on the scale (intentional racial discrimination). 

Whether or not this is "seriously wrong" depends largely on whether or not this person's belief is true. I suppose this could be called "racist", but I think that this term is not particularly useful because it is ambiguous, morally loaded, and seems to imply the belief is false (does a belief stop being racist if it turns out to be true? I'm not sure).

ZachWeems @ 2023-01-20T10:27 (+2)

Tangent: Out of curiosity, did you/ does your friend typically refer to (belief in meaningful genetically influenced racial IQ differences) as "HBD", as "part of/under HBD", or neither?

My impression was the term was mostly used by genetics nerds, with a small number of racists using the term as a fig leaf, causing the internet to think it was a motte-and-bailey in all uses. If people who mostly cared about the IQ thing used it regularly I suppose I was wrong.

(And to be clear since I'm commenting under my own name, meaningful genetically influenced racial IQ differences aren't plausible. My interest is the old internet drama.)

Richard Y Chappell @ 2023-01-15T17:59 (+45)

I wrote a substack post, 'Text, Subtext, and Miscommunication', that touches on some relevant issues.

calebo @ 2023-01-15T18:50 (+4)

This is a good post. 

It would probably be better received if the substantive contents were detailed here and the link wasn't placed first.

Richard Y Chappell @ 2023-01-15T19:34 (+3)

Thanks. To save time, I've gone the opposite route and just cut my very limited summary, in case it was giving a misleading impression. (I don't think the post itself contains anything controversial enough to make sense of the initial downvotes that were happening here...)

Omnizoid @ 2023-01-13T15:28 (+44)

Here, I explain my views on the matter.  https://benthams.substack.com/p/hordes-of-vultures-descend-on-bostrom

Parrhesia @ 2023-01-13T23:03 (+12)

Great thoughts

AllAboutHustle @ 2023-01-15T20:17 (+39)

Are there any other key questions surrounding this debacle that I'm missing? 

Ability for Public Figures to Repent?
- Seems important over a span of time for a functioning society
- Relevant that this happened once 25+ years ago and no pattern of behavior
- Nobody seems to be disputing that the initial email was wrong to send

Should Bostrom's email/apology reflect negatively on EA?
- Yes from a Bayesian mindset, but very slightly
- Depends in part on whether one should disassociate Bostrom's broader ideas on Longtermism / AI Alignment from his actions 26 years ago - seems reasonable 

Does EA's response reflect negatively on EA? 
- Depends on if you want EA to have a "strong view" towards these questions; if so, you should probably update negatively on EA regardless since there have been a variety of responses
- If you prefer EA to have a diversity of viewpoint (seems reasonable) then maybe positive update on EA

What else should have been included in Bostrom's apology? 
- More explicit language around group intelligence differences? (what if he doesn't feel certain on that / doesn't want to lean into an Info-Hazardous-if-True topic?)
- More empathy in tone? Seems like a solid criticism - what specifically ought to have been included? (And are we being welcoming to the neurodivergent?)

Ought someone to convince themselves against "Info-Hazardous-if-True" beliefs? 
- Epistemic integrity (seems unhealthy)
- Relative benefits of 10-foot pole position (easier to not talk about it vs showing certainty?)

Ought someone to discuss "Info-Hazardous-if-True" beliefs?
- But doesn't that give extremists free rein over ideas?
- Seems like empirically it works alright to just not feed the fire? (do conspiracy theories/extremism disagree?)

Does this forecast future bad actions from Bostrom? 
- Likely not - would look to his pattern over last 25 years
- Prediction markets seem to have faith in Bostrom

David Mears @ 2023-01-16T11:41 (+21)

I don't think people rejecting Bostrom's apology are rejecting it on the basis that public figures can't repent. We just don't think it was a functional apology.

AllAboutHustle @ 2023-01-16T21:01 (+8)

Yes, I believe that would be covered in my 4th point, "what else should have been included in Bostrom's apology". I suppose it could be rephrased as "what else should have been included in/excluded from Bostrom's apology", and one could argue he should have excluded the latter paragraphs from his apology.

AnonymousEAForumAccount @ 2023-01-15T17:09 (+38)

Bostrom’s apology letter references another apology he made immediately after his original use of the racial slur:

I completely repudiate this disgusting email from 26 years ago. It does not accurately represent my views, then or now. The invocation of a racial slur was repulsive. I immediately apologized for writing it at the time, within 24 hours; and I apologize again unreservedly today. I recoil when I read it and reject it utterly.

 

It’s non-trivial to find his original apology, so I’ve shared it here for anyone who is interested.

 

>---collateral damage to Extropianism as a whole

Yes, that's what I'm concerned about. Transhumanism is weird
enough anyway; perhaps we should not make it even stranger
by adopting an offensive jargon.

Now, I am saddened by the reception of my message by some of
the other people on this list. It's my fault. In order to
make my point I thought that I should pick an example that
really was offensive. Well, it seems I succeeded only too
well. Let me say again, just to make sure that there be no
misunderstandings: I am NOT a racist. And I do not think we
should discuss the topic further. My apologies to "Damaged
Justice", Howard Julien, Pat Hardy and anybody else whom I
may have misled by my original reply to David Musick.

Personally, I found the original apology (and, FWIW, Bostrom’s most recent one too) disappointing. While I appreciate that he acknowledges culpability ("it's my fault"), it seems like the things he’s most concerned about are potential harm to transhumanism and people thinking he’s a racist. It would have been a MUCH stronger apology in my opinion if he had straightforwardly said it’s wrong to use racial slurs and apologized for having done so.

To be clear, I don’t think bad ideas expressed decades ago warrant automatic cancellation. I’m not sure what, if any, repercussions Bostrom should face now. But since he did cite his original apology as exculpatory evidence, I thought I should share that content so people can draw their own conclusions.

Jason @ 2023-01-13T14:00 (+33)

I think this approach is probably appropriate here with two caveats:

First, there should be some sort of objective criterion in the future to potentially trigger this action, to prevent giving the impression that it was being used to inhibit discussion of a topic. Since the apparent intent is to avoid swamping the frontpage, that criterion might be a certain number of posts on the frontpage.

Second, there should be a limited period in which the moderators will approve regular-post status upon request if a poster certifies that they were actively working on a post at the time the policy was activated. People have clearly invested significant time and attention into their posts on this and similar matters with the expectation that their post will receive the standard amount of visibility. Significantly reducing the amount of engagement they will likely receive after they have made that investment seems unfair. Moreover, creating a rush to get your post out there -- because you don't know when new posts will be relegated to personal-blog status without warning -- incentivizes hot takes and rushed work, which is undesirable.

Chris Leong @ 2023-01-14T04:39 (+5)

Agreed on the importance of ensuring fairness between people with different opinions, though I disagree with this: "People have clearly invested significant time and attention into their posts on this and similar matters with the expectation that their post will receive the standard amount of visibility. Significantly reducing the amount of engagement they will likely receive after they have made that investment seems unfair". If this becomes a standing policy, then people will have been forewarned.

Jason @ 2023-01-14T17:17 (+6)

I'd agree with that view if the trigger criteria were sufficiently definite and published in advance (which would have other downsides), or if there were sufficient past practice for people to reasonably understand how the trigger would be applied in new circumstances.

"Mods may decide to cut visibility on high-volume current events" isn't enough forewarning for me.

titotal @ 2023-01-13T16:46 (+32)

Is this thread pinned to the front-page? Hiding all the posts mentioning the drama while letting the one allowed thread drift away could be characterized by onlookers as an attempt to sweep a serious issue under the rug, and I would argue has a silencing effect on discussion. 

Also, is a new thread going to be made if this one becomes too long? If there end up being, like, hundreds of comments in the only allowed discussion thread, I feel it would also damage discussion by clogging it all up into reply threads. 

Pablo @ 2023-01-13T19:34 (+104)

Many people are tired of being constantly exposed to posts that trigger strong emotional reactions but do not help us make intellectual progress on how to solve the world's most pressing problems. I have personally decided to visit the Forum increasingly less frequently to avoid exposing myself to such posts, and know several other EAs for whom this is also the case. I think you should consider the hypothesis that the phenomenon I'm describing, or something like it, motivated the Forum team's decision, rather than the sinister motive of "attemp[ting] to sweep a serious issue under the rug".

titotal @ 2023-01-13T22:57 (+11)

I meant "could be characterised as sweeping under the rug" as "people will use this to characterise EA as sweeping the issue under the rug", an accusation which has already started.

I am very sympathetic to people who don't want to deal with this drama, especially considering the highly sensitive content. But I'm also sympathetic to people who find the email controversy (and people's reactions to it) concerning and feel that it raises bigger issues. 

If someone hears of the controversy and wants to look at the EA response to it, I think it would reflect quite badly if they see zero discussion of it on the frontpage (this thread is currently 9 posts from the top and slipping), and learn that this is due to deliberate admin action. 

This is why I suggested that this thread should get pinned, and periodically renewed if too large. I think this solution allows people to both discuss the issue and to avoid it. 

Jason @ 2023-01-13T20:14 (+6)

Maybe the pinned thread could be called "Current Event Policy in Effect -- Megathread Here" or something nonspecific.

pseudonym @ 2023-01-13T21:04 (+1)

As expected, this is how it has been characterized by some

https://twitter.com/JotieFr/status/1613918086326788101
 

det @ 2023-01-16T12:56 (+30)

Here’s an attempt at a meta-level diagnosis of the conversation. My goal is to explain how the EA Forum got filled with race-and-IQ conversations that nobody really wants to be having, with everyone feeling like the other side is at fault for this.

First, the two main characters. 

Alice from Group A is:

Bob from Group B is:

I’m naturally more of a Group B, but as the discussion has evolved, I think I’ve moved toward understanding and agreeing with the concerns of Group A.[2] Hopefully this allows me to be moderately objective here -- but I expect I’m still biased in the B direction, so I welcome those who are naturally more A to tear this to shreds.

With the groundwork laid, here’s my potted conversation between Alice from A and Bob from B.

Alice: Bostrom’s apology is inadequate. He should completely renounce the position in the old email. Saying there’s a racial IQ gap is completely unacceptable, and he should renounce this too.

Bob:  I understand criticizing Bostrom’s apology, but as far as I can tell he was correct about the existence of an IQ gap. Here, look at these sources I found. You can’t ask him to say something false.

Alice: I absolutely do not want to discuss the question of whether or not there is an IQ gap. Please don’t bring up this question, it will be extremely alienating to tons of people for no benefit.

Bob: Hold up, it seems to me like you made a factual claim about race and IQ before I did. I’m just continuing the conversation you started. Am I not allowed to point out your mistake?

Alice: If you go around discussing questions of race and IQ, people will assume that you’re a racist. It could be ok to discuss this question in narrow contexts in academia, but it’s not ok here and the discussion is going to make us all look bad.

Bob: But you said something false! Are you saying we have to lie for good PR? I don’t support that.

Alice: I’m saying I don’t want to be having this object-level conversation, can't we just agree to condemn racist ideas?

[debate continues, neither side is happy about it.]

  1. ^

    I don’t mean to imply by this framing that diversity and epistemics are inherently in opposition -- this is just an observation that each side mentions one more than the other. I expect both A and B care about both values.

  2. ^

    Remembering other forums that were practically split apart by discussions of group IQ differences was one big update for me toward “discussing this on the EA forum is really bad.” This makes me sympathize more with wishing the conversation could have been avoided at all costs, although I'm less sure what to do going forward.

Peter Wildeford @ 2023-01-16T15:45 (+27)

Look, I really didn't want to write about this, but here we are.

I'm very upset with Nick Bostrom.

His original email was terrible, racist, and offensive.

His apology was absolutely idiotically executed.

Here I wrote a very personal post where I explain why I feel this way.

AnonymousQualy @ 2023-01-14T05:25 (+27)

Just based on the discussion I've seen so far, I don't think people are taking this issue seriously enough.   Reputational costs from stuff like this are real and they are large.  

I'm relatively new here. I was still willing to tell people about EA after the FTX debacle, despite it being pretty damaging to my credibility. But this incident has changed that. All it would take is one unfortunate Google search for someone to wonder whether I'm secretly racist. Of course, I'll keep donating to GiveWell and GiveDirectly, but telling other people about the movement is completely off the table for the time being. I even made this new anonymous account so that my name won't come up on this website.

I also think people are dramatically underestimating just how (1) morally terrible, and (2) scientifically unfounded Bostrom's statement and apology both were. Geneticists know race is a social construct at this point, with no basis in actual genes. Psychologists know IQ is a somewhat mysterious measure (no, scoring lower on an IQ test does not necessarily mean a person is "more stupid"). It is affected by things like income shifts across generations and social position. For Bostrom to even have that opinion as an educated 23-year-old was bad, but to not unequivocally condemn it today - despite the harm it can clearly cause - seems even worse.

Ives Parr @ 2023-01-19T00:47 (+25)

Population geneticists tend to use the term "populations", from what I understand, rather than "race". Race is an imprecise term. However, people of the white race, black race, and Asian race tend to have different allele frequencies. That is why 23andMe is able to determine a person's ancestry. People of a shared ancestry tend to be more related to one another. 

I'm not sure if IQ is particularly mysterious. There was a rise in average IQ  score across several decades in the 20th century. IQ tends to be the determiner of social position rather than the other way around. 

Regardless of the cause, Bostrom's statement about the relative scores of whites and blacks was accurate. I do not think it is "bad" in the moral sense. His view was not out of line with mainstream intelligence researchers at the time. It is still not inaccurate to recognize there exists a disparity. Progress-minded people still speak about college entrance exam disparities, which are highly correlated with IQ. 

I do not like the above fact. I would rather it were not true. But I do not think I am bad for believing it, because I think it is true and believing the truth is morally good. If I have been duped by the intelligence research community, then I have made an error in reasoning, but I still do not think I am immoral for doing so.

ZachWeems @ 2023-01-20T15:36 (+3)

Separate from my other comment:

|people of the white race, black race, and Asian race

I'm assuming this was completely unintended, but terms like "the X race" have very negative connotations in American English. Especially if X is "white". Better terms are "X people" or "people categorized as X".

"Blacks" also has somewhat negative connotations. "Black people" is better.

(I apologize on behalf of America for our extremely complicated rules about phrasing)

ZachWeems @ 2023-01-20T15:21 (+1)

I hard-disagreed for two reasons:

  • The mainstream-ness of the linked statement is heavily disputed. A person in 1996 could, of course, have reasonably been unaware of this. (You may have intended to link to the 1996 APA report Intelligence: Knowns and Unknowns instead?)
  • Accuracy about genetics and race is unusually important in charged conversations like this, and your 1st paragraph seems to miss an important point: categories like "black", "white" and "Asian" are poor choices of genetic clusters. This is part of why population geneticists will call race a social construct: if you set out to find "racial" clusters of alleles (which is generally asserted to be low-value), you will find far better fits than society's standard racial groupings.

You're correct that "race" in the social sense has nonzero genetic meaning. However this doesn't mean that members of the same "race" are particularly related. For example, my understanding is that a Korean, a Scotsman, an indigenous Australian, and a Meru would all likely share more alleles than any would with a Tuu. Yet the last two or three would be categorized as "black". You could make a computer program that correctly predicts someone's "race", but it would be doing something equivalent to saying "this person is probably Meru, and Meru are labelled 'black'".

Ives Parr @ 2023-01-21T20:14 (+13)

I meant to link to Gottfredson's statement. Do you think that black people and other racial groups scored equally on IQ tests in 1996? I don't. My point is that a good number of people held this belief, and if Bostrom formulated a true belief, it seems odd that he should face criticism for this. If you think it is false, we can discuss more.

I don't know exactly whether it is a "poor choice", but the reason people talk about genetics and race is that they believe the social categories have different gene-variant frequencies, resulting in phenotypic differences on socially relevant traits.

The Tuu are an unusual case. I fully grant that many would see a Tuu and not recognize that they are genetically much more distant. But most Americans have probably never even met a Bushman (I think this is a more respectful term than "San"). I do not think that these categories are perfectly defined and unambiguous, and yet I think they can have genetic differences. 

This may not apply to you in particular, but I feel there are often isolated demands for semantic precision. People don't object as often to arguments about race in this way in other contexts. For example, "black people are abused by the police more" doesn't get the response of "what do you mean by black? Is a mixed-race person black? What if they look mostly white? What counts as police? Does that include security guards? What if a police officer abused a black person but it turned out they were actually a rather dark-skinned Sri Lankan? Do Bushmen count as black?" I understand what progressives are talking about when they say Black people, even if there is no platonic ideal definition. And although you can find some counterexamples, I think it is generally true that black people tend to be more related to each other than to white people.

ZachWeems @ 2023-01-23T02:13 (+1)

|I meant to link to Gottfredson's statement. Do you think that black people and other racial groups scored equally on IQ tests in 1996? I don't.

My disagreement was with the characterization of Gottfredson's statement as mainstream when this is disputed by mainstream sources. 

It is true that there was a difference in IQ scores, so I suggested a less disputed source saying so.

|People don't object as often  to arguments about race in this way in other contexts. For example, "black people are abused by the police more" doesn't get the response of "what do you mean by black?..."

Perhaps I was overly harsh in my initial reply. However, I do endorse being very rigorous when talking about the overlap of race and genetics. In the police example, by contrast, we generally assume that any influence of race on a given interaction operates through the social labels.

|I do not think that these categories are perfectly defined and unambiguous, and yet I think they can have genetic differences.

The issue I find relevant isn't vagueness; it's that the standard ways to subdivide humans into 3-10 races don't cleave reality at the joints.

If I understand correctly, ignoring recent intermixing, humans can be divided into the highly genetically diverse "Khoisan" and the much more populous and less diverse non-Khoisan. Descendants of the out-of-Africa migration group (i.e. people who aren't of sub-Saharan ancestry) are effectively one branch of non-Khoisan. 

|And although you can find some counterexamples, I think it is generally true that black people tend to be more related to each other than to white people.

Ignoring recent intermixing, I think this is actually false, and it may remain false if we ignore the Khoisan peoples. On average, a randomly selected black person may be more closely related to a randomly selected white (or Asian) person than to another randomly selected black person. (Whereas white or Asian people would be more closely related to their own group.) This can happen if multiple clusters are grouped together under one label.

Whereas a couple weakened versions of your claim are true: 

"Socially defined labels contain nonzero information about genetics, such that you can predict someone's racial label with very good accuracy by looking at their genome, much more so than if people had been randomly assigned to racial groups."

And, "You can decompose racial groups into a reasonably small number of subgroups, such that a randomly chosen member of a subgroup is on average closer to another random member than to a random member of another group" is probably true as well.

Sharmake @ 2023-01-14T16:27 (+9)

Agree with this, and think that HBD is getting seriously concerning in its prominence.

ZachWeems @ 2023-01-20T14:15 (+3)

I disagree with the first and last sentences of the last paragraph: while Bostrom's statements were compatible with a belief in genetically influenced IQ differences, he did not clearly say so.

That said, it isn't to his credit that he hedged about it in the apology.

ChristianKleineidam @ 2023-01-20T16:07 (+11)

Yes, when it comes to judging people for what they said, it's useful to focus on what they actually said. 

Generally, if you have to focus on things that a person didn't say to fuel your own outrage, that should be taken as a sign that what they actually said isn't as problematic as your first instinctual response suggests. 

Sharmake @ 2023-01-20T14:28 (+1)

Psychologists know IQ is a somewhat mysterious measure (no, scoring lower on an IQ test does not necessarily mean a person is "more stupid"). It is affected by things like income shifts across generations and social position. For Bostrom to even have that opinion as an educated 23-year-old was bad, but to not unequivocally condemn it today - despite the harm it can clearly cause - seems even worse.

I disagree, because I think the evidence from psychology is that IQ is a real measure of intelligence, and while a lot of old tests had high cultural biases, the modern ones are way better.

That stated, I still strong-upvoted your comment, because PR and looking good matter, and you are correct on the genetic-science point that there is evidence against real-life subspecies/races.

DPiepgrass @ 2023-01-16T00:44 (+1)

Please believe me when I say it is not clear to me which opinion you believe Bostrom had in the 90s, or in what important sense his recent apology was not "unequivocal" (was something important missing? was something present that shouldn't have been?), or whether you believe he still holds a related bad opinion today.

AnonymousQualy @ 2023-01-16T01:21 (+25)

it is not clear to me which opinion you believe Bostrom had in the 90s

I don't know and am not really interested in whatever Bostrom's actual opinion in the 90s was because I'm a consequentialist, not a virtue ethicist.  Susan II's post above highlights the reasons Bostrom should have expected his statement to be interpreted as a racist one, and why it was in fact reasonable for people (who both agree with and disagree with it) to interpret it that way.

was something important missing? what something present that shouldn't have been?

I think that drawing attention to racial gaps in IQ test results without highlighting appropriate social context is in-and-of itself racist.  We live in a world where ideas about differences in intelligence between races have caused a lot of suffering - more suffering than most other ideas out there.

I think the ideal apology would have at least walked through the history of claims of racial differences in intelligence and the harms they motivated, acknowledged their continued ability to cause harm, provided appropriate social context for the difference in IQ scores and apologized for the lack of it in the statement from the 90s, and highlighted the implausibility of a genetic basis for the difference.

If we disagree about the implausibility of a genetic basis for the difference in IQ scores, I'm not really interested in debating it.  My view is that:

  • I find the research suggesting no genetic basis for racial IQ differences credible
  • I do not find the survey that people cite to the opposite effect compelling (it acknowledges that it is highly unrepresentative - as an internet survey with a high nonresponse rate would be)
  • I believe the scientists who say that race is a social construct, not a biological one
  • I believe the scientists who point to clear environmental influences on IQ
DPiepgrass @ 2023-01-22T13:24 (+8)

I think that drawing attention to racial gaps in IQ test results without highlighting appropriate social context is in-and-of itself racist.

Why is it that this doesn't count as highlighting appropriate social context?

I also think that it is deeply unfair that unequal access to education, nutrients, and basic healthcare leads to inequality in social outcomes, including sometimes disparities in skills and cognitive capacity. This is a huge moral travesty that we should not paper over or downplay. [apology paragraph 2]

I guess you could say that the social context is only mentioned rather than highlighted, and that there is more context he could have added.

Neel Nanda @ 2023-01-13T14:17 (+26)

I would personally prefer a separate option to allow personal blog posts but not drama-of-the-day posts - I like seeing personal blog posts, but want to avoid drama.

JP Addison @ 2023-01-13T15:05 (+27)

You could temporarily set the "Nick Bostrom" tag to "Hidden" on your frontpage.

Yadav @ 2023-01-13T14:42 (+7)

Yeah, I concur. I feel like this could have been under a separate tag that we could have hidden instead. 

JP Addison @ 2023-01-13T17:45 (+3)

I thought of that. The reason I didn't suggest that approach was rather mundane. This post needs to have the "Nick Bostrom" tag. Currently, someone who wants to hide all Bostrom-email content from the frontpage can hide the Bostrom tag, and it will hide this post as well. In the approach where we default-hide the Nick Bostrom tag, we would need to do some ugly kludge to make it so that, by default, this post shows up on the frontpage. The approach taken was simpler. Maybe in the future it'd be worth it to do that ugly kludge? I'm unconvinced, but that's a pretty lightly held view.

Linch @ 2023-01-20T08:42 (+25)

I wrote some thoughts on the discourse that has emerged so far, with the upshot that I wish people were kinder and more charitable to each other.

Nathan Young @ 2023-01-14T17:03 (+17)

It seems the key points here are thus:

This is a hard circle to square, since for both groups these are key points (intellectual openness and psychological safety / lack of racism).

I think this is a solvable problem, but I don't know what the solution is. Does anyone have any thoughts?

DPiepgrass @ 2023-01-16T00:53 (+8)

Recognizing that there are two groups, both with legitimate concerns, is a good start.

I don't see any way around the need to communicate. I think some people view their opinion as so clearly right that little explanation is necessary. Here's an old essay arguing that the modern world doesn't work that way — not even on the EA Forum!

Alex Harris @ 2023-02-03T11:39 (+15)

It doesn't seem like the done thing to talk mainly about feelings on this forum, but I think that here they are relevant.

I am very upset by this and very disappointed. By the original email, by the quality of the apology, and by the nature of the discussion here on the forum. 

Many people here in the comments are discussing what science can/cannot say with certainty about the IQ of different groups, and whether technically speaking the sentences in Bostrom's apology are factually accurate. I think this is missing the point.

Being super rational is important in matters such as deciding how most effectively to allocate funding, but I don't think that is the mindset we should be taking here. I think some people see all discussion as a mere iterative process for coming closer to the truth, but human communication is a lot more than that - you don't 'prove' to someone that they shouldn't be offended by something someone said. Attitudes are expressed by the things you choose to say (even if you never say a falsehood), and right now the community needs to be signalling that it cares about people of colour. 

Many have mentioned that we have a neurodiverse community. I really do sympathise with those who find this sort of approach, or deciding when to take this sort of approach, difficult. I honestly don't know what to suggest. 

I have friends who are highly effective, intelligent, and compassionate individuals. I hope to be able to slowly convince them to be EAs but I'd really struggle to tempt them into a community that deals with a matter like this so coldly. If they saw the discussion in this comment section they would walk away, and in their minds, they'd have a big red X over anything associated with EA.

I was tempted not to write this and to just have my personal feelings of identification and association with this community reduced slightly. I hope others who feel like me won't do that either.

We need to do better at being inclusive. Our goals are compassionate but I think we would do better if we signalled compassion day-to-day.

Amber Dawn @ 2023-01-19T13:55 (+14)

Some musings on apologies in general:

I feel like I’ve read quite a few comments which imply that, *because* Bostrom proactively brought up this email in order to apologise for it, we shouldn’t be criticising him or being so harsh on him.

Setting aside the questions of whether the apology was sufficient, or whether he really brought it up proactively or because he feared exposure anyway, or whether the apology did or did not “double down” on the original wrongdoing…

…I think it’s wrong to say that ‘if Bostrom brought this up purely out of remorse, and apologised perfectly, he should lose no points.’

I think some of the commenters have an implicit model of apologies which is like ‘doing a Bad Thing means you lose some points; apologising correctly for the Bad Thing means you get the points back’. This is not how apologies work, and nor should it be. If this was how apologies worked, then there would be no social disincentive for doing bad things, provided you learned how to make proper apologies.

My model of how apologies work is more like ‘doing a Bad Thing means you lose some points; apologising correctly means that you lose fewer points (but you still lose some). Apologising also paves the way towards you eventually regaining your lost points with your community (rather than them, for example, distancing themselves from you), because they can *trust* you not to do it again’.

Unfortunately, often people make apologies *because* people are upset with them for the Bad Thing and they want people to stop being upset with them. This is understandable, but it makes it difficult to introspect and produce a *sincere* apology (or, conversely, to decide *not* to apologise because you actually don’t think the thing was bad) - because you’re too focussed on the goal of ‘stop people from being mad at me’. And people can kind of sense that desperation, and you *seem* less sincere. So I guess my advice to an apologiser would be something like ‘set aside the goal of “making people less mad at you” or “regaining your lost points immediately” - they’ll be mad at you either way, and your points are lost. Instead, set yourself the goal of “convincing them to trust you going forward, so you can regain the points over time.”’

Agustín Covarrubias @ 2023-01-14T03:42 (+14)

I strongly support this model of structuring conversations during emotionally intense situations like these. We still have to see how well it works, but my prior is that this would have been a big improvement during parts of the FTX crisis, and it's useful now.

During the FTX crisis, I published a couple of posts regarding SBF interviews, and I've now come to regret bringing that type of discussion to the forum. Most discussions weren't productive, and a significant number of people were left with emotional burnout from that period.

Back then, and now again, there has been a shared sentiment of tiredness and burnout around these discussions. We should be open to criticism, and there should be a place to have these discussions, but I don't think our current approach is working well.

kbog @ 2023-01-21T21:43 (+13)

GCRI Statement on Race and Intelligence | Global Catastrophic Risk Institute (gcrinstitute.org)

Seth Baum on behalf of GCRI writes a statement on the controversy.

Worth noting that in this statement Seth justifies the practice of digging through old emails in order to expose offensive statements people have made in the past. Based on this, I assume there is a risk that Seth/GCRI might leak anything I say in an email, and if they have this mindset then I worry that other EA organizations might have it as well. I would prefer to deal with EA organizations which express a commitment against leaking private communications. 

AnonymousGuest @ 2023-01-17T15:45 (+13)

I wrote a short post aiming to explain some reasons to uphold norms that are common in other communities, even when the instrumental value of those norms isn’t obvious.  

One of the goals of this post is to explain a subset of the reasons why I think some people believe Bostrom’s apology and some statements in the subsequent conversation have been inadequate or harmful.

As a two-sentence summary:

peterhartree @ 2023-01-13T14:00 (+13)

Please also feel free to give us feedback on this thread and approach. This is the first time we’ve tried this in response to events that dominate Forum discussion. You can give feedback by commenting on the thread.

Seems good to me, especially as a "quick experiment".

I shared some related thoughts on Chris Leong's recent thread: Should the forum be structured such that the drama of the day doesn't occur on the front page?

sapphire @ 2023-01-16T08:27 (+9)

This was posted in 2019 by a group of EAs from underrepresented groups. Wish it had been taken seriously. 

 

https://forum.effectivealtruism.org/posts/nqgE6cR72kyyfwZNL/making-discussions-in-ea-groups-inclusive

Aithir @ 2023-01-21T11:26 (+8)

So far his e-mail has gotten relatively little media attention, his English Wikipedia page was changed (not the German one though) and there was little social media outrage.

This seems like a pretty good outcome for him. The reasons I can think of for why that happened are:

  1.  His strategy of preemptively publishing the e-mails worked.
  2.  He has no social media presence which would be the natural place for people to pile up on him.
  3.  He might simply have gotten lucky.
Aleks_K @ 2023-01-21T14:08 (+16)

Another possible reason: He (and EA and longtermism) are not actually that interesting for most people.

David Mears @ 2023-01-15T11:38 (+8)

The forum ought to have a policy in place about whether object level discussion on the aetiology of group differences is permitted, in advance of that discussion happening. (Otherwise it will have to make one up on the fly.)

Nathan Young @ 2023-01-14T16:53 (+8)

I think that EA orgs should not cut ties with Bostrom, but if he were found to be using this language again then we should do so for a period, e.g. a year.

I think this has probably really harmed Bostrom so I am satisfied that others won't do this. That said, I'm curious to hear from those who think EA hasn't done enough here.

Nathan Young @ 2023-01-15T02:07 (+6)

Sending that email was really bad. 

Gideon Futerman @ 2023-01-15T02:14 (+14)

And the equivocal nature of the apology was also bad (and perhaps more morally relevant, as it is current rather than historic)

Ives Parr @ 2023-01-18T18:44 (+7)

I think that if sending the email was bad because it was offensive, then resurfacing the email is even more immoral, and further spreading it on the forum for others to see should be regarded as immoral as well. What do you think about that argument?

Nathan Young @ 2023-01-18T18:48 (+3)

So I think it's a little underrated as an argument. I don't think it's exactly true, but I guess I think it could have been better framed to avoid all this - i.e. the first person could have made clear they thought the email was racist and bad and that we should take a moment before responding.

Hard, but I think we owe it to our community to not get everyone upset.

Ives Parr @ 2023-01-18T19:08 (+41)

In my view, Bostrom was making a point about offensive language and decided to use actually offensive examples. I think the appropriateness of this depends on context. I don't know the attitudes of the email group, but if nobody was particularly offended, or only a few people were, it seems like minimal harm.

However, whoever was looking back through old emails was going to deliberately spread it to an audience thousands of times larger, which increases the number of offended persons by orders of magnitude.

If this were about offense then either the person rummaging through old emails or 2023-Bostrom is the most morally blameworthy. I think the actual issue is that Bostrom harbors politically incorrect views and people want to lower his status, this was just the best way of doing so.

I think honestly that if people are extremely upset by the text, they should not discuss the text and avoid it for their own mental well-being.

anonymous8101 @ 2023-01-14T02:20 (+6)

(content warning: discussion of racially motivated violence and coercion)

 

I wanted to share that I think it's not bad to think about the object level question of whether there are group differences in intelligence rooted in genetic differences. This is an empirical claim, and can be true or false.

My moral beliefs are pretty rooted in egalitarianism. I think as a matter of policy, but also as a matter of moral character, it is good and important to treat the experience of strangers as equally valuable, regardless of their class or race. I do not think more intelligent people are more worthy of moral consideration than less intelligent people. I think it can be complicated at the extremes, especially when considering digital people, animals, etc., but that this has little bearing on public policy when concerning existing humans.

I don't think genetic group differences in intelligence are likely to be that relevant given I have short AI timelines. If we assume longer timelines, I believe the most likely places they would be important in terms of policy would be in education and reproductive technology. Whether or not there are such differences between groups now, there could easily come to be large differences through the application of embryo selection techniques or other intelligence enhancing technologies. From an egalitarian moral framework, I suspect it would be important to subsidize this technology for disadvantaged groups or individuals so that they have the same options and opportunities as everyone else. Even if genes turn out to not be a major cause of inegalitarian outcomes today, they can definitely become a major cause in the future, if we don't exercise wisdom and thoughtfulness in how we wield these technologies. However, as I said, I don't expect this to be very significant in practice given short AI timelines.

Most importantly, from my perspective, it's important to be able to think about questions like this clearly, and so I want to encourage people to not feel constrained to avoid the question because of fear of social censure for merely thinking about them. For a reasonably well researched (not necessarily correct) discussion of the object level, see this post:

[link deleted at the author's request; see also AnonymousCommentator's note about the racial IQ gap]

I think it's important context to keep in view that some of the worst human behaviors have involved the enslavement and subjugation of whole groups of people, or attempts to murder entire groups—racial groups, national groups, cultural groups, religious groups. The eugenics movement in the United States and elsewhere attempted to significantly curtail the reproductive freedom of many people through extremely coercive means in the not-so-distant past.  Between 1907 and 1963, over 64,000 individuals were forcibly sterilized under eugenic legislation in the United States, and minority groups were especially targeted. Presently in China, tens of thousands of Uighurs are being sterilized, and while we don't have a great deal of information about it, I would predict that there is a major element of government coercion in these sterilizations.

Coercive policies like this are extremely wrong, and plainly so. I oppose and condemn them. I am aware that the advocates of these policies sometimes used genetic group differences in abilities as justification for their coercion. This does not cause me to think that I should avoid the whole subject of genetic group differences in ability.  Making this subject taboo, and sanctioning anyone who speaks of it, seems like a sure way to prevent people from actually understanding the underlying problems disadvantaged groups or individuals face. This seems likely to inhibit rather than promote good policy-making. I think the best ways to resist reproductive and other forms of coercion go hand in hand with trying to understand the world, do good science, and have serious discussions about hard topics. I think strict taboos around discussing an extremely broad scientific subject matter hurt the ability of people to understand things, especially when the fear of public punishment is enough to prevent people from thinking about a topic entirely.

Another reason people cite for not talking about genetically mediated group differences, even if they exist, is that bringing people's attention to this kind of inequality could make the disadvantaged feel terrible. I take this cost seriously, and think this is a good reason to be really careful about how we discuss this issue (the exact opposite of Bostrom's approach in the Extropians email), and a good reason to include content warnings so anyone can easily avoid this topic if they find it upsetting.

But I don't think forbidding discussion of this topic across the board is the right society-level response.

Imagine a society where knowledge of historical slavery is suppressed, because people worry it would make the descendants of enslaved people sad. I think such a society would be unethical, especially if the information suppression causes society to be unable to recognize and respond to ongoing harms caused by slavery's legacy.

Still, assuming that we were in a world like that: In that kind of world, we can imagine that the information leaks out and a descendant of slaves finds out about slavery and its legacy, and is (of course) tremendously horrified and saddened to learn about all this.

If someone pointed at this to say, "Behold, this information caused harm, so we were right to suppress it," I would think they're making a serious moral mistake.

If the individual themselves didn't want to personally know about slavery, or about any of the graphic details, that's fully within their right. This should be comparatively easy to achieve in online discussion, where it's easier to use content warnings, tags, and web browser apps to control which topics you want to read about.

But society-wide suppression of the information, for the sake of protecting people's feelings even though those individuals didn't consent to being protected from the truth this way, is frankly disturbing and wrong. This is not the way to treat peers, colleagues, or friends. It isn't the way to treat people who you view as full human beings; beyond just being a terrible way to carry out scientific practice, it's infantilizing and paternalistic in the extreme.

 

EA Forum top-level link

LessWrong top-level link

Sharmake @ 2023-01-16T21:20 (+5)

My views on what EA should learn from this event is the following:

  1. EA needs to articulate what moral views or moral values it will not accept in the pursuit of its goals. I don't believe EA can consider every moral viewpoint valid, due to the Paradox of Tolerance. Thus, moderators and administrators need to start working out what values or moral viewpoints the community will not accept, and they will need to be willing to ban or cancel people who violate this policy.

  2. EA has an apology problem: a lot of its apologies tend to be bad. Titotal has a good post on this; linking it here:

https://forum.effectivealtruism.org/posts/KB8XPfh7dJ9uJaaDs/does-ea-understand-how-to-apologize-for-things

And quoting a section:

Okay, let's go over the rules for an apology to be genuine and sincere. I'll take them from here.

  1. Acknowledge the offense.
  2. Explain what happened.
  3. Express remorse.
  4. Offer to make amends.

Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics.

Basically, EAs need to apologize way better than they are currently doing.

Finally, I thank CEA for disavowing his racist email quickly, and while I think it isn't perfect, I believe a lot of the criticism of CEA is misguided.

Sam Elder @ 2023-01-15T17:16 (+4)

I haven’t waded into the deep end of comments elsewhere but just want to make a simple point: the way Bostrom’s apology starts is awful.

“I do think that provocative communication styles have a place—but not like this!”

This comment just plainly doesn’t make any sense! It’s saying that he believes in provocation as a style, but has a problem with…the provocative style in which he once expressed that view. And there’s no further clarification of the places where provocative communication styles are appropriate, and how his thinking has changed (even if it changed within 24 hours) on whether this particular example was appropriate or not.

Everything in the entire discussion seems downstream of this core point, which is completely botched in the apology. There are similar moments in the rest (like the “what about eugenics?” discussion, or decrying “sloganeering”) that could be dissected alone but this one is emblematic enough on its own.

DPiepgrass @ 2023-01-16T01:10 (+18)

It made perfect sense to me. All kinds of authors know that there is value in provocative communication - for example, how about that book titled "Against Empathy"? Provocative, right? He's agreeing with that common viewpoint, but adding that there are wrong ways to be provocative, as exemplified by that awful old email.

On the other hand, I do not understand how "the entire discussion seems downstream of this core point". Which core point? (That this is the wrong way to be provocative? That provocative writing has a place?) I do think some people would be upset if he said too much about provocativeness.

Sam Elder @ 2023-01-16T02:31 (+4)

It’s not at all clear from the apology what subset of provocative communication styles Bostrom still believes in — and finds it necessary to defend categorically before even starting to apologize — and what subset he now condemns. As far as I can tell, he still “likes” “repugnant” formulations of statements one believes as costly signals of one’s commitment to truth-telling and merely regrets that this example will turn out to be too costly.

Nathan Young @ 2023-01-14T16:51 (+4)

I am open to the idea that Longtermism is hard to disentangle from eugenics. How about a "things that are unacceptable" statement? E.g.:

I don't love seeming to agree with criticisms that Longtermism is racist/eugenicist, but if it's a thing people believe then perhaps an open letter etc. is a good response.

Parrhesia @ 2023-01-15T23:46 (+9)

"Eugenics" as typically considered is very different from human genetic enhancement of the type in which parents voluntarily select embryos to implant during IVF, as discussed in Bostrom and Shulman's article "Embryo Selection for Cognitive Enhancement: Curiosity or Game Changer?" 

Eugenics of the twentieth century was bad because it harmed people. Who is harmed when a mother voluntarily decides to select an embryo that is genetically predisposed to be healthier, happier or more intelligent? To prevent a mother from having the right to genetically enhance (engage in "eugenics") would be coercion in reproduction. 

Seeing as embryo selection can extend a child's healthspan and improve their mental well-being by selecting against schizophrenia/depression/etc., I think it is extraordinarily moral to support this practice even if it could correctly be called "eugenics." I will stand on the side of less premature death and suffering even if it means I can be grouped with other bad "eugenicists" for rather tenuous reasons.

Guy Raveh @ 2023-01-14T17:47 (+1)

I currently think of Longtermism as an idea and not a set of people, and I don't think the basic idea that we're going to influence many people in the future and we should have their voice in mind is bad, or racist, or eugenicist.

However, I think some of the prevalent views among people who identify as Longtermists, and especially among the key figures promoting it - like Bostrom and MacAskill* - are indistinguishable from eugenics. They're pretty easy to think of as different, due to the points you raised - but I'm convinced they'd lead to the same kinds of outcomes.

*Edit: I want to clarify that I think this much more strongly about Bostrom than about MacAskill - e.g. I think MacAskill isn't racist - but my point still stands.

𝕮𝖎𝖓𝖊𝖗𝖆 @ 2023-01-14T19:37 (+52)

Very strong disagree here.

Bostrom endorses positive selection for beneficial traits (via e.g. iterated embryo selection), he doesn't support negative selection (i.e. preventing people who have less of the beneficial trait from reproducing).

I think positive selection for beneficial traits/human enhancement more generally is good.

Lukas_Gloor @ 2023-01-14T21:11 (+34)

I like that you make a distinction between longtermism, the idea, and other "related" views that are prominent among longtermists, but logically distinct from longtermism. 

I disagree with calling the other views (like transhumanism, though that's a broad tent) "indistinguishable from eugenics." I find that statement so wrong that I downvoted the comment even though I really liked that you pointed out the above distinction. 

On transhumanism among longtermists, I like Cinera's point about focus on positive selection, but I also want to make a quite different point in addition, on how many longtermists, as far as I'm aware, don't expect "genetics" to play a big role in the future. (People might still have views on thought experiments that involve genes; I'm just saying those views are unlikely to influence anything in practice.) Many longtermists expect mind uploading to become possible, at which point people who want to be uploaded can enter virtual worlds (and ones who don't want it can stay back in biological form in protected areas). Digital minds do not reproduce the biological way with fusion of gametes (I mean, maybe you could program them to do that, but what would be the point?), so the whole issue around "eugenics" no longer exists or has relevance in that context. There would then be lots of new ethical issues around digital minds, explored here, for instance. I think it's important to highlight that many (arguably most?) longtermists who think transhumanism is important in practice mostly mean mind uploading rather than anything related to genes. 

So, it might be interesting to talk about attitudes around mind uploading. I think it's very reasonable if some people are against uploading themselves. It's a different question whether someone wants to prohibit the technology for everyone else. Let's assume that society thinks carefully about these options and decides not to ban all forms of mind uploading for everyone. In that scenario, everything related to mind uploading becomes "transhumanism." There'll be a lot of questions around it. In practice, current "transhumanists" are pretty much the only people who are concerned about bad things happening to digital minds  or bad dynamics among such minds (e.g., Malthusian traps) – no one else is really thinking about these scenarios or considers them important. So, there's a sense in which you have to be a transhumanist (or at least participating in the discourse) if you think it matters what's going to happen with digital minds. And the motivation here seems very different from the motivation behind eugenics – I see it as forecasting (the possibility of) radical societal changes and thinking ahead about what are good vs. bad options and trajectories this could take.

𝕮𝖎𝖓𝖊𝖗𝖆 @ 2023-01-15T13:18 (+20)

I agree (strongly upvoted), but I think iterated embryo selection is likely to become feasible before mind uploading in the mainline. It may not be all that relevant to humanity's longterm future (genetics-based human enhancement needs decades to cause significant society-wide changes) except under long timelines, but long timelines are feasible, especially for mind uploading technology.

In the year of our Lord 2023, we still cannot upload C. elegans.

Nathan Young @ 2023-01-23T13:25 (+3)

I've wanted to write a steelman of the "intellectual honesty matters, especially now" position, but the versions I have written cause anger rather than empathy, which seems bad. That said, I still think it's worth doing. I think those who don't think that the condemnation of Bostrom is worse than what Bostrom did also have a position worth understanding and empathising with. 

Sam Elder @ 2023-01-15T17:34 (+1)

On the forum organization decision, two thoughts: