An instance of white supremacist and Nazi ideology creeping onto the EA Forum

By Concerned EA Forum User @ 2024-04-16T16:29 (–19)

On October 28, 2023, an EA Forum user posting under the name "Ives Parr" made a very long post arguing, among other things, that IQ is mainly determined by genetics, that IQ varies between racial groups as the result of genetics (although they are careful to avoid the word "race" in the post, preferring to talk about "nations"), and that social or environmental interventions such as education make little difference to true intelligence, even if they change IQ scores. 

Importantly, the sources the post cites connect it to white supremacist and Nazi ideology.

Mankind Quarterly

On March 28, 2024, the same user, "Ives Parr", posted a follow-up post that, among other things, defended their use of the pseudoscientific "journal" Mankind Quarterly as a source, after a commenter on the original post pointed out its "nasty associations". Just a few important facts about Mankind Quarterly, which only scratch the surface:

  • It was founded in the early 1960s with an editorial advisory board that included Otmar Freiherr von Verschuer, a Nazi eugenicist and mentor of Josef Mengele, and Corrado Gini, an architect of Italian fascism's racial demographic policies.
  • It has long been funded by the Pioneer Fund, a foundation chartered in 1937 to promote "race betterment".
  • It is now published by the Ulster Institute for Social Research, which was founded by Richard Lynn (more on him below).

When a commenter on the original post noted the disturbing provenance of Mankind Quarterly, the user posting as "Ives Parr" replied, defending the "journal":

The relationship between genes, IQ, race, and GDP is very controversial. Prestigious journals are hesitant to publish articles about these topics. Using the beliefs of the founding members in the 1930s to dismiss an article published in 2022 is an extremely weak heuristic.

Richard Lynn

Citing Mankind Quarterly does not appear to be a one-off fluke. In their original post on intelligence and race, "Ives Parr" frequently cited Richard Lynn, a self-described "scientific racist" who is quoted as saying in 1994:

What is called for here is not genocide, the killing off of the population of incompetent cultures. But we do need to think realistically in terms of the 'phasing out' of such peoples.... Evolutionary progress means the extinction of the less competent. To think otherwise is mere sentimentality.

Update #3 (Thursday, April 18, 2024 at 12:45 UTC): The SPLC has a profile of Richard Lynn with more information, including selected quotes such as this one:

I think the only solution lies in the breakup of the United States. Blacks and Hispanics are concentrated in the Southwest, the Southeast and the East, but the Northwest and the far Northeast, Maine, Vermont and upstate New York have a large predominance of whites. I believe these predominantly white states should declare independence and secede from the Union. They would then enforce strict border controls and provide minimum welfare, which would be limited to citizens. If this were done, white civilisation would survive within this handful of states.

The name "Lynn" appears a dozen times in the original post by "Ives Parr".

Emil O. W. Kirkegaard

A name that appears half a dozen times in that same post is "Kirkegaard", as in Emil O. W. Kirkegaard, a far-right figure who, notably, advocates for colonialism from an allegedly effective altruist perspective:

...an EA-utilitarianist case can easily be made for Western colonialism. With Westerners, the common people will experience better health (multiple examples above), economic growth (trade), justice (impartial courts), better governance, less war, less savagery (cannibalism, slavery). What's not to like? Surely, freedom can be given some value, but that valuation is not infinite, so we have to ask ourselves whether Africans, Samoans etc. were not better off as colonies.

He argues:

Another way to argue for this case is smart fraction theory. It turns out empirically that having relatively smart people in charge of the country is important, controlling for the average level of intelligence. The easiest way to create a large smart fraction for the people in the poorest part of the world is to install Western governments staffed mainly by Europeans and the local elites...

Kirkegaard also supports "ethno-nationalism", particularly in Europe. For example, he has stated, "In addition to low intelligence, Muslims seem to have other traits that make them poor citizens in Western countries."

"Ives Parr"

The person posting as "Ives Parr" does not appear to have merely cited these sources as an unlucky coincidence. Rather, the sources seem predictive of the sort of political views they are likely to endorse. For example, in a post on Substack titled "Closed Borders and Birth Restrictions", this person muses on the desirability of legally restricting births based on, among other things, "culture":

If you are worried that an immigrant may be more likely to vote Democrat/Left, commit a crime, retain their non-Western culture or be on welfare and believe that it is ethical to exclude them from migrating for these reasons, why is it not ethical to prevent someone from giving birth if their offspring are prone to all of these behaviors?

...I believe that if you are concerned about welfare, crime, IQ, culture and so on, then the optimal combination of border control and birth restrictions is not ~98% ~0% because you could be more optimal. Take IQ for example. You could prohibit the lowest 10% from having kids and have open borders for the top 10% of IQ scorers (90% 10%). If all you care about is IQ. But you could extend this to crime, voting, culture, etc. Set whatever criteria you want and permit immigration from the most XX% and prohibit birth for the least XX%.

Update (Thursday, April 18, 2024 at 07:45 UTC): The person posting as "Ives Parr" has also published an article under the same pseudonym in Aporia Magazine, a publication which appears to have many connections to white nationalism and white supremacy. In the article, titled "Hereditarian Hypotheses Aren't More Harmful", the person posting as "Ives Parr" writes:

Explanations for group disparities that allege mistreatment are actually more dangerous than genetic explanations.

Update #2 (Thursday, April 18, 2024 at 11:35 UTC): Aporia Magazine is one of the six Substacks that "Ives Parr" lists as "recommended" on their own Substack. Emil O. W. Kirkegaard's blog is another one of the six.

Conclusion

EA Forum users should be aware of these posts’ connections to white supremacist, Nazi, and fascist ideology and movements. Going forward, I urge vigilance against these kinds of posts, which may re-appear on the forum under a different name or in a different guise.


I’m posting under a pseudonym because (1) I don't want my name to be associated with white supremacists or Nazis in the public record, and (2) I don’t want to make it easy for white supremacists or Nazis to come after me if I stir up the hornet’s nest. What I write should speak for itself and be judged on its own merits and accuracy.


titotal @ 2024-04-16T17:10 (+61)

I want to remind people that there are severe downsides to having race and eugenics discussions like the ones linked here on the EA Forum.

1. It makes the place uncomfortable for minorities and people concerned about racism, which could someday trigger a death spiral where non-racists leave, making the place more racist on average, causing more non-racists to leave, etc.

2. It creates an acrimonious atmosphere in general, by starting heated discussions about deeply held personal topics. 

3. It spreads ideas that could potentially cause harm, and leads uninformed people down racist rabbit holes by linking to biased racist sources.

4. It creates bad PR for EA in general, and provides easy ammunition for people who want to attack EA.

5. In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.

6. In my opinion, most forms of eugenics (and especially anything involving race) are extremely unlikely to be an actually effective cause area in the near future, given the backlash, unclear benefit, potential to create mass strife and inequality, etc.

Now, this has to be balanced against a desire to entertain unusual ideas and to protect freedom of speech. But these views can still be discussed, debated, and refuted elsewhere. It seems like a clearly foolish move to host them on this forum. If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

Nathan Young @ 2024-04-16T18:50 (+20)

I agree in terms of random discussions of race, but this one was related to a theory of impact, so it does seem relevant for this forum.

I don't think we need to fear this discussion; the arguments can be judged on their own merits. If they are wrong, we will find them to be wrong.

If anything, I think on difficult topics those of us with the energy should take time to argue carefully so that those who find the topic more difficult don't have to. 

But I'm not in favour of banning discussion of theories of impact, however we look upon them.

Jason @ 2024-04-16T20:07 (+25)

But you can couch almost anything in terms of a theory of impact, at least tenuously, including stuff a lot worse than this. The standard can't be "anything goes, as long as the author makes some attempt to tie it to some theory of impact."

No online discussion space can be all things to all people (cf. titotal's first and second points).

Nathan Young @ 2024-04-17T08:51 (+3)

Sure, and I think that we should discuss anything with such a theory of impact. Or scan it and downvote it.

Here the system worked as it should, I think. 

Jason @ 2024-04-17T15:02 (+13)

Among other things, I don't think that solution scales well.

As the voting history for this post shows, people with these kinds of views may have some voting power at their disposal (whether that be from allies or brigaders). So we'd need a significant amount of voting power to quickly downvote this kind of content out of sight. As someone with a powerful strong downvote, I try to keep the standards for deploying that pretty high -- to use a legal metaphor, I tend to give a poster a lot of "due process" before strong downvoting, because a -9 can often contribute to the effect of squelching someone's voice.

If we rely on voters to downvote content like this, that feels like either asking them to devote their time to careful reading of distasteful stuff they have no interest in, or asking them to actively and reflexively downvote stuff that looks off-base based on a quick scan. As to the first, few if any of us get paid for this. I think the latter is actually worse than an appropriate content ban -- it risks burying content that should have been allowed to show on the frontpage for a while.

If we don't deploy strong votes on fairly short notice, the content is going to be on the front page for a while, and the problems that @titotal brought up strongly apply.

Finally, I am very skeptical that there would be any plausibly cost-effective actions for EAs to take even if we accepted much of the argument here (or on other eugenics & race topics). That further reassures me that there is no great loss in expecting those who wish to have those discussions to do so in their own space. The Forum software is open-source; they can run their own server.

Nathan Young @ 2024-04-17T22:43 (+3)

Seems like that solution has worked well for years. Why is it not scaling now? It’s not like the forum is loads bigger than a year ago.

Jason @ 2024-04-18T17:19 (+2)

I expect an increase in malicious actors as AI develops, both because of greater acute conflict with people with a vested interest in weakening EA, and because AI assistance will lower the barrier to plausible malicious content. I think it would take time and effort to develop consensus on community rules related to this kind of content, and so would rather not wait until the problem was acutely upon us.

Ives Parr @ 2024-04-17T05:34 (+9)

This person is creating a discussion of race and eugenics and trying to make me look very bad by highlighting extremely offensive but unrelated content. Quotations from cited authors or people who run a journal are quite irrelevant to my argument, which is aligned with EA values. These sorts of attacks distort your intuitions and make you feel moral disgust, but are largely irrelevant to my core argument. The author took a quote from an argument where I was trying to emphasize how much of a rights violation restrictions on immigration are and presented it in a misleading way; see Nathan Young's comment. Right after that I reveal I am against closed borders and birth restrictions (with the extreme exception of something like brother-sister marriage).

It seems the efforts to throw mud on me are what is actually inflammatory. The original post is not inflammatory in tone. Nor does it dive into race. It is the attackers of the post that are bringing up the upsetting content to tarnish my reputation. There is a similar attack pattern against EA, which aims to associate it with crypto-fraud. Many people in EA recognize these attacks as unfair, as the core mission of EA is virtuous. If you are actually worried about optics, then aggressively trying to broadcast to everyone how EA is hosting "white supremacists" and posting offensive (and unrelated) quotes does not seem to be helping.

I feel this is a wildly unfair attack. And it seems like people don't want me to defend myself, my reputation, or my article. They just want me to go away for optics reasons, but that lets censors win and incentivizes this sort of behavior of digging up quotes and smearing people.

In my opinion, the evidence and arguments are generally bad and rely on flawed and often racist sources.

The arguments are generally good. What can I do to defend against mere assertion but ask that people read the article and think for themselves?

If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

I am not misinformed. I worked hard on my article. Many people are not even reading what was written or engaging seriously with it except to claim that citations are racist.

It is sad to see EAs advocate for censorship.

nathan98000 @ 2024-04-21T03:11 (+3)

I think any discussion of race that doesn't take the equality of races as a given will be considered inflammatory. And regardless of the merits of the arguments, they can make people uncomfortable and choose not to associate with EA.

Chris Leong @ 2024-04-16T21:30 (+7)

If EA is trying to do the most good, letting people like Ives post their misinformed stuff here seems like a clear mistake. 

Disagree because it is at -36.

Happy to consider your points on the merits if you have an example of an objectionable post with positive upvotes.

That said: part of me feels that Effective Altruism shouldn't be afraid of controversial discussion, whilst another part of me wants to shift it to Less Wrong. I suppose I'd have to have a concrete example in front of me to figure out how to balance these views.

Jason @ 2024-04-17T01:07 (+6)

Which (if any) of titotal's six numbered points only apply and/or have force if the post's net karma is positive, as Mr. Parr's have been at certain points in time?

Ben Stewart @ 2024-04-16T22:44 (+41)

Below are the vote scores of the October 'Genetic Enhancement' post from internet archive snapshots. The post saw an early negative reaction, then a robustly positive one over the following week (+38 change, at minimum). It remained positive for 4 months. On March 13, David Thorstad tweeted about it, which was correlated with a significant decline. The voting pattern does not suggest the EA community quickly and thoroughly rejected the post. 
Oct 28 = +4 
Oct 30 = -5
Oct 30 = -14
Nov 5 = +23
Nov 5 = +24
Nov 6 = +24
Nov 6 = +24
March 13 = +18
March 29 = -16
March 29 = -20
April 4 = -13
April 5 = -15
April 10 = -15

David Mathers @ 2024-04-18T14:42 (+21)

People don't reject this stuff, I suspect, because there is, frankly, a decently large minority of the community who thinks "black people have lower IQs for genetic reasons" is suppressed forbidden knowledge. Scott Alexander has done a lot, entirely deliberately in my view, to spread that view over the years (although this is probably not the only reason), and Scott is generally highly respected within EA.

Now, unlike the people who spend all their time doing race/IQ stuff, I don't think more than a tiny, insignificant fraction of the people in the community who think this actually are Nazis/White Nationalists. White Nationalism/Nazism are (abhorrent) political views about what should be done, not just empirical doctrines about racial intelligence, even if the latter are also part of a Nazi/White Nationalist worldview. (Scott Alexander individually is obviously not a "Nazi", since he is Jewish, but I think he is rather more sympathetic, i.e. more than zero sympathetic, to white nationalists than I personally consider morally acceptable, although I would not personally call him one, largely because I think he isn't a political authoritarian who wants to abolish democracy.) Rather, I think most of them have a view something like "it is unfortunate this stuff is true, because it helps out bad people, but you should never lie for political reasons".

Several things lie behind this:

-Lots of people in the community like the idea of improving humanity through genetic engineering, and while that absolutely can be completely disconnected from racism, and indeed is a fairly mainstream position in analytic bioethics as far as I can tell, in practice it tends to make people more suspicious of condemning actual racists, because you end up with many of the same enemies as them, since most people who consider anti-racism a big part of their identity are horrified by anything eugenic. This makes them more sympathetic to complaints from actual, political racists that they are being treated unfairly.

-As I say, being pro genetic enhancement or even "liberal eugenics"* is not that far outside the mainstream in academic bioethics: you can publish it in leading journals etc. EA has deep roots in analytic philosophy, and inherits its sense of what is reasonable.

-Many people in the rationalist community are for various reasons strongly polarized against "wokeness", which again, makes them sympathetic to the claims of actual political racists that they are being smeared.

-Often, the arguments people encounter against the race/IQ stuff are transparently terrible. Normal liberals are indeed terrified of this stuff, but most lack the expertise to discuss it, so they just claim it has been totally debunked and then clam up. This makes it look like there must be a dark truth being suppressed, when it is really just a combination of two things: almost no one has expertise on this stuff, and in any case, because the causation of human traits is so complex, whenever some demographic group appears to score worse on some trait you can always claim the difference could be due to genetic causes, and in practice it's very hard to disprove this. But of course that is not itself proof that there IS a genetic cause of the differences. The result of all this can make it seem like you have to either endorse unproven race/IQ stuff or take the side of "bad arguers", something EAs and rationalists hate the thought of doing. See what Turkheimer said about this here https://www.vox.com/the-big-idea/2017/6/15/15797120/race-black-white-iq-response-critics: 

'There is not a single example of a group difference in any complex human behavioral trait that has been shown to be environmental or genetic, in any proportion, on the basis of scientific evidence. Ethically, in the absence of a valid scientific methodology, speculations about innate differences between the complex behavior of groups remain just that, inseparable from the legacy of unsupported views about race and behavior that are as old as human history. The scientific futility and dubious ethical status of the enterprise are two sides of the same coin.

To convince the reader that there is no scientifically valid or ethically defensible foundation for the project of assigning group differences in complex behavior to genetic and environmental causes, I have to move the discussion in an even more uncomfortable direction. Consider the assertion that Jews are more materialistic than non-Jews. (I am Jewish, I have used a version of this example before, and I am not accusing anyone involved in this discussion of anti-Semitism. My point is to interrogate the scientific difference between assertions about blacks and assertions about Jews.)

One could try to avoid the question by hoping that materialism isn’t a measurable trait like IQ, except that it is; or that materialism might not be heritable in individuals, except that it is nearly certain it would be if someone bothered to check; or perhaps that Jews aren’t really a race, although they certainly differ ancestrally from non-Jews; or that one wouldn’t actually find an average difference in materialism, but it seems perfectly plausible that one might. (In case anyone is interested, a biological theory of Jewish behavior, by the white nationalist psychologist Kevin MacDonald, actually exists [I have removed the link here because I don't want to give MacDonald web traffic - David].)

If you were persuaded by Murray and Harris’s conclusion that the black-white IQ gap is partially genetic, but uncomfortable with the idea that the same kind of thinking might apply to the personality traits of Jews, I have one question: Why? Couldn’t there just as easily be a science of whether Jews are genetically “tuned to” (Harris’s phrase) different levels of materialism than gentiles?

On the other hand, if you no longer believe this old anti-Semitic trope, is it because some scientific study has been conducted showing that it is false? And if the problem is simply that we haven’t run the studies, why shouldn’t we? Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.' 


All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda to strip non-whites of their rights, end anti-discrimination measures of any kind, and slash immigration, all on the basis of the fact that, basically, they just really don't like black people. In fact, given the actual history of Nazism, it is reasonable to suspect that at least some and probably a lot of these people would go further and advocate genocide against blacks or other non-whites if they thought they could get away with it. 




*See https://plato.stanford.edu/entries/eugenics/#ArguForLibeEuge

Wei Dai @ 2024-04-19T07:30 (+20)

Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.

Coincidentally, I recently came across an academic paper that proposed a partial explanation of the current East Asian fertility crisis (e.g., South Korea's fertility decreased from 0.78 to 0.7 in just one year, with 2.1 being replacement level) based on high materialism (which interestingly, the paper suggests is really about status signaling, rather than actual "material" concerns).

The paper did not propose a genetic explanation of this high materialism, but if it did, I would hope that people didn't immediately dismiss it based on similarity to other hypotheses historically or currently misused by anti-Semites. (In other words, the logic of this article seems to lead to absurd conclusions that I can't agree with.)

All this tends sadly to distract people from the fact that when white nationalists like Lynn talk about race/IQ stuff, they are trying to push a political agenda

From my perspective, both sides of this debate are often pushing political agendas. It would be natural, but unvirtuous, to focus our attention on the political agenda of only one side, or to pick sides of an epistemic divide based on which political agenda we like or dislike more. (If I misinterpreted you, please clarify what implications you wanted people to draw from this paragraph.)

Wei Dai @ 2024-04-20T04:40 (+7)

I want to note that within a few minutes of posting the parent comment, it received 3 downvotes totaling -14 (I think they were something like -4, -5, -5, i.e., probably all strong downvotes) with no agreement or disagreement votes, and subsequently received 5 upvotes spread over 20 hours (with no further downvotes AFAIK) that brought the net karma up to 16 as of this writing. Agreement/disagreement is currently 3/1.

This pattern of voting seems suspicious (e.g., why were all the downvotes clustered so closely in time?). I reported the initial cluster of downvotes to the mods in case they want to look into it, but have not heard back from them yet. Thought I'd note this publicly in case a similar thing happened or happens to anyone else.

Nathan Young @ 2024-04-20T08:54 (+4)

Yeah the voting on these posts feels pretty bizarre. Though I try not to worry about that. It usually comes out in the wash to something that seems right.

Wei Dai @ 2024-04-20T12:20 (+6)

I was concerned that after the comment was initially downvoted to -12, it would be hidden from the front page and not enough people would see it to vote it back into positive territory. It didn't work out that way, but perhaps could have?

nathan98000 @ 2024-04-21T03:22 (+3)

Any links to where Scott Alexander deliberately argues that black people have lower IQs for genetic reasons? I've been reading his blog for a decade and I don't recall any posts on this.

David Mathers @ 2024-04-22T21:25 (+2)

I should probably stop posting on this or reading the comments, for the sake of my mental health (I mean that literally, this is a major anxiety disorder trigger for me.) But I guess I sort of have to respond to a direct request for sources. 

 

Scott's official position on this is agnosticism, rather than public endorsement*. (See here for official agnosticism: https://www.astralcodexten.com/p/book-review-the-cult-of-smart)

However, for years at SSC he put the dreaded neo-reactionaries on his blogroll. And they are definitely race/IQ guys. Meanwhile, he was telling friends privately at the time that "HBD" (i.e. "human biodiversity", which generally includes the idea that black people are genetically less intelligent) is "probably partially correct or at least very non-provably non-correct": https://twitter.com/ArsonAtDennys/status/1362153191102677001 . That is technically still leaving some room for agnosticism, but it's pretty clear which way he's leaning. Meanwhile, he was also saying in private not to tell anyone he thinks this (I feel like I figured out his view was something like this anyway, though? Maybe that's hindsight bias): 'NEVER TELL ANYONE I SAID THIS, not even in confidence'. And he was also talking about how publicly declaring himself to be a reactionary was bad strategy for PR reasons ("becoming a reactionary would be both stupid and decrease my ability to spread things to non-reactionary readers"). (He also discusses how he writes about this stuff partly because it drives blog traffic. Not shameful in itself, but I think people in EA sometimes have an exaggerated sense of Scott's moral purity and integrity that this sits a little awkwardly with.) Overall, I think his private talk on this paints a picture of someone who is too cautious to be 100% sure that Black people have genetically lower IQs, but wants other people to increase their credence in that to >50%, and is thinking strategically (and arguably manipulatively) about how to get them to do so. (He does seem to more clearly reject the anti-democratic and the most anti-feminist parts of Neo-Reaction.)

I will say that MOST of what makes me angry about this is not the object-level race/IQ beliefs themselves, but the lack of repulsion towards the Reactionaries as a (fascist) political movement. I really feel like this is pretty damning (though obviously Scott has his good traits too). The Reactionaries are known for things like trolling about how maybe slavery was actually kind of good: https://www.unqualified-reservations.org/2009/07/why-carlyle-matters/  Scott has never seemed sufficiently creeped out by this (or really, at all creeped out by it in my experience). But he has been happy to get really, really angry about feminists who say mean things about nerds**, or in one case I remember, stupid woke changes to competitive debate. (I couldn't find that one by googling, so you'll have to trust my memory about it; they were stupid, just not worth the emotional investment.) Personally, I think fascism should be more upsetting than woke debate! (Yes, that is melodramatic phrasing, but I am trying to shock people out of what I think is complacency on this topic.)

I think people in EA have a big blind-spot about Scott's fairly egregious record on this stuff, because it's really embarrassing for the community to admit how bad it is, and because people (including me, often; I feel like I morally ought to give up ACX, but I still check it from time to time) like his writing for other reasons. And frankly, there is also a certain amount of (small-r) reactionary white male backlash in the community. Indeed, I used to enjoy some of Scott's attacks on wokeness myself; I have similar self-esteem issues around autistic masculinity as I think many anti-woke rationalists do. The currently strongly negative position is one I've come to slowly over many years of thinking about this stuff, though I was always uncomfortable with his attitude towards the Reactionaries.



*[Quoting Scott] 'Earlier this week, I objected when a journalist dishonestly spliced my words to imply I supported Charles Murray's The Bell Curve. Some people wrote me to complain that I handled this in a cowardly way - I showed that the specific thing the journalist quoted wasn’t a reference to The Bell Curve, but I never answered the broader question of what I thought of the book. They demanded I come out and give my opinion openly. Well, the most direct answer is that I've never read it. But that's kind of cowardly too - I've read papers and articles making what I assume is the same case. So what do I think of them?

This is far enough from my field that I would usually defer to expert consensus, but all the studies I can find which try to assess expert consensus seem crazy. A while ago, I freaked out upon finding a study that seemed to show most expert scientists in the field agreed with Murray's thesis in 1987 - about three times as many said the gap was due to a combination of genetics and environment as said it was just environment. Then I freaked out again when I found another study (here is the most recent version, from 2020) showing basically the same thing (about four times as many say it’s a combination of genetics and environment compared to just environment). I can't find any expert surveys giving the expected result that they all agree this is dumb and definitely 100% environment and we can move on (I'd be very relieved if anybody could find those, or if they could explain why the ones I found were fake studies or fake experts or a biased sample, or explain how I'm misreading them or that they otherwise shouldn't be trusted. If you have thoughts on this, please send me an email). I've vacillated back and forth on how to think about this question so many times, and right now my personal probability estimate is "I am still freaking out about this, go away go away go away". And I understand I have at least two potentially irresolvable biases on this question: one, I'm a white person in a country with a long history of promoting white supremacy; and two, if I lean in favor then everyone will hate me, and use it as a bludgeon against anyone I have ever associated with, and I will die alone in a ditch and maybe deserve it. So the best I can do is try to route around this issue when considering important questions. This is sometimes hard, but the basic principle is that I'm far less sure of any of it than I am sure that all human beings are morally equal and deserve to have a good life and get treated with respect regardless of academic achievement.

(Hopefully I’ve given people enough ammunition against me that they won’t have to use hallucinatory ammunition in the future. If you target me based on this, please remember that it’s entirely a me problem and other people tangentially linked to me are not at fault.)'

** Personally I hate *some* of the shit he complains about there too, although in other cases I probably agree with the angry feminist takes and might even sometimes defend the way they are expressed. I am autistic and have had great difficulties attracting romantic interest. (And obviously, as my name indicates I am male. And straight as it happens.) But Scott's two most extensive blogposts on this are incredibly bare of sympathetic discussion of why feminists might sometimes be a bit angry and insensitive on this issue. 

nathan98000 @ 2024-04-30T21:46 (+10)

Just to reiterate your original claim, you said that Scott “has done a lot, entirely deliberately in my view, to spread that view [that black people have lower IQs for genetic reasons].”

And your evidence for this claim is that:

  1. He linked to neo-reactionaries on his blogroll who hold this view.
  2. He privately told friends that HBD (which isn’t exclusively about the causes of racial IQ differences) is “probably partially correct or at least very non-provably non-correct.” And he demanded they never reveal this publicly.
  3. He isn’t “repulsed” or “creeped out” or “upset” by reactionaries.

I find this extremely unpersuasive and misleading.

  1. I don’t know which neo-reactionaries you’re referring to when you say he linked to them on his blogroll, but he very clearly doesn’t agree with everything they say. He has explicitly disagreed with the neo-reactionary movement at length.
  2. Telling something to friends in private and demanding secrecy seems like the exact opposite of trying to spread a view. And saying a view is “probably partially correct or at least non-provably non-correct” is hardly a ringing endorsement of the view.
  3. Come on... He doesn’t have the right emotional vibes, therefore he must be deliberately spreading the view?? I’m personally a vegan for ethical reasons. In fact, I think factory farming is among the worst things humanity has ever done. But I’m not “creeped out” or “repulsed” by people who eat meat.

Your evidence is extremely weak, and it’s disappointing that as of my response, it has 18 upvotes.

David Mathers @ 2024-05-02T14:52 (+2)

I think he is spreading the view because he strategizes about doing so in the quoted email (though it's a bit hard to specify what the view is, since it's not clear what probability "probably" amounts to).

 

Wei Dai @ 2024-04-27T06:04 (+3)

Personally, I think fascism should be more upsetting than woke debate!

I'm not very familiar with Reactionary philosophy myself, but I was suspicious of your use of "fascism" here. I asked Copilot (based on GPT-4) and it answered:

As an AI, I don’t form personal opinions. However, I can share that Reactionary philosophy and Fascism are distinct ideologies, even though they might share some common elements such as a critique of modernity and a preference for traditional social structures.

Fascism is typically characterized by dictatorial power, forcible suppression of opposition, and strong regimentation of society and of the economy which is not necessarily present in Reactionary philosophy. Reactionaries might advocate for a return to older forms of governance, but this does not inherently involve the authoritarian aspects seen in Fascism.

(Normally I wouldn't chime in on some topic I know this little about, but I suspect others who are more informed might fear speaking up and getting associated with fascism in other people's minds as a result.)

Also, I'm not Scott but I can share that I'm personally upset with wokeness, not because of how it changed debate, but based on more significant harms to my family and the community we live in (which I described in general terms in this post), to the extent that we're moving half-way across the country to be in a more politically balanced area, where hopefully it has less influence. (Not to mention damage to other institutions I care about, such as academia and journalism.)

(Yes, that is melodramatic phrasing, but I am trying to shock people out of what I think is complacency on this topic.)

Not entirely sure what you're referring to by "melodramatic phrasing", but if this is an excuse for using "fascism" to describe "Reactionary philosophy" in order to manipulate people's reactions to it and/or prevent dissent (I've often seen "racism" used this way in other places), I think I have to stand against that. If everyone started excusing themselves from following good discussion norms when they felt like others were complacent about something, that seems like a recipe for disaster.

Concerned EA Forum User @ 2024-04-27T07:51 (+1)

Neo-reactionary ideology seems like a close match for fascism. The Wikipedia article on it discusses whether it is or isn’t fascism: https://en.wikipedia.org/wiki/Dark_Enlightenment

Two major themes of neo-reactionary ideology seem to be authoritarianism and white supremacy.

There is definitely some overlap between people who identify with neo-reactionary ideas and people who identify with explicitly neo-Nazi/neo-fascist ideas.

Concerned EA Forum User @ 2024-04-27T01:04 (+1)

I should probably stop posting on this or reading the comments, for the sake of my mental health (I mean that literally, this is a major anxiety disorder trigger for me.)

I am with you on this. I have had to disengage for mental health reasons. This stuff affects me quite seriously. I may or may not check back in on this post again. I may have to go as far as completely disengaging from the EA Forum on both this alt and my main account for an indefinite period, maybe forever. 

I don’t know your specific situation, but I will speak on a general dynamic.

The psychologist Elaine Aron has a hypothesis that there is a neurological subtype called the Highly Sensitive Person that is unusually sensitive to sensory and emotional stimuli. This can include being unusually unsettled if other people appear to be in pain or discomfort or unusually disturbed by depictions of violence or suffering in TV or movies. 

Some have suggested that Aron is describing autism or a form of autism. I’m not sure what’s true. Some people and some psychometric tests have told me that I’m a Highly Sensitive Person and that I’m autistic. 

Aggressive environments or aggressive subcultures can shake out people who are particularly sensitive in this way. When that happens, I believe a certain kind of wisdom and temperance is lost. The soft, gentle side of people must be preserved and a community should be such that particularly soft, gentle people can be included and welcomed without losing their softness and gentleness.

Aristotle talked about practical wisdom (phronêsis). "Practical wisdom" makes me think about the contrast between my analytic philosophy courses in ethics and the social work elective I took in undergrad. First, the atmosphere of the courses was just so different. The philosophy classes usually felt kind of cold, sometimes kind of mean. Social work was a culture shock for me because the people were so palpably kind and warm. Second, my social work professor had been involved in real moral issues deeply and directly. Those included HIV/AIDS activism, dealing with violence in schools, and counselling couples navigating infidelity. I was so impressed with his practical wisdom. How do I assess that he had practical wisdom? I don’t really know. How do I decide when an ethical argument seems rational? I don’t really know, either. 

The contrast between my ethics courses and that social work course is a microcosm of so much for me. It’s that same contrast you see in the EA movement where, for example, you have the absurd situation where people take the principle of impartiality or equal consideration of interests so seriously that they concern themselves with shrimp welfare but, in practical terms, their moral circle doesn’t fully include women. 

Tying it all back together, a movement that can’t align itself:

  • with democracy, against fascism 
  • with women, against sexism 
  • with people of colour, against white supremacy
  • with core moral decency, against Nazis 

is morally bankrupt, has lost the plot, jumped the shark, utterly, disastrously failed. 

One part of the causal story of how that could happen is if you have an influential element of the subculture that disdains softness and gentleness and disdains soft, gentle people. I don’t think you can have future-proof ethics if you don’t, like, care about people’s feelings. 

Going a step deeper, I think people’s disdain for empathy and sensitivity often involves a wounded, tragic history of other people not treating their feelings and experiences with empathy and sensitivity and an ongoing sense of grievance about that continuing to be the case. A lot more could be written on this topic, but I don’t have the time right now and this comment has already gotten quite long.

Jason @ 2024-04-17T00:33 (+21)

As of October 30, the post was a week old and solidly negative in karma (-14). I don't think people were finding the post at this point through the frontpage at that age and karma. There was a big change from that date to November 5 (+23), cause unknown. The other big change was March 13 to 29 (+18 to -16), probably motivated by David's tweet. My guess is that the 37-point positive jump was also motivated by some sort of off-Forum mention. It's unclear whether this net change represents authentic evidence of the broader community's views vs. people inclined to be favorably disposed seeing it off-Forum vs. possible brigading.

But even after going up to +24, I doubt the post re-emerged on the frontpage, given that it was about two weeks old at this point. In other words, it's likely that relatively few people saw it after this point unless they were specifically looking for it or found it incidentally when searching for something else. Therefore, I would not infer much of anything from the fact that it "remained positive for 4 months."

I do concur that "[t]he voting pattern does not suggest the EA community quickly and thoroughly rejected the post." 

Ben Stewart @ 2024-04-17T00:40 (+27)

Good points. The 38+ point uptick suggests a decent-sized group of accounts that were pretty coordinated. Assuming they’re legit accounts, that worries me about whatever this subgroup is. (Edit: actually, it looks like the post went up on October 28 - so it was only 2 days old before it saw the uptick at some later point. It still seems likely that users would have to look for it to find it, but I’m less confident of that now.)

Nathan Young @ 2024-04-17T09:08 (+13)

I think it's kind of weird that the bar is no longer "<0 karma" but "quick and thorough rejection". I didn't even see the article until this whole thing came up. People are allowed to think articles you don't like have merit; it's one of the benefits of hidden voting.

I can imagine why someone would upvote that. But overall I think it was an article I wouldn't recommend most people spend time on. 

It feels like you want there to be some harsher punishment/censorship/broader discussion here. Is that the case?

Jason @ 2024-04-17T15:17 (+13)

I think it's kind of weird that the bar is no longer "<0 karma" but "quick and thorough rejection".

This doesn't strike me as weird. It is reasonable that people would react strongly to information suggesting that a position enjoys moderate-to-considerable support in the community.

Let's suppose someone posted content equivalent to the infamous Bostrom listserv message today. I doubt (m)any people of color would walk away feeling comfortable being in this community merely because the post ended up with <0 karma. Information suggesting moderate-to-considerable support in the community would be very alarming to them, and for good reason! They would want to see quick and thorough rejection, at a bare minimum, in order to feel safe here. 

I'm not expressing a view that Mr. Parr's posts were of the same nature as the listserv message containing the slur. Where they are on the continuum from appropriate content to listserv-equivalent is likely a crux for many in this conversation, so my point here is to illustrate that whether you think "<0 karma" is enough likely depends on where you place Mr. Parr's posts on that continuum.

Ben Stewart @ 2024-04-17T11:55 (+9)

I take your (and others') argument to be that the negative score showed the forum "worked as it should" and that the community in some holistic sense rejected the post's claims. That argument is very weak if it is based solely on the score being slightly negative (since that could be obtained with just 51% of vote points being negative). The argument is strong if the negative score is strong and signals robust rejection. Roughly, the voting pattern was:

  • Group A - early rejection, -14 score at least
  • Group B - subsequent support, +38 score at least (possible selection effect, unknown)
  • Group C - later rejection, -44 score at least (strong selection effect from David's tweet)

Around 40% of vote points were supportive, without adjusting for Group C's selection effect (again, very rough). That's a way higher fraction than I would have expected (I would have guessed maybe 5%, and hoped for less). I agree people are allowed to like posts I don't like. But this pattern suggests a much higher proportion of the forum supports views which I personally think are hot garbage. I'm not saying anything should happen as a result of this. This is just another instance of a reason for me to move away from the EA community. It may be such a reason for others too.
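(For transparency, the rough arithmetic behind that ~40% figure, taking each group's minimum vote-point estimate above at face value and treating Group B as the supportive share:

$$\frac{38}{14 + 38 + 44} = \frac{38}{96} \approx 40\%$$

This is only a back-of-the-envelope sketch; the snapshot-derived minimums may overlap and the true totals are unknown.)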

Nathan Young @ 2024-04-17T12:48 (+2)

Yeah maybe? 

Do you want to discuss it? I can understand the value people would have taken from the post even while disagreeing with its general thrust. Also, it's pretty hard for those people to say why they supported it if that's gonna tar them as racist, so it's possible they have reasons I can't guess.

I guess there is the possibility of brigading, though that always seems less likely to me than people seem to think it is.

It also seems plausible that people saw it as some kind of free speech bellwether, though that seems mistaken to me (you can just downvote the bad stuff and upvote the good).

Jason @ 2024-04-16T17:48 (+29)

As of 1:44 PM Eastern, the October 2023 post had -21 net karma on 67 votes and the March 2024 post had +4 net karma on 17 votes. I did not change any existing votes I may have rendered prior to taking this measurement a few minutes ago. 

(I am noting this for the record as the karma and vote totals may change in response to this post, and indeed may have changed between today's posting and my measurement. I think it's helpful to have a record of what the Forum community's initial reaction to the challenged posts was.)

Nathan Young @ 2024-04-16T19:04 (+23)

I edited this post several times because I kept finding new things. About +6 karma was from an earlier edit. 

The post is at -22 karma. I don't think this is "An instance of white supremacist and Nazi ideology creeping onto the EA Forum".

I was going to say I found this quote very compelling, but the full quote is quite different to what you've quoted in this piece.

Quote in this article:

If you are worried that an immigrant may be more likely to vote Democrat/Left, commit a crime, retain their non-Western culture or be on welfare and believe that it is ethical to exclude them from migrating for these reasons, why is it not ethical to prevent someone from giving birth if their offspring are prone to all of these behaviors?

...I believe that if you are concerned about welfare, crime, IQ, culture and so on, then the optimal combination of border control and birth restrictions is not ~98% ~0% because you could be more optimal. Take IQ for example. You could prohibit the lowest 10% from having kids and have open borders for the top 10% of IQ scorers (90% 10%). If all you care about is IQ. But you could extend this to crime, voting, culture, etc. Set whatever criteria you want and permit immigration from the most XX% and prohibit birth for the least XX%.

Full quote of lower paragraph, with following paragraph:

Imagine a 25 year old is given the option: You can either live with similar material conditions as a Sudanese/Haitian/Yemeni, or you can not have children. It would be reasonable to pick not having children. I am not saying it would always be the correct choice but there is a case to be made that in some instances migration seems like a more fundamentally desirable right compared to the right to have children. These two restrictions do not seem to be on different planes in which one is always worse than the other. Therefore, I believe that if you are concerned about welfare, crime, IQ, culture and so on, then the optimal combination of border control and birth restrictions is not ~98% ~0% because you could be more optimal. Take IQ for example. You could prohibit the lowest 10% from having kids and have open borders for the top 10% of IQ scorers (90% 10%). If all you care about is IQ. But you could extend this to crime, voting, culture, etc. Set whatever criteria you want and permit immigration from the most XX% and prohibit birth for the least XX%.

I am opposed to both coercive birth restrictions (unless very extreme circumstances) and closed borders (unless very extreme circumstances) and I find my position to be coherent. I can imagine someone being in favor of both for reasons laid out above as a coherent view but still disagree with it. But I do not see wanting to have nearly 100% closed borders for reasons X,Y,Z but not be willing to have ANY birth restrictions even though X,Y,Z apply here too. If it applies to one it should apply to another even if you view a lot of social harm from birth restrictions.

That seems fairly misleadingly quoted. It seems really important to note that the author is talking about a voluntary option in exchange for immigration as opposed to a mandatory process. One of those seems acceptable to discuss and the other doesn't. 

I would like this post a lot more if it discussed some fundamental error in how the EA Forum parses such posts. As it is, the original post is bad, but it's also underwater, so I don't really see what concretely needs to change.

Concerned EA Forum User @ 2024-04-18T10:48 (+10)

It seems really important to note that the author is talking about a voluntary option in exchange for immigration as opposed to a mandatory process.

As "Ives Parr" confirmed in this thread, this is not a "voluntary option". This is the state making it illegal for certain people — including people who are not immigrants — to have children because of their "non-Western culture". It is a mandatory, coercive process. 

A key quote from the Substack article:

I can't see this particular form of birth restriction as particularly more egregious than restricting someone's ability to migrate from one country to another. I think both restrictions are immoral, and I can understand why someone would see birth restrictions as more immoral, but I don't understand why it would be so much more immoral that we should have ~98% closed borders and ~0% birth restrictions when both can be used to achieve the same ends.

Nathan Young @ 2024-04-18T10:54 (+2)

Where does it talk about non-immigrants or non-voluntary in this quote?

Concerned EA Forum User @ 2024-04-18T10:57 (+1)

Another quote that hopefully makes it even clearer:

If you are worried that an immigrant may be more likely to vote Democrat/Left, commit a crime, retain their non-Western culture or be on welfare and believe that it is ethical to exclude them from migrating for these reasons, why is it not ethical to prevent someone from giving birth if their offspring are prone to all of these behaviors? There are people within the native country which are, statistically speaking, likely to grow up and vote Democrat/Left, commit crimes and be on welfare. For example, if someone's parents both voted Democrat/Leftist and their parent's parents voted Democrat/Left, they are probably more prone to voting Democrat/Left than an immigrant. I think some will say that they do want to restrict birth but can't because it is not politically feasible, but imagine that you could have full control to implement this policy for the sake of the hypothetical.

Also, a quote from "Ives Parr" in this very thread:

I was not trying to implement a strange voluntary option.

Ives Parr @ 2024-04-17T05:00 (+2)

I was not trying to implement a strange voluntary option. I was giving a hypothetical to make it apparent how egregious immigration restrictions are in terms of their harm. The argument was comparing how extreme the violation of birth restrictions is and comparing it to restrictions on immigration which could have extreme downsides. As I say in the article, I was against closed borders and restrictions on birth unless extreme circumstances (brother-sister marriage type situation). The reductio was supposed to push the reader toward supporting open borders. However, I think my willingness to make a socially undesirable comparison between two rights was used against me. 

I wrote that article years ago and it's hardly relevant to whether or not my other article is true and moral. This sort of reasoning and argumentation style should be rejected outright. I think this person is just trying to throw mud on my reputation so people try to censor me. Quite unvirtuous in my view.

Also why is my original post bad?

Nathan Young @ 2024-04-17T09:05 (+2)

I found the original quote and pointed out you were being misquoted. That seems the relevant update here, over the specific words I used to describe that. 

I wrote about why I think the original post was bad on the post, but in short, it is long and seems to imply doing genetics work that is banned/disapproved of in the West in poor countries. You seem to say that's an error on my part, in which case, please title it differently and make clearer what you are suggesting. 

Ives Parr @ 2024-04-17T13:07 (+1)

That's fine. I was adding more clarity.

I think the title is accurate and the content of my article is clear that I am not suggesting violating anyone's consent or the law. Did you read the article? I don't see how you draw these conclusions from the title alone or how the title is misleading. I gave policy recommendations which mostly involved funding research.

harfe @ 2024-04-16T18:17 (+21)

A relevant (imo) piece of information not in this post: The EA forum post that you are talking about was down-voted a lot. (I have down-voted too, although I don't remember why I did so at the time.)

This makes me less worried than I otherwise would have been.

edit: I did not see Jason's comment prior to posting mine, sorry for duplicate information.

Jason @ 2024-04-16T18:26 (+22)

Yes, but... -21 on 67 votes implies a lot of upvotes too. Now, on a topic like this, I would always consider the possibility of brigading. But I am guessing some of the downvotes were strong and from high-karma users, meaning that the upvotes needed to get to "only" -21 net karma must have been fairly numerous and/or included strong upvotes from high-karma users.

calebp @ 2024-04-16T18:39 (+7)

I could see people upvoting this post because they think it should be more like -10 than -21. I personally don't see it as concerning that it's "only" on -21.

Jason @ 2024-04-16T19:49 (+6)

That's plausible, although I find it somewhat less likely to be a complete or primary explanation based on my recollection of voting patterns/trends on past eugenics-adjacent posts. 

In any event, I don't think "was down-voted a lot" (from harfe's comment) would be a complete summary of the voting activity.

Jason @ 2024-04-16T18:24 (+17)

I find it difficult to believe that any suggestion that genetic technology (which does not currently exist and would doubtless be expensive to deploy) could somehow be the cost-effective way of improving intelligence in developing countries is both informed and offered in good faith. One might expect an argument about, e.g., prenatal and early-childhood nutrition if that were the case. It seems much more likely that Mr. Parr's posts represent yet another attempt to inject a discussion about race, genetics, eugenics, and intelligence in EA circles with (and I'm being generous) only the most tenuous linkages between that confluence of topics and any plausibly cost-effective actions to take.

(To lay my cards on the table, I do not want content like Mr. Parr's posts on the Forum at all.)

Ives Parr @ 2024-04-17T04:47 (+2)

I am acting in good faith, but it seems that you are incredulous. I have been interested in EA and attending meetups for years. The content of the article explains the argument from an EA perspective. What can I do to prove that I am acting in good faith? What aspect of the article suggests that I am acting in bad faith? Did you read the article before accusing me of bad faith?

Why would I invest what is probably like 100 hours into writing my article and defending it if it was just a simple bad-faith attempt to "inject a discussion about race, genetics, eugenics, and intelligence in EA circles"?

I discuss environmental interventions and compare them with the benefits of genetic enhancement technology in the article. I discussed the potential for iodine in the comments, but the relative benefits are constrained in a way that they are not with enhancement.

Jason @ 2024-04-17T22:11 (+9)

Note: This comment is considerably sharper than most of my comments on the Forum. I find that unavoidable given Mr. Parr's apparent belief that he is being downvoted because his ideas are unpopular and/or optically undesirable, rather than for the merits of his posts.

The evidence available to me does not reasonably support a conclusion that your posts meet the standards I think signify good-faith participation here.

Starting out with Some Strikes

Your first post on the Forum was, in my mind, rather dismissive of objections to the infamous Bostrom listserv, and suggested we instead criticize whoever brought this information to light (even though there is zero reason to believe they are a member of this community or an adjacent community). That's not a good way to start signaling good faith.

Much of your prior engagement in comments on the Forum has related to race, genetics, eugenics, and intelligence, although it has started to broaden as of late. That's not a good way to show that you are not seeking to "inject a discussion about race, genetics, eugenics, and intelligence in EA circles" either.

Single-focus posters are not going to get the same presumption of good faith on topics like this that a more balanced poster might. Maybe you are a balanced EA in other areas, but I can only go by what you have posted here, in your substack, and (presumably) elsewhere as Ives Parr. I understand why you might prefer a pseudonym, but some of us have a consistent pseudonym under which we post on a variety of topics. So I'm not going to count the pseudonym against you, but I'm going to base my starting point on "Ives Parr" as known to me without assuming more well-rounded contributions elsewhere.

A Surprising Conclusion

As far as the environmental/iodine issues, let me set forth a metaphor to explain one problem in a less ideologically charged context. Let's suppose I was writing an article on improving life expectancy in developing countries. Someone with a passing knowledge of public health in developing countries, and the principles of EA, might expect that the proposed solution would be bednets or other anti-infectious disease technologies. Some might assign a decent probability to better funding for primary care, a pitch for anti-alcohol campaigns, or sodium reduction work. Almost no one would have standing up quaternary-care cancer facilities in developing countries using yet-to-be-developed drugs on their radar list. If someone wrote a long post suggesting that was the way, I would suspect they might have recently lost a loved one to cancer or might have some other external reason for reaching that conclusion.

I think that's a fair analogy of your recommendation here -- you're proposing technology that doesn't exist and wouldn't be affordable to the majority of people in the most developed countries in the world if it did. The fact that your chosen conclusion is an at least somewhat speculative, very expensive technology should have struck you as pretty anomalous and thrown up some caution flags. Yours could be the first EA cause area that would justify massive per-person individual expenditures of this sort, but the base rate of that being true seems rather low. And in light of your prior comments, it is a bit suspicious that your chosen intervention is one that is rather adjacent to the confluence of "race, genetics, eugenics, and intelligence in EA circles."

A Really Concerning Miss in Your Post

Turning to your post itself, the coverage of possible environmental interventions in developing countries in the text (in the latter portions of Part III) strikes me as rather skimpy. You acknowledge that environmental and nutritional factors could play a role, but despite spending 100+ hours on the post, and despite food fortification being at least a second-tier candidate intervention in EA global health for a long time, you don't seem to have caught the massive effect of cheap iodine supplementation in the original article. None of the citations for the four paragraphs after "The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear" seem to be about environmental or nutritional effects or interventions in developing countries.

While I can't tell if you didn't know about iodine or merely chose not to cite any study about nutritional or environmental intervention in developing countries, either way Bob's reference to a 13-point drop in IQ from iodine deficiency should have significantly updated you that your original analysis had either overlooked or seriously undersold the possibility for these interventions. Indeed, much relevant information was in a Wikipedia article you linked on the Flynn effect, which notes possible explanations such as stimulating environment, nutrition, infectious diseases, and removal of lead from gasoline [also a moderately well-known EA initiative]. Given that you are someone who has obviously studied intelligence a great deal, I am pretty confident you would know all of this, so it seems implausible that this was a miss in research.

On a single Google search ("effects of malnutrition in children on iq"), one of the top articles was a study in JAMA Pediatrics describing a 15.3-point drop in IQ from malnutrition that was stable over an eight-year period. This was in Mauritius in the 1970s, which had much lower GDP per capita at the time than now but I believe was still better in adjusted terms than many places are in 2024. The percentage deemed malnourished was about 22%, so this was not a study about statistically extreme malnutrition. And none of the four measures were described as reflecting iodine deficiency. That was the first result I pulled, as it was in a JAMA journal. A Wikipedia article on "Impact of Health on Intelligence" was also on the front page, which would have clued you into a variety of relevant findings.

This is a really bad miss in my mind, and is really hard for me to square with the post being written by a curious investigator who is following the data and arguments where they lead toward the stated goal of effectively ending poverty through improving intelligence. If readily-available data suggest a significant increase in intelligence from extremely to fairly cheap, well-studied environmental interventions like vitamin/mineral supplementation, lead exposure prevention, etc., then I would expect an author on this Forum pitching a much more speculative, controversial, and expensive proposal to openly acknowledge and cite that. As far as I can see, there is not even a nod toward achieving the low-hanging environmental/nutritional fruit in your conclusion and recommendations. This certainly gives the impression that you were pre-committed to "genetic enhancement" rather than a search for effective, achievable solutions to increase intelligence in developing countries and end poverty. Although I do not expect posts to be perfectly balanced, I don't think the dismissal of environmental interventions here supports a conclusion of good-faith participation in the Forum.

Conclusion

That is not intended as an exhaustive list of reasons I find your posts to be concerning and below the standards I would expect for good-faith participation in the Forum. The heavy reliance on certain sources and authors described in the original post above is not exactly a plus, for instance. The sheer practical implausibility of offering widespread, very expensive medical services in impoverished countries -- both from a financial and a cultural standpoint -- makes the post come across as a thought experiment (again: one that focuses on certain topics that certain groups would like to discuss for various reasons despite tenuous connections to EA).

Also, this is the EA Forum, not a criminal trial. We tend to think probabilistically here, which is why I said things like it being "difficult to believe that any suggestion . . . is both informed and offered in good faith" (emphasis added). The flipside of that is that posters are not entitled to a trial prior to Forum users choosing to dismiss their posts as not reflecting good-faith participation in the Forum, nor are they entitled to have their entire 42-minute article read before people downvote those posts (cf. your concern about an average read time of five minutes).

Ives Parr @ 2024-04-17T23:49 (+11)

Your first post on the Forum was, in my mind, rather dismissive of objections to the infamous Bostrom listserv, and suggested we instead criticize whoever brought this information to light (even though there is zero reason to believe they are a member of this community or an adjacent community). That's not a good way to start signaling good faith.

You may disagree with my argument, but it was made in good faith. I'm not trolling or lying in that article. The reason I wrote it was that I felt I could contribute a perspective which the majority of EA was overlooking. Similarly for the case for genetic enhancement: it is not discussed very much, so I felt I could make a unique contribution, whereas in other areas, like animal welfare, I did not feel I had a particularly important insight. If someone's first post was about veganism and their later posts were about veganism, it would not be a good reason to think the person is arguing in bad faith.

I think the reason you suspect what I am doing might be bad faith is that you attribute nefarious intentions to people interested in genetic enhancement. Perhaps the base rate of bad faith is higher among people talking about "eugenics", but it is much simpler to just consider the content of the message they are sending at the moment. Besides, if someone writes a 10K-word, well-argued article (in my opinion) for topic X that attempts to be grounded in reality and is extensively cited, it seems weird to call it "bad faith" if it is not trollish or highly deceptive.

Much of your prior engagement in comments on the Forum has related to race, genetics, eugenics, and intelligence, although it has started to broaden as of late. That's not a good way to show that you are not seeking to "inject a discussion about race, genetics, eugenics, and intelligence in EA circles" either.

When I see that EAs are making wrong statements about something I know about, I feel like I am in a position to correct them. These are mostly responses to EAs who are already discussing these topics. Moreover, if a discussion of intelligence, genes, genetic enhancement (or even race) could improve human welfare, then it is worth having. My work is not merely an effort to "inject" these topics needlessly into EA.

Single-focus posters are not going to get the same presumption of good faith on topics like this that a more balanced poster might. Maybe you are a balanced EA in other areas, but I can only go by what you have posted here, in your substack, and (presumably) elsewhere as Ives Parr. I understand why you might prefer a pseudonym, but some of us have a consistent pseudonym under which we post on a variety of topics. So I'm not going to count the pseudonym against you, but I'm going to base my starting point on "Ives Parr" as known to me without assuming more well-rounded contributions elsewhere.

If I were a single-issue poster on veganism, would you assume I was arguing in bad faith? If you want to have a prior of suspicion based on my being somewhat single-issue, I suppose you can. But you should form a posterior belief based on the actual content of the posts. I'll further add here that I have been thinking about EA generally and have considered myself an EA for a long time:

  1. "Should Effective Altruists make Risky Investments?" (Dec 9, 2021)
  2. "What We Owe The Future" book review (Sep 28, 2022)
  3. Defending EA against a critique by Bryan Caplan (Aug 4, 2023)

I could offer further evidence of my participation in the EA community, but you have to understand my hesitation, as people are suggesting I'm basically a Nazi and parsing over my past work--something I consider immoral and malicious in this context.

But ultimately, I don't think this matters too much because you can just literally read the content. Arguing like this is kind of silly. It involves a type of reputation destruction based on past comments that is quite unvirtuous intellectually. And once we have the content of the post, it no longer seems relevant. We should just update primarily on whether I seem to be arguing in good faith in the post itself.

I must commend you for actually engaging with the content. Thank you.

A Surprising Conclusion

As far as the environmental/iodine issues, let me set forth a metaphor to explain one problem in a less ideologically charged context. Let's suppose I was writing an article on improving life expectancy in developing countries. Someone with a passing knowledge of public health in developing countries, and the principles of EA, might expect that the proposed solution would be bednets or other anti-infectious disease technologies. Some might assign a decent probability to better funding for primary care, a pitch for anti-alcohol campaigns, or sodium reduction work. Almost no one would have standing up quaternary-care cancer facilities in developing countries using yet-to-be-developed drugs on their radar list. If someone wrote a long post suggesting that was the way, I would suspect they might have recently lost a loved one to cancer or might have some other external reason for reaching that conclusion.

I reject this analogy and substitute my own, which I think is more fitting. If someone was discussing alleviating the impact of malaria with bed nets, and someone came along with a special interest in gene drives and suggested it could have a huge impact--perhaps a much larger impact than bed nets--then it would seem this is a reasonable point of discussion that is not necessarily motivated by some ulterior motive. I used this analogy in the article as well. Whether or not gene drives are better is an empirical question. If someone made an extended argument for why they think it could be high impact, then it is questionable to think it's bad faith, especially if there are no trollish, rude, or highly deceptive comments.

I think that's a fair analogy of your recommendation here -- you're proposing technology that doesn't exist and wouldn't be affordable to the majority of people in the most developed countries in the world if it did. The fact that your chosen conclusion is an at least somewhat speculative, very expensive technology should have struck you as pretty anomalous and thrown up some caution flags. Yours could be the first EA cause area that would justify massive per-person individual expenditures of this sort, but the base rate of that being true seems rather low. And in light of your prior comments, it is a bit suspicious that your chosen intervention is one that is rather adjacent to the confluence of "race, genetics, eugenics, and intelligence in EA circles."

Some of the technology currently exists. We can perform polygenic embryo screening, and gene-editing is in its early stages but not yet safe. We have also achieved IVG in mice, and there are startups working on it currently. That breakthrough would bring very large returns in terms of health, intelligence, and happiness. Metaculus estimated that IVG was ~10 years away.

My argument is not for "massive per-person individual expenditures of this sort." This is wrong. I gave 8 policy proposals, and giving a bunch of money to people to use this technology was not on the list. I was mostly advocating accelerating the research and allowing voluntary adoption. If EA accelerates the breakthroughs, people will use the technology voluntarily.

A Really Concerning Miss in Your Post

Turning to your post itself, the coverage of possible environmental interventions in developing countries in the text (in the latter portions of Part III) strikes me as rather skimpy. You acknowledge that environmental and nutritional factors could play a role, but despite spending 100+ hours on the post, and despite food fortification being at least a second-tier candidate intervention in EA global health for a long time, you don't seem to have caught the massive effect of cheap iodine supplementation in the original article. None of the citations for the four paragraphs after "The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear" seem to be about environmental or nutritional effects or interventions in developing countries.

While I can't tell if you didn't know about iodine or merely chose not to cite any study about nutritional or environmental intervention in developing countries, either way Bob's reference to a 13-point drop in IQ from iodine deficiency should have significantly updated you that your original analysis had either overlooked or seriously undersold the possibility for these interventions. Indeed, much relevant information was in a Wikipedia article you linked on the Flynn effect, which notes possible explanations such as stimulating environment, nutrition, infectious diseases, and removal of lead from gasoline [also a moderately well-known EA initiative]. Given that you are someone who has obviously studied intelligence a great deal, I am pretty confident you would know all of this, so it seems implausible that this was a miss in research.

On a single Google search ("effects of malnutrition in children on iq"), one of the top articles was a study in JAMA Pediatrics describing a 15.3-point drop in IQ from malnutrition that was stable over an eight-year period. This was in Mauritius in the 1970s, which had much lower GDP per capita at the time than now but I believe was still better in adjusted terms than many places are in 2024. The percentage deemed malnourished was about 22%, so this was not a study about statistically extreme malnutrition. And none of the four measures were described as reflecting iodine deficiency. That was the first result I pulled, as it was in a JAMA journal. A Wikipedia article on "Impact of Health on Intelligence" was also on the front page, which would have clued you into a variety of relevant findings.

We should be giving people iodine where they are deficient and preventing starvation. Bob raised this objection and I addressed it in the comments. It is worth mentioning. I did say in the original article that environmental conditions can depress IQ, especially at the extremes. The part about heritability that I mentioned undermines the impactfulness to some extent, because the environmentality of IQ is low and the sources of variation are not particularly clear. But heritability is not well estimated between developing and developed nations, so I expressed some hesitancy about reaching a strong conclusion there.

There is a lot of work on preventing starvation and malnutrition already, so the aim was to find something neglected, tractable, and important. The benefit of accelerating enhancement is that people can use it voluntarily without the need to spend money in each case. Moreover, the gains from enhancement would be very large for certain forms of the technology, and we can embrace both types of intervention where environmental interventions are effective. Here is what I said in the original article:

The extent to which the failure of interventions in wealthy nations is applicable to developing nations is unclear. If interventions are largely ineffective, this is evidence that they may be ineffective in the developing world. However, there is a plausible case to be made for certain threshold effects or influences unique to the conditions of poor nations. In some countries, children suffer from extreme levels of malnutrition and exposure to parasites. Extremely few children in the developed world face such obstacles. An intervention that prevents extreme malnutrition might appear ineffective in the United States but shows gains in Yemen or South Sudan. When nutrient deprivation is so great that it disrupts proper brain formation, it is likely to depress not only IQ scores but also cognitive ability. Similarly, when groups are wholly unexposed to logical reasoning, they are likely to score lower on IQ tests. Such issues are not wholly uncommon, and interventions would play an important role in such instances. Furthermore, for populations unexposed to academic tests, IQ scores will likely underestimate ability.

The extent to which we can expect environmental interventions to work as a means of improving NIQ largely depends on the extent to which we think environmental differences are driving international differences. If we suspect that NIQ differences are driven entirely by environmental differences, then improvements in nutrition and education may equalize scores. If genetic differences are playing a causal role, equalizing environments will not equalize NIQ scores. A reasonable prior assumption is non-trivial levels of influence from both. Various lines of evidence point to the prospect of zero genetic influence globally being exceptionally unlikely. For example, interventions are largely ineffective in the USA, with an average IQ of approximately 97-99, and the US still lags behind Singapore with an NIQ of approximately 106-107 (Becker, 2019). While some dismiss the influence of genes on NIQ as “not interesting,” it is extremely relevant to the near future of humanity, especially considering that countries with lower NIQ typically have higher fertility (Francis, 2022).

Even if one embraces the 100% environmental explanation for national differences in IQ, one can still consider the possibility of environmental interventions being less cost-effective or more limited in magnitude relative to what could be called “genetic interventions.” Furthermore, since there are little to no means of permanently boosting IQ in more developed countries, there may be stagnation once a country reaches beyond a certain threshold of average nutrition and education.

Looking toward genetic interventions may be more fruitful, even if we accept that environmental interventions are important to some extent. IQ gains without diminishing marginal returns are implausible, given that adults in academic institutions or pursuing academic interests do not continue to add IQ points cumulatively until they achieve superintelligence. Some forms of genetic enhancement would not suffer from this problem of diminishing returns, and could in fact create superintelligent humans. Also importantly, if a genetic intervention could be administered at birth and reduce the need for additional years of schooling, it could save a tremendous amount of a student’s time.

This is a really bad miss in my mind, and is really hard for me to square with the post being written by a curious investigator who is following the data and arguments where they lead toward the stated goal of effectively ending poverty through improving intelligence. If readily-available data suggest a significant increase in intelligence from extremely to fairly cheap, well-studied environmental interventions like vitamin/mineral supplementation, lead exposure prevention, etc., then I would expect an author on this Forum pitching a much more speculative, controversial, and expensive proposal to openly acknowledge and cite that. As far as I can see, there is not even a nod toward achieving the low-hanging environmental/nutritional fruit in your conclusion and recommendations. This certainly gives the impression that you were pre-committed to "genetic enhancement" rather than a search for effective, achievable solutions to increase intelligence in developing countries and end poverty. Although I do not expect posts to be perfectly balanced, I don't think the dismissal of environmental interventions here supports a conclusion of good-faith participation in the Forum.

I've addressed this above, and in the original article I compared environmental interventions with genetic ones, providing some evidence to think that the potential gains from the former are limited in a way that genetic enhancement is not. Many of the causes that depress IQ are widely understood as problems and are already addressed by global health initiatives.

I can understand if someone disagrees, but does this really seem like a bad faith argument? It seems like this accusation is considered more intuitively plausible because what I am arguing elicits feelings of moral disgust.

Conclusion

That is not intended as an exhaustive list of reasons I find your posts to be concerning and below the standards I would expect for good-faith participation in the Forum. The heavy reliance on certain sources and authors described in the original post above is not exactly a plus, for instance. The sheer practical implausibility of offering widespread, very expensive medical services in impoverished countries -- both from a financial and a cultural standpoint -- makes the post come across as a thought experiment (again: one that focuses on certain topics that certain groups would like to discuss for various reasons despite tenuous connections to EA).


The technology will be adopted voluntarily without EA funds if the tech is there. I am not advocating for spending on individuals.

EAs seem generally fine with speculation and "thought experiments" if they have a plausible aim of improving human flourishing, which my argument does. That should be the central focus of critiques.

Also, this is the EA Forum, not a criminal trial. We tend to think probabilistically here, which is why I said things like it being "difficult to believe that any suggestion . . . is both informed and offered in good faith" (emphasis added). The flipside of that is that posters are not entitled to a trial prior to Forum users choosing to dismiss their posts as not reflecting good-faith participation in the Forum, nor are they entitled to have their entire 42-minute article read before people downvote those posts (cf. your concern about an average read time of five minutes).

I understand it's not a criminal trial. But expecting someone to read an article before downvoting or attacking strawman arguments seems quite reasonable as a standard for the forum. This EA Forum post we are commenting on suggests that I am supporting Nazi ideology (which I am not!). How can someone recognize this without actually reading?

This incentivizes these sorts of critiques and creates a culture of fear around discussing important but taboo ideas. If an idea were to arise that was actually important, it may end up neglected if people don't give it a fair chance.

Thank you for grappling with the actual content of the article. I'll state that your characterization of me as arguing in bad faith feels quite unfair. It seems strange that I would go through all this effort to respond if I were just trolling or trying to mess with EA Forum users.

Nathan Young @ 2024-04-19T14:26 (+14)

I have thought about this a bit and chatted to people (e.g. thanks @titotal), and I think there is some missing mood in my responses: people feel like they don't want to have to battle this stuff all the time, and the arguments are often long and complicated but wrong.

E.g., I care about truth and so do Jehovah's Witnesses, but I don't think it's worthwhile to let them in my house; I can predict that the argument isn't going to change my mind or theirs, but it will cost a load of time and perhaps emotional energy.

This doesn't fully change my mind, per se; I still think censoring is the wrong call, but perhaps I lower my bar on what would count as a bad situation, e.g. if there were a post like this every week at 5 karma, with huge arguments about journals that most of the community guess are rubbish, or if sources were debunked and then replaced with similar but equally poor sources. I sense this happens a lot in IQ debates, and I don't really have time for that personally. I would have even less energy if I felt the upshot of these discussions was a set of policy proposals that seemed abhorrent to me / felt like a discussion of my value as a person.

Unsure what the answer is here, but it seems meaningful to note the change.

Concerned EA Forum User @ 2024-04-21T10:10 (+9)

I would have even less energy if I felt the upshot of these discussions was a set of policy proposals that seemed abhorrent to me / felt like a discussion of my value as a person.

I think this is the key thing. 

First, people are highly motivated to disguise ideas that have already been rejected, although they often disguise them very thinly. Here’s an example from when "creationism" got rebranded as "intelligent design" in the United States. The example focuses on the anti-evolution textbook Of Pandas and People:

Working late one night, I discovered a crucial difference between the two 1987 drafts [of the textbook]: one was written before the Supreme Court’s 1987 Edwards v Aguillard decision outlawing creationism in public schools, and the other was obviously written afterwards. The first version contained blatant creationist terminology. In the second, creationist terminology had been deleted and replaced by "intelligent design" and other ID terms. A new footnote in the latter version referenced the Edwards decision, indicating a conscious attempt to circumvent the Edwards ruling in the revised manuscript that would become Pandas. The "search and replace" operation must have been done in a hurry: in the post-Edwards manuscript, "creationists" was not completely deleted by whoever tried to replace it with "design proponents". The hybrid term "cdesign proponentsists" now stands as a "missing link" between the blatantly creationist earlier drafts and the post-Edwards versions of Pandas.

Roger Pearson, who ran Mankind Quarterly from 1978 to 2015, made some rather feeble attempts to disguise his ideas, such as this one: 

Pearson’s own assistant during the conference was Earl Thomas, a former storm trooper in the American Nazi Party, and when forced to expel two men distributing anti-Semitic literature from the National States Rights Party, he was quoted as telling them, “Not that I’m not sympathetic with what you’re doing … but don’t embarrass me and cut my throat.” He then asked them to give his regards to the secretary of the party.

The main point of this post was to remove the thin disguise that Ives Parr put over his ideas. It seems either I did not succeed or the user base of the EA Forum is disturbingly tolerant of white supremacy, or perhaps some combination of both. 

Second, the discussion and debate of, e.g., coded white supremacist ideas exact a cost on some participants that they do not on others. (A hypothetical "Let’s decide whether to kill Concerned EA Forum User" thread would demonstrate this principle in the extreme.) It’s more than exhausting, it’s acutely distressing to defend your rights as a minority when those rights are under attack. It can also be exhausting and distressing for others who feel the injustice strongly to participate in such debates. Avoiding or disengaging becomes simple self-preservation.

People self-select out of these debates. I think the people who are able to coolly and calmly, ad nauseam, debate, e.g., whether Hitler had a point about the Jews are typically the worst positioned to form good opinions on these subjects. They have the least empathy, the least moral concern, the weakest sense of justice, and are most detached from the reality and actual stakes of what they’re talking about. 

Many people enjoy provoking and offending other people. I think this is very common. Some people even enjoy causing other people distress. This seems to be true of a lot of people who oppose minority rights. The cost is not symmetrical. 

Allowing debate of, e.g., white supremacy on the EA Forum, besides being simply off-topic in most cases, creates a no-win situation for the people whose rights and value are being debated and for other people who care a lot about them. If you engage in the debate, it will exhaust you and distress you, which your interlocutors may very well enjoy. If you avoid the debate or debate a bit and then disengage, this can create the impression that your views can’t be reasonably defended. It can also create the impression that your interlocutors’ views are the dominant ones in the community, which can become a self-fulfilling prophecy. (See: "Nazi death spiral".)

Third, I would like to see a survey of various demographics’ impressions of the EA community’s attitudes about people like them, but I don’t know how you would be able to survey the people who joined then left or refrained from joining because of those impressions. The questions I’m imagining would be something like, "How likely do you think EAs are to support abhorrent policies or practices with regard to people of your race/gender/identity?" or "Do you think EAs see people of your race/gender/identity as having equal value as everyone else?". 

I suspect that, if we could know the answers to those kinds of questions, it would confirm the existence of a serious problem. EA was founded as a movement to escape banal evils (e.g. the banal evil of ignoring the drowning child), but with regard to some banal evils it is quite morally unexceptional. I think the moral circles of many EAs do not encompass other human beings as fully as they could. It’s easy to nominally support universal human equality but fail to live up to that in practice. What I see EAs saying and doing with regard to race and racism is just so sad. 

Universal human equality is a point of core moral integrity for me (as it is for many others). I can’t imagine wholeheartedly supporting EA if universal human equality is not a strong part of the movement. 

Anon 2024 @ 2024-04-17T20:26 (+5)

This post was obviously downrated by Emil Kirkegaard's followers (he retweeted a screenshot of it to his 28k followers; see his X account). The way he operates is by Googling his name to find posts critical of him, then getting his followers to downrate them or leave comments attacking whoever made the post. If those don't or can't happen, he tries to get the criticism deleted, either by reporting it or by sending legal threats. What frustrates him the most is that he cannot get his RationalWiki article taken down.

Jason @ 2024-04-18T17:26 (+2)

This would be a good post on which to disallow voting by very young accounts. That's not a complete solution, but it's something. I'd also consider disallowing voting on older posts by young accounts for similar reasons.