David Mathers's Quick takes

By David Mathers🔸 @ 2024-03-31T12:03 (+7)

David Mathers @ 2024-04-05T16:13 (+80)

Please people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims to have moderated his views, he is still very racist as far as I can tell.

Hanania called for trying to get rid of all non-white immigrants in the US and for the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even after his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people', https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology for his most extreme material (after journalists dug it up), he has been invited to Manifold's events and put on Richard Yetter Chappell's blogroll. 

DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which can be useful, do not duck and dive around the fact that virtually every major promoter of scientific racism ever, including allegedly mainstream figures like Jensen, worked with or published with actual literal Nazis (https://www.splcenter.org/fighting-hate/extremist-files/individual/arthur-jensen). 

I love most of the people I have met through EA, and I know that, despite what some people say on Twitter, we are not actually a secret crypto-fascist movement (nor is longtermism specifically, which, whether you like it or not, is mostly about what its EA proponents say it is about). But there is in my view a disturbing degree of tolerance for this stuff in the community, mostly centered around the Bay specifically. And to be clear, I am complaining about tolerance for people with far-right and fascist ("reactionary" or whatever) political views, not people with any particular personal opinion on the genetics of intelligence. A desire for an authoritarian government enforcing the "natural" racial hierarchy does not become okay just because you met the person with the desire at a house party and they seemed kind of normal and chill, or super-smart and nerdy. 

I usually take a way more measured tone on the forum than this, but here I think getting shouty actually conveys real information. 


*Anyone who thinks it is automatically far-right to think about any kind of genetic enhancement at all should go read some Culture novels and note the implied politics (or indeed, look up the author's actual die-hard libertarian socialist views). I am not claiming that far-left politics is innocent, just that it is not racist. 

Richard Y Chappell @ 2024-04-09T23:16 (+35)

I'd just like to clarify that my blogroll should not be taken as a list of "worthy figure[s] who [are] friend[s] of EA"!  They're just blogs I find often interesting and worth reading. No broader moral endorsement implied!

fwiw, I found TracingWoodgrains' thoughts here fairly compelling.

ETA, specifically:

I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I'd live in a lonely world, one that would exclude many my own circles approve of. And if you wonder whether I approve of something, I'm always happy to chat.

Yarrow Bouchard @ 2024-04-10T00:52 (+10)

I find it so maddeningly short-sighted to praise a white supremacist for being "respectful". White supremacists are not respectful to non-white people! Expand your moral circle!

A recurring problem I find with replies to criticism of associating with white supremacist figures like Hanania is a complete failure to empathize with or understand (or perhaps to care?) why people are so bothered by white supremacy. Implied in white supremacy is the threat of violence against non-white people. Dehumanizing language is intimately tied to physical violence against the people being dehumanized.

White supremacist discourse is not merely part of some kind of entertaining parlour room conversation. It’s a bullet in a gun.

Richard Y Chappell @ 2024-04-10T03:13 (+3)

fyi, I weakly downvoted this because (i) you seem like you're trying to pick a fight and I don't think it's productive; there are familiar social ratcheting effects that incentivize exaggerated rhetoric on race and gender online, and I don't think we should encourage that. (There was nothing in my comment that invited this response.) (ii) I think you're misrepresenting Trace. (iii) The "expand your moral circle" comment implies, falsely, that the only reason one could have for tolerating someone with bad views is that you don't care about those harmed by their bad views.

I did not mean the reference to Trace to function as a conversation opener. (Quite the opposite!) I've now edited my original comment to clarify the relevant portion of the tweet. But if anyone wants to disagree with Trace, maybe start a new thread for that rather than replying to me. Thanks!

Yarrow Bouchard @ 2024-04-10T04:08 (+4)

exaggerated rhetoric on race


Now I wonder if you’re actually familiar with Hanania’s white supremacist views? (See here, for example.) 

Richard Y Chappell🔸 @ 2024-09-04T15:59 (+7)

Just to expand on the above, I've written a new blog post - It's OK to Read Anyone - that explains (i) why I won't personally engage in intellectual boycotts [obviously the situation is different for organizations, and I'm happy for them to make their own decisions!], and (ii) what it is in Hanania's substack writing that I personally find valuable and worth recommending to other intellectuals.

Ebenezer Dukakis @ 2024-04-07T09:54 (+33)

Your comment seems a bit light on citations, and didn't match my impression of Hanania after spending 10s of hours reading his stuff. I've certainly never seen him advocate for an authoritarian government as a means of enforcing a "natural" racial hierarchy. This claim stood out to me:

Hanania called for trying to get rid of all non-white immigrants in the US

Hanania wrote this post in 2023. It's the first hit on his substack search for "immigration". This apparent lack of fact-checking makes me doubt the veracity of your other claims.

It seems like this is your only specific citation:

a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people

This appears to be a falsified quote. [CORRECTION: The quote appears here on Hanania's Twitter. Thanks David. I'm leaving the rest of my comment as originally written, since I think it provides some valuable context.] Search for "we need more" on Wikipedia's second citation. The actual quote is as follows:

...actually solving our crime problem to any serious extent would take a revolution in our culture or system of government. Whether you want to focus on guns or the criminals themselves, it would involve heavily policing, surveilling, and incarcerating more black people. If any part of you is uncomfortable with policies that have an extreme disparate impact, you don’t have the stomach for what it would take.

This paragraph, from the same post, is useful context:

As I argue in my articles on El Salvador, any polity that has a high enough murder rate needs to make solving crime its number one priority. This was true for that nation before Bukele came along, as it is for major American cities today. It’s not a big mystery how to do this, it’s just politically difficult, because literally everything that works is considered racist. You need more cops, more prisons, and more use of DNA databases and facial recognition technology. You can’t have concerns about disparate impact in a world where crime is so overwhelmingly committed by one group.

Hanania has stated elsewhere that he's a fan of Bukele and his policies. Hanania's position appears to be that since St Louis has a murder rate comparable to El Salvador's when Bukele took power, St Louis could benefit from Bukele-style policies, but that would require stuff that liberals don't like. Wikipedia makes it sound like antipathy towards Black people is his explicit motive, but that's not how I understood him. It might be his implicit motive, but that could be true for anyone -- maybe liberals prefer soft-on-crime policies because high crime keeps Black people in poverty. Who knows.

If you want to convince me that Hanania is a current-Nazi, let's discuss the single worst thing he said recently under his real name, and we can see if the specific quote holds up to scrutiny in context.

[EDIT: To be clear, if you want to exclude Hanania because you think he is kinda sketchy, or was a bad person in the past, or is too willing to make un-PC factual claims, that may be a reasonable position. I'm arguing against excluding him on the basis that he's a Nazi, because I don't think that is currently true. His 2023 post advocating for racially diverse immigration to the US seems like a very straightforward disproof. If you manage to get Wikipedia to cite it, I'll be impressed, by the way.]

ZachWeems @ 2024-04-10T05:31 (+17)

Regarding the last paragraph, in the edit:

I think the comments here are ignoring a perfectly sufficient reason to not, eg, invite him to speak at an EA adjacent conference. If I understand correctly, he consistently endorsed white supremacy for several years as a pseudonymous blogger.

Effective Altruism has grown fairly popular. We do not have a shortage of people who have heard of us and are willing to speak at conferences. We can afford to apply a few filtering criteria that exclude otherwise acceptable speakers. 

"Zero articles endorsing white supremacy" is one such useful filter. 

I predict that people considering joining or working with us would sometimes hear about speakers who'd once endorsed white supremacy, and be seriously concerned. I'd put not-insignificant odds on the number who back off because of this reducing the growth of the movement by over 10%. We can and should prefer speakers who don't bring this potential problem.

 

A few clarifications follow:

-Nothing about this relies on his current views. He could be a wonderful fluffy bunny of a person today, and it would all still apply. Doesn't sound like the consensus in this thread, but it's not relevant.

-This does not mean anyone needs to spurn him, if they think he's a good enough person now. Of course he can reform! I wouldn't ask that he sew a scarlet letter into his clothing or become unemployable or be cast into the outer darkness. But, it doesn't seem unreasonable to say that past actions as a public thinker can impact your future as a public thinker. I sure hope he wouldn't hold it against people that he gets fewer speaking invitations despite reforming.

-I don't see this as a slippery slope towards becoming a close-minded community. The views he held would have been well outside the Overton window of any EA space I've been in, to the best of my knowledge. There were multiple such views, voiced seriously and consistently. Bostrom's ill-advised email is not a good reason to remove him from lists of speakers, and Hanania's multi-year advocacy of racist ideas is a good reason. There will be cases that require careful analysis, but I think both of these cases are extreme enough to be fairly clear-cut.

ZachWeems @ 2024-06-21T05:59 (+10)

Un-endorsed for two reasons. 

  • Manifold invited people based on having advocated for prediction markets, which is a much stricter criterion than being a generic public speaker who feels positively about your organization. With a smaller pool of speakers, it is not trivially cheap to apply filters, so it is not as clear-cut as I claimed. (I could have found out this detail before writing, and I feel embarrassed that I didn't.)
  • Despite having an EA in a leadership role and ample EA-adjacent folks who associate with it, Manifold doesn't consider itself EA-aligned. It sucks that potential EAs will sometimes mistake non-EAs for EAs, but it is important to respect it when a group tells the wider EA community that we aren't their real dad and can't make requests. (This does not appear to have been common knowledge, so I feel less embarrassed about this one.)

David Mathers @ 2024-04-07T12:38 (+6)

https://twitter.com/RichardHanania/status/1657541010745081857?lang=en. There you go for the quote in the form Wikipedia gives it.

Ebenezer Dukakis @ 2024-04-07T13:38 (+7)

Thank you. Is your thought that "revolution in our culture or system of government" is supposed to be a call for some kind of fascist revolution? My take is, like a lot of right-leaning people, Hanania sees progressive influence as deep and pervasive in almost all American institutions. From this perspective, a priority on fighting crime even when it means heavily disparate impact looks like a revolutionary change.

Hanania has been pretty explicit about his belief that liberal democracy is generally the best form of government -- see this post for example. If he was crypto-fash, I think he would just not publish posts like that.

BTW, I don't agree with Hanania on everything... for example, the "some humans are in a very deep sense better than other humans" line from the post I just linked sketches me out some -- it seems to conflate moral value with ability. I find Hanania interesting reading, but the idea that EA should distance itself from him on the margin seems like something a reasonable person could believe. I think it comes down to your position in the larger debate over whether EA should prioritize optics vs intellectual vibrancy.

Here is another recent post (titled "Shut up About Race and IQ") that I struggle to imagine a crypto-Nazi writing. E.g. these quotes:

The fact that individuals don’t actually care all that much about their race or culture is why conservatives are always so angry and trying to pass laws to change their behavior... While leftists often wish humans were more moral than they actually are, right-wing identitarians are unique in wishing they were worse.

...

People who get really into group differences and put it at the center of their politics don’t actually care all that much about the science. I think for the most part they just think foreigners and other races are icky. They therefore latch on to group differences as a way to justify what they want for tribal or aesthetic reasons.

David Mathers @ 2024-04-07T12:39 (+2)

(Well, not quite: Wikipedia edits out "or our culture" as an alternative to "form of government".)

Chris Leong @ 2024-04-05T22:44 (+22)

I have very mixed views on Richard Hanania.

On one hand, some of his past views were pretty terrible (even though I believe that you've exaggerated the extent of these views).

On the other hand, he is also one of the best critics of conservatives. Take, for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories, and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. As someone quite far to the right, he's able to make these points far more credibly than a moderate or liberal ever could.

So I guess I feel he's kind of a necessary voice, at least at this particular point in time when there are few alternatives.

Buck @ 2024-04-06T14:40 (+21)

I think it's pretty unreasonable to call him a Nazi--he'd hate Nazis, because he loves Jews and generally dislikes dumb conservatives.

I agree that he seems pretty racist.

Ariel Simnegar @ 2024-04-06T17:54 (+18)

I'd like to give some context for why I disagree.

Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he's admitted that "I truly sucked back then". However, I think EA causes are more important than political differences. It's valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we're being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.

I also think Hanania has excellent takes on most issues, and that's because he's the most intellectually honest blogger I've encountered. I think Hanania likes EA because he's willing to admit that he's imperfect, unlike EA's critics who would rather feel good about themselves than actually help others.

More broadly, I think we could be doing more to attract people who don't hold typical Bay Area beliefs. Just 3% of EAs identify as right wing. I think there are several reasons why, all else equal, it would be better to have more political diversity:

  • In this era of political polarization, it would be a travesty for EA issues to become partisan.
  • All else equal, political diversity is good for community epistemics. In that regard, it should be encouraged for much the same reason that cultural and racial diversity are encouraged.
  • If we want EA to be a global social movement, we need to show that one can be EA even if they hold beliefs on other issues we find repugnant. I live in Panama for my job. When I arrived here, I had a culture shock from how backwards many people's views are on racism and sexism. If we can't be friends with the person next door with bad views, how are we going to make allies globally?
Jason @ 2024-04-06T19:01 (+40)

Being "pretty racist" with a past history of being even worse is not a mere "political issue."

I don't see how the proposition that Hanania has agreeable views on some issues, like factory farming, contradicts David's position that we should not treat him "as some sort of worthy figure" and (impliedly) that we should not platform him at our events or on our blogrolls. 

There is a wide gap between the proposition that EA should seek to attract more "people who don't hold typical Bay Area beliefs" (I agree) and that EA should seek to attract people by playing nice with those like Hanania. 

Among other things, the fact is that you can't create a social movement that can encompass 100% of humanity. You can't both be welcoming to people who hold "pretty racist" views and to the targets of their racism. And if you start welcoming in the pretty-racist, you're at least risking a downward spiral: more racism-intolerant people leave --> more openness to racism --> more departures by those intolerant of racism --> soon, you've got a whole lot of racism going on. 

Lukas_Gloor @ 2024-04-07T00:56 (+57)

+1

If even some of the people defending this person start with "yes, he's pretty racist," that makes me think David Mathers is totally right.

Regarding cata's comment:

But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.

Why move from "wrong or heartless" to "unusual people with unusual views"? None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it). It would also be directly opposed to EA core principles (compassion, equal consideration of interests).

Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not about their general moral character.

I think sufficiently shitty character should be disqualifying. I agree with you insofar that, if someone has ideas that seem worth discussing, I can imagine a stance of "we're talking to this person in a moderated setting to hear their ideas," but I'd importantly caveat it by making sure to also expose their shittiness. In other words, I think platforming a person who promotes a dangerous ideology (or, to give a different example, someone who has a tendency to form mini-cults around them that predictably harm some of the people they come into contact with) isn't necessarily wrong, but it comes with a specific responsibility. What would be wrong is implicitly conveying that the person you're platforming is vetted/normal/harmless, when they actually seem dangerous. If someone actually seems dangerous, make sure that, if you do decide to platform them (presumably because you think they also have some good/important things to say), others won't come away with the impression that you don't think they're dangerous.

cata @ 2024-04-08T23:31 (+11)

Why move from "wrong or heartless" to "unusual people with unusual views"?

 

I believe these two things:

A) People don't have very objective moral intuitions, so there isn't widespread agreement on what views are seriously wrong.

B) Unusual people typically come by their unusual views by thinking in some direction that is not socially typical, and then drawing conclusions that make sense to them.

So if you are a person who does B, you probably don't and shouldn't have confidence that many other people won't find your views to be seriously wrong. So a productive intellectual community that wants to hear things you have to say, should be prepared to tolerate views that seem seriously wrong, perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)

None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it).

I think this is absolutely false. A kind of obvious example (to many, since as above, people do not unanimously agree on what is hateful) is that famous Nick Bostrom email about racial differences. Another example to many is the similar correspondence from Scott Alexander. Another example would be Zack Davis's writing on transgender identity. Another example would be Peter Singer's writing on disability. Another example would be this post arguing in favor of altruistic eugenics. These are all views that many people who are even very culturally close to the authors (e.g. modern Western intellectuals) would consider hateful and wrong.

Of course, having views that substantially different cultures would consider hateful and wrong is so commonplace that I hardly need to give any examples. Many of my extended family members consider the idea that abortion is permissible to be hateful and wrong. I consider their views, in addition to many of their other religious views, to be hateful and wrong. And I don't believe that either of us have come by our views particularly unreasonably.

What would be wrong is implicitly conveying that the person you're platforming is vetted/normal/harmless, when they actually seem dangerous.

Perhaps this is an important crux. If a big conference is bringing a bunch of people to give talks that the speakers are individually responsible for, I personally would infer ~zero vetting or endorsement, and I would judge each talk with an open mind. (I think I am correct to do this, because little vetting is in fact done; the large conferences I have been familiar with hunt for speakers based on who they think will draw crowds, e.g. celebrities and people with knowledge and power, not because they agree with the contents of talks.) So if this is culturally ambiguous it would seem fine to clarify.

titotal @ 2024-04-09T09:50 (+28)

I think this is just naive. People pay money and spend their precious time to go to these conferences. If you invite a racist, the effect will be twofold:

  1. More racists will come to your conference.
  2. More minorities, and people sympathetic to minorities, will stay home.

When this second group stays home (as is their right), they take their bold and unusual ideas with them. 

By inviting a racist, you are not selecting for "bold and unusual ideas". You are selecting for racism.

And yes, a similar dynamic will play out with many controversial ideas. Which is why you need to exit the meta level, and make deliberate choices about which ideas you want to keep, and which groups of people you are okay with driving away. This also comes with a responsibility to treat said topics with appropriate levels of care and consideration, something that, for example, Bostrom failed horribly at. 

Lukas_Gloor @ 2024-04-09T12:55 (+16)

I feel like you're trying to equivocate "wrong or heartless" (or "heartless-and-prejudiced," as I called it elsewhere) with "socially provocative" or "causes outrage to a subset of readers." 

That feels like misdirection.

I see two different issues here:

(1) Are some ideas that cause social backlash still valuable?

(2) Are some ideas shitty and worth condemning?

My answer is yes to both.

When someone expresses a view that belongs in (2), pointing at the existence of (1) isn't a good defense.

You may be saying that we should be humble and can't tell the difference, but I think we can. Moral relativism sucks.

FWIW, if I thought we couldn't tell the difference, then it wouldn't be obvious to me that we should go for "condemn pretty much nothing" as opposed to "condemn everything that causes controversy." Both of these seem equally extremely bad.

I see that you're not quite advocating for "condemn nothing" because you write this bit:

perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)

It depends on what you mean exactly, but I think this may not be going far enough. Some people don't cult-founder-style invent new beliefs with some ulterior motive (like making money), but the beliefs they "honestly" come to may still be hateful and prejudiced. Also, some people might be aware that there's a lot of misanthropy and wanting to feel superior in their thinking, but they might be manipulatively pretending to only be interested in "truth-seeking," especially when talking to impressionable members of the rationality community, where you get lots of social credit for signalling truth-seeking virtues.

To get to the heart of things, do you think Hanania's views are no worse than the examples you give? If so, I would expect people to say that he's not actually racist.

However, if they are worse, then I'd say let's drop the cultural relativism and condemn them.

It seems to me like there's no disagreement among people familiar with Hanania that his views were worse in the past. That's a red flag. Some people say he's changed his views. I'm not per se against giving people second chances, but it seems suspicious to me that someone who admits that they've had really shitty racist views in the past now continues to focus on issues where they – even according to other discussion participants here who defend him – still seem racist. Like, why isn't he trying to educate people on how not to fall victim to a hateful ideology, since he has personal experience with that? It's hard to come away with "ah, now the motivation is compassion and wanting the best for everyone, when previously it was something dark." (I'm not saying such changes of heart are impossible, but I don't view it as likely, given what other commenters are saying.)

Anyway, to comment on your examples:

Singer faced most of the heat for his views on preimplantation diagnostics and disability before EA became a movement. Still, I'd bet that, if EAs had been around back then, many EAs, and especially the ones I most admire and agree with, would've come to his defense.

I just skimmed that eugenics article you link to and it seems fine to me, or even good. Also, most of the pushback there from EA forum participants is about the strategy of still using the word "eugenics" instead of using a different word, so many people don't seem to disagree much with the substance of the article.

In Bostrom's case, I don't think anyone thinks that Bostrom's comments from long ago were a good thing, but there's a difference between them being awkward and tone-deaf, vs them being hateful or hate-inspired. (And it's more forgivable for people to be awkward and tone-deaf when they're young.)

Lastly, on Scott Alexander's example, whether intelligence differences are at least partly genetic is an empirical question, not a moral one. It might well be influenced by someone having hateful moral views, so it matters where a person's interest in that sort of issue is coming from. Does it come from a place of hate or wanting to seem superior, or does it come from a desire for truth-seeking and believing that knowing what's the case makes it easier to help? (And: Does the person make any actual efforts to help disadvantaged groups?) As Scott Alexander points out himself:

Somebody who believes that Mexicans are more criminal than white people might just be collecting crime stats, but we’re suspicious that they might use this to justify an irrational hatred toward Mexicans and desire to discriminate against them. So it’s potentially racist, regardless of whether you attribute it to genetics or culture.

So, all these examples (I think Zach Davis's writing is more "rationality community" than EA, and I'm not really familiar with it, so I won't comment on it) seem fine to me. 

When I said,

None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it).

This wasn't about, "Can we find some random people (who we otherwise wouldn't listen to when it comes to other topics) who will be outraged."

Instead, I meant that we can look at people's views at the object level and decide whether they're coming from a place of compassion for everyone and equal consideration of interests, or whether they're coming from a darker place.

And someone can have wrong views that aren't hateful:

Many of my extended family members consider the idea that abortion is permissible to be hateful and wrong. I consider their views, in addition to many of their other religious views, to be hateful and wrong.

I'm not sure if you're using "hateful" here as a weird synonym to "wrong," or whether your extended relatives have similarities to the Westboro Baptist Church.

Normally, I think of people who are for abortion bans as merely misguided (since they're often literally misguided about empirical questions, or sometimes they seem unable to move away from rigid-category thinking and to see the need for a different logic for non-typical examples/edge cases).

When I speak of "hateful," it's something more. I then mean that the ideology has an affinity for appealing to people's darker motivations. I think ideologies like that are properly dangerous, as we've seen historically. (And it applies to, e.g., Communism just as well as to racism.)

I agree with you that conferences do very little "vetting" (and find this okay), but I think the little vetting that they do and should do includes "don't bring in people who are mouthpieces for ideologies that appeal to people's dark instincts." (And also things like, "don't bring in people who are known to cause harm to others," whether that's through sexually predatory behavior or the tendency to form mini-cults around themselves.)

Jason @ 2024-04-10T02:21 (+11)

It seems to me like there's no disagreement by people familiar with Hanania that his views were worse in the past. That's a red flag. Some people say he's changed his views. I'm not per se against giving people second chances, but it seems suspicious to me that someone who admits that they've had really shitty racist views in the past now continues to focus on issues where they – even according to other discussion participants here who defend him – still seem racist.

Agreed. I think the 2008-10 postings under the Hoste pseudonym are highly relevant insofar as they show a sustained pattern of bigotry during that time. They are just not consistent in my mind with having fallen into error despite even minimally good-faith, truth-seeking behavior combined with major errors in judgment. Sample quotations in this article. Once you get to that point, you may get a second chance at some future time, but I'm not inclined to give you the benefit of the doubt on your second chance:

  • A person who published statements like the Hoste statements over a period of time, but has reformed, should be on notice that there was something in them that led them to the point of glorifying white nationalism and at least espousing white supremacist beliefs. (I don't care to read any more of the Hoste writings to be more precise than that.) An actually reformed white nationalist should know to be very cautious in what they write about Hispanic and African-American persons, because they should know that a deep prejudice once resided within them and might still be lurking beneath at some level.
  • The establishment of clear, sustained bigotry at time-1 would ordinarily justify an inference that any deeply problematic statements at later times are also the result of bigotry unless the evidence suggests otherwise. In contrast, it is relatively more likely that a deeply problematic statement by someone without a past history of bigotry could reflect unconscious (or at least semi-conscious?) racism, a severe but fairly isolated lack of judgment, or other serious issues that are nevertheless more forgivable than outright bigotry.
Yarrow Bouchard @ 2024-04-09T22:02 (+4)

I agree with you when you said that we can know evil ideas when we see them and rightly condemn them. We don't have to adopt some sort of generic welcomingness to all ideas, including extremist hate ideologies.

I disagree with you about some of the examples of alleged racism or prejudice or hateful views attributed to people like Nick Bostrom and Scott Alexander. I definitely wouldn't wave these examples away by saying they "seem fine to me." I think one thing you're trying to say is that these examples are very different from someone being overtly and egregiously white supremacist in the worst way like Richard Hanania, and I agree. But I wouldn't say these examples are "fine".

It is okay to criticize the views and behaviour of figures perceived to be influential in EA. I think that's healthy.

cata @ 2024-04-09T22:20 (+1)

Appreciate the reply. I don't have a well-informed opinion about Hanania in particular, and I really don't care to read enough of his writing to try to get one, so I think I said everything I can say about the topic (e.g. I can't really speak to whether Hanania's views are specifically worse than all the examples I think of when I think of EA views that people may find outrageous.)

Yarrow Bouchard @ 2024-04-10T01:00 (+5)

Wikipedia:

Under the pseudonym, Hanania argued for eugenics, including the forcible sterilization of everyone with an IQ below 90.[4] He also denounced "race-mixing" and said that white nationalism "is the only hope".[6] He opposed immigration to the United States, saying that "the IQ and genetic differences between them and native Europeans are real, and assimilation is impossible". He cited a speech by neo-Nazi William Luther Pierce, who had used Haiti as an example to argue that black people are incapable of governing themselves.[4] 

Yarrow Bouchard @ 2024-04-10T07:32 (+3)

See this comment for a more detailed survey of Hanania's white supremacy.

Yarrow Bouchard @ 2024-04-05T20:14 (+18)

When someone makes the accusation that transhumanism or effective altruism or longtermism or worries about low birth rates is a form of thinly veiled covert racism, I generally think they don’t really understand the topic and are tilting at windmills.

But then I see people who are indeed super racist talking about these topics and I can’t really say the critics are fully wrong. Particularly if communities like the EA Forum or the broader online EA community don’t vigorously repudiate the racism.

sapphire @ 2024-04-07T08:10 (+15)

I don't think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgement, but ideology is a different domain from most. I am honestly quite invested in something like 'moral progress'. It's a bit of a naive position to have to defend philosophically, but I think most altruists are too. At least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincerely changes their ways I'm happy to call them brother or sister. Have a party. Slaughter the uhhhhh fattest pumpkin and make vegan pumpkin pie. 

However, Mr. Hanania is still quite racist. He may or may not still be more of a Nazi than he lets on, but even his professed views are quite bad. I'm not sure what the policy should be on cooperating with people with opposing value sets, or on Hanania himself. I just wanted to say something in support of being truly welcoming to anyone who real-deal rejects their past harmful ideology. 

cata @ 2024-04-06T22:47 (+12)

I have been extremely unimpressed with Richard Hanania and I don't understand why people find his writing interesting. But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.

Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not about their general moral character. Especially not about the moral character of their beliefs, rather than their actions. And really especially not about the moral character of things they used to believe.

titotal @ 2024-04-07T07:40 (+22)

By not "shunning" (actual, serious) racists, you are indirectly "shunning" everybody they target. 

Imagine if there was a guy whose "unusual idea" was that some random guy called Ben was the source of all the evils in the world. Furthermore, this is somehow a widespread belief, and he has to deal with widespread harassment and death threats, despite doing literally nothing wrong. You invite, as a speaker at your conference, someone who previously said that Ben is a "demonic slut who needs to be sterilised". 

Do you think Ben is going to show up to your conference? 

And this can sometimes set into motion a "Nazi death spiral". You let a few Nazis into your community for "free speech" reasons. All the people uncomfortable with the presence of one or two Nazis leave, making the Nazis a larger percentage of the community, attracting more, which makes more people leave, until only Nazis and people who are comfortable with Nazis are left. This has literally happened on several occasions! 

Shunning people for saying vile things is entirely fine and necessary for the health of a community. This is called "having standards". 

Timothy Chan @ 2024-04-07T13:28 (+11)

I would add that it's shunning people for saying vile things with ill intent which seems necessary. This is what separates the case of Hanania from others. In most cases, punishing well-intentioned people is counterproductive. It drives them closer to those with ill intent, and suggests to well-intentioned bystanders that they need to choose to associate with the other sort of extremist to avoid being persecuted. I'm not an expert on history but from my limited knowledge a similar dynamic might have existed in Germany in the 1920s/1930s; people were forced to choose between the far-left and the far-right.

David T @ 2024-04-09T11:08 (+5)

The Germany argument works better the other way round: there were plenty of non-communist alternatives to Hitler (and the communists weren't capable of winning at the ballot box), but a lot of Germans who didn't share his race obsession thought he had some really good ideas worth listening to, and then many moderate rivals eventually concluded they were better off working with him.

I don't think it's "punishing" people not to give them keynote addresses and citations as allies. I doubt Leif Wenar is getting invitations to speak at EA events any time soon, not because he's an intolerable human being but simply because his core messaging is completely incompatible with what EA is trying to do...

titotal @ 2024-04-08T14:02 (+3)

I do not think the rise of Nazi Germany had much to do with social "shunning". More it was a case of the economy being in shambles, both the far-left and far-right wanting to overthrow the government, and them fighting physical battles in the street over it, until the right wing won enough of the populace over. I guess there was left-wing infighting between the communists and the social democrats, but that was less over "shunning" than over the murder of the other side's leaders.

I think intent should be a factor when thinking about whether to shun, but it should not be the only factor. If you somehow convinced me that a holocaust denier genuinely bore no ill intent, I still wouldn't want them in my community, because it would create a massively toxic atmosphere and hurt everybody else. I think it's good to reach out and try to help well-intentioned people see the errors of their ways, but it's not the responsibility of the EA movement to do so here.

Timothy Chan @ 2024-04-08T14:09 (+1)

Yes, a similar dynamic (relating to siding with another side to avoid persecution) might have existed in Germany in the 1920s/1930s (e.g. I imagine industrialists preferred Nazis to Communists). I agree it was not a major factor in the rise of Nazi Germany - which was one result of the political violence - and that there are differences.

Timothy Chan @ 2024-04-05T21:57 (+10)

Given his past behavior, I think it's more likely than not that you're right about him. Even someone more skeptical should acknowledge that the views he expressed in the past and the views he now expresses likely stem from the same malevolent attitudes.

But about far-left politics being 'not racist', I think it's fair to say that far-left politics discriminates in favor of or against individuals on the basis of race. It's usually not the kind of malevolent racial discrimination of the far-right, which absolutely needs to be condemned and eliminated by society. The far-left appears primarily motivated by benevolence towards racial groups perceived to be disadvantaged or that are in fact disadvantaged, but it is still racially discriminatory (and it sometimes turns into the hateful type of discrimination). If we want to treat individuals on their own merits, and not on the basis of race, that sort of discrimination must also be condemned.

Sean_o_h @ 2024-04-06T17:25 (+10)

Also, there is famously quite a lot of antisemitism on the left and far left. Sidestepping the academic debate on whether antisemitism is or is not technically a form of racism, it seems strange to me to claim that racism and adjacent prejudices only exist on the right.

(for avoidance of doubt, I agree with the OP that Hanania seems racist, and not a good ally for this community)

jacobjacob @ 2024-06-20T03:03 (+8)

(I haven't read the full comment here and don't want to express opinions about all its claims. But for people who saw my comments on the other post, I want to state for the record that based on what I've seen of Richard Hanania's writing online, I think Manifest next year would be better without him. It's not my choice, but if I organised it, I wouldn't invite him. I don't think of him as a "friend of EA".)

yanni kyriacos @ 2024-04-10T03:26 (+8)

This is such a common-sense take that it worries me it needs writing. I assume this is happening over on Twitter (where I don't have an account)? The average non-EA would consider this take to be extremely obvious, which is partly why I think we should be concerned about the composition of the movement in general.

Thomas Kwa @ 2024-06-20T06:22 (+7)

Given the Guardian piece, inviting Hanania to Manifest seems like an unforced error on the part of Manifold and possibly Lightcone. This does not change because the article was a hit piece with many inaccuracies. I might have more to say later.

Jason @ 2024-04-05T17:32 (+4)

To clarify, I think when you say "sterilization of everyone under 90" you mean that he favored the "forcible sterilization of everyone with an IQ below 90" (quoting Wikipedia here)?

David Mathers @ 2024-04-05T17:53 (+1)

Yeah sorry!

David Mathers @ 2024-03-31T12:03 (+17)

I feel like people haven't taken the "are mosquito nets bad because of overfishing" question seriously enough, and that it might be time to stop funding mosquito nets because of it. (Or at least until we can find an org that only gives them out in places with very little opportunity for or reliance on fishing.) I think people just trust GiveWell on this, but I think that is a mistake: I can't find any attempt by them to actually do even a back of the envelope calculation of the scale of the harm through things like increased food insecurity (or indeed harm to fish, I guess). And also, it'd be so mega embarrassing for them if nets were net negative that I don't really trust them to evaluate this fairly. (And actually that probably goes for any EA org, or to some extent public health people as a whole.) The last time this was discussed on the forum:

 1) The scale seemed quite concerning (https://forum.effectivealtruism.org/posts/enH4qj5NzKakt5oyH/is-mosquito-net-fishing-really-net-positive)

2) No one seemed to have a quick disproof that it made nets net negative. (Plus we also care if it just pushes their net effect below GiveDirectly or other options.)

3) There was surprisingly little participation in the discussion given how important this is. (Compared to how much time we all spent on the Nonlinear scandal!)

I've seen people (e.g. Scott Alexander here: https://www.reddit.com/r/slatestarcodex/comments/1brg5t3/the_deaths_of_effective_altruism/) claim that this can't be an issue, because AMF checks and most nets are used for their intended purpose in the first 3 years after they are given out. But I think it's just an error to think that gets rid of the problem, because nets can be used for fishing after they are used to protect from malaria. So the rate of misuse is not really capped by the rate of proper usage. 

Considering how much of what EA has done so far has been bednets, I'm having a slight "are we the baddies" crisis about this.
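
To illustrate the kind of back-of-the-envelope calculation I have in mind, here is a minimal sketch in Python. Every number in it is a made-up placeholder (not a GiveWell, AMF, or study estimate); the point is only that even a crude model forces you to state which parameter the conclusion hinges on.

```python
# Back-of-the-envelope sketch: deaths averted by a net distribution vs.
# hypothetical death-equivalent harm from nets diverted to fishing.
# ALL NUMBERS ARE ILLUSTRATIVE PLACEHOLDERS, not real estimates.

nets_distributed = 1_000_000
deaths_averted_per_net = 1 / 500     # placeholder: ~1 death averted per 500 nets
share_used_for_fishing = 0.3         # placeholder: generous misuse share
harm_per_fishing_net = 1 / 5_000     # placeholder: death-equivalents (via food
                                     # insecurity) per net used for fishing

benefit = nets_distributed * deaths_averted_per_net
harm = nets_distributed * share_used_for_fishing * harm_per_fishing_net

print(f"Deaths averted (placeholder): {benefit:,.0f}")
print(f"Death-equivalent harm (placeholder): {harm:,.0f}")
print(f"Net effect (placeholder): {benefit - harm:,.0f}")
# With these placeholders the program is strongly net positive, but the
# conclusion flips if harm_per_fishing_net is roughly 30x larger -- which is
# exactly the parameter nobody seems to have good data on.
```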

David T @ 2024-03-31T20:36 (+26)

"Fishing," said the old man "is at least as complicated as any other industry". 

I was sitting in a meeting of representatives of the other end of the fishing industry: fleets of North Sea trawlers turning over >£1 million each per year, fishing in probably the world's most studied at-risk fishing ecosystem. They were fuming because in the view of the scientists studying North Sea fish, cod stocks had reached dangerously low levels and their quotas needed reducing, but in the view of the fishermen actually catching the fish, cod stocks off the east coast of England were at such high levels that they hit their month's cod quota in a day whilst actively trying to avoid catching cod. (I have no reason to believe that either view was uninformed or deceptive.) "What they're probably not factoring in," he closed on, "is that cod populations in different regions are cyclical."

The point of that waffly anecdote is that factoring in the effects of mosquito nets on local fish ecosystems would actually be really hard, because an RCT in one area over one year really isn't going to tell you much about the ecosystems in other areas, or in other years. Even more so in isolated African watercourses. (I think we can largely rule out the hypothesis that amateurs with free 2x2m malaria nets instead of proper nets and boats are depleting the seas more than foreign factory ships with nets hundreds of metres long...)

Like you, I don't find the idea that ~80% of distributed nets are appropriately used (at least for the first year) to have settled the debate, but quite a few things have to be the case for AMF's distribution programs to be the major factor in ecosystem harm caused by overfishing:

  • the level of fishing with relatively small mosquito nets is sufficient to destroy fish stocks in the immediate area 
  • other areas are unable to replenish the fish stocks depleted in the local area
  • in the absence of a continuing supply of free mosquito nets, people wouldn't nevertheless use mosquito nets - which are also available for sale at a relatively low cost - for fishing
  • the most likely alternatives to mosquito nets for local fishermen - potentially including other fabrics which also don't allow smaller fish to pass through - don't have the same impact on the ecosystem

for the program to be net negative for humans, you've also got to assume

  • avoidable future falls in fishing stocks actually kill people. Or at least that the net economic loss from them is so great it outweighs the lives saved and malaria cases averted (and short term positive impact on availability of nutritious fish!). For the overall programme to be net negative we're looking at ~24k deaths per year...

Plus of course many of the nets are distributed in regions where fishing wasn't viable in the first place...

David Mathers @ 2024-04-01T13:45 (+5)

There is a tension between different EA ideas here, in my view. Early on, I recall, the emphasis was on how you need charity evaluators like GiveWell, and RCTs by randomista development economists, because you can't predict what interventions will work well, or even do more good than harm, on the basis of common-sense intuition. (I remember Will giving examples like "actually, having prisoners talk to children about how horrible being in prison is seems to make the children more likely to grow up to commit crimes".) But it turns out that when assessing interventions, there are always points where there just isn't high-quality data on whether some particular factor importantly reduces (or increases) the intervention's effectiveness. So at that point, we have to rely on common sense, mildly disciplined by back-of-the-envelope calculations, possibly supplemented by poor-quality data if we're lucky. And then it feels unfair when this is criticized by outsiders (like the recent polemical anti-EA piece in Wired), because, well, what else can you possibly do if high-quality studies aren't available and it's not feasible to do them yourself? But I guess from the outsider's perspective, it's easy to see why this looks like hypocrisy: they criticized other people for relying on their general hunches about how things work, but now the EAs are doing it themselves! I'm not really sure what the general solution (if any) to this is. But it does feel to me like there are a vast number of choice points in GiveWell's analyses where they are mostly guessing, and if those guesses are all biased in some direction rather than uncorrelated, assessments of interventions will be way off. 

David Mathers @ 2024-04-01T10:28 (+3)

Thanks that is helpful. It's frustrating how hard it is to be sure about this. 

Seth Ariel Green @ 2024-04-02T11:27 (+5)

There have been a few "EA" responses to this issue, but TBF they can be a bit hard to find:

https://www.cold-takes.com/minimal-trust-investigations/

As an aside, I'm pretty underwhelmed by concerns about using LLINs as fishing nets. These concerns are very media-worthy, but I'm more worried about things like "People just never bother to hang up their LLIN," which I'd guess is a more common issue. The LLIN usage data we use would (if accurate) account for both.

https://blog.givewell.org/2015/02/05/putting-the-problem-of-bed-nets-used-for-fishing-in-perspective/

Besides the harm caused by some people contracting malaria because they don’t sleep under their nets, which we already account for in our cost-effectiveness analysis, the article warns that fishing with insecticide treated nets may deplete fish stocks. In making this case, the article cites only one study, which reports that about 90% of households in villages along Lake Tanganyika used bed nets to fish. It doesn’t cite any studies examining the connection between bed nets and depleted fish stocks more directly. The article states that “Recent hydroacoustic surveys show that Zambia’s fish populations are dwindling” and “recent surveys show that Madagascar’s industrial shrimp catch plummeted to 3,143 tons in 2010 from 8,652 tons in 2002,” but declines in fish populations and shrimp catch may have causes other than mosquito net-fishing.

Rebecca @ 2024-04-03T10:22 (+9)

The Wired article says that there's been a bunch more research in recent years about the effects of bed nets on fish stocks, so I would consider the GiveWell response out of date.

David Mathers @ 2024-04-02T13:07 (+2)

I don't actually find either all THAT reassuring. The GW blog post just says most nets are used for their intended purpose, but 30% being used otherwise is still a lot, not to mention they can be used for their intended purpose and then later used to fish. The Cold Takes blog post just cites the same data about most nets being used for their intended purpose. 

David Mathers @ 2024-04-02T12:28 (+2)

I had seen the second of these at some point I think, but not the first. 

wes R @ 2024-03-31T15:10 (+1)

  1. You do bring up an interesting point that this should be factored into where nets are distributed.
  2. (This is a very draft-stage idea) Maybe if the nets weren't that waterproof, this issue would be solved? (Cons: flooding, rain, potential pollution if it dissipates in the water, and less durability.)
  3. Maybe mention this to someone at GiveWell? idk tho

David Mathers🔸 @ 2024-11-21T11:52 (+7)

I'm working on a "who has funded what in AI safety" doc. Surprisingly, when I looked up Lightspeed Grants online (https://lightspeedgrants.org/) I couldn't find any list of what they funded. Does anyone know where I could find such a list? 

harfe @ 2024-11-21T14:54 (+11)

Some (or all?) Lightspeed grants are part of SFF: https://survivalandflourishing.fund/sff-2023-h2-recommendations

Habryka @ 2024-11-21T16:38 (+9)

Yep, the Lightspeed Grants table is part of the SFF table! I also think we should have published our own table, but it seemed lower priority after it was included in the SFF one. 

We might also release a Lightspeed Grants retrospective soon.

Benevolent_Rain @ 2024-11-25T09:43 (+2)

Thanks for doing that, and I look forward to hopefully seeing the findings published. It would be valuable, at least to me, for the doc to show clearly, if you have time for that, whether there might be biases in funding - it might be as important what is not funded as what is funded. For example, if some collection of smaller donors put 40% of funding towards considering slowing down AI, while a larger donor spends less than 2%, that might be interesting, at least as a pointer towards investigating such disparities in more detail (I noticed that Pause AI was a bit higher up in the donation election results, for example).

David Mathers🔸 @ 2024-11-26T05:58 (+2)

Firstly, it's not really me you should be thanking; it's not my project, and I am just helping with it a bit. 

Secondly, it's just another version of this; don't expect any info about funding beyond an update to the funding info in this: https://www.alignmentforum.org/posts/zaaGsFBeDTpCsYHef/shallow-review-of-live-agendas-in-alignment-and-safety

David Mathers @ 2024-06-15T10:45 (+4)

Some very harsh criticism of Leopold Aschenbrenner's recent AGI forecasts in the recent comments on this Metaculus question. People who are following this stuff more closely than I am will be able to say whether or not they are reasonable: 

Linch @ 2024-06-16T00:35 (+5)

I didn't read all the comments, but Order's are obvious nonsense, of the "(a+b^n)/n = x, therefore God exists" tier. E.g. take this comment:

But something like 5 OOMs seems very much in the realm of possibilities; again, that would just require another decade of trend algorithmic efficiencies (not even counting algorithmic gains from unhobbling).

Here he claims that 100,000x improvement is possible in LLM algorithmic efficiency, given that 10x was possible in a year. This seems unmoored from reality - algorithms cannot infinitely improve, you can derive a mathematical upper bound. You provably cannot get better than Ω(n log n) comparisons for sorting a randomly distributed list. Perhaps he thinks new mathematics or physics will also be discovered before 2027?

This is obviously invalid. The existence of a theoretical complexity bound (for which, incidentally, Order doesn't have numbers) doesn't mean we are anywhere near it, numerically. Those aren't even at the same level of abstraction! Furthermore, we have clear theoretical proofs for how fast sorting can get, but AFAIK no such theoretical limits for learning. "Algorithms cannot infinitely improve" is irrelevant here; it's the slightly more mathy way to say a deepity like "you can't have infinite growth on a finite planet," without actual relevant semantic meaning[1]

Numerical improvements happen all the time, sometimes by OOMs. No "new mathematics or physics" required.
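For what it's worth, the arithmetic behind the quoted 5-OOM claim is just a compounded yearly trend; here is a minimal sketch, assuming the roughly half an OOM per year of algorithmic efficiency gains that the quoted passage takes as the trend:

```python
# Minimal sketch of the compounding arithmetic in the quoted claim.
# The 0.5 OOM/year trend figure is an assumption taken from the quoted
# passage's framing, not an independent estimate.
yearly_oom = 0.5   # assumed: OOMs of algorithmic efficiency gained per year
years = 10         # "another decade of trend algorithmic efficiencies"

total_oom = yearly_oom * years
print(f"10^{total_oom:.0f} = {10 ** total_oom:,.0f}x")  # 10^5 = 100,000x
```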

Frankly, as a former active user of Metaculus, I feel pretty insulted by his comment. Does he really think no one on Metaculus took CS 101? 

  1. ^

    It's probably true that every apparently "exponential" curve becomes a sigmoid eventually, but knowing this fact doesn't let you time the transition. You need actual object-level arguments and understanding, and even then it's very, very hard (as people arguing against Moore's Law or for "you can't have infinite growth on a finite planet" found out).
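To illustrate the footnote's point, here is a minimal sketch (all parameters assumed purely for illustration) showing that an exponential and a logistic ("sigmoid") curve with the same early growth rate are nearly indistinguishable until well before the inflection point:

```python
import math

# Illustrative only: an exponential vs. a logistic ("sigmoid") curve
# with the same early growth rate. All parameters are assumed for
# illustration. Before the inflection point the two are nearly
# identical, which is why curve shape alone can't time the transition.
def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, cap=1e6):
    # Logistic starting at 1, with early growth rate r and ceiling `cap`.
    return cap / (1 + (cap - 1) * math.exp(-r * t))

for t in range(0, 21, 5):
    e, s = exponential(t), logistic(t)
    print(f"t={t:>2}: exp={e:>10.1f} sigmoid={s:>10.1f} ratio={e / s:.2f}")
# The two curves differ by <2% all the way to t=20, even though this
# logistic flattens out completely soon after (inflection near t≈28).
```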

Linch @ 2024-06-16T01:37 (+6)

To be clear, I also have high error bars on whether traversing 5 OOMs of algorithmic efficiency in the next five years is possible, but that's because of a) high error bars on diminishing returns to algorithmic gains, and b) a tentative model that most algorithmic gains in the past were driven by compute gains, rather than exogenous to them. Algorithmic improvements in ML seem much more driven by the "f-ck around and find out" paradigm than by deep theoretical or conceptual breakthroughs; if we model experimentation gains as a function of quality-adjusted researchers multiplied by compute multiplied by time, it's obvious that the compute term is the one that's growing the fastest (and thus the thing that drives the most algorithmic progress).
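A toy version of that model, with every growth rate made up purely for illustration (none of these numbers are estimates from anyone):

```python
# Toy version of the model: experimentation gains as a product of
# quality-adjusted researchers, compute, and time. Every growth rate
# below is assumed for illustration only.
researcher_growth = 1.2  # assumed ~20%/yr growth in quality-adjusted researchers
compute_growth = 4.0     # assumed ~4x/yr growth in effective training compute
years = 5

researchers = researcher_growth ** years  # ~2.5x over five years
compute = compute_growth ** years         # ~1024x over five years
print(f"researchers: {researchers:.1f}x, compute: {compute:.0f}x")
# Under these made-up numbers the compute term dominates, which is the
# sense in which compute growth would "drive" measured algorithmic progress.
```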

Order @ 2024-06-19T05:44 (+5)

In the future I would recommend reading the full comment. Admitting your own lack of knowledge (not having read the comments) and then jumping to "obvious nonsense" and "insulted" and "Does he really think no one on Metaculus took CS 101?" is not an amazing first impression of EA. You selected the one snippet where I was discussing a complicated topic (ease of algorithmic improvements) instead of low-hanging and obviously wrong topics like Aschenbrenner seemingly being unable to do basic math (3^3) using his own estimates for compute improvements. I consider this a large misrepresentation of my argument, and I hope that you respond to the comment below in good faith.

Anyway, I am crossposting my response from Metaculus, since I responded there at length:

...there is a cavernous gap between:

- we don't know the lower bound computational complexity

versus

- 100,000x improvement is very much in the realm of possibilities, and
- if you extend this trendline on a log plot, it will happen by 2027, and we should take this seriously (aka there is nothing that makes [the usual fraught issues with extending trendlines](https://xkcd.com/605/) appear here)

I find myself in the former camp. If you question whether a sigmoid curve is likely, there is no logical basis to believe that a 100,000x improvement in LLM algorithm output speed at constant compute (Aschenbrenner's claim) is likely either.

Linch's evidence to suggest that 100,000x is likely is:

- Moore's Law happened [which was a hardware miniaturization problem, not strictly an algorithms problem, so it doesn't directly map onto this. But it is evidence that humans are capable of log-plot improvement sometimes]

- "You can't have infinite growth on a finite planet" is false [it is actually true, but we are not utilizing Earth anywhere near fully]

- "Numerical improvements happen all the time, sometimes by OOMs" [without cited evidence]

None of these directly show that a 100,000x improvement in compute or speed is forthcoming for LLMs specifically. They are attempts to map other domains onto LLMs without a clear correspondence. Most domains don't let you do trendline extrapolation like this. But I will entertain it, and provide a source to discuss (since they did not): [How Fast Do Algorithms Improve? (2021)](https://ieeexplore.ieee.org/document/9540991)

Some key takeaways:

1. Some algorithms do exhibit better-than-Moore's-Law improvements when compared to brute force, although the likelihood of this is only ~14% over the course of the entire examined time window (80 years). I would also add, from looking at the plots, that many of these historical improvements happened when computer science was still relatively young (1970s-1990s), and it is not obvious that this is so common nowadays with more sophisticated research in computer science. The actual yearly probability is super low (<1%), as you can see in the state diagram at the bottom of the charts in Figure 1 (see the back-of-envelope sketch after this list): https://ieeexplore.ieee.org/document/9540991/figures#figures

2. Moore's Law has slowed down, at least for CPUs. Although there is still further room in GPUs / parallel compute, the slowdown in CPUs is not a good portent for the multi-decade outlook of continued GPU scaling.
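For concreteness, here is the back-of-envelope arithmetic behind the per-year figure in point 1, under the simplifying assumption (mine, not the paper's) that the 14%-over-80-years figure reflects a constant yearly hazard rate:

```python
# Back-of-envelope only: assumes (simplistically) a constant yearly
# probability p of a better-than-Moore's-Law jump, such that the
# cumulative probability over the paper's 80-year window is ~14%:
# 1 - (1 - p)^80 = 0.14.
p_total, years = 0.14, 80
p_yearly = 1 - (1 - p_total) ** (1 / years)
print(f"implied yearly probability: {p_yearly:.4f}")  # ~0.0019, i.e. <0.2%/yr
```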

Some other things I would add:

1. LLMs already rest on decades of algorithmic advancements, for example matrix multiplication. I would be very surprised if any algorithmic advancement could bring matrix multiplication down to the order of O(n^2) with a reasonable constant - it is a deeply researched field, and gains in it are harder to reach every year. We in theory have O(n^2.371552), but the constant in front (hidden in the big O notation) is infeasibly large (see the sketch after this list for just how large that makes the crossover point). Overall this one seems to have hit diminishing returns since 1990:
![](https://upload.wikimedia.org/wikipedia/commons/5/5b/MatrixMultComplexity_svg.svg)

2. There are currently trillions of dollars per year riding on LLMs, and the current algorithmic improvements are the best we can muster. (Most of the impressive results recently have been compute-driven, not algorithm-driven.) This implies that the problem might actually be very difficult rather than easy.
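To make the point about constants in item 1 concrete, here is an illustrative calculation; the constant below is made up purely to show the effect, since the true constants for the galactic algorithms aren't published as a single number:

```python
# Illustrative only: the constant C below is assumed to show the effect
# of a huge hidden constant; it is not the published constant for any
# actual fast matrix multiplication algorithm.
C = 1e20             # assumed hidden constant for the fast algorithm
naive_exp = 3.0      # schoolbook matrix multiplication: ~n^3 operations
fast_exp = 2.371552  # best known asymptotic exponent

# The fast algorithm wins only when C * n**fast_exp < n**naive_exp,
# i.e. when n > C ** (1 / (naive_exp - fast_exp)).
crossover_n = C ** (1 / (naive_exp - fast_exp))
print(f"faster only for n > {crossover_n:.1e}")  # ~6.6e31: far beyond any practical matrix size
```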

These two points nudge me in the direction that improving LLM algorithms might actually be harder than improving other algorithms, and therefore lead me to think there is a much-less-than-1% chance of a big-O improvement happening each year. Sure, a priori, ML model improvements have seemed ad hoc to an outside viewer, but the fact that we still haven't done better than ad hoc improvements also implies something about the problem's difficulty.

Linch @ 2024-06-19T13:01 (+8)

I appreciate that you replied! I'm sorry if I was rude. I think you're not engaging with what I actually said in my comment, which is pretty ironic. :) 

(E.g., there are multiple misreadings. I've never interacted with you before, so I don't really know if they're intentional.)

Linch @ 2024-06-21T04:27 (+2)

(I replied more substantively on Metaculus)

JWS @ 2024-06-15T11:32 (+3)

The Metaculus timeline is already highly unreasonable given the resolution criteria,[1] and even these people think Aschenbrenner is unmoored from reality.

  1. ^

    Remind me to write this up soon

David Mathers @ 2024-06-15T13:23 (+3)

There's no reason to assume an individual Metaculus commentator agrees with the Metaculus timeline, so I don't think that's very fair.


I actually think the two Metaculus questions are just bad questions. The detailed resolution criteria don't necessarily match what we intuitively think of as AGI or transformative AI, or obviously capture anything that important, and it is just unclear whether people are forecasting on the actual resolution criteria or on their own idea of what "AGI" is. 

All the tasks in both AGI questions are quite short, so it's easy to imagine an AI beating all of them and yet not being able to replace most human knowledge workers, because it can't handle long-running tasks. It's also just not clear how performance on benchmark questions and the Turing test translates into competence with even short-term tasks in the real world. So even if you think AGI in the sense of "AI that can automate all knowledge work" (let alone all work) is far away, it might make sense to think we are only a few years from a system that can resolve these questions 'yes'. 

On the other hand, resolving the questions 'yes' could conceivably lag the invention of some very powerful and significant systems, perhaps including some that some reasonable definition would count as AGI. 

As someone points out in the comments on one of the questions: right now, any mainstream LLM will fail the Turing test, however smart, because if you ask "how do I make chemical weapons" it'll read you a stiff lecture about why it can't do that, as it would violate its principles. In theory, that could remain true even if we reach AGI. (The questions only resolve 'yes' if a system that can pass the Turing test is actually constructed; it's not enough for this to be easy to do if OpenAI or whoever want to.) And the stronger of the two questions requires that a system can do a complex manual task. Fair enough, some reasonable definitions of "AGI" do require machines that can match humans at every manual dexterity-based task. But a system that could automate all knowledge work, yet not handle piloting a robot body, would still be quite transformative. 

JWS @ 2024-06-16T15:04 (+3)

> Which particular resolution criteria do you think it's unreasonable to believe will be met by 2027/2032 (depending on whether it's the weak AGI question or the strong one)?

Two of the four in particular stand out. First, the Turing test one, exactly for the reason you mention - asking the model to violate its terms of service is surely an easy way to win. That's part of the resolution criteria, so unless Metaculus users think that'll be solved in 3 years,[1] the estimates should be higher. Second, the SAT-passing criterion requires "having less than ten SAT exams as part of the training data", which is very unlikely in current frontier models, and labs probably aren't keen to share what exactly they have trained on.

> it is just unclear whether people are forecasting on the actual resolution criteria or on their own idea of what "AGI" is. 

> There's no reason to assume an individual Metaculus commentator agrees with the Metaculus timeline, so I don't think that's very fair.

I don't know if it is unfair. This is Metaculus! Premier forecasting website! These people should be reading the resolution criteria and judging their predictions according to them. Just going off personal vibes about how much they 'feel the AGI' seems like a sign of epistemic rot to me. I know not every Metaculus user is doing this, but the timeline is shaped by the aggregate - 2027/2032 are very short timelines, and those are the median community predictions. This is my main issue with the Metaculus timelines atm.

> I actually think the two Metaculus questions are just bad questions. 

I mean, I do agree with you in the sense that they don't fully match AGI, but that's partly because 'AGI' covers a bunch of different ideas and concepts. It might well be possible for a system to satisfy these conditions but not replace knowledge workers. Perhaps a new question focusing on automation and employment might be better, but that also has its issues with operationalisation.
 

  1. ^

    On top of everything else needed to successfully pass the imitation game

David Mathers @ 2024-06-16T16:52 (+2)

What I meant to say was unfair was basing "even Metaculus users, who have short timelines, think Aschenbrenner's stuff is bad" on the reaction to Aschenbrenner of only one or two people.

David Mathers @ 2024-06-15T13:24 (+2)

Which particular resolution criteria do you think it's unreasonable to believe will be met by 2027/2032 (depending on whether it's the weak AGI question or the strong one)?