Evan_Gaensbauer's Quick takes

By Evan_Gaensbauer @ 2022-03-05T09:50 (+6)

Evan_Gaensbauer @ 2023-02-06T04:18 (+25)

I know multiple victims/survivors/whatever who were interviewed by TIME, not only one of the named individuals but some of the anonymous interviewees as well.

The first time I cried because of everything that has happened in EA during the last few months was when I learned for the fifth or sixth time that some of my closer friends in EA lost everything because of the FTX collapse.

The second time I cried about it all was today. 

Evan_Gaensbauer @ 2023-01-11T02:44 (+11)

After the collapse of FTX, any predictions that the effective altruism movement will die with it are greatly exaggerated. Effective altruism will change in ways that maybe none of us can even predict, but it won't die.

There are countless haters of so many movements on the internet who will themselves into believing that what happens to a movement when it falters is what they wish would happen, i.e., that the movement will die. Sensationalist polemicists and internet trolls don't understand history or the world well enough to know what they're talking about when they gleefully celebrate the end of whatever cultural forces they hate.

This isn't just true for effective altruism. It's true for every such movement of which anyone takes such a shallow view. If movements like socialism, communism, and fascism can make a worldwide comeback in the 2010s and 2020s in spite of their histories, effective altruism isn't going to just up and die, not by a long shot.

niplav @ 2023-01-11T15:21 (+3)

Small movements (like species with few members, I think[1]) die more quickly, as do younger movements.

Also EA seems to have a quite specific type of person it appeals to & a stronger dependence on current intellectual strands (it did not develop separately in China & the Anglosphere and continental Europe), which seems narrower than socialism/communism/reactionary thought.

I think it's good to worry about EA disappearing or failing in other ways (becoming a cargo-cult shell of its original form, mixing up instrumental and terminal goals, stagnating & disappearing like general semantics &c).


  1. I've tried to find a paper investigating this question, but haven't been successful—anyone got a link? ↩︎

Evan_Gaensbauer @ 2023-01-15T23:34 (+10)

Events of the last few months have shown that in the last few years many whistleblowers weren't taken seriously enough. If they had been, a lot of problems in EA that have come to pass might have been avoided or prevented entirely. They at least could have been resolved much sooner and before the damage became so great.

While more effective altruists have come to recognize this in the last year, one case I think deserves to be revisited, but hasn't been, is this review of problems in EA and related research communities, originally written by Simon Knutsson in 2019 based on his own experiences working in the field.

https://www.simonknutsson.com/problems-in-effective-altruism-and-existential-risk-and-what-to-do-about-them/

ChanaMessinger @ 2023-01-23T11:32 (+9)

I'd be curious about more concretization on this, if possible. I don't think my current model is that "whistleblowers weren't taken seriously enough" is the reason a bunch of bad stuff happened here, but there's something that rhymes with that that I maybe do agree with.

Wil Perkins @ 2023-01-17T14:14 (+8)

Why are you posting these as shortforms instead of as a top-level post?

Evan_Gaensbauer @ 2023-01-18T16:56 (+2)

I wrote my other reply yesterday from my smartphone, and it was hard to tell which of my shortform posts you were replying to. I thought it was a different one, which is why my comment from yesterday may not have seemed so relevant. I'm sorry for any confusion.

Anyway, I'm posting shortforms like this because they're thoughts on my mind that I want at least some effective altruists to notice, though I'm not prepared right now to contend with the feedback and potential controversy that making these top-level posts would provoke.

Evan_Gaensbauer @ 2023-01-17T19:53 (+2)

It's long enough to be a top-level post, though around the days these thoughts were on my mind I didn't have time to flesh it out more, with links or more details, or to address what I'm sure would be a lot of good questions I'd receive. I wouldn't want to post it before it could be of better quality.

I've started using my shortform to draft stubs or snippets of top-level posts. I'd appreciate any comments or feedback on them, whether encouraging me to turn them into top-level posts or, alternatively, discouraging me from doing so if someone thinks that's worthwhile.

Evan_Gaensbauer @ 2024-10-12T00:33 (+9)

This is a section of an EAF post I've begun drafting about the community and culture of EA in the Bay Area and its impact on the rest of EA worldwide. That post isn't intended to be only about longtermism as it relates to EA as an overlapping philosophy/movement often originally attributed to the Bay Area. I still feel my viewpoint here, in its rough form, is worth sharing as a quick take.

@JWS 🔸 self-describes as "anti-Bay Area EA." I get where anyone is coming from with that, though the issue is that, pro- or anti-, this subculture in EA isn't limited to the Bay Area. It's bigger than that, and pointing to the Bay Area as the source of EA's greatness or its setbacks is, to me, a wrongheaded sort of provincialism. To clarify, "Bay Area EA" culture specifically entails the stereotypes, both accurate and misguided, of the rationality community and longtermism, as well as the trappings of startup culture and other overlapping subcultures in Silicon Valley.

Prior even to the advent of EA, a sort of 'proto-longtermism' was collaboratively conceived on online forums like LessWrong in the 2000s. Back then, like now, a plurality of the userbase of those forums might have lived in California. Yet it wasn't only rationalists in the Bay Area who took up the mantle to consecrate those futurist memeplexes into what longtermism is today. It was academic research institutes and think tanks in England. It wasn't @EliezerYudkowsky, nor anyone else at the Machine Intelligence Research Institute or the Center for Applied Rationality, who coined the term 'longtermism' and wrote entire books about it. That was @Toby_Ord and @William_MacAskill. It wasn't anyone in the Bay Area who spent a decade trying to politically and academically legitimize longtermism as a prestigious intellectual movement in Europe. That was the Future of Humanity Institute (FHI), as spearheaded by the likes of Nick Bostrom and @Anders Sandberg, and the Global Priorities Institute (GPI).

In short, EA is an Anglo-American movement and philosophy, if it's going to be made about culture like that (notwithstanding other influences introduced from Germany via Schopenhauer). It takes two to tango. This is why I think calling oneself "pro-" or "anti-" Bay Area EA is pointless.

titotal @ 2024-10-15T12:02 (+1)

Maybe it's worth pointing out that Bostrom, Sandberg, and Yudkowsky were all in the same extropian listserv together (the one from the infamous racist email), and have been collaborating with each other for decades. So maybe it's not precisely a geographic distinction, but there is a very tiny cultural one.

Evan_Gaensbauer @ 2023-01-16T01:03 (+9)

I shed any formal conflict of interest I ever had in effective altruism almost five years ago. I've been a local and online group organizer in EA for a decade, so I've got lots of personal friends who work at or with support from EA-affiliated organizations. Those might be called more informal conflicts of interest, though I don't know how much they count as conflicts of interest at all.

I haven't had any greater social conflicts of interest, like being in a romantic relationship with anyone else in EA, for just as long.

I've never signed a non-disclosure agreement for any EA-affiliated organization I might have had a role at or contracted with for any period of time. Most of what I'm referring to here is nothing that should worry anyone who is aware of the specific details of my personal history in effective altruism. My having dated someone for a few months who wasn't a public figure or a staffer at any EA-affiliated organization, or my having been a board member in name only for a few months to help a budding EA organization get off the ground (one that has now been defunct for years anyway), are of almost no relevance or significance to anything happening in EA in 2023.

In 2018, I was a recipient of an Effective Altruism Grant, one of the kinds of alternative funding programs administered by the Centre for Effective Altruism (CEA), like the current Effective Altruism Funds or the Community Building Grants program, though the EA Grants program was discontinued a few years ago.

I was also contracted for a couple months in 2018 with the organization then known as the Effective Altruism Foundation, as a part-time researcher for one of the EA Foundation's projects, the Foundational Research Institute (FRI), which has for a few years now been succeeded by a newer effort launched by many of the same effective altruists who operated FRI, called the Center for Long-Term Risk (CLTR).

Most of what I intend to focus on posting about on this forum in the coming months won't be at all about CLTR as it exists today or its background, though there will be some. Much of what I intend to write will technically entail referencing some of the CEA's various activities, past and present, though that's almost impossible to avoid when trying to address the dynamics of the effective altruism community as a whole anyway. Most of what I intend to write that will touch upon the CEA will have nothing to do with my past conflict of interest of having been a grant recipient in 2018.

Much of the above is technically me doing due diligence, though that's not my reason for writing this post.

I'm writing this post because everyone else should understand that I indeed have zero conflicts of interest, that I've never signed a non-disclosure agreement, and that for years and still into the present, I've had no active desire to work toward a job or career within most facets of EA. (Note, Jan. 17: Some of that could change, but I don't expect any of it to change for at least the next year.)

Evan_Gaensbauer @ 2023-01-24T08:57 (+8)

People complained about how the Centre for Effective Altruism (CEA) had said it was trying not to be the "government of effective altruism," yet kept acting exactly like the government of EA for years and years.

Yet that's wrong. The CEA was more like the police force of effective altruism. The de facto government of effective altruism was for the longest time, maybe from 2014-2020, Good Ventures/Open Philanthropy. All of that changed with the rise of FTX. All of that changed again with the fall of FTX. 

I've put everything above in the past tense because that was the state of things before 2022. There's no such thing as a "government of effective altruism" anymore, regardless of whether anyone wants one or not. Neither the CEA, Open Philanthropy, nor Good Ventures could fill that role.

 We can't go back. We can only go forward. There is no backup plan anyone in effective altruism had waiting in the wings to roll out in case of a movement-wide leadership crisis. It's just us. It's just you. It's just me. It's just left to everyone who is still sticking around in this movement together. We only have each other.


Evan_Gaensbauer @ 2023-08-29T06:34 (+6)

I just posted on the Facebook wall of another effective altruist:

 Hey, I really appreciate everything you do for the effective altruism community! Happy birthday! 

We would all greatly benefit from expressing our gratitude like this to each other more often.

Evan_Gaensbauer @ 2023-05-21T03:46 (+5)

I can't overstate how much the UX and UI for the EA Forum on mobile sucks. It sucks so much. I know the Online Team at the CEA is endlessly busy, and I don't blame anyone for this, but the UX/UI on mobile for the EA Forum is abysmal.

Evan_Gaensbauer @ 2024-08-05T06:27 (+5)

Update: it got better.

Evan_Gaensbauer @ 2023-01-16T01:11 (+4)

It should be noted that for most of the period the Centre for Effective Altruism itself acknowledges as its longest continuous pattern of mistakes, from 2016 to 2020 according to the Mistakes page on the CEA's website, the board of directors consisted of only three members: Nick Beckstead, Toby Ord, and William MacAskill.

(Note, January 15th: As I'm initially writing this, I want to be clear and correct about this enough that I'll be running it by someone from the CEA. If someone from the CEA reads this before I contact any of you, please feel free to either reply here or send me a private message about any mistakes/errors I've made here.)

(Note, Jan. 16th: I previously stated that Holden Karnofsky was a board member, not Toby. I also stated that this was the board of the CEA in the UK, that was my mistake. I've now been corrected by a staffer at the CEA, as I mentioned before that I'd be in contact with. I apologize for my previous errors.)

Evan_Gaensbauer @ 2023-02-14T03:07 (+2)

I'll probably make a link post with a proper summary later but here is a follow-up from Simon Knutsson on recent events related to longtermism and the EA school of thought out of Oxford.

https://www.simonknutsson.com/on-the-results-of-oxford-style-effective-altruism-existential-risk-and-longtermism/

Evan_Gaensbauer @ 2022-12-11T09:09 (+2)

The FTX bankruptcy broke something in the heart of effective altruism, but in the process, I'm astonished by how dank this community has become. It was never supposed to be this dank and has never been danker. I never would've expected this. It's absurd.

Evan_Gaensbauer @ 2023-06-14T20:15 (+1)

I thought more this morning about my shortform post from yesterday (https://forum.effectivealtruism.org/posts/KfwFDkfQFQ4kAurwH/evan_gaensbauer-s-shortform?commentId=SjzKMiw5wBe7bGKyT) and I've changed my mind about much of it. I expected my post to be downvoted because most people would perceive it as a stupid and irrelevant take. Here are some reasons I now disagree with it, though I couldn't guess whether anyone downvoted my post because they took my take seriously but still thought it sucked.

  1. I've concluded that Dustin Moskovitz shouldn't go full Dark Brandon after all. It wouldn't just be suboptimal; it'd be too risky and could backfire. I don't know at what point it'd kick in specifically, though at some point there'd be diminishing marginal returns to Dustin adopting more of a Dark Brandon-esque personal style. In hindsight, I should've applied the classic tool of so much of effective altruism, thinking on the margin, to the question: what is the optimal amount of Dark Brandon for Dustin Moskovitz to embrace?

  2. Dustin leaning in a more Dark-Brandon-esque direction wouldn't totally solve any of the problems EA faces, and there are some kinds of problems it couldn't solve at all. It could, though, ameliorate the severity of some problems, in particular some of EA's image problems.

  3. For those who don't know what I'm getting at, I'm thinking about how Dustin Moskovitz might tweak his public image or personal brand to improve upon its decent standing right now. Dustin is not the subject of as many conspiracy theories as many other billionaires and philanthropists, especially for one who had his start in Silicon Valley. He's not the butt of as many jokes as Mark Zuckerberg or Jeff Bezos about being a robot or an alien. If you asked a socialist, or someone who just hates billionaires for whatever reason, to make a list of the ten billionaires they hate the most, Dustin Moskovitz is one name that would almost certainly not make it onto the list.

  4. The downside risk of Dustin becoming a more controversial or bold personality gets at the value he provides to EA by being the opposite. Because he has been a quieter philanthropist, he isn't seen nearly as much as the poster boy for EA as a movement. Hypothetically, for the sake of argument, if Asana went bankrupt for some reason, that would not be nearly as bad for EA as the collapse of FTX was. Dustin not feuding with as many people as Elon Musk does means he doesn't have nearly as many enemies. That means the EA community overall has far fewer enemies. It's less hated. It's not as polarized or politicized. These are all very good things. Much of that is thanks to Dustin being more normal and less eccentric, less volatile and more predictable, and more of a private person than a blowhard.

Evan_Gaensbauer @ 2023-01-15T23:23 (+1)

As of June 2022, Holden Karnofsky said he was "currently on 4 boards in addition to Open Philanthropy's."

https://www.lesswrong.com/posts/nSjavaKcBrtNktzGa/nonprofit-boards-are-weird

If that's still the case, that's too many organizations for a single individual in effective altruism to hold board positions at.