DC's Quick takes
By DC @ 2020-08-22T17:49 (+4)
DonyChristie @ 2022-11-12T19:06 (+61)
I am a bit worried people are going to massively overcorrect on the FTX debacle in ways that don't really matter and that impose needless costs. We should make sure to get a clear picture of what happened first and foremost.
DonyChristie @ 2022-11-14T05:09 (+6)
I disagree with you somewhat: now is the time for group annealing to take place, and I want to make a bunch of wild reversible updates now, because otherwise I may lose the motivation, as will others. The 80/20 of the information is already here, and there are a bunch of decisions we can make to improve things within our circle of control. Something seriously wrong is going on, and it's better to take massive action in light of this plus other patterns.
DC @ 2023-10-02T06:46 (+43)
"Jon Wertheim: He made a mockery of crypto in the eyes of many. He's sort of taken away the credibility of effective altruism. How do you see him?
Michael Lewis: Everything you say is just true. And it–and it's more interesting than that. Every cause he sought to serve, he damaged. Every cause he sought to fight, he helped. He was a person who set out in life to maximize the consequences of his actions-- never mind the intent. And he had exactly the opposite effects of the ones he set out to have. So it looks to me like his life is a cruel joke."
😢
David Mathers @ 2023-10-02T10:00 (+20)
Pretty astonishing that Lewis answered "put that way, no" to "do you think he knowingly stole customer money". Feels to me like evidence of the corrupting effect of getting special insider access to a super-rich and powerful person.
Pablo @ 2023-10-03T19:00 (+21)
I don't understand your underlying model of human psychology. Sam Bankman-Fried was super-rich and powerful, but is now the kind of person no one would touch with the proverbial ten-foot pole. If the claim is that humans tend to like super-rich and powerful people even after they become disgraced, that seems false based on informal evidence.
In any case, from what I know about Bankman-Fried and his actions, the claim that he did not knowingly steal customer money doesn't strike me as obviously false, and in line with my sense that much of his behavior is explained by a combination of gross incompetence and pathological delusion.
DC @ 2023-10-04T06:04 (+2)
humans tend to like super-rich and powerful people even after they become disgraced, that seems false based on informal evidence
I think you fail to empathize with aspects of the nature of power, particularly in that there is a certain fraction of humans who will find cachet in the edgy and criminal. I am not that surprised Lewis may have been unduly affected by being in Sam's orbit and getting front-row seats to such a story. Though for all I know maybe he has accurate insider info, and Sam actually didn't knowingly steal money. ¯\_(ツ)_/¯
Manuel Del Río Rodríguez @ 2023-10-02T10:15 (+9)
I was surprised too, and would be more so except for my awareness of human fallibility and what suckers we are for good stories. I don't doubt that some of what Lewis said in that interview might be true, but it is being massively distorted by his affinity and closeness to Sam.
NickLaing @ 2023-10-02T16:35 (+8)
I interpreted this as not such a negative for EA - sad for sure, but it puts the blame more squarely on SBF than on the movement, which isn't so terrible.
DC @ 2025-01-08T23:53 (+25)
Reminder that there is an EA Focusmate group, where you can do 50 minute coworking calls with other EAs. Also, if you're already in the group, please give any feedback on it here or via DM.
EffectiveAdvocate🔸 @ 2025-01-10T15:04 (+3)
The EA Focusmate group has been a massive productivity boost, and to my own surprise, I even made some friends through it!
I just wish the group element on Focusmate were actually a little bit stronger (e.g., more means of interaction, other shared accountability), but this is a limitation of the platform, not the group.
DC @ 2025-11-30T23:04 (+12)
I have thought for years that targeted EA outreach to 'weirdoes' on the internet is much better than college clubs. I think it's much more likely to get aligned, interesting people.
Yarrow Bouchard 🔸 @ 2025-12-01T20:56 (+5)
Can you elaborate? What does that mean, specifically?
DC @ 2025-12-03T07:23 (+9)
My take was inspired by seeing this take: https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=YbqaALPE3G2wRRCGt
EA's recruitment MO has been to recruit the best elites it can on the margin, which I agree with due to power laws. However, I disagree about how to measure "elite". Selecting from people attending Ivy Leagues adversely selects for the kind of person who gets into Ivy Leagues. Other people get into this rabbit hole by following links on the internet. I would rather engage with someone who cares about ideas than someone following the power-seeking gradient. Now, SBF was both an early contributor to Felicifia and someone who went to an elite university, so it's not that college clubs aren't drawing from both sets. On the margin, though, these clubs will want to recruit by, say, tabling at their college, and it makes sense that they want to do that, but if I were a funder I would rather support something like paying some NEET running a Discord server to grow their server (depending on the topic, naturally). This does select for less conscientiousness, and my specific story for what to do could be wrong, but I think the overall thrust is right: selectivity should be weirder, and in the age of AI we have better tooling for this kind of selection.
Concrete operationalization: There's a long tail of search terms, generated by highly thoughtful people, that orgs like CEA could spend ad budget on. I would bet they are underspending on these terms. The same goes for what these terms translate to in other languages, and for doing deeper talent search in other countries and trying to integrate those people into our network. Is anyone buying ads on Baidu for the Chinese equivalent of the word "utilitarianism"? There could be a lot of low-hanging fruit like this that hasn't been considered.
I'm not sure what I think about this recent take about the attention arms race, but I think we share a sense of "changing up how things are advertised". My point is more about subtle signalling in the information ecology.
It is possible I cached this thought a long time ago and haven't properly investigated whether the evidence still reflects it, or whether we are in fact in a world where most of the portfolio of outreach resources is already being spent the way I'd endorse. Maybe more of the resources are going to these new AI safety YouTube videos instead of uni clubs, and the actual form of my critique should be comparing those videos to some other outreach tactic.
Yarrow Bouchard 🔸 @ 2025-12-03T09:00 (+6)
I don't think I agree with either the idea of recruiting people from elite colleges or the idea of recruiting "Internet weirdoes". I'm not against inviting in either of those kinds of people, but why target them specifically? I prefer a version of the EA movement that is more wholesome, populist, inclusive, and egalitarian.
I don't mean populist in the typical political sense used these days of being against institutions, against experts, highly distrustful, framing things as good people vs. bad people, or adopting the "paranoid style". I mean populist in the sense of believing average, everyday, ordinary people are good, have a lot to contribute, are diverse and heterogeneous, are often talented, wise, intelligent, and moral, and are often full of surprises. A belief in people, in the average person, in the median person, in the diversity of people who are never quite captured by an average or a median.
I don't like the somewhat more traditional, more institutionalist elitism you sometimes see in EA, and I don't like the idiosyncratic, anti-institutionalist nerd elitism of the rationalist community, where people seem to think the best people by far, and maybe the only people really worth a damn, are them, or people just like them. I'm a weird person, and I've often had to fight to find a place in the world, but I think it's the wrong lesson to learn to say, "People treated me badly because I was different and acted like I was inferior just because I wasn't like them... now I finally see the truth... it's normal people who are inferior and it's people like me who are better than everyone else!" Good job, God or karma or whatever sent you a trial so you'd have a chance to become more enlightened and learn compassion, and instead you're repeating the cycle of samsara. Better luck next life.
It's possible there are all kinds of ways to reach people from different walks of life that would be a good idea. I'm just highly suspicious of any idea that there's a superior kind of person, suspiciously similar to the person doing the judging, and that outreach should be focused specifically on that kind of person.
Bella @ 2025-12-03T08:52 (+5)
Concrete operationalization: There's a long tail of search terms, generated by highly thoughtful people, that orgs like CEA could spend ad budget on. I would bet they are underspending on these terms. The same goes for what these terms translate to in other languages, and for doing deeper talent search in other countries and trying to integrate those people into our network. Is anyone buying ads on Baidu for the Chinese equivalent of the word "utilitarianism"? There could be a lot of low-hanging fruit like this that hasn't been considered.
I would totally love somebody to do this; I know of at least one attempt to do something a bit like this a while back, but it wasn't easy / I don't think it went anywhere in the end.
It's possible my team at 80k would be best placed to try it again, so it's going back on my longlist, thanks :)
DC @ 2024-01-16T22:13 (+11)
"X-Risk" Movement-Building Considered Probably Harmful
My instinct for a while now has been that it's probably really, really bad for the majority of the population to be aware of the meme of x-risk, or at least that it does more harm than good. See climate doomerism. See (attempted) gain-of-function research at Wuhan. See asteroid-deflection techniques that are dual-use with respect to asteroid weaponization, which, though still far off, is a risk orders of magnitude worse than natural asteroid impact. See gain-of-function research at Anthropic, which, idk, maybe it's good, but it's kinda concerning, as well as all the other resources provided to questionably benevolent AGI companies under the assumption that they will do good. "X-risk" seems like something that will make people go crazy in ways that cause destruction; e.g. people use the term "pivotal act" even when I'd claim it's been superseded by Critch's "pivotal process". I'm also worried about dark-triad elites or bureaucrats co-opting these memes for unnecessary power and control, a take from the e/acc vein of thought that I find to be their most sympathetic position, because it's probably correct when you think in the limit of social memetic momentum. Sorta relatedly, I'm worried about EA becoming a collection of high-modernist midwittery as it mainstreams, watered down and unable to course-correct away from co-options and simplifications. Please message me if you want to riff on these topics.
DC @ 2023-12-22T20:04 (+11)
One part of me is under the impression that more people should commit themselves to things that probably won't work out but would pay off massively if they do. The relevant conflict here is that this means losing optionality and taking yourself out of the game for other purposes. We need more wild visions of the future that may work out if e.g. AI doesn't. Playing to your outs is very related, but I'm thinking more generally: we do in fact need more visions based on different epistemics about how the world is going, and someone might necessarily have to adopt some kind of provisional story of the world that will probably be wrong but is requisite for modeling any kind of payoff their commitment may have. Real change requires real commitment. Also, most ways to help look like particular bets towards building particular infrastructural upgrades, vs starting an AGI company that Solves Everything. On the flip side, we also need people holding onto their wealth and paying attention, ready to pounce on opportunities that may arise. And maybe you really should just get as close to the dynamo of technocapital acceleration as possible.
DonyChristie @ 2020-11-19T04:59 (+9)
Would you be interested in a Cause Prioritization Newsletter? What would you want to read on it?
EdoArad @ 2020-11-19T15:05 (+4)
I'll sign up and read if it'd be good 😊
What I'd be most interested in is the curation of:
- New suggestions for possible top cause areas
- New (or less known) organizations or experts in the field
- Examples of new methodologies
- and generally, interesting new research on prioritization between and within practically any EA-relevant causes.
Ramiro @ 2020-11-20T15:13 (+7)
Add to (3) new explanations of or additions to methodologies - e.g., I still haven't found anything substantial about the idea of adding something like 'urgency' to the ITN framework.
EdoArad @ 2020-11-21T18:17 (+2)
Definitely! And I'll raise you my general interest in thoughtful analyses of existing frameworks.
EdoArad @ 2020-12-17T19:44 (+2)
Is there some sort of follow-up?
DonyChristie @ 2020-12-13T00:04 (+6)
What does it mean for a human to properly orient their lives around the Singularity, to update on upcoming accelerating technological changes?
This is a hard problem I've grappled with for years.
It's similar to another question I think about, but with regards to downsides: if you in fact knew Doom was coming, in the form of World War 3 or whatever GCR is strong enough to upset civilization, then what in fact should you do? Drastic action is required. For this, I think the solution is on the order of building an off-grid colony that can survive, assuming one can't prevent the Doom. It's still hard to act on that, though. What is it like to go against the grain in order to do that?
DonyChristie @ 2020-12-12T05:27 (+5)
Would you be interested in a video coworking group for EAs? Like a dedicated place where you can go to work for 4-8 hours/day and see familiar faces (vs Focusmate which is 1 hour, one-on-one with different people). EAWork instead of WeWork.
DonyChristie @ 2022-10-19T04:35 (+4)
This seems like an important consideration with regard to the profusion of projects that people are starting in EA: https://twitter.com/robinhanson/status/1582476452141797378?s=20&t=pTbeJY5mXaf-54e0xxzz-A
People instinctively tend toward solutions that consist of adding something rather than subtracting something, even if the subtraction would be superior. https://psyarxiv.com/4jkvn/ - Rolf Degen
Jonas Moss @ 2022-10-20T17:32 (+1)
Could you elaborate?
phgubbins @ 2022-10-24T13:13 (+1)
Seems like it could be a case of trying to maintain some sort of standard of high fidelity with EA ideas? Avoiding dilution of the community and of the term by not too eagerly labeling ideas as "EA".
DonyChristie @ 2020-08-22T17:49 (+4)
Someday, someone is going to eviscerate me on this forum, and I'm not sure how to feel about that. The prospect feels bad. I tentatively think I should just continue diving into not giving a fuck and inspire others to do the same, since one of my comparative advantages is that my social capital is not primarily tied up with fragile appearance-keeping for employment purposes. But it does mean I should not rely on my social capital with Ra-infested EA orgs.
I'm registering now that if you snipe me on here, I'm not gonna defensively respond. I'm not going to provide 20 citations on why I think I'm right. In fact, I'm going to double down on whatever it is I'm doing, because I anticipate in advance that the expected disvalue of discouraging myself due to really poor feedback on here is greater than the expected disvalue of unilaterally continuing something the people with Oxford PhDs think is bad.
EdoArad @ 2020-08-23T06:03 (+5)
This sounds very worrying, can you expand a bit more?
DonyChristie @ 2020-08-25T20:43 (+9)
I don't have much slack to respond given I don't enjoy internet arguments, but if you think about the associated reference class of situations, you might note that a common problem is a lack of self-awareness of there being a problem. This is not the case with this dialogue, which should allay your worry somewhat.
The main point here, which this is vagueposting about, is that people on here will dismiss things rather quickly, especially when the dismissal comes from someone with a lot of status, in a pile-on way, without much overt reflection by the people who upvote such comments. I concluded from seeing this several times that at some point it will happen with a project of mine, and that I should be OK with that world, because this is not a place to get good project feedback as far as I can tell. The real risk I am facing is that I would be dissuaded from the highest-impact projects by people who only believe in things vetted by a lot of academic-style reasoning and legibly sensible evidence, at the cost of not being able to exploit secrets in the Thielian sense.
Khorton @ 2020-08-22T17:52 (+3)
It's interesting that the Oxford PhDs are the ones you worry about! Me, I worry about the Bay Area Rat Pack.
DonyChristie @ 2020-08-25T20:41 (+1)
This is also valid! :)
Khorton @ 2020-08-26T07:51 (+4)
Omg I can't believe that someone downvoted you for admitting your insecurities on your own shortform!! That's absolutely savage, I'm so sorry.
DC @ 2023-09-14T00:52 (+2)
Thoughts on liability insurance for global catastrophic risks (either voluntary or mandatory) such as for biolabs or AGI companies? Do you find this to be a high-potential line of intervention?
DonyChristie @ 2021-02-21T03:16 (+2)
I am seeking funding so I can work on my collective action project over the next year without worrying about money so much. If this interests you, you can book a call with me here. If you know nothing about me, one legible accomplishment of mine is creating the EA Focusmate group, which has 395 members as of writing.
DonyChristie @ 2020-11-29T00:34 (+2)
What are ways we could get rid of the FDA?
(Flippant question inspired by the FDA waiting a month to discuss approval for coronavirus vaccines, and more generally its dragging its feet during the pandemic, killing many people, in addition to its other prohibitions being net-negative for humanity. IMO.)
Matthew Tromp @ 2020-11-30T17:16 (+3)
So, I take issue with the implication that the FDA's process for approving the covid vaccine actually delays rollout or causes a significant number of deaths. From my understanding, pharma companies have been ramping up production since they determined their vaccines probably work. They aren't sitting around waiting for FDA approval. Furthermore, I think the approval process is important for ensuring that the public has faith in the vaccine, and that it's actually safe and effective.