Latest comments on the EA Forum

Comments on 2024-11-21

Yelnats T.J. @ 2024-11-21T05:03 (+1) in response to Averting autocracy in the United States of America

As we discussed at EAG B, the material change between the 1st and 2nd terms is that during the first there were many "adults in the room" who kept the former president from fulfilling his worst instincts, whereas now there has been a 4-year effort to cultivate a pipeline of loyalists to staff the government. Ezra's episode on Trump and his disinhibition is a good piece on the topic.

The nominations for the national security apparatus are the strongest signal that he wants power consolidated; they will test, right out of the gate, whether GOP Senators will be a check on his power.

I think Ezra's opening to the podcast that Michael linked was apt. If someone had said two months ago that Gaetz, Gabbard, and Hegseth were going to be nominated for DoJ, DNI, and DoD, it would have been framed as hyperbolic doomer liberal talk. However, that is the universe we are in.

Have the nominations and the proposal to purge military generals updated your priors at all since EAG B?

MarcusAbramovitch @ 2024-11-21T06:26 (+2)

I find the Gaetz and Hegseth picks to be a bit worrying. I struggle to find a reason that the Gabbard pick is bad at all. In fact, I think she is probably good? She's a former congresswoman, city councillor, Hawaii House rep, and member of the National Guard, etc. She seems like a good pick who is concerned about the US tendency to intervene in foreign countries.

Now, to be clear, I find the Gaetz and Hegseth picks to be bad, but I thought Trump would do these types of things, and given the whole universe of things Trump could have done, he did some mildly-to-moderately bad ones.

So, he did some bad things but it was around expectation and nothing yet in the tails and thus I shouldn't update in the direction of totalitarianism.

I'm still not finding anything to really be alarmed about other than people I know being alarmed.

John Huang @ 2024-11-21T06:23 (+1) in response to Averting autocracy in the United States of America

I take the perspective that the United States is just tending towards the more typical behavior of presidential electoral systems. America will start acting more and more like Latin American presidential regimes because of the deadlock that presidential systems create. The checks and balances aren't protecting us. Instead, the checks and balances are what drive the public to elect "strongmen" who can "get things done" - often through illegal and unconstitutional measures.

Trump, for example, is celebrated for "getting things done" - things that are often illegal and unconstitutional. That's the selling point. This is why I'm not the only one who has suggested that presidential regimes are unstable. Yet as we look across the world, parliamentary systems also have their own problems with authoritarian takeovers.

I write about what I think the solution is here.

In short, I think we can create a smarter democracy using a system called "sortition". Please take a read of the article I linked for more information. 

...

Even if sortition might be an interesting policy to you, it's not particularly clear whether implementation is politically feasible. The inertia of the US political system is so vast that it's hard for any money to budge it. Any financial investment will yield highly nonlinear results. Policy might not change for years, or decades, until suddenly one day it changes. Yet just because the response to investment is extremely nonlinear doesn't mean it's unwise to invest. (There's also the question of whether America is the wisest place to invest in. Pro-sortition movements also exist in Europe. Could pro-sortition movements be launched more easily in Africa and South America?)

As for what you can impact with an idea such as sortition, your investment can be used to drive "public awareness" and "lobbying". Money can be used to persuade local governments to adopt pro-sortition policies. Or money could be spent raising public awareness of sortition - awareness that might lead to movement growth.

yanni kyriacos @ 2024-11-21T01:29 (+8) in response to Yanni Kyriacos's Quick takes

Ten months ago I met Australia's Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to Politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low hanging fruit. My best theory is EAs don't want to do it / fund it because EAs are drawn to spreadsheets and google docs (it isn't their comparative advantage). Hammers like nails etc.

huw @ 2024-11-21T05:36 (+3)

I also think many EAs are still allergic to direct political advocacy, and that this tendency is stronger in more rationalist-ish cause areas such as AI. We shouldn’t forget Yudkowsky’s “politics is the mind-killer”!

Larks @ 2024-11-21T05:21 (+6) in response to Where I Am Donating in 2024

I remember removing an org entirely because they complained, though in that case they claimed they didn't have enough time to engage with me (rather than the opposite). It's also possible there are other cases I have forgotten. To your point, I have no objections to Michael's "make me overly concerned about being nice" argument which I do think is true.

Habryka @ 2024-11-21T05:32 (+4)

Cool, I might just be remembering that one instance. 

Ben_West🔸 @ 2024-11-21T05:17 (+8) in response to Ben_West's Quick takes

Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn't exist."

MichaelDickens @ 2024-11-21T05:21 (+4)

On that framing, I agree that that's something that happens and that we should be able to anticipate will happen.

Habryka @ 2024-11-21T05:14 (+4) in response to Where I Am Donating in 2024

IIRC didn’t you somewhat frequently remove sections if the org objected because you didn’t have enough time to engage with them? (which I think was reasonably costly)

Larks @ 2024-11-21T05:21 (+6)

I remember removing an org entirely because they complained, though in that case they claimed they didn't have enough time to engage with me (rather than the opposite). It's also possible there are other cases I have forgotten. To your point, I have no objections to Michael's "make me overly concerned about being nice" argument which I do think is true.

MichaelDickens @ 2024-11-20T16:10 (+9) in response to Ben_West's Quick takes

I don't want to claim all EAs believe the same things, but if the congressional commission had listened to what you might call the "central" EA position, it would not be recommending an arms race because it would be much more concerned about misalignment risk. The overwhelming majority of EAs involved in AI safety seem to agree that arms races are bad and misalignment risk is the biggest concern (within AI safety). So if anything this is a problem of the commission not listening to EAs, or at least selectively listening to only the parts they want to hear.

Ben_West🔸 @ 2024-11-21T05:17 (+8)

Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn't exist."

Larks @ 2024-11-21T02:24 (+4) in response to Where I Am Donating in 2024

It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. 

This is quite a scalable activity. When I used to do this, I had a spreadsheet to keep track, generated emails from a template, and had very little back and forth - orgs just saw a draft of their section, had a few days to comment, and then I might or might not take their feedback into account.

Habryka @ 2024-11-21T05:14 (+4)

IIRC didn’t you somewhat frequently remove sections if the org objected because you didn’t have enough time to engage with them? (which I think was reasonably costly)

MarcusAbramovitch @ 2024-11-20T22:06 (+7) in response to Averting autocracy in the United States of America

Sure, we don't have to bet at 50/50 odds. I'm willing to bet at say 90/10 odds in your favor that the next election is decided by electoral college or popular vote with a (relatively) free and fair election comparable to 2016, 2020 and 2024.

I agree that Trump is... bad for lack of a better word and that he seeks loyalty and such. But the US democracy is rather robust and somehow people took the fact that it held up strongly as evidence that... democracy was more fragile than we thought.

Yelnats T.J. @ 2024-11-21T05:03 (+1)

As we discussed at EAG B, the material change between the 1st and 2nd terms is that during the first there were many "adults in the room" who kept the former president from fulfilling his worst instincts, whereas now there has been a 4-year effort to cultivate a pipeline of loyalists to staff the government. Ezra's episode on Trump and his disinhibition is a good piece on the topic.

The nominations for the national security apparatus are the strongest signal that he wants power consolidated; they will test, right out of the gate, whether GOP Senators will be a check on his power.

I think Ezra's opening to the podcast that Michael linked was apt. If someone had said two months ago that Gaetz, Gabbard, and Hegseth were going to be nominated for DoJ, DNI, and DoD, it would have been framed as hyperbolic doomer liberal talk. However, that is the universe we are in.

Have the nominations and the proposal to purge military generals updated your priors at all since EAG B?

MichaelDickens @ 2024-11-21T00:39 (+4) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

I believe the "consciousness requires having a self-model" view is the only coherent basis for rejecting animals' moral patienthood, but I don't understand the argument for why the view is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone attempt to defend this position.

David Mathers🔸 @ 2024-11-21T04:51 (+4)

I've seen Dan Dennett (in effect) argue for it as follows: if a human adult subject reports NOT experiencing something in a lab experiment, and we're sure they're sincere and that they were paying attention to what they were experiencing, that is immediately pretty much 100% proof that they are not having a conscious experience of that thing, no matter what is going on in the purely perceptual (functional) regions of their brains and how much it resembles typical cases of a conscious experience of that thing. The best explanation for this is that it's just part of our concept of "conscious" that a conscious experience is one that you're (at least potentially) introspectively aware that you're having. Indeed (my point, not Dennett's), this is how we found out that there is such a thing as "unconscious perception": we found out that information about external things can get into the brain through the eye without the person being aware that that information is there. If we don't think that conscious experiences are ones you're (at least potentially) introspectively aware of having, it's not clear why this would be evidence for the existence of unconscious perception. But almost all consciousness scientists and philosophers of mind accept that unconscious perception can happen.

Here's Dennett (from a paper co-authored with someone else) in his own words on this, critiquing a particular neuroscientific theory of consciousness: 

"It is easy to imagine what a conversation would sound like between F&L and a patient (P) whose access to the locally recurrent activity for color was somehow surgically removed. F&L: ‘You are conscious of the redness of the apple.’ P: ‘I am? I don’t see any color. It just looks grey. Why do you think I’m consciously experiencing red?’ F&L: ‘Because we can detect recurrent processing in color areas in your visual cortex.’ P: ‘But I really don’t see any color. I see the apple, but nothing colored. Yet you still insist that I am conscious of the color red?’ F&L: ‘Yes, because local recurrency correlates with conscious awareness.’ P: ‘Doesn’t it mean something that I am telling you I’m not experiencing red at all? Doesn’t that suggest local recurrency itself isn’t sufficient for conscious awareness?"

I don't personally endorse Dennett's view on this, I give to animal causes, and I think it is a big mistake to be so sure of it that you ignore the risk of animal suffering entirely, plus I don't think we can just assume that animals can't be introspectively aware of their own experiences. But I don't think the view itself is crazy or inexplicable, and I have moderate credence (25% maybe?) that it is correct. 

Yelnats T.J. @ 2024-11-21T04:49 (+1) in response to Averting autocracy in the United States of America

A post/submission I wrote to OP two years ago has some thoughts on this:

https://forum.effectivealtruism.org/posts/kmx3rKh2K4ANwMqpW/destabilization-of-the-united-states-the-top-x-factor-ea

It has some recommended readings and outlines potential interventions.

I'm still distilling what I would add to it in the present day.

The top thing on my mind is the proposed board to purge generals. (Note: presidents already have the authority to dismiss generals; however, the implication of this proposal is that they want to purge so many generals that they need a systematic vehicle to do it.) As I wrote in my piece, our biggest bulwark against an authoritarian power grab is that the United States military is very strong, professional, competent, and apolitical. Any changes away from that status quo should raise alarm bells.

The nominations to the military/national security apparatus are clearly about total loyalty over competence. These are the military and intelligence services that, when captured by authoritarians in other countries, have cemented regimes.

Interventions in the immediate term targeted at disrupting the consolidation of power (in the aforementioned moves) could be very high leverage.

For a longer-term intervention that focuses more on the upstream drivers of our political dysfunction which enables authoritarians, I still back the idea of doing local/state ballot initiatives to reform the political system. A gap I see in the space is that political system reform via initiatives is pursued piecemeal instead of comprehensively. Also, anti-establishment sentiments poll very high amongst Americans including the Left and Right, yet that bi-populist agreement is not being effectively tapped. Not only could mobilizing it help get initiatives over the line, but it would create depolarizing interactions between regular citizens.

Charlie_Guthmann @ 2024-11-21T03:53 (+1) in response to Ben_West's Quick takes

I guess in thinking about this I realize it's so hard to even know if someone is a "PR disaster" that I probably have just been confirming my biases. What makes you say that he hasn't been?

David Mathers🔸 @ 2024-11-21T04:35 (+3)

Just  the stuff I already said about the success he seems to have had. It is also true that many people hate him and think he's ridiculous, but I think that makes him polarizing rather than disastrous. I suppose you could phrase it as "he was a disaster in some ways but a success in others" if you want to. 

Brad West🔸 @ 2024-11-21T04:18 (+2) in response to The impact of the counterfactual dollar: a defence of your local food pantry

I don't think people are saying that putting time and/or money toward charities that address the poor in rich countries is not helping people, but merely that you could help more poor people in poor countries with the same resources. Thus, if we are saying that we are considering the interests of the unfortunate in poor and rich countries equally, we would want to commit our limited resources to the developing world.

I think a lot of times EAs are assuming a given set of resources that they have to commit to doing good. With that assumption, the counterfactual to the food pantry is the most cost-effective charity. The "warm fuzzy/utilon" dichotomy that you deride here actually supports your notion that the food pantry could compete with the donor's luxury consumption instead. This is because warm fuzzies (the donor's psychic benefit derived from giving) could potentially be a substitute for the consumption of luxury goods (going out to eat, etc.).

So, the concept of the fuzzies (albeit maybe with language you find offensive) actually supports your notion that, within individual donation decisions, helping locally does not always compete with effective giving.

Seth Herd @ 2024-11-20T19:53 (+15) in response to China Hawks are Manufacturing an AI Arms Race

Copied from my LW comment, since this is probably more of an EAF discussion:

This is really important pushback. This is the discussion we need to be having.

Most people who are trying to track this believe China has not been racing toward AGI up to this point. Whether they embark on that race is probably being determined now - and based in no small part on the US's perceived attitude and intentions.

Any calls for racing toward AGI should be closely accompanied with "and of course we'd use it to benefit the entire world, sharing the rapidly growing pie". If our intentions are hostile, foreign powers have little choice but to race us.

And we should not be so confident we will remain ahead if we do race. There are many routes to progress other than sheer scale of pretraining. The release of DeepSeek r1 today indicates that China is not so far behind. Let's remember that while the US "won" the race for nukes, our primary rival had nukes very soon after - by stealing our advancements. A standoff between AGI-armed US and China could be disastrous - or navigated successfully if we take the right tone and prevent further proliferation (I shudder to think of Putin controlling an AGI, or many potentially unstable actors).

This discussion is important, so it needs to be better. This pushback is itself badly flawed. In calling out the report's lack of references, it provides almost none itself. Citing a 2017 official statement from China seems utterly irrelevant to guessing their current, privately held position. Almost everyone has updated massively since 2017. (edit: It's good that this piece does note that public statements are basically meaningless in such matters.) If China is "racing toward AGI" as an internal policy, they probably would've adopted that recently. (I doubt that they are racing yet, but it seems entirely possible they'll start now in response to the US push to do so - and their perspective on the US as a dangerous aggressor on the world stage. But what do I know - we need real experts on China and international relations.)

Pointing out the technical errors in the report seems somewhere between irrelevant and harmful. You can understand very little of the details and still understand that AGI would be a big, big deal if true — and the many experts predicting short timelines could be right. Nitpicking the technical expertise of people who are essentially probably correct in their assessment just sets a bad tone of fighting/arguing instead of having a sensible discussion.

And we desperately need a sensible discussion on this topic.

Garrison @ 2024-11-21T04:13 (+2)

Pasted from LW:

Hey Seth, appreciate the detailed engagement. I don't think the 2017 report is the best way to understand what China's intentions are WRT to AI, but there was nothing in the report to support Helberg's claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was not able to find evidence for the "China is racing to AGI" claim is actually pretty convincing evidence in itself. I'm very interested in better understanding China's intentions here and plan to deep dive into it over the next few months, but I didn't want to wait until I could exhaustively search for the evidence that the report should have offered while an extremely dangerous and unsupported narrative takes off.

I also really don't get the error pushback. These really were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I'm not one to gatekeep AI expertise, but I don't think it's too much to expect a congressional commission whose top recommendation is to commence a militaristic AI arms race to have SOMEONE read a draft who knows that chatgpt-3 isn't a thing.

MikhailSamin @ 2024-11-21T00:35 (+19) in response to Where I Am Donating in 2024

Note that we've only received a speculation grant from the SFF and haven’t received any s-process funding. This should be a downward update on the value of our work and an upward update on a marginal donation's value for our work.

I'm waiting for feedback from SFF before actively fundraising elsewhere, but I'd be excited about getting in touch with potential funders and volunteers. Please message me if you want to chat! My email is ms@contact.ms, and you can find me everywhere else or send a DM on EA Forum.

On other organizations, I think:

  • MIRI’s work is very valuable. I’m optimistic about what I know about their comms and policy work. As Malo noted, they work with policymakers, too. Since 2021, I’ve donated over $60k to MIRI. I think they should be the default choice for donations unless they say otherwise.
  • OpenPhil risks increasing polarization and making it impossible to pass meaningful legislation. But while they make IMO obviously bad decisions, not everything they/Dustin fund is bad. E.g., Horizon might place people who actually care about others in places where they could have a huge positive impact on the world. I’m not sure, I would love to see Horizon fellows become more informed on AI x-risk than they currently are, but I’ve donated $2.5k to Horizon Institute for Public Service this year.
  • I’d be excited about the Center for AI Safety getting more funding. SB-1047 was the closest we got to a very good thing, AFAIK, and it was a coin toss on whether it would’ve been signed or not. They seem very competent. I think the occasional potential lack of rigor and other concerns don't outweigh their results. I’ve donated $1k to them this year.
  • By default, I'm excited about the Center for AI Policy. A mistake they plausibly made makes me somewhat uncertain about how experienced they are with DC and whether they are capable of avoiding downside risks, but I think the people who run it are smart and have very reasonable models. I'd be excited about them having as much money as they can spend and hiring more experienced and competent people.
  • PauseAI is likely to be net-negative, especially PauseAI US. I wouldn’t recommend donating to them. Some of what they're doing is exciting (and there are people who would be a good fit to join them and improve their overall impact), but they're incapable of avoiding actions that might, at some point, badly backfire.

    I’ve helped them where I could, but they don’t have good epistemics, and they’re fine with using deception to achieve their goals.

    E.g., at some point, their website represented the view that it’s more likely than not that bad actors would use AI to hack everything, shut down the internet, and cause a societal collapse (but not extinction). If you talk to people with some exposure to cybersecurity and say this sort of thing, they’ll dismiss everything else you say, and it’ll be much harder to make a case for AI x-risk in the future. PauseAI Global’s leadership updated when I had a conversation with them and edited the claims, but I'm not sure they have mechanisms to avoid making confident wrong claims. I haven't seen evidence that PauseAI is capable of presenting their case for AI x-risk competently (though it's been a while since I've looked).

    I think PauseAI US is especially incapable of avoiding actions with downside risks, including deception[1], and donations to them are net-negative. To Michael, I would recommend, at the very least, donating to PauseAI Global instead of PauseAI US; to everyone else, I'd recommend ideally donating somewhere else entirely.

  • Stop AI's views include the idea that a CEV-aligned AGI would be just as bad as an unaligned AGI that causes human extinction. I wouldn't be able to pass their ITT, but yep, people should not donate to Stop AI. The Stop AGI person participated in organizing the protest described in the footnote. 
  1. ^

    In February this year, PauseAI US organized a protest against OpenAI "working with the Pentagon", while OpenAI only collaborated with DARPA on open-source cybersecurity tools and is in talks with the Pentagon about veteran suicide prevention. Most participants wanted to protest OpenAI because of AI x-risk and not because of the Pentagon, but those I talked to have said they felt it was deceptive upon discovering the nature of OpenAI's collaboration with the Pentagon. Also, Holly threatened me, trying to prevent the publication of a post about this, and then publicly lied about our conversations, in a way that can be easily falsified by looking at the messages we've exchanged.

MichaelDickens @ 2024-11-21T04:10 (+4)

Thanks for the comment! Disagreeing with my proposed donations is the most productive sort of disagreement. I also appreciate hearing your beliefs about a variety of orgs.


A few weeks ago, I read your back-and-forth with Holly Elmore about the "working with the Pentagon" issue. This is what I thought at the time (IIRC):

  • I agree that it's not good to put misleading messages in your protests.
  • I think this particular instance of misleadingness isn't that egregious, it does decrease my expectation of the value of PauseAI US's future protests but not by a huge margin. If this was a recurring pattern, I'd be more concerned.
  • Upon my first reading, it was unclear to me what your actual objection was, so I'm not surprised that Holly also (apparently) misunderstood it. I had to read through twice to understand.
  • Being intentionally deceptive is close to a dealbreaker for me, but it doesn't look to me like Holly was being intentionally deceptive.
  • I thought you both could've handled the exchange better. Holly included misleading messaging in the protest and didn't seem to understand the problem, and you did not communicate clearly and then continued to believe that you had communicated well in spite of contrary evidence. Reading the exchange weakly decreased my evaluation of both your work and PauseAI US's, but not by enough to change my org ranking. You both made the sorts of mistakes that I don't think anyone can avoid 100% of the time. (I have certainly made similar mistakes.) Making a mistake once is evidence that you'll make it more, but not very strong evidence.

I re-read your post and its comments just now and I didn't have any new thoughts. I feel like I still don't have great clarity on the implications of the situation, which troubles me, but by my reading, it's just not as big a deal as you think it is.

General comments:

  • I think PauseAI US is less competent than some hypothetical alternative protest org that wouldn't have made this mistake, but I also think it's more competent than most protest orgs that could exist (or protest orgs in other cause areas).
  • I reviewed PauseAI's other materials, although not deeply or comprehensively, and they seemed good to me. I listened to a podcast with Holly and my impression was that she had an unusually clear picture of the concerns around misaligned AI.
Chris Leong @ 2024-11-20T06:16 (+2) in response to Winter 2025 Impact Accelerator Program for EA Professionals

This was posted twice.

High Impact Professionals @ 2024-11-21T03:55 (+1)

Thank you, Chris. We've looked for a duplicate, in case one was inadvertently posted, but are only seeing this one. In any event, we hope that the information contained in the post is helpful and would encourage all interested to apply to the program.

David Mathers🔸 @ 2024-11-21T03:03 (+3) in response to Ben_West's Quick takes

Yeah, I'm not a Yudkowsky fan. But I think the fact that he mostly hasn't been a PR disaster is striking, surprising and not much remarked upon, including by people who are big fans.

Charlie_Guthmann @ 2024-11-21T03:53 (+1)

I guess in thinking about this I realize it's so hard to even know if someone is a "PR disaster" that I probably have just been confirming my biases. What makes you say that he hasn't been?

AllisonA @ 2024-11-21T02:08 (+1) in response to Winter 2025 Impact Accelerator Program for EA Professionals

Just a friendly flag that winter is during different months depending on what hemisphere you are in:)

High Impact Professionals @ 2024-11-21T03:53 (+1)

Thank you for the friendly flag. HIP welcomes all applicants to the Impact Accelerator Program regardless of hemisphere, country, or timezone.

Pat Myron 🔸 @ 2024-11-21T03:47 (+3) in response to Giving What We Can is celebrating our 15th birthday!

GWWC website in 2010:
For a person earning £15,000 per year, this would mean saving 5 lives every year

£300/$450 (~£450/$650 inflation-adjusted) per life then.. unfathomably low

https://old.reddit.com/r/EffectiveAltruism/comments/1gmtdrm/has_average_cost_to_save_a_life_increased_or/
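
For anyone wanting to check how that per-life figure falls out of the quoted claim, here is a minimal sketch in Python. The 10% pledge fraction is my assumption (the quoted sentence doesn't state it); the other numbers come from the comment above.

income_gbp = 15_000        # £ per year, from the quoted 2010 GWWC claim
pledge_fraction = 0.10     # assumed Giving What We Can pledge rate (not stated in the quote)
lives_per_year = 5         # lives saved per year, from the quoted claim

annual_donation = income_gbp * pledge_fraction      # £1,500
cost_per_life = annual_donation / lives_per_year    # £300
print(f"~£{cost_per_life:.0f} per life")            # matches the ~£300/$450 figure above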

David Mathers🔸 @ 2024-11-21T03:11 (+3) in response to Averting autocracy in the United States of America

I hate Trump as much as anyone but it seems unlikely EA can make much difference here, given how many other well-resourced, powerful actors there are trying to shape outcomes in US politics.

MarcusAbramovitch @ 2024-11-21T02:51 (+4) in response to Averting autocracy in the United States of America

This seems to overstate how important the ea forum is

Charlie_Guthmann @ 2024-11-21T03:07 (+1)

It has very little to do with the forum. I don't think most people here that think they might be interacting with the executive branch would post anything super negative on the internet if they are thinking clearly. 

MarcusAbramovitch @ 2024-11-21T02:51 (+5) in response to Averting autocracy in the United States of America

My problem with this is that it's not falsifiable.

Charlie_Guthmann @ 2024-11-21T03:04 (–1)

Read a history book? 

edit: this was super rude, but yeah, my point is that there is lots of literature you can comb through to think about whether my graph is accurate. 

edit 2: What exactly are you saying is not falsifiable?

Charlie_Guthmann @ 2024-11-20T19:35 (+1) in response to Ben_West's Quick takes

Hmm, I hear what you are saying but that could easily be attributed to some mix of 

(1) he has really good/convincing ideas 

(2) he seems to be a public representative for the EA/LW community for a journalist on the outside.

And I'm responding to someone saying that we are in "phase 3" - that is to say people in the public are listening to us - so I guess I'm not extremely concerned about him not being able to draw attention or convince people. I'm more just generally worried that people like him are not who we should be promoting to positions of power, even if those are de jure positions. 

David Mathers🔸 @ 2024-11-21T03:03 (+3)

Yeah, I'm not a Yudkowsky fan. But I think the fact that he mostly hasn't been a PR disaster is striking, surprising and not much remarked upon, including by people who are big fans.

MarcusAbramovitch @ 2024-11-21T02:48 (+3) in response to Averting autocracy in the United States of America

If I'm willing to bet, I need to take "edge". I am not going to bet at my actual odds since that gives no profit for me.

1/2. I think nearly every president committed crimes, for example, war crimes. This mainly depends on what he is prosecuted for as opposed to what is committed.

  3. If the constitution is amended that seems fine. I'm fine to bet on something like this though.

  4. I'm not sure why that matters. People can elect people you and I disagree with ideologically.

  5. I don't think I understand this one. Can you clarify?

I feel like people are converting their dislike of Trump into unwarranted fears. I don't like Trump but it's not helpful to fear monger.

Charlie_Guthmann @ 2024-11-21T03:02 (+2)

If I'm willing to bet, I need to take "edge". 

This is pretty patronizing. You don't know me but do you really think the average person on the EA forum needs that explained?

Hence why I wrote 1/20 (95-5). If you believe the chance is <1/100, the odds you offered are 10x that. Given the asymmetry of my/other users' knowledge of your internal probability, I understand offering the best possible odds for yourself that you still think the other side would take, but it's a bit of an icky norm to come on here and play poker when people might assume you would be happy to take a 2x-5x bet. More importantly, the bet you offered proves nothing in my mind, since anywhere between a 1-5% chance of the next election being rigged would still be really, really bad and worth hyperventilating about.
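
To make the odds arithmetic concrete, here is a minimal sketch of one way to read this exchange (my reading, not either commenter's own calculation): at 90/10, the odds imply a 10% chance of the "unfair election" outcome, so someone who truly believes the chance is under 1/100 is being offered an implied probability roughly 10x their own, and the bet only reaches break-even as the believed probability approaches 10%.

# Expected profit for the side betting the election WILL be free and fair,
# risking 90 to win 10 (i.e., the 90/10 odds offered above). The stakes and
# probabilities are illustrative, taken from the numbers in this exchange.
def expected_profit(p_unfair, stake=90, payout=10):
    return (1 - p_unfair) * payout - p_unfair * stake

for p in (0.10, 0.05, 0.01):
    print(f"believed P(unfair) = {p:.0%}: expected profit = {expected_profit(p):+.1f} per 90 risked")
# 10% -> +0.0 (break-even), 5% -> +5.0, 1% -> +9.0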

If you want, read my comment to Larks. I don't think my resolution criteria are good. It's rather that I don't personally expect the next election to be rigged (I would be on the same side of the 90/10 bet as you), but I do expect Trump to continue to denigrate the checks and balances we have in this country, whether it be official laws or unofficial norms, hence why I am trying to pose intermediate questions. I'll try to improve the original questions though. 

1/2 - Just specify a specific crime that we think most presidents don't commit and that would obviously be worth prosecuting. 

3 - Really? You think this wouldn't be a clear step towards autocracy?

4 - The general position of MAGAs is that the 2020 election was stolen. 
5 - Admittedly a pretty awful market, just ignore this one 

Again I don't think even these modified versions are good, but I think we can still do better. 

Charlie_Guthmann @ 2024-11-20T23:58 (+3) in response to Averting autocracy in the United States of America

Side note - I think you will not get full honesty from many people here (more likely they just won't comment). Anyone with a public reputation who wants to interact with Trump's admin is not going to want to comment (for good reason), plus this subject can be a bit touchy anyway. 

MarcusAbramovitch @ 2024-11-21T02:51 (+4)

This seems to overstate how important the ea forum is

Charlie_Guthmann @ 2024-11-21T02:23 (+1) in response to Averting autocracy in the United States of America

Completely agree - I think all of my markets are bad. However, the direction I'm trying to move in by proposing these questions is to operationalize steps along the way towards autocracy. You could semi-replicate this by asking whether one of the next 5 elections is going to be rigged (if you believe you can operationalize this), but even if you could set up a futures market for it, I don't think you will get all that much market efficiency from it. 

Betting on the probability of the next election is going to paint a very incomplete picture. There is a world in which we are 99% sure the next election is not going to get rigged, but acts during this admin would credibly increase the chance of future riggings by a lot. For instance, let's assume Trump himself has no interest in being an autocrat. Then he wouldn't rig the election purposely, right? And yet the fact that we now have a precedent that you won't be prosecuted for essentially anything if you win the presidency surely changes the incentives of future politicians who are considering meddling. 

This is literally my position. I think the next election is >90% to be "relatively fair", but I also think Trump is going to do a ton of stuff that paves the way for a future election to not be fair. Picture below to help explain the thesis.

 

MarcusAbramovitch @ 2024-11-21T02:51 (+5)

My problem with this is that it's not falsifiable.

Charlie_Guthmann @ 2024-11-20T23:52 (+2) in response to Averting autocracy in the United States of America

how about 99/1? pretty wild to me that you would say

I have generally found the fears of democracy failing in the US to be hyperbolic and without much good evidence. The claims are also very "vibes-based" and/or partisan rather than at the object level.

and then only offer 90/10 odds. Are you saying you think there is a ~1 in 20 chance the next election is not going to be free and fair? I would not consider freaking out about 1/100 to be hyperbolic, much less 1/20.

Also, it would be nice to break this up a little bit more. Here are some things I would probably bet you on, though they need to be clarified and thought out a bit more. 

  • Trump will commit more than x crimes during his presidency. 
  • Trump's secretaries will commit more than x crimes during his presidency
  • Trump will attempt to run for a third term 
  • The winner of the republican primary in the next two presidential elections will be a MAGA
  • In the next x years, a future president or (sufficiently) high up politician will not be convicted of any crimes conditional on their party controlling the justice department
MarcusAbramovitch @ 2024-11-21T02:48 (+3)

If I'm willing to bet, I need to take "edge". I am not going to bet at my actual odds since that gives no profit for me.

1/2. I think nearly every president committed crimes, for example, war crimes. This mainly depends on what he is prosecuted for as opposed to what is committed.

  3. If the constitution is amended that seems fine. I'm fine to bet on something like this though.

  4. I'm not sure why that matters. People can elect people you and I disagree with ideologically.

  5. I don't think I understand this one. Can you clarify?

I feel like people are converting their dislike of Trump into unwarranted fears. I don't like Trump but it's not helpful to fear monger.

MichaelDickens @ 2024-11-20T21:26 (+13) in response to Where I Am Donating in 2024

I did it in my head and I haven't tried to put it into words so take this with a grain of salt.

Pros:

  • Orgs get time to correct misconceptions.

(Actually I think that's pretty much the only pro but it's a big pro.)

Cons:

  • It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. (There's a good chance I would have procrastinated on this and not gotten my post out until next year, which means I would have had to make my 2024 donations without publishing this writeup first.)
  • Communicating beforehand would make me overly concerned about being nice to the people I talked to, and might prevent me from saying harsh but true things because I don't want to feel mean.
  • Orgs can still respond to the post after it's published, it's not as if it's impossible for them to respond at all.

Here are some relevant EA Forum/LW posts (the comments are relevant too):

Larks @ 2024-11-21T02:24 (+4)

It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. 

This is quite a scalable activity. When I used to do this, I had a spreadsheet to keep track, generated emails from a template, and had very little back and forth - orgs just saw a draft of their section, had a few days to comment, and then I might or might not take their feedback into account.
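
A minimal sketch of the kind of lightweight tooling described above - a spreadsheet of orgs plus a templated email. The file name, column names, and wording here are hypothetical illustrations, not the actual setup used.

import csv
from string import Template

# Hypothetical template and CSV layout; adjust to taste.
EMAIL = Template(
    "Dear $contact,\n\n"
    "I've drafted a section on $org for my upcoming review (attached). "
    "If you'd like to comment, please reply within $days days. "
    "I may or may not incorporate feedback, but wanted to give you the chance.\n"
)

with open("orgs.csv", newline="") as f:   # columns: org, contact, email
    for row in csv.DictReader(f):
        body = EMAIL.substitute(org=row["org"], contact=row["contact"], days=5)
        print(f"To: {row['email']}\n{body}")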

Larks @ 2024-11-21T02:10 (+2) in response to Averting autocracy in the United States of America

These seem like poor things to bet on:

  • Trump will commit more than x crimes during his presidency.
    • This lacks an objective resolution criterion, and 'number of crimes' in the US is often a fairly random number because a single act can give rise to multiple violations. Also, committing crimes is very different from being an autocrat - you could be an autocrat and obey the law, and you can be a democrat and break the law.
  • Trump's secretaries will commit more than x crimes during his presidency
    • Similar issues.
  • Trump will attempt to run for a third term
    • Not as bad, but seems insufficient. Michael Bloomberg ran for a third term as NYC mayor, even though this required changing the rules just for him, but he was not an autocrat.
  • The winner of the republican primary in the next two presidential elections will be a MAGA
    • This is subjective, and also insufficient, as whatever 'MAGA' is, it is not the same as an autocrat.
  • In the next x years, a future president or (sufficiently) high up politician will not be convicted of any crimes conditional on their party controlling the justice department
    • This also seems insufficient to demonstrate autocracy - for example to my knowledge Obama was never convicted of any crimes when his party controlled the Justice Department, but he was not an autocrat.

I think the best thing to bet on is the probability of winning the next election. Unfortunately this doesn't work nearly as well as it would have a few weeks ago, but I think it is the best approach.

Charlie_Guthmann @ 2024-11-21T02:23 (+1)

Completely agree - I think all of my markets are bad. However, the direction I'm trying to move in by proposing these questions is to operationalize steps along the way towards autocracy. You could semi-replicate this by asking whether one of the next 5 elections is going to be rigged (if you believe you can operationalize this), but even if you could set up a futures market for it, I don't think you will get all that much market efficiency from it. 

Betting on the probability of the next election is going to paint a very incomplete picture. There is a world in which we are 99% sure the next election is not going to get rigged, but acts during this admin would credibly increase the chance of future riggings by a lot. For instance, let's assume Trump himself has no interest in being an autocrat. Then he wouldn't rig the election purposely, right? And yet the fact that we now have a precedent that you won't be prosecuted for essentially anything if you win the presidency surely changes the incentives of future politicians who are considering meddling. 

This is literally my position. I think the next election is >90% to be "relatively fair", but I also think Trump is going to do a ton of stuff that paves the way for a future election to not be fair. Picture below to help explain the thesis.

 

Jason @ 2024-11-20T23:05 (+18) in response to Where I Am Donating in 2024

I think it's reasonable for a donor to decide where to donate based on publicly available data and to share their conclusions with others. Michael disclosed the scope and limitations of his analysis, and referred to other funders having made different decisions. The implied reader of the post is pretty sophisticated and would be expected to know that these funders may have access to information on initiatives that haven’t been/can't be publicly discussed.

While I appreciate why orgs may not want to release public information on all initiatives, the unavoidable consequence of that decision is that small/medium donors are not in a position to consider those initiatives when deciding whether to donate. Moreover, I think Open Phil et al. are capable of adjusting their own donation patterns in consideration of the fact that some orgs' ability to fundraise from the broader EA & AIS communities is impaired by their need for unusually-low-for-EA levels of public transparency.

"Run posts by orgs" is ordinarily a good practice, at least where you are conducting a deep dive into some issue on which one might expect significant information to be disclosed. Here, it seems reasonable to assume that orgs will have made a conscious decision about what general information they want to share with would-be small/medium donors. So there isn't much reason to expect that an inquiry (along with notice that the author is planning to publish on-Forum) would yield material additional information.[1] Against that, the costs of reaching out to ~28 orgs is not insignificant and would be a significant barrier to people authoring this kind of post. The post doesn't seem to rely on significant non-public information, accuse anyone of misconduct, or have other characteristics that would make advance notice and comment particularly valuable. 

Balancing all of that, I think the opportunity for orgs to respond to the post in comments was and is adequate here.

  1. ^

    In contrast, when one is writing a deep dive on a narrower issue, the odds seem considerably higher that the organization has material information that isn't published because of opportunity costs, lack of any reason to think there would be public interest, etc. But I'd expect most orgs' basic fundraising ask to have been at least moderately deliberate.

Larks @ 2024-11-21T02:22 (+4)

Here, it seems reasonable to assume that orgs will have made a conscious decision about what general information they want to share with would-be small/medium donors. So there isn't much reason to expect that an inquiry (along with notice that the author is planning to publish on-Forum) would yield material additional information.[1]

This seems quite false to me. Far from "isn't much reason", we already know that such an inquiry would have yielded additional information, because Malo almost definitely would have corrected Michael's material misunderstanding about MIRI's work.

Additionally, my experience of writing similar posts is that there are often many material small facts that small orgs haven't disclosed but would happily explain in an email. Even basic facts like "what publications have you produced this year" would be impossible to determine otherwise. Small orgs just aren't that strategic about what they disclose!

CB🔸 @ 2024-11-21T02:21 (+1) in response to Donation Election Discussion Thread

I think work on animals is comparatively neglected, due to the high numbers of individuals in bad conditions. More specifically, the smaller the animals, the more numerous and neglected they tend to be, which leads to underfunding.

Charlie_Guthmann @ 2024-11-20T23:52 (+2) in response to Averting autocracy in the United States of America

how about 99/1? pretty wild to me that you would say

I have generally found the fears of democracy failing in the US to be hyperbolic and without much good evidence. The claims are also very "vibes-based" and/or partisan rather than at the object level.

and then only offer 90/10 odds. Are you saying you think there is a ~1 in 20 chance the next election is not going to be free and fair? I would not consider freaking out about 1/100 to be hyperbolic, much less 1/20.

Also, it would be nice to break this up a little bit more. Here are some things I would probably bet you on, though they need to be clarified and thought out a bit more. 

  • Trump will commit more than x crimes during his presidency. 
  • Trump's secretaries will commit more than x crimes during his presidency
  • Trump will attempt to run for a third term 
  • The winner of the republican primary in the next two presidential elections will be a MAGA
  • In the next x years, a future president or (sufficiently) high up politician will not be convicted of any crimes conditional on their party controlling the justice department
Larks @ 2024-11-21T02:10 (+2)

These seem like poor things to bet on:

  • Trump will commit more than x crimes during his presidency.
    • This lacks an objective resolution criterion, and 'number of crimes' in the US is often a fairly random number because a single act can give rise to multiple violations. Also, committing crimes is very different from being an autocrat - you could be an autocrat and obey the law, and you can be a democrat and break the law.
  • Trump's secretaries will commit more than x crimes during his presidency
    • Similar issues.
  • Trump will attempt to run for a third term
    • Not as bad, but seems insufficient. Michael Bloomberg ran for a third term as NYC mayor, even though this required changing the rules just for him, but he was not an autocrat.
  • The winner of the republican primary in the next two presidential elections will be a MAGA
    • This is subjective, and also insufficient, as whatever 'MAGA' is, it is not the same as an autocrat.
  • In the next x years, a future president or (sufficiently) high up politician will not be convicted of any crimes conditional on their party controlling the justice department
    • This also seems insufficient to demonstrate autocracy - for example to my knowledge Obama was never convicted of any crimes when his party controlled the Justice Department, but he was not an autocrat.

I think the best thing to bet on is the probability of winning the next election. Unfortunately this doesn't work nearly as well as it would have a few weeks ago, but I think it is the best approach.

AllisonA @ 2024-11-21T02:08 (+1) in response to Winter 2025 Impact Accelerator Program for EA Professionals

Just a friendly flag that winter is during different months depending on what hemisphere you are in:)

yanni kyriacos @ 2024-11-21T01:29 (+8) in response to Yanni Kyriacos's Quick takes

Ten months ago I met Australia's Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to Politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low hanging fruit. My best theory is EAs don't want to do it / fund it because EAs are drawn to spreadsheets and google docs (it isn't their comparative advantage). Hammers like nails etc.

Ben_West🔸 @ 2024-11-19T18:20 (+38) in response to Ben_West's Quick takes

EA in a World Where People Actually Listen to Us

I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us". 

Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self important because obviously defense leaders aren't going to be listening to some random internet charity nerds and changing policy as a result.

Well, they are and they are. Let's hope it's for the better.

yanni kyriacos @ 2024-11-21T01:24 (+2)

Hi Ben! You might be interested to know I literally had a meeting with the Assistant Defence Minister in Australia about 10 months ago off the back of one email. I wrote about it here. AI Safety advocacy is IMO still extremely low hanging fruit. My best theory is EAs don't want to do it because EAs are drawn to spreadsheets etc (it isn't their comparative advantage).

MichaelDickens @ 2024-11-21T00:39 (+4) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

I believe the "consciousness requires having a self-model" view is the only coherent basis for rejecting animals' moral patienthood, but I don't understand the argument for why the view is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone attempt to defend this position.

MichaelStJules @ 2024-11-21T01:04 (+2)

Why would consciousness (or moral patienthood) require having a self-model?

From my comment above:

More on this kind of view here and here.

But to elaborate, the answer is illusionism about phenomenal consciousness, the only (physicalist) account of consciousness that seems to me to be on track to address the hard problem (by dissolving it and saying there are no phenomenal properties) and the meta-problem of consciousness. EDIT: To have an illusion of phenomenal properties, you have to model those phenomenal properties. The illusion is just the model, aspects of it, or certain things that depend on it. That model is (probably) some kind of model of yourself, or aspects of your own internal processing, e.g. an attention schema.

To prevent any misunderstanding, illusionism doesn't deny that consciousness exists in some form; it just denies that consciousness is phenomenal, or that there are phenomenal properties. It also denies the classical account of qualia, i.e. that they are ineffable and so on.

Ula Zarosa @ 2024-11-21T00:30 (+2) in response to Donation Election Discussion Thread

It's still very unclear that the decrease in pain in cage-free systems would not be significant enough to make the intervention not worth funding. What has convinced you specifically?
 

MichaelStJules @ 2024-11-21T00:59 (+2)

Rather than being convinced that cage-free is worse, I'm just not convinced it's better, so why support it?

I'm not convinced nest deprivation reaches the disabling intensity. It's definitely possible, and not very unlikely, but it's hard to say either way based on the current evidence. And whether or not it does, maybe keel bone fracture inflammation pain could still just be at least a few times more intense anyway.

MichaelDickens @ 2024-11-21T00:39 (+4) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

I believe the "consciousness requires having a self-model" view is the only coherent basis for rejecting animals' moral patienthood, but I don't understand the argument for why the view is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone attempt to defend this position.

Omnizoid @ 2024-11-21T00:42 (+2)

Yeah it's very bizarre.  Seems just to be vibes. 

MichaelStJules @ 2024-11-20T18:09 (+6) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

though unlike Eliezer, I don’t come to my conclusions about animal consciousness from the armchair without reviewing any evidence

A bit of nitpick, but I think Eliezer has a very high bar for attributing consciousness and is aware of relevant evidence for that bar, e.g. evidence for theory of mind or a robust self-model.

And this gets into the kind of views to which I'm sympathetic.

 

I am quite sympathetic to the kind of view Eliezer seems to endorse and the importance of something like a self-model, but my bar for self-models is probably much lower and I think many animals have at least modest self-models, including probably all mammals and birds, but I'm not confident about others. More on this kind of view here and here.

 

On the other hand, I am also sympathetic to counting anything that looks like pleasure, unpleasantness, desire as motivational salience (an attentional mechanism) or beliefs about betterness/worseness/good/bad, basically any kind of evaluative attitude about anything, or any way of caring about anything. If some system cares about something, I want to empathize and try to care about that in the same way on their behalf.[1] I discuss such attitudes more here and this view more in a draft (which I'm happy to share).

And I'm inclined to count these attitudes whether they're "conscious" or not, however we characterize consciousness. Or, these processes just ground something worth recognizing as conscious, anyway.

Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states. 

However, there could still be important differences in degree if and because they meet different bars, and I have some sympathy for some neuron count-related arguments that favour brains with more neurons (point 2 here). I also give substantial weight to the possibilities that:

  1. maximum intensities for desires as motivational salience (and maybe hedonic states like pleasure and unpleasantness) are similar,
  2. there's (often) no fact of the matter about how to compare them.

 

  1. ^

    Focusing on what they care about intrinsically or terminally, not instrumental or derived concerns. And, of course, I have to deal with intrapersonal and interpersonal trade-offs.

MichaelDickens @ 2024-11-21T00:39 (+4)

I believe the "consciousness requires having a self-model" view is the only coherent basis for rejecting animals' moral patienthood, but I don't understand the argument for why the view is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone attempt to defend this position.

MikhailSamin @ 2024-11-21T00:35 (+19) in response to Where I Am Donating in 2024

Note that we've only received a speculation grant from the SFF and haven’t received any s-process funding. This should be a downward update on the value of our work and an upward update on a marginal donation's value for our work.

I'm waiting for feedback from SFF before actively fundraising elsewhere, but I'd be excited about getting in touch with potential funders and volunteers. Please message me if you want to chat! My email is ms@contact.ms, and you can find me everywhere else or send a DM on EA Forum.

On other organizations, I think:

  • MIRI’s work is very valuable. I’m optimistic about what I know about their comms and policy work. As Malo noted, they work with policymakers, too. Since 2021, I’ve donated over $60k to MIRI. I think they should be the default choice for donations unless they say otherwise.
  • OpenPhil risks increasing polarization and making it impossible to pass meaningful legislation. But while they make IMO obviously bad decisions, not everything they/Dustin fund is bad. E.g., Horizon might place people who actually care about others in places where they could have a huge positive impact on the world. I’m not sure, I would love to see Horizon fellows become more informed on AI x-risk than they currently are, but I’ve donated $2.5k to Horizon Institute for Public Service this year.
  • I’d be excited about the Center for AI Safety getting more funding. SB-1047 was the closest we got to a very good thing, AFAIK, and it was a coin toss on whether it would’ve been signed or not. They seem very competent. I think the occasional potential lack of rigor and other concerns don't outweigh their results. I’ve donated $1k to them this year.
  • By default, I'm excited about the Center for AI Policy. A mistake they plausibly made makes me somewhat uncertain about how experienced they are with DC and whether they are capable of avoiding downside risks, but I think the people who run it are smart and have very reasonable models. I'd be excited about them having as much money as they can spend and hiring more experienced and competent people.
  • PauseAI is likely to be net-negative, especially PauseAI US. I wouldn’t recommend donating to them. Some of what they're doing is exciting (and there are people who would be a good fit to join them and improve their overall impact), but they're incapable of avoiding actions that might, at some point, badly backfire.

    I’ve helped them where I could, but they don’t have good epistemics, and they’re fine with using deception to achieve their goals.

    E.g., at some point, their website represented the view that it’s more likely than not that bad actors would use AI to hack everything, shut down the internet, and cause a societal collapse (but not extinction). If you talk to people with some exposure to cybersecurity and say this sort of thing, they’ll dismiss everything else you say, and it’ll be much harder to make a case for AI x-risk in the future. PauseAI Global’s leadership updated when I had a conversation with them and edited the claims, but I'm not sure they have mechanisms to avoid making confident wrong claims. I haven't seen evidence that PauseAI is capable of presenting their case for AI x-risk competently (though it's been a while since I've looked).

    I think PauseAI US is especially incapable of avoiding actions with downside risks, including deception[1], and donations to them are net-negative. To Michael, I would recommend, at the very least, donating to PauseAI Global instead of PauseAI US; to everyone else, I'd recommend ideally donating somewhere else entirely.

  • Stop AI's views include the idea that a CEV-aligned AGI would be just as bad as an unaligned AGI that causes human extinction. I wouldn't be able to pass their ITT, but yep, people should not donate to Stop AI. The Stop AGI person participated in organizing the protest described in the footnote. 
  1. ^

    In February this year, PauseAI US organized a protest against OpenAI "working with the Pentagon", while OpenAI only collaborated with DARPA on open-source cybersecurity tools and is in talks with the Pentagon about veteran suicide prevention. Most participants wanted to protest OpenAI because of AI x-risk and not because of the Pentagon, but those I talked to said they felt it was deceptive once they discovered the nature of OpenAI's collaboration with the Pentagon. Also, Holly threatened me in an attempt to prevent the publication of a post about this, and then publicly lied about our conversations in a way that can be easily falsified by looking at the messages we've exchanged.

Jason @ 2024-11-20T23:53 (+5) in response to Donation Election Discussion Thread

One's second, third, etc. choices would only come into play when/if their first choice is eliminated by the IRV system. Although there could be some circumstances in which voting solely for one's #1 choice could be tactically wise, I believe they are rather narrow and would only be knowable in the last day or two.

Tyler Johnston @ 2024-11-21T00:31 (+2)

Ooh interesting. Thanks for pointing this out, I'm revising my ballot now.

lauren_mee @ 2024-11-18T20:23 (+1) in response to Donation Election Discussion Thread

I believe these are the most effective organisations and will use the money wisely

Ula Zarosa @ 2024-11-21T00:30 (+2)

Which ones?

MichaelStJules @ 2024-11-18T20:56 (+4) in response to Donation Election Discussion Thread

I voted for The Humane League UK (meat/broiler chicken welfare), Fish Welfare Initiative, Shrimp Welfare Project and Arthropoda Foundation for cost-effective programs for animal welfare with low risk of backfire. I'm specifically concerned with backfire due to wild animal effects (also here), or increasing keel bone fractures for cage-free hens, so I avoid reducing animal product consumption/production and cage-free work.

Ula Zarosa @ 2024-11-21T00:30 (+2)

It's still very unclear to me that the decrease in pain in cage-free systems would be too small to make the intervention worth funding. What has convinced you specifically?
 

Jason @ 2024-11-20T23:53 (+5) in response to Donation Election Discussion Thread

One's second, third, etc. choices would only come into play when/if their first choice is eliminated by the IRV system. Although there could be some circumstances in which voting solely for one's #1 choice could be tactically wise, I believe they are rather narrow and would only be knowable in the last day or two.

crunk004 @ 2024-11-21T00:07 (+5)

Is there any scenario where only voting for your first choice would be wise? I don't think there is any downside to listing a second choice, assuming that you do actually prefer that second choice over any of the other options should your first choice be eliminated.
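A minimal instant-runoff sketch (hypothetical ballots in Python, not the Forum's actual tallying code) illustrates why a lower preference is only consulted after a voter's higher choice has been eliminated, so listing a second choice cannot hurt the first:

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: repeatedly eliminate the candidate with the fewest
    first-place votes, transferring those ballots to each voter's next
    surviving choice, until one candidate has a majority."""
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:  # highest-ranked surviving choice
                    tallies[choice] += 1
                    break
        leader, leader_votes = tallies.most_common(1)[0]
        if leader_votes * 2 > sum(tallies.values()) or len(candidates) == 1:
            return leader
        # Lower preferences only matter for voters whose higher choice is gone.
        eliminated = min(candidates, key=lambda c: tallies.get(c, 0))
        candidates.remove(eliminated)

# Hypothetical ballots: "A" leads round one, "C" is eliminated, and the
# C-voters' second choices push "B" to a majority. The A-voters' lower
# preferences are never consulted because "A" is never eliminated.
ballots = [["A"]] * 4 + [["B", "A"]] * 3 + [["C", "B"]] * 2
print(irv_winner(ballots))  # -> B
```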

Habryka @ 2024-11-20T23:14 (+4) in response to Ben_West's Quick takes

The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in all that I have written related to them was about beating China in AI capabilities development.

Of course no one likes a symmetric arms race, but the question is did people favor the "quickly establish overwhelming dominance towards China by investing heavily in AI" or the "try to negotiate with China and not set an example of racing towards AGI" strategy. My sense is many people favored the former (though definitely not all, and I am not saying that there is anything like consensus, my sense is it's a quite divisive topic).

To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent "AI Security Forum" in Vegas, many x-risk concerned people expressed very hawkish opinions.

Dicentra @ 2024-11-21T00:01 (+3)

Yeah re the export controls, I was trying to say "I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so" (though I used the word "ambiguous" because my impression was that some relevant people saw it as a point in favor of that work that it also mostly didn't directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is some of those people were "trying to establish overwhelming dominance over China" but not by "investing heavily in AI". 



Comments on 2024-11-20

Charlie_Guthmann @ 2024-11-20T23:58 (+3) in response to Averting autocracy in the United States of America

Side note - I think you will not get full honesty from many people here (more likely they just won't comment). Anyone with a public reputation that wants to interact with trump's admin is not going to want to comment (for good reason), plus this subject can be a bit touchy anyway. 

Tyler Johnston @ 2024-11-20T18:41 (+8) in response to Donation Election Discussion Thread

(Edited at 19:35 UTC-5 as I misunderstood how the voting system works)

My top 10 right now look something like:

1. The Midas Project
2. EA Animal Welfare Fund
3. Rethink Priorities
4. MATS Research
5. Shrimp Welfare Project
6. Apart Research
7. Legal Impact for Chickens
8. PauseAI
9. Wild Animal Initiative
10. High Impact Professionals

I ranked my organization, The Midas Project, first on my ballot. I don't think we have a stronger track record than many of the organizations in this election (and I expect the winners will be a few familiar top contenders like Rethink Priorities, who certainly deserve to be there), but I do think the election will undervalue our project due to general information asymmetries and most of our value being speculative/heavy-tailed. This seems in line with the tactical voting suggestion, but it does feel a bit icky/full of hubris.

Also, in making this list, I realized that I favored large orgs whose work I'm familiar with, and mostly skipped over small orgs that I know little about (including ones that made posts for marginal funding week that I just haven't read). This was a funny feeling because (as mentioned) I run a small org that I expect many people don't know about and will skip over. 

One way people can counteract this would be, in making your selection, to choose 1-2 orgs you've never heard of at random, do a deep dive on them, and place them somewhere in your rankings (even at the bottom if you aren't excited about them). With enough people doing this, there should be enough coverage of small orgs for the results of the election to be a bit more informative, at least in terms of how smaller orgs compare to each other.

Jason @ 2024-11-20T23:53 (+5)

One's second, third, etc. choices would only come into play when/if their first choice is eliminated by the IRV system. Although there could be some circumstances in which voting solely for one's #1 choice could be tactically wise, I believe they are rather narrow and would only be knowable in the last day or two.

MarcusAbramovitch @ 2024-11-20T22:06 (+7) in response to Averting autocracy in the United States of America

Sure, we don't have to bet at 50/50 odds. I'm willing to bet at say 90/10 odds in your favor that the next election is decided by electoral college or popular vote with a (relatively) free and fair election comparable to 2016, 2020 and 2024.

I agree that Trump is... bad for lack of a better word and that he seeks loyalty and such. But the US democracy is rather robust and somehow people took the fact that it held up strongly as evidence that... democracy was more fragile than we thought.

Charlie_Guthmann @ 2024-11-20T23:52 (+2)

How about 99/1? It's pretty wild to me that you would say

I have generally found the fears of democracy failing in the US to be hyperbolic and without much good evidence. The claims are also very "vibes-based" and/or partisan rather than at the object level.

and then only offer 90/10 odds. Are you saying you think there is a ~1 in 20 chance the next election is not going to be free and fair? I would not consider freaking out about 1/100 to be hyperbolic, much less 1/20.

Also, it would be nice to break this up a little bit more. Here are some things I would probably bet you on, though they need to be clarified and thought out a bit more. 

  • Trump will commit more than x crimes during his presidency.
  • Trump's secretaries will commit more than x crimes during his presidency.
  • Trump will attempt to run for a third term.
  • The winner of the Republican primary in the next two presidential elections will be a MAGA candidate.
  • In the next x years, a future president or (sufficiently) high-up politician will not be convicted of any crimes, conditional on their party controlling the Justice Department.
Vasco Grilo🔸 @ 2024-11-18T12:54 (+32) in response to Donation Election Discussion Thread

I ranked the Shrimp Welfare Project 1st because I think their Humane Slaughter Initiative is the most cost-effective intervention around.

MichaelDickens @ 2024-11-20T23:31 (+4)

+1 for doing a Fermi estimate, I would like to see more of those.

Dicentra @ 2024-11-20T22:36 (+5) in response to Ben_West's Quick takes

This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and that EAs were involved in this, because they thought it was true and important, because they thought current false fears in the greater natsec community were enhancing arms race risks) (and this was when Jason was leading CSET, and OP supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous-sign here.

Habryka @ 2024-11-20T23:14 (+4)

The export controls seemed like a pretty central example of hawkishness towards China and a reasonable precursor to this report. The central motivation in all that I have written related to them was about beating China in AI capabilities development.

Of course no one likes a symmetric arms race, but the question is did people favor the "quickly establish overwhelming dominance towards China by investing heavily in AI" or the "try to negotiate with China and not set an example of racing towards AGI" strategy. My sense is many people favored the former (though definitely not all, and I am not saying that there is anything like consensus, my sense is it's a quite divisive topic).

To support your point, I have seen much writing from Helen Toner on trying to dispel hawkishness towards China, and have been grateful for that. Against your point, at the recent "AI Security Forum" in Vegas, many x-risk concerned people expressed very hawkish opinions.

Habryka @ 2024-11-20T16:49 (+18) in response to Ben_West's Quick takes

In most cases this is a rumors based thing, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least the things I have heard is that a bunch of key leaders "basically agreed with the China part of situational awareness". 

Again, people should really take this with a double-dose of salt, I am personally at like 50/50 of this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn't seem crazy (but also various things could have been lost in a game of telephone, and being very concerned about China doesn't result in endorsing a "Manhattan project to AGI", though the rumors that I have heard did sound like they would endorse that)

Less rumor-based, I also know that Dario has historically been very hawkish, and "needing to beat China" was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so feel more comfortable saying it with fewer disclaimers, but am still only like 80% on it being true. 

Overall, my current guess is that indeed, a large-ish fraction of the EA policy people would have pushed for things like this, and at least didn't seem like they would push back on it that much. My guess is "we" are at least somewhat responsible for this, and there is much less of a consensus against a U.S. china arms race in US governance among EAs than one might think, and so the above is not much evidence that there was no listening or only very selective listening to EAs.

MichaelDickens @ 2024-11-20T23:07 (+9)

I looked through the congressional commission report's list of testimonies for plausibly EA-adjacent people. The only EA-adjacent org I saw was CSET, which had two testimonies (1, 2). From a brief skim, neither one looked clearly pro- or anti-arms race. They seemed vaguely pro-arms race on vibes but I didn't see any claims that looked like they were clearly encouraging an arms race—but like I said, I only briefly skimmed them, so I could have missed a lot.

Abby Babby @ 2024-11-20T04:38 (+4) in response to Where I Am Donating in 2024

I appreciate the effort you’ve put into this, and your analysis makes sense based on publicly available data and your worldview. However, many policy organizations are working on initiatives that haven’t been/can't be publicly discussed, which might lead you to make some incorrect conclusions. For example, I'm glad Malo clarified MIRI does indeed work with policymakers in this comment thread.

Tone is difficult to convey online, so I want to clarify I'm saying the next statement gently: I think if you do this kind of report--that a ton of people are reading and taking seriously--you have some responsibility to send your notes to the mentioned organizations for fact checking before you post.

I also want to note: the EA community does not have good intuitions around how politics works or what kind of information is net productive for policy organizations to share. The solution is not to blindly defer to people who say they understand politics, but I am worried that our community norms actively work against us in this space. Consider checking some of your criticisms of policy orgs with a person who has worked for the US Government; getting an insider's perspective on what makes sense/seems suspicious could be useful. 

Jason @ 2024-11-20T23:05 (+18)

I think it's reasonable for a donor to decide where to donate based on publicly available data and to share their conclusions with others. Michael disclosed the scope and limitations of his analysis, and referred to other funders having made different decisions. The implied reader of the post is pretty sophisticated and would be expected to know that these funders may have access to information on initiatives that haven’t been/can't be publicly discussed.

While I appreciate why orgs may not want to release public information on all initiatives, the unavoidable consequence of that decision is that small/medium donors are not in a position to consider those initiatives when deciding whether to donate. Moreover, I think Open Phil et al. are capable of adjusting their own donation patterns in consideration of the fact that some orgs' ability to fundraise from the broader EA & AIS communities is impaired by their need for unusually-low-for-EA levels of public transparency.

"Run posts by orgs" is ordinarily a good practice, at least where you are conducting a deep dive into some issue on which one might expect significant information to be disclosed. Here, it seems reasonable to assume that orgs will have made a conscious decision about what general information they want to share with would-be small/medium donors. So there isn't much reason to expect that an inquiry (along with notice that the author is planning to publish on-Forum) would yield material additional information.[1] Against that, the costs of reaching out to ~28 orgs is not insignificant and would be a significant barrier to people authoring this kind of post. The post doesn't seem to rely on significant non-public information, accuse anyone of misconduct, or have other characteristics that would make advance notice and comment particularly valuable. 

Balancing all of that, I think the opportunity for orgs to respond to the post in comments was and is adequate here.

  1. ^

    In contrast, when one is writing a deep dive on a narrower issue, the odds seem considerably higher that the organization has material information that isn't published because of opportunity costs, lack of any reason to think there would be public interest, etc. But I'd expect most orgs' basic fundraising ask to have been at least moderately deliberate.

AxellePB 🔹 @ 2024-11-20T13:47 (+29) in response to AxellePB 's Quick takes

I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause has grown massively in importance in recent years, and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain/torn about whether to pursue careers focused on reducing animal suffering or mitigating existential risks related to AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully expose some useful crucial considerations.

MichaelDickens @ 2024-11-20T23:04 (+4)

I would like to see this. I have considerable uncertainty about whether to prioritize (longtermism-oriented) animal welfare or AI safety.

Dicentra @ 2024-11-20T22:57 (+3) in response to Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent

I largely agree with this post, and think this is a big problem in general. There's also a lot of adverse selection that can't be called out because it's too petty and/or would require revealing private information. In a reasonable fraction of cases where I know the details, the loudest critic of a person or project is someone who has a pretty substantial negative COI that isn't being disclosed, like that the project fired them or defunded them or the person used to date them and broke up with them or something. As with positive COIs, there's a problem where being closely involved with something both gives you more information you could use to form a valid criticism (or make a good hire or grant) that others might miss and is correlated with factors that could bias your judgment. 

But with hiring and grantmaking there are generally internal processes for flagging these, whereas when people are making random public criticisms, there generally isn't such a process.

MarcusAbramovitch @ 2024-11-20T22:06 (+7) in response to Averting autocracy in the United States of America

Sure, we don't have to bet at 50/50 odds. I'm willing to bet at say 90/10 odds in your favor that the next election is decided by electoral college or popular vote with a (relatively) free and fair election comparable to 2016, 2020 and 2024.

I agree that Trump is... bad for lack of a better word and that he seeks loyalty and such. But the US democracy is rather robust and somehow people took the fact that it held up strongly as evidence that... democracy was more fragile than we thought.

jackva @ 2024-11-20T22:40 (+12)

This strikes me as too optimistic/not taking the evidence from the last two elections seriously enough.

In both of them a leading contender for the Presidency did not commit to a peaceful transfer of power. In one of them he incited an insurrection. In both of them, more severe outcomes were prevented by contingent factors.

Evan_Gaensbauer @ 2024-11-20T22:38 (0) in response to Donation Election Discussion Thread

test vote, please ignore

Habryka @ 2024-11-20T16:49 (+18) in response to Ben_West's Quick takes

In most cases this is a rumors based thing, but I have heard that a substantial chunk of the OP-adjacent EA-policy space has been quite hawkish for many years, and at least the things I have heard is that a bunch of key leaders "basically agreed with the China part of situational awareness". 

Again, people should really take this with a double-dose of salt, I am personally at like 50/50 of this being true, and I would love people like lukeprog or Holden or Jason Matheny or others high up at RAND to clarify their positions here. I am not attached to what I believe, but I have heard these rumors from sources that didn't seem crazy (but also various things could have been lost in a game of telephone, and being very concerned about China doesn't result in endorsing a "Manhattan project to AGI", though the rumors that I have heard did sound like they would endorse that)

Less rumor-based, I also know that Dario has historically been very hawkish, and "needing to beat China" was one of the top justifications historically given for why Anthropic does capability research. I have heard this from many people, so feel more comfortable saying it with fewer disclaimers, but am still only like 80% on it being true. 

Overall, my current guess is that indeed, a large-ish fraction of the EA policy people would have pushed for things like this, and at least didn't seem like they would push back on it that much. My guess is "we" are at least somewhat responsible for this, and there is much less of a consensus against a U.S. china arms race in US governance among EAs than one might think, and so the above is not much evidence that there was no listening or only very selective listening to EAs.

Dicentra @ 2024-11-20T22:36 (+5)

This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC (and that EAs were involved in this, because they thought it was true and important, because they thought current false fears in the greater natsec community were enhancing arms race risks) (and this was when Jason was leading CSET, and OP supporting its founding). Some of the same people were also supportive of export controls, which are more ambiguous-sign here.

Ariel Simnegar 🔸 @ 2024-11-20T20:43 (+3) in response to Fund Causes Open Phil Underfunds (Instead of Your Most Preferred Causes)

I don't think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, avoiding worst case scenarios, and reducing ambiguity.
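As a rough illustration of that implication, here is a back-of-the-envelope sketch (assuming only the 51%/49% gamble described above, nothing from the linked posts): each accepted round multiplies expected value by 0.51 * 2 = 1.02, while the probability that anything survives n rounds is 0.51^n, which goes to zero.

```python
# Sketch: repeatedly accepting a gamble with a 51% chance of doubling value
# and a 49% chance of destroying everything. Expected value grows each round
# (x1.02), but the chance that anything survives shrinks toward zero.
for n in (1, 10, 100, 500):
    expected_multiplier = 1.02 ** n      # 0.51 * 2 + 0.49 * 0 = 1.02 per round
    survival_probability = 0.51 ** n
    print(f"after {n:>3} rounds: E[value] x{expected_multiplier:,.2f}, "
          f"P(survive) = {survival_probability:.3g}")
```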

I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.

So long as small orgs apply to large grantmakers like OP, so long as one is locally confident that OP is trying to maximize expected value, I'd actually expect that OP's full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.

Jesper 🔸 @ 2024-11-20T22:11 (+3)

My main reason for trying to be mostly risk-neutral in my donations is that my donations are very small relative to the total size of the problem, while this is not the case for my personal investments. I would donate differently (more risk-averse) if I had control over a significant part of all charitable donations in a given area. In particular, I do not endorse double-or-nothing gambling on the fate of the universe.

You make a good point that OP is more likely to make judgements regarding small donation opportunities, so I'll have to revise my position that small donors should specifically seek out smaller organizations to donate to. But the same argument for "topping up" OP donations could equally be made to support simply donating to an EA fund (which I expect will also take into account how their donations funge with OP).

Michael_2358 🔸 @ 2024-11-20T21:51 (+10) in response to Averting autocracy in the United States of America

I agree it’s not >50% probability, but it’s high enough of a probability that we should be very concerned about it.

I’m sure I won’t do a good job of describing all the evidence in this comment, but here’s a link (https://youtu.be/dwtUoJfkHlQ?si=80d8M4e5e1y0P9UF) to a recent podcast in which a historian specializing in authoritarianism outlines how Trump is following the standard playbook of leaders, like Putin and Xi, who have consolidated power and made their countries less democratic for a prolonged period of time.

A key difference between the last time Trump was president and this time is that last time the military, the DOJ, and other government departments opposed his illegal requests on the grounds that their highest loyalty was to the U.S. Constitution. Trump has stated specifically he wants to avoid that resistance this time, which is why he is appointing his most die-hard loyalists to the top positions in defense, intelligence, and Justice. That should be an enormous red flag.

Democracy is actually quite rare in the historical context, and there are many examples of democratic governments turning into authoritarian governments. We should not assume that history is on our side.

Finally, it is clear that the current global trend is toward less democratic and more authoritarian governments. That may be because of information manipulation, globalization backlash, or other reasons. But it is the current direction of things.

MarcusAbramovitch @ 2024-11-20T22:06 (+7)

Sure, we don't have to bet at 50/50 odds. I'm willing to bet at say 90/10 odds in your favor that the next election is decided by electoral college or popular vote with a (relatively) free and fair election comparable to 2016, 2020 and 2024.

I agree that Trump is... bad for lack of a better word and that he seeks loyalty and such. But the US democracy is rather robust and somehow people took the fact that it held up strongly as evidence that... democracy was more fragile than we thought.

MarcusAbramovitch @ 2024-11-20T21:23 (+7) in response to Averting autocracy in the United States of America

I'm willing to bet (and i already have one bet) against US democracy falling.

I have generally found the fears of democracy failing in the US to be hyperbolic and without much good evidence. The claims are also very "vibes-based" and/or partisan rather than at the object level.

Perhaps, to quell your concerns, you should make concrete what you are concerned about and I will try to respond to that

For starters, we have already had a Trump presidency and while the transition was not ideal, it happened and thus should make you less concerned about a Trump dictatorship/autocracy. US institutions held up strongly against a real attempt to overturn the election.

Again, happy to formalize a bet on this.

Michael_2358 🔸 @ 2024-11-20T21:51 (+10)

I agree it’s not >50% probability, but it’s high enough of a probability that we should be very concerned about it.

I’m sure I won’t do a good job of describing all the evidence in this comment, but here’s a link (https://youtu.be/dwtUoJfkHlQ?si=80d8M4e5e1y0P9UF) to a recent podcast in which a historian specializing in authoritarianism outlines how Trump is following the standard playbook of leaders, like Putin and Xi, who have consolidated power and made their countries less democratic for a prolonged period of time.

A key difference between the last time Trump was president and this time is that last time the military, the DOJ, and other government departments opposed his illegal requests on the grounds that their highest loyalty was to the U.S. Constitution. Trump has stated specifically he wants to avoid that resistance this time, which is why he is appointing his most die-hard loyalists to the top positions in defense, intelligence, and Justice. That should be an enormous red flag.

Democracy is actually quite rare in the historical context, and there are many examples of democratic governments turning into authoritarian governments. We should not assume that history is on our side.

Finally, it is clear that the current global trend is toward less democratic and more authoritarian governments. That may be because of information manipulation, globalization backlash, or other reasons. But it is the current direction of things.

CB🔸 @ 2024-11-20T21:44 (+4) in response to Fund Causes Open Phil Underfunds (Instead of Your Most Preferred Causes)


For the typical EA, this would likely imply donating more to animal welfare, which is currently heavily underfunded under the typical EA's value system.

Opportunities Open Phil is exiting from, including invertebrates, digital minds, and wild animals, may be especially impactful.

I strongly agree: the comparative underfunding of these areas always felt off to me, given their very large numbers of individuals and low-hanging fruits. 
However, it feels like more and more people are recognizing the need for more funding for animal welfare, given the results of the recent debate.

Abby Babby @ 2024-11-20T20:31 (+3) in response to Where I Am Donating in 2024

Thanks for being thoughtful about this! Could you clarify what your cost benefit analysis was here? I'm quite curious!

MichaelDickens @ 2024-11-20T21:26 (+13)

I did it in my head and I haven't tried to put it into words so take this with a grain of salt.

Pros:

  • Orgs get time to correct misconceptions.

(Actually I think that's pretty much the only pro but it's a big pro.)

Cons:

  • It takes a lot longer. I reviewed 28 orgs; it would take me a long time to send 28 emails and communicate with potentially 28 people. (There's a good chance I would have procrastinated on this and not gotten my post out until next year, which means I would have had to make my 2024 donations without publishing this writeup first.)
  • Communicating beforehand would make me overly concerned about being nice to the people I talked to, and might prevent me from saying harsh but true things because I don't want to feel mean.
  • Orgs can still respond to the post after it's published, it's not as if it's impossible for them to respond at all.

Here are some relevant EA Forum/LW posts (the comments are relevant too):

MarcusAbramovitch @ 2024-11-20T21:23 (+7) in response to Averting autocracy in the United States of America

I'm willing to bet (and i already have one bet) against US democracy falling.

I have generally found the fears of democracy failing in the US to be hyperbolic and without much good evidence. The claims are also very "vibes-based" and/or partisan rather than at the object level.

Perhaps, to quell your concerns, you should make concrete what you are concerned about and I will try to respond to that

For starters, we have already had a Trump presidency and while the transition was not ideal, it happened and thus should make you less concerned about a Trump dictatorship/autocracy. US institutions held up strongly against a real attempt to overturn the election.

Again, happy to formalize a bet on this.

Charlie_Guthmann @ 2024-11-20T20:16 (+2) in response to Where I Am Donating in 2024

How do you feel about EA's investing in AI companies with their personal portfolio?

MichaelDickens @ 2024-11-20T21:12 (+3)

It depends. I think investing in publicly-traded stocks has a smallish effect on helping the underlying company (see Harris (2022), [Pricing Investor Impact](https://sustainablefinancealliance.org/wp-content/uploads/2023/05/GRASFI2023_paper_1594.pdf)). I think investing in private companies is probably much worse and should be avoided.

Jesper 🔸 @ 2024-11-20T18:16 (+1) in response to Fund Causes Open Phil Underfunds (Instead of Your Most Preferred Causes)

I agree with the overall conclusion of this post but not completely with the reasoning. In particular, there is an important difference between allocating investments and allocating charitable donations in that for investments it makes sense to be (at least somewhat) risk averse, while for donations a simple strategy maximizing expected benefits makes perfect sense.

Even a risk-neutral approach to charitable donations will have to spread its investments however, because there is only so much money that the most effective charity can absorb before reaching its funding gap, which makes the next best charity the new most effective one.

For a big organization such as OP, this can become a real problem for a cause area where there are many charities with high effectiveness but (relatively) low funding gaps. This might be part of the explanation why OP pays more to global health, where there are very large organizations that can effectively absorb a lot of funding, over animal welfare.

For small individual donors, this means that there are likely opportunities to make very effective donations to organizations that might be too new or too small to be picked up by the big donors. You might even help them grow to the size where they can effectively absorb much larger donations.

So to reiterate, I think it makes sense to prefer donating to smaller charities and cause areas as an individual donor, but the reason is that they might be overlooked by the big donors, not to "balance out" some imaginary overall EA portfolio.

Ariel Simnegar 🔸 @ 2024-11-20T20:43 (+3)

I don't think most people take as a given that maximizing expected value makes perfect sense for donations. In the theoretical limit, many people balk at conclusions like accepting a gamble with a 51% chance of doubling the universe's value and a 49% chance of destroying it. (Especially so at the implication of continuing to accept that gamble until the universe is almost surely destroyed.) In practice, people have all sorts of risk aversion, including difference-making risk aversion, avoiding worst case scenarios, and reducing ambiguity.

I argue here against the view that animal welfare's diminishing marginal returns would be sufficient for global health to win out against it at OP levels of funding, even if one is risk neutral.

So long as small orgs apply to large grantmakers like OP, so long as one is locally confident that OP is trying to maximize expected value, I'd actually expect that OP's full-time staff would generally be much better positioned to make these kinds of judgments than you or I. Under your value system, I'd echo Jeff's suggestion that you should "top up" OP's grants.

MichaelDickens @ 2024-11-20T20:03 (+2) in response to Ben_West's Quick takes

I think that's not a reasonable position to hold but I don't know how to constructively argue against it in a short comment so I'll just register my disagreement.

Like, presumably China's values include humans existing and having mostly good experiences.

Habryka @ 2024-11-20T20:39 (+2)

Yep, I agree with this, but it nevertheless appears to be a relatively prevalent opinion among many EAs working in AI policy.

Manuel Allgaier @ 2024-11-19T10:45 (+25) in response to Manuel Allgaier's Quick takes

How tractable is improving (moral) philosophy education in high schools? 


tldr: Do high schools still neglect ethics / moral philosophy in their curriculums? Mine did (in 2012). Are there tractable ways to improve the situation, through national/state education policy or by reaching out to schools and teachers? Has this been researched or tried before?
 

The public high school I went to in Rottweil (rural Southern Germany) was overall pretty good, probably top 2-10% globally, except for one thing: Moral philosophy. 90min/week "Christian Religion" was the default for everyone, in which we spent most of the time interpreting stories from the bible, most of which to me felt pretty irrelevant to the present. This was in 2012 in Germany, a country with more atheists than Christians as of 2023, and even in 2012 my best guess is that <20% of my classmates were practicing a religion. 

Only in grade 10 did we get the option to switch to secular Ethics classes instead, which fewer than 10% of the students did (Religion was considered less work). 

Ethics class quickly became one of my favorite classes. For the first time in my life I had a regular group of people equally interested in discussing vegetarianism and other such questions (almost everyone in my school ate meat, and vegetarians were sometimes made fun of). Still, the curriculum wasn't great: we spent too much time on ancient Greek philosophers and very little time discussing moral philosophy topics relevant to the present. 

How have your experiences been in high school? I'm especially curious about more recent experiences. 

Are there tractable ways to improve the situation? Has anyone researched this? 

1) Could we get ethics classes in the mandatory/default curriculum in more schools? Which countries or states seem best for that? In Germany, education is state-regulated - which German state might be most open to this? Hamburg? Berlin? 

2) Is there a shortage in ethics teachers (compared to religion teachers)? Can we get teachers more interested in teaching ethics? 

3) Are any teachers here teaching ethics? Would you like to connect more with other (EA/ethics) teachers? We could open a whatsapp group, if there's not already one. 
 

Vidur Kapur @ 2024-11-20T20:31 (+1)

In England, secular ethics isn't really taught until Year 9 (age 13-14) or Year 10, as part of Religious Studies classes. Even then, it might be dependent on the local council, the type of school or even the exam boards/modules that are selected by the school. And by Year 10, students in some schools can opt out of taking religious studies for their GCSEs.

Anecdotally, I got into EA (at least earlier than I would have) because my high school religious studies teacher (c. 2014) could see that I had utilitarian intuitions (e.g. in discussions about animal experimentation and assisted dying) and gave me a copy of Practical Ethics to read. I then read The Life You Can Save.

MichaelDickens @ 2024-11-20T15:04 (+7) in response to Where I Am Donating in 2024

you have some responsibility to send your notes to the mentioned organizations for fact checking before you post

I spent a good amount of time thinking about whether I should do this and I read various arguments for and against it, and I concluded that I don't have that responsibility. There are clear advantages to running posts by orgs, and clear disadvantages, and I decided that the disadvantages outweighed the advantages in this case.

Abby Babby @ 2024-11-20T20:31 (+3)

Thanks for being thoughtful about this! Could you clarify what your cost benefit analysis was here? I'm quite curious!

mjkerrison🔸️ @ 2024-11-20T07:33 (+3) in response to Fund Causes Open Phil Underfunds (Instead of Your Most Preferred Causes)

I think this is a really compelling addition to EA portfolio theory. Two half-formed thoughts:

  • Does portfolio theory apply better at the individual level than the community level? I think something like treating your own contributions (giving + career) as a portfolio makes a lot of sense, if you're explicitly trying to hedge personal epistemic risk. I think this is a slightly different angle on one of Jeff's points: is this "k-level 2" aggregate portfolio a 'better' aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios? You could probably look at this analytically... might put that on the to-do list.

  • At some point what matters is specific projects...? Like when I think about 'underfunded', I'm normally thinking there's good projects with high expected ROI that aren't being done, relative to some other cause area where the marginal project has a lower ROI. Maybe my point is something like - underfunding and accounting for it should be done at a different stage of the donation process, rather than in looking at overall what the % breakdown of the portfolio is. Maybe we're more private equity than index fund.

Ariel Simnegar 🔸 @ 2024-11-20T20:27 (+3)

Does portfolio theory apply better at the individual level than the community level?

I think the individual level applies if you have risk aversion on a personal level. For example, I care about having personally made a difference, which biases me towards certain individually less risky ideas.

is this "k-level 2" aggregate portfolio a 'better' aggregation of everyone's information than the "k-level 1" of whatever portfolio emerges from everyone individually optimising their own portfolios?

I think it's a tough situation because k=2 includes these unsavory implications Jeff and I discuss. But as I wrote, I think k=2 is just what happens when people think about everyone's donations game-theoretically. If everyone else is thinking in k=2 mode but you're thinking in k=1 mode, you're going to get funged such that your value system's expression in the portfolio could end up being much less than what is "fair". It's a bit like how the Nash equilibrium in the Prisoner's Dilemma is "defect-defect".

At some point what matters is specific projects...?

I agree with this. My post frames the discussion in terms of cause areas for simplicity and since the lessons generalize to more people, but I think your point is correct.

Charlie_Guthmann @ 2024-11-20T20:16 (+2) in response to Where I Am Donating in 2024

How do you feel about EA's investing in AI companies with their personal portfolio?

Habryka @ 2024-11-20T17:53 (+2) in response to Ben_West's Quick takes

I think most of those people believe that "having an AI aligned to 'China's values'" would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient, if you think there is a greater than 5% chance of China ending up with "aligned AI" instead.

MichaelDickens @ 2024-11-20T20:03 (+2)

I think that's not a reasonable position to hold but I don't know how to constructively argue against it in a short comment so I'll just register my disagreement.

Like, presumably China's values include humans existing and having mostly good experiences.

AGB 🔸 @ 2024-11-20T18:47 (+11) in response to Ben_West's Quick takes

Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on 'we need to beat China' arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an 'overwhelming majority of EAs involved in AI safety' disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress

So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...

This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...

Still, most people aren’t doing this. Why not?

Later, talking about why attempting a regulatory approach to avoiding a race is futile:

The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security - both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.

So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?

Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.

I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It's right there in the section that most explicitly talks about policy. 

MichaelDickens @ 2024-11-20T19:59 (+2)

Scott's last sentence seems to be claiming that avoiding an arms race is easier than solving alignment (and it would seem to follow from that that we shouldn't race). But I can see how a politician reading this article wouldn't see that implication.

elifland @ 2024-11-20T19:26 (+5) in response to Where I Am Donating in 2024

Centre for the Governance of AI does alignment research and policy research. It appears to focus primarily on the former, which, as I've discussed, I'm not as optimistic about. (And I don't like policy research as much as policy advocacy.)

I'm confused, the claim here is that GovAI does more technical alignment than policy research?

MichaelDickens @ 2024-11-20T19:55 (+7)

That's the claim I made, yes. Looking again at GovAI's publications, I'm not sure why I thought that at the time since they do look more like policy research. Perhaps I was taking a strict definition of "policy research" where it only counts if it informs policy in some way I care about.

Right now it looks like my past self was wrong but I'm going to defer to him because he spent more time on it than I'm spending now. I'm not going to spend more time on it because this issue isn't decision-relevant, but there's a reasonable chance I was confused about something when I wrote that.

Kenneth_Diao @ 2024-11-20T17:39 (+5) in response to Donation Election Discussion Thread

The donation election post (meet the candidates) and the actual voting platform need to be cross-checked. I saw that Animetrics was included in the vote but not in the post, while Giving Green was included in the post and not in the vote. There may be other errors which I missed.

Sarah Cheng @ 2024-11-20T19:53 (+5)

Thanks so much for flagging this, and really sorry for the mistakes. I've gone through and updated both, hopefully they are now both up-to-date. Please let me know if you see any other issues.

Seth Herd @ 2024-11-20T19:53 (+15) in response to China Hawks are Manufacturing an AI Arms Race

Copied from my LW comment, since this is probably more of an EAF discussion:

This is really important pushback. This is the discussion we need to be having.

Most people who are trying to track this believe China has not been racing toward AGI up to this point. Whether they embark on that race is probably being determined now - and based in no small part on the US's perceived attitude and intentions.

Any calls for racing toward AGI should be closely accompanied with "and of course we'd use it to benefit the entire world, sharing the rapidly growing pie". If our intentions are hostile, foreign powers have little choice but to race us.

And we should not be so confident we will remain ahead if we do race. There are many routes to progress other than sheer scale of pretraining. The release of DeepSeek r1 today indicates that China is not so far behind. Let's remember that while the US "won" the race for nukes, our primary rival had nukes very soon after - by stealing our advancements. A standoff between AGI-armed US and China could be disastrous - or navigated successfully if we take the right tone and prevent further proliferation (I shudder to think of Putin controlling an AGI, or many potentially unstable actors).

This discussion is important, so it needs to be better. This pushback is itself badly flawed. In calling out the report's lack of references, it provides almost none itself. Citing a 2017 official statement from China seems utterly irrelevant to guessing their current, privately held position. Almost everyone has updated massively since 2017. (edit: It's good that this piece does note that public statements are basically meaningless in such matters.) If China is "racing toward AGI" as an internal policy, they probably would've adopted that recently. (I doubt that they are racing yet, but it seems entirely possible they'll start now in response to the US push to do so - and their perspective on the US as a dangerous aggressor on the world stage. But what do I know - we need real experts on China and international relations.)

Pointing out the technical errors in the report seems somewhere between irrelevant and harmful. You can understand very little of the details and still understand that AGI would be a big, big deal if true, and that the many experts predicting short timelines could be right. Nitpicking the technical expertise of people who are probably essentially correct in their assessment just sets a bad tone of fighting/arguing instead of having a sensible discussion.

And we desperately need a sensible discussion on this topic.

Omnizoid @ 2024-11-20T19:30 (+2) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

I don't think this is right. We could imagine a very simple creature that experiences very little pain but is totally focused on it. It's true that creatures like us normally tend to focus more on more intense pain, but this doesn't mean that's the relevant benchmark for intensity. My claim is the causal arrow goes the other way. 

But if I did, I think this would make me think animal consciousness is even more serious.  For simple creatures, pain takes up their whole world.  

MichaelStJules @ 2024-11-20T19:49 (+4)

Maybe it'll help for me to rephrase: if a being has more things it can attend to (be aware of, have in its attention) simultaneously, then it has more attention to pull. It can attend to more, all else equal, for example, if it has a richer/more detailed visual field, similar to more pixels in a computer screen.

We could imagine a very simple creature that experiences very little pain but is totally focused on it.

If it's very simple, it would probably have very little attention to pull (relatively), so the pain would not be intense under the hypothesis I'm putting forward.

But if I did, I think this would make me think animal consciousness is even more serious.  For simple creatures, pain takes up their whole world. 

I also give some weight to this possibility, i.e. that we should measure attention in individual-relative terms, and it's something more like the proportion of attention pulled that matters.

David Mathers🔸 @ 2024-11-20T09:59 (+6) in response to Ben_West's Quick takes

The thing about Yudkowsky is that, yes, on the one hand, every time I read him, I think he surely must be coming across as super-weird and dodgy to "normal" people. But on the other hand, actually, it seems like he HAS done really well in getting people to take his ideas seriously? Sam Altman was trolling Yudkowsky on twitter a while back about how many of the people running/founding AGI labs had been inspired to do so by his work. He got invited to write on AI governance for TIME despite having no formal qualifications or significant scientific achievements whatsoever. I think if we actually look at his track record, he has done pretty well at convincing influential people to adopt what were once extremely fringe views, whilst also succeeding in being seen by the wider world as one of the most important proponents of those views, despite an almost complete lack of mainstream, legible credentials. 

Charlie_Guthmann @ 2024-11-20T19:35 (+1)

Hmm, I hear what you are saying but that could easily be attributed to some mix of 

(1) he has really good/convincing ideas 

(2) he seems to be a public representative for the EA/LW community to a journalist on the outside.

And I'm responding to someone saying that we are in "phase 3" - that is to say people in the public are listening to us - so I guess I'm not extremely concerned about him not being able to draw attention or convince people. I'm more just generally worried that people like him are not who we should be promoting to positions of power, even if those are de jure positions. 

MichaelStJules @ 2024-11-20T19:19 (+4) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing. 

I might have written some of them! I still have some sympathy for the hypothesis and that it matters when you reason using expected values, taking the arguments into account, even if you assign the hypothesis like 1% probability. The probabilities can matter here.

 

In regards to your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.

I believe the intensity of suffering consists largely (maybe not exclusively) in how much it pulls your attention, specifically its motivational salience. Intense suffering that's easy to ignore seems like an oxymoron. I discuss this a bit more here.

Welfare Footprint Project's pain definitions also refer to attention as one of the criteria (along with other behaviours):

Annoying pain:

(...) Sufferers can ignore this sensation most of the time. Performance of cognitive tasks demanding attention are either not affected or only mildly affected. (...)

Hurtful pain:

(...) Different from Annoying pain, the ability to draw attention away from the sensation of pain is reduced: awareness of pain is likely to be present most of the time, interspersed by brief periods during which pain can be ignored depending on the level of distraction provided by other activities. (...)

Disabling pain:

(...) Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. (...)

Excruciating pain seems entirely behaviourally defined, but I would assume effects on attention like those of disabling pain, or (much) stronger.

 

Then, we can ask "how much attention can be pulled?" And we might think:

  1. having more things you're aware of simultaneously (e.g. more details in your visual field) means you have more attention to pull, and
  2. more neurons allows you to be aware of more things simultaneously,

so brains with more neurons can have more attention to pull.

Omnizoid @ 2024-11-20T19:30 (+2)

I don't think this is right. We could imagine a very simple creature that experiences very little pain but is totally focused on it. It's true that, for creatures like us, we normally tend to focus more on more intense pain, but this doesn't mean that focus is the relevant benchmark for intensity. My claim is that the causal arrow goes the other way.

But if I did, I think this would make me think animal consciousness is even more serious.  For simple creatures, pain takes up their whole world.  

elifland @ 2024-11-20T19:26 (+5) in response to Where I Am Donating in 2024

Centre for the Governance of AI does alignment research and policy research. It appears to focus primarily on the former, which, as I've discussed, I'm not as optimistic about. (And I don't like policy research as much as policy advocacy.)

I'm confused, the claim here is that GovAI does more technical alignment than policy research?

Omnizoid @ 2024-11-20T18:59 (+2) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.  

In regards to your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.

MichaelStJules @ 2024-11-20T19:19 (+4)

RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing. 

I might have written some of them! I still have some sympathy for the hypothesis and that it matters when you reason using expected values, taking the arguments into account, even if you assign the hypothesis like 1% probability. The probabilities can matter here.

 

In regards to your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.

I believe the intensity of suffering consists largely (maybe not exclusively) in how much it pulls your attention, specifically its motivational salience. Intense suffering that's easy to ignore seems like an oxymoron. I discuss this a bit more here.

Welfare Footprint Project's pain definitions also refer to attention as one of the criteria (along with other behaviours):

Annoying pain:

(...) Sufferers can ignore this sensation most of the time. Performance of cognitive tasks demanding attention are either not affected or only mildly affected. (...)

Hurtful pain:

(...) Different from Annoying pain, the ability to draw attention away from the sensation of pain is reduced: awareness of pain is likely to be present most of the time, interspersed by brief periods during which pain can be ignored depending on the level of distraction provided by other activities. (...)

Disabling pain:

(...) Inattention and unresponsiveness to milder forms of pain or other ongoing stimuli and surroundings is likely to be observed. (...)

Excruciating pain seems entirely behaviourally defined, but I would assume effects on attention like those of disabling pain, or (much) stronger.

 

Then, we can ask "how much attention can be pulled?" And we might think:

  1. having more things you're aware of simultaneously (e.g. more details in your visual field) means you have more attention to pull, and
  2. more neurons allows you to be aware of more things simultaneously,

so brains with more neurons can have more attention to pull.

Habryka @ 2024-11-20T18:52 (+15) in response to US government commission pushes Manhattan Project-style AI initiative

I think a non-trivial fraction of Aschenbrenner's influence as well as intellectual growth is due to us and the core EA/AI-Safety ideas, yeah. I doubt he would have written it if the extended community didn't exist, and if he wasn't mentored by Holden, etc.

akash 🔸 @ 2024-11-20T19:18 (–2)

I don't disagree with this at all. But does this mean that blame can be attributed to the entire EA community? I think not. 

Re mentorship/funding: I doubt that his mentors were hoping that he would raise the chances of an arms race conflict. As a corollary, I am sure nukes wouldn't have been developed if the physics community of the 1930s hadn't existed, or perhaps if it had mentored different people or adopted better ethical norms. Even if it had done the latter, it is unclear whether that would have prevented the creation of the bomb. 

(I found your comments under Ben West's posts insightful; if true, it highlights a divergence between the beliefs of the broader EA community and certain influential EAs in DC and AI policy circles.)

Currently, it is just a report, and I hope it stays that way.

MichaelStJules @ 2024-11-20T18:52 (+4) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

I also find RP's arguments against neuron counts completely devastating.

I worked on some of them with RP myself here.

FWIW, I found Adam's arguments convincing against the kinds of views he argued against, but I don't think they covered the cases in point 2 here.

Omnizoid @ 2024-11-20T18:59 (+2)

RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.  

In regards to your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.

Omnizoid @ 2024-11-20T18:43 (+4) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

Interesting! I intended the post largely as a response to someone with views like yours. In short, I think the considerations I provided about how animals behave are very well explained by the supposition that they're conscious. I also find RP's arguments against neuron counts completely devastating. 

MichaelStJules @ 2024-11-20T18:52 (+4)

I also find RP's arguments against neuron counts completely devastating.

I worked on some of them with RP myself here.

FWIW, I found Adam's arguments convincing against the kinds of views he argued against, but I don't think they covered the cases in point 2 here.

akash 🔸 @ 2024-11-20T18:36 (+3) in response to US government commission pushes Manhattan Project-style AI initiative

And we contributed to this.

What makes you say this? I agree that it is likely that Aschenbrenner's report was influential here, but did we make Aschenbrenner write chapter IIId of Situational Awareness the way he did? 

But the background work predates Leopold's involvement.

Is there some background EA/aligned work that argues for an arms race? Because the consensus seems to be against starting a great power war.

Habryka @ 2024-11-20T18:52 (+15)

I think a non-trivial fraction of Aschenbrenner's influence as well as intellectual growth is due to us and the core EA/AI-Safety ideas, yeah. I doubt he would have written it if the extended community didn't exist, and if he wasn't mentored by Holden, etc.

MichaelDickens @ 2024-11-20T17:00 (+6) in response to Ben_West's Quick takes

It looks to me like the online EA community, and the EAs I know IRL, have a fairly strong consensus that arms races are bad. Perhaps there's a divide in opinions with most self-identified EAs on one side, and policy people / company leaders on the other side—which in my view is unfortunate since the people holding the most power are also the most wrong.

(Is there some systematic reason why this would be true? At least one part of it makes sense: people who start AGI companies must believe that building AGI is the right move. It could also be that power corrupts, or something.)

So maybe I should say the congressional commission should've spent less time listening to EA policy people and more time reading the EA Forum. Which obviously was never going to happen but it would've been nice.

AGB 🔸 @ 2024-11-20T18:47 (+11)

Slightly independent to the point Habryka is making, which may well also be true, my anecdotal impression is that the online EA community / EAs I know IRL were much bigger on 'we need to beat China' arguments 2-4 years ago. If so, simple lag can also be part of the story here. In particular I think it was the mainstream position just before ChatGPT was released, and partly as a result I doubt an 'overwhelming majority of EAs involved in AI safety' disagree with it even now.

Example from August 2022:

https://www.astralcodexten.com/p/why-not-slow-ai-progress

So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies...

This is the most common question I get on AI safety posts: why isn’t the rationalist / EA / AI safety movement doing this more? It’s a great question, and it’s one that the movement asks itself a lot...

Still, most people aren’t doing this. Why not?

Later, talking about why attempting a regulatory approach to avoiding a race is futile:

The biggest problem is China. US regulations don’t affect China. China says that AI leadership is a cornerstone of their national security - both as a massive boon to their surveillance state, and because it would boost their national pride if they could beat America in something so cutting-edge.

So the real question is: which would we prefer? OpenAI gets superintelligence in 2040? Or Facebook gets superintelligence in 2044? Or China gets superintelligence in 2048?

Might we be able to strike an agreement with China on AI, much as countries have previously made arms control or climate change agreements? This is . . . not technically prevented by the laws of physics, but it sounds really hard. When I bring this challenge up with AI policy people, they ask “Harder than the technical AI alignment problem?” Okay, fine, you win this one.

I feel like a generic non-EA policy person reading that post could well end up where the congressional commission landed? It's right there in the section that most explicitly talks about policy. 

MichaelStJules @ 2024-11-20T18:09 (+6) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

though unlike Eliezer, I don’t come to my conclusions about animal consciousness from the armchair without reviewing any evidence

A bit of a nitpick, but I think Eliezer has a very high bar for attributing consciousness and is aware of relevant evidence for that bar, e.g. evidence for theory of mind or a robust self-model.

And this gets into the kind of views to which I'm sympathetic.

 

I am quite sympathetic to the kind of view Eliezer seems to endorse and the importance of something like a self-model, but my bar for self-models is probably much lower and I think many animals have at least modest self-models, including probably all mammals and birds, but I'm not confident about others. More on this kind of view here and here.

 

On the other hand, I am also sympathetic to counting anything that looks like pleasure, unpleasantness, desire as motivational salience (an attentional mechanism) or beliefs about betterness/worseness/good/bad, basically any kind of evaluative attitude about anything, or any way of caring about anything. If some system cares about something, I want to empathize and try to care about that in the same way on their behalf.[1] I discuss such attitudes more here and this view more in a draft (which I'm happy to share).

And I'm inclined to count these attitudes whether they're "conscious" or not, however we characterize consciousness. Or, these processes just ground something worth recognizing as conscious, anyway.

Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states. 

However, there could still be important differences in degree if and because they meet different bars, and I have some sympathy for some neuron count-related arguments that favour brains with more neurons (point 2 here). I also give substantial weight to the possibilities that:

  1. maximum intensities for desires as motivational salience (and maybe hedonic states like pleasure and unpleasantness) are similar,
  2. there's (often) no fact of the matter about how to compare them.

 

  1. ^

    Focusing on what they care about intrinsically or terminally, not instrumental or derived concerns. And, of course, I have to deal with intrapersonal and interpersonal trade-offs.

Omnizoid @ 2024-11-20T18:43 (+4)

Interesting! I intended the post largely as a response to someone with views like yours. In short, I think the considerations I provided about how animals behave are very well explained by the supposition that they're conscious. I also find RP's arguments against neuron counts completely devastating. 

Tyler Johnston @ 2024-11-20T18:41 (+8) in response to Donation Election Discussion Thread

(Edited at 19:35 UTC-5 as I misunderstood how the voting system works)

My top 10 right now look something like:

1. The Midas Project
2. EA Animal Welfare Fund
3. Rethink Priorities
4. MATS Research
5. Shrimp Welfare Project
6. Apart Research
7. Legal Impact for Chickens
8. PauseAI
9. Wild Animal Initiative
10. High Impact Professionals

I ranked my organization, The Midas Project, first on my ballot. I don't think we have a stronger track record than many of the organizations in this election (and I expect the winners will be a few familiar top contenders like Rethink Priorities, who certainly deserve to be there), but I do think the election will undervalue our project due to general information asymmetries and most of our value being speculative/heavy-tailed. This seems in line with the tactical voting suggestion, but it does feel a bit icky/full of hubris.

Also, in making this list, I realized that I favored large orgs whose work I'm familiar with, and mostly skipped over small orgs that I know little about (including ones that made posts for marginal funding week that I just haven't read). This was a funny feeling because (as mentioned) I run a small org that I expect many people don't know about and will skip over. 

One way people can counteract this would be, in making your selection, choose 1-2 orgs you've never heard of at random, do a deep dive on them, and place them somewhere in your rankings (even at the bottom if you aren't excited about them). With enough people doing this, there should be enough coverage of small orgs for the results of the election to be a bit more informative, at least in terms of how smaller orgs compare to each other.

sapphire @ 2024-11-20T08:20 (+14) in response to US government commission pushes Manhattan Project-style AI initiative

I spent all day crying about this. An arms race is about the least safe way to approach this. And we contributed to this. Many important people read Leopold's report. He promoted it quite hard. But the background work predates Leopold's involvement.

We were totally careless and self-aggrandizing. I hope other people don't pay for our sins.

akash 🔸 @ 2024-11-20T18:36 (+3)

And we contributed to this.

What makes you say this? I agree that it is likely that Aschenbrenner's report was influential here, but did we make Aschenbrenner write chapter IIId of Situational Awareness the way he did? 

But the background work predates Leopold's involvement.

Is there some background EA/aligned work that argues for an arms race? Because the consensus seems to be against starting a great power war.

ozymandias @ 2024-11-20T18:35 (+7) in response to Share what GWWC and 10% Pledge has meant to you

I have written a couple of times about my feelings about taking the GWWC pledge. (I don't believe I'm signed up on the website, but I in fact have given 10% for my entire working life, except two years that were particularly bad financially.) I think the essence, for me, is a sense of empowerment. 

The world is full of enormous problems that I can't do anything about. I often feel weak and powerless and helpless. GWWC says to me that I do have power to positively affect the world, and always will. I don't have to be exceptional: as long as I have money, I can donate to highly effective charities and save lives. I don't have to worry about dying and leaving the world the same as it was when I entered it. Though I will never know their names, there are people who would be dead if not for me, and who are alive. And that means so much. 

Mikolaj Kniejski @ 2024-11-20T17:06 (+1) in response to LLMs are weirder than you think

I've always been impressed with Rethink Priorities' work, but this post is underwhelming.

As I understand it, the post argues that we can't treat LLMs as coherent persons. The author seems to think this idea is vaguely connected to the claim that LLMs are not experiencing pain when they say they do. I guess the reasoning goes something like this: If LLMs are not coherent personas, then we shouldn't interpret statements like "I feel pain" as genuine indicators that they actually feel pain, because such statements are more akin to role-playing than honest representations of their internal states.

I think this makes sense but the way it's argued for is not great.

1. The user is not interacting with a single dedicated system.

The argument here seems to be: If the user is not interacting with a single dedicated system, then the system shouldn't be treated as a coherent person.

This is clearly incorrect. Imagine we had the ability to simulate a brain. You could run the same brain simulation across multiple systems. A more hypothetical scenario: you take a group of frozen, identical humans, connect them to a realistic VR simulation, and ensure their experiences are perfectly synchronized. From the user’s perspective, interacting with this setup would feel indistinguishable from interacting with a single coherent person. Furthermore, if the system is subjected to suffering, the suffering would multiply with each instance in which the experience is replayed. This shows that coherence doesn't necessarily depend on being a "single" system.

2. An LLM model doesn't clearly distinguish the text it generates from the text the user inputs.

Firstly, this claim isn't accurate. If you provide an LLM with the transcript of a conversation, it can often identify which parts are its responses and which parts are user inputs. This is an empirically testable claim. Moreover, statements about how LLMs process text don't necessarily negate the possibility of them being coherent personas. For instance, it’s conceivable that an LLM could function exactly as described and still be a coherent persona. 

Derek Shiller @ 2024-11-20T18:33 (+6)

I appreciate the pushback on these claims, but I want to flag that you seem to be reading too much into the post. The arguments that I provide aren't intended to support the conclusion that we shouldn't treat "I feel pain" as a genuine indicator or that there definitively aren't coherent persons involved in chatbot text production. Rather, I think people tend to think of their interactions with chatbots in the way they interact with other people, and there are substantial differences that are worth pointing out. I point out four differences. These differences are relevant to assessing personhood, but I don't claim any particular thing I say has any straightforward bearing on such assessments. Rather, I think it is important to be mindful of these differences when you evaluate LLMs for personhood and moral status. These considerations will affect how you should read different pieces of evidence. A good example of this is the discussion of the studies in the self-identification section. Should you take the trouble LLMs have with counting tokens as evidence that they can't introspect? No, I don't think it provides particularly good evidence, because it relies on the assumption that LLMs self-identify with the AI assistant in the dialogue and it is very hard to independently tell whether they do.

Firstly, this claim isn't accurate. If you provide an LLM with the transcript of a conversation, it can often identify which parts are its responses and which parts are user inputs. This is an empirically testable claim. Moreover, statements about how LLMs process text don't necessarily negate the possibility of them being coherent personas. For instance, it’s conceivable that an LLM could function exactly as described and still be a coherent persona.

I take it that you mean that LLMs can distinguish their text from others, presumably on the basis of statistical trends, so they can recognize text that reads like the text they would produce? This seems fully in line with what I say: what is important is that LLMs don't make any internal computational distinction in processing text they are reading and text they are producing. The model functions as a mapping from inputs to outputs, and the mapping changes solely based on words and not their source. If you feed them text that is like the text they would produce, they can't tell whether or not they produced it. This is very different from the experience of a human conversational partner, who can tell the difference between being spoken to and speaking and doesn't need to rely on distinguishing whether words sound like something they might say. More importantly, they don't know in the moment they are processing a given token whether they are in the middle of reading a block of user-supplied text or providing additional text through autoregressive text generation.
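
A minimal illustration of that last point, as a sketch with a made-up generic chat template rather than any specific model's real prompt format or API:

```python
# Sketch only: the chat template and role markers below are invented for
# illustration. The point being made above is that, by the time text reaches
# the model, it is one flat token sequence, and "who said what" exists only
# as more tokens inside that sequence.

conversation = [
    ("user", "What's the capital of France?"),
    ("assistant", "The capital of France is Paris."),
    ("user", "Are you sure?"),
]

# Flatten the dialogue: role labels become ordinary text before tokenization.
flat_prompt = ""
for role, text in conversation:
    flat_prompt += f"<|{role}|> {text}\n"
flat_prompt += "<|assistant|> "  # generation would continue from here

print(flat_prompt)
# The model maps this single sequence to a distribution over the next token.
# Nothing in that computation marks whether a given earlier token was typed
# by the user or generated by the model on a previous step; only each token's
# identity and position in the sequence matter.
```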

Jesper 🔸 @ 2024-11-20T18:16 (+1) in response to Fund Causes Open Phil Underfunds (Instead of Your Most Preferred Causes)

I agree with the overall conclusion of this post but not completely with the reasoning. In particular, there is an important difference between allocating investments and allocating charitable donations in that for investments it makes sense to be (at least somewhat) risk averse, while for donations a simple strategy maximizing expected benefits makes perfect sense.

Even a risk-neutral approach to charitable donations will have to spread its giving, however, because there is only so much money the most effective charity can absorb before its funding gap is filled, at which point the next best charity becomes the new most effective one.

For a big organization such as OP, this can become a real problem in a cause area with many highly effective charities that each have (relatively) small funding gaps. This might be part of the explanation for why OP gives more to global health, where there are very large organizations that can effectively absorb a lot of funding, than to animal welfare.

For small individual donors, this means that there are likely opportunities to make very effective donations to organizations that might be too new or too small to be picked up by the big donors. You might even help them grow to the size where they can effectively absorb much larger donations.

So to reiterate, I think it makes sense to prefer donating to smaller charities and cause areas as an individual donor, but the reason is that they might be overlooked by the big donors, not to "balance out" some imaginary overall EA portfolio.

MichaelStJules @ 2024-11-20T18:09 (+6) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

though unlike Eliezer, I don’t come to my conclusions about animal consciousness from the armchair without reviewing any evidence

A bit of a nitpick, but I think Eliezer has a very high bar for attributing consciousness and is aware of relevant evidence for that bar, e.g. evidence for theory of mind or a robust self-model.

And this gets into the kind of views to which I'm sympathetic.

 

I am quite sympathetic to the kind of view Eliezer seems to endorse and the importance of something like a self-model, but my bar for self-models is probably much lower and I think many animals have at least modest self-models, including probably all mammals and birds, but I'm not confident about others. More on this kind of view here and here.

 

On the other hand, I am also sympathetic to counting anything that looks like pleasure, unpleasantness, desire as motivational salience (an attentional mechanism) or beliefs about betterness/worseness/good/bad, basically any kind of evaluative attitude about anything, or any way of caring about anything. If some system cares about something, I want to empathize and try to care about that in the same way on their behalf.[1] I discuss such attitudes more here and this view more in a draft (which I'm happy to share).

And I'm inclined to count these attitudes whether they're "conscious" or not, however we characterize consciousness. Or, these processes just ground something worth recognizing as conscious, anyway.

Under this view, I probably end up basically agreeing with you about which animals count, on the basis of evidence about desire as motivational salience and/or pleasure/unpleasantness-like states. 

However, there could still be important differences in degree if and because they meet different bars, and I have some sympathy for some neuron count-related arguments that favour brains with more neurons (point 2 here). I also give substantial weight to the possibilities that:

  1. maximum intensities for desires as motivational salience (and maybe hedonic states like pleasure and unpleasantness) are similar,
  2. there's (often) no fact of the matter about how to compare them.

 

  1. ^

    Focusing on what they care about intrinsically or terminally, not instrumental or derived concerns. And, of course, I have to deal with intrapersonal and interpersonal trade-offs.

abrahamrowe @ 2024-11-20T18:07 (+14) in response to Donation Election Discussion Thread

I voted for Wild Animal Initiative, followed by Shrimp Welfare Project and Arthropoda Foundation (I have COIs with WAI and Arthropoda).

  • None of the three can currently be funded by OpenPhil/GVF, despite WAI and SWP previously being heavily funded by them.
  • I think that wild animal welfare is the single most important animal welfare issue, and it remains incredibly neglected, with just WAI working on it exclusively.
    • Despite this challenge, WAI seems to have made a ton of progress on building the scientific knowledge needed to actually make progress on these issues.
    • Since founding and leaving WAI, I've just become increasingly optimistic about there being a not-too-long-term pathway to robust interventions to help wild animals, and to wild animal welfare going moderately mainstream within conservation biology/ecology.
  • Wild animal welfare is downstream from ~every other cause area. If you think it is a problem, but that we can't do anything about it because the issue is so complicated, then the same is true of the wild animal welfare impacts of basically all other interventions EAs pursue. This seems like a huge issue for knowing the impact of our work. No one is working on this except WAI, and no other issues seem to cut across all causes the way wild animal welfare does.
  • SWP seems like they are implementing the most cost-effective animal welfare intervention that is remotely scalable right now.
  • In general, I favor funding research, because historically OpenPhil has been far more likely to fund research than other funders, and it is pretty hard for research-focused organizations to compete with intervention-focused organizations in the animal funding scene, despite lots of interventions being downstream from research. Since Arthropoda also does scientific field building / research funding, I added it to my list.

MichaelDickens @ 2024-11-20T17:26 (+2) in response to Ben_West's Quick takes

(though they are mostly premised on alignment being relatively easy, which seems very wrong to me)

Not just alignment being easy, but alignment being easy with overwhelmingly high probability. It seems to me that pushing for an arms race is bad even if there's only a 5% chance that alignment is hard.

Habryka @ 2024-11-20T17:53 (+2)

I think most of those people believe that "having an AI aligned to 'China's values'" would be comparably bad to a catastrophic misalignment failure, and if you believe that, 5% is not sufficient if you think there is a greater than 5% chance of China ending up with "aligned AI" instead.

drwahl @ 2024-11-20T03:32 (+4) in response to Donation Election Discussion Thread

Please consider using star (or approval) voting next year instead of RCV

Kenneth_Diao @ 2024-11-20T17:43 (+1)

I'm not an expert, but this may be a good idea. Apparently ranked-choice voting is always vulnerable to certain types of failures (https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem), but these can be avoided with rated voting systems.
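
To make one such failure concrete, here is a small invented example (hypothetical ballot counts, not data from this election) of a "centre squeeze" under instant-runoff counting, which a rated method such as approval voting avoids; Arrow's theorem itself is a more general result about ranked methods.

```python
# Hypothetical example: 100 voters, 3 candidates. B beats both A and C
# head-to-head (the Condorcet winner) but has the fewest first preferences,
# so instant-runoff eliminates B first; a simple rated method does not.
from collections import Counter

ballots = (
    [["A", "B", "C"]] * 35 +   # A-first voters
    [["C", "B", "A"]] * 33 +   # C-first voters
    [["B", "A", "C"]] * 32     # B-first voters
)

def irv_winner(ballots):
    remaining = {"A", "B", "C"}
    while True:
        # Each ballot counts for its highest-ranked remaining candidate.
        tallies = Counter(next(c for c in b if c in remaining) for b in ballots)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > len(ballots):
            return leader
        remaining.remove(min(remaining, key=lambda c: tallies[c]))

def approval_winner(ballots, approve_top_n=2):
    # Crude stand-in for a rated method: each voter approves their top two.
    tallies = Counter(c for b in ballots for c in b[:approve_top_n])
    return tallies.most_common(1)[0][0]

print(irv_winner(ballots))       # "A": B is eliminated first despite winning every pairwise matchup
print(approval_winner(ballots))  # "B": the broadly acceptable candidate wins
```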

MichaelDickens @ 2024-11-20T16:49 (+5) in response to Where I Am Donating in 2024

Applying this to your estimate would suggest a probability of a 10 % population drop over the next 10 years of 2.39

Tell me if I'm understanding this correctly:

  1. My (rough) numbers suggest a 6% chance that 100% of people die
  2. According to a fitted power law, that implies a 239% chance that 10% of people die

I disagree but I like your model and I think it's a pretty good way of thinking about things.

On my plurality model (i.e. the model to which I assign the plurality of subjective probability), superintelligent AI (SAI) either kills 100% of people or it kills no people. I don't think the outcome of SAI fits a power law.

A power law is typically generated by a combination of exponentials, which might be a good description of battle deaths, but I don't think it's a good description of AI. I think power laws are often a decent fit for combinations of heterogeneous events (such as mass deaths from all causes combined), but maybe not a great fit, so I wouldn't put too much credence in the power law model in this case.

I think it's very unlikely that an AI catastrophe kills 10% of the population in the next 10 years (not 10^-6 unlikely, more like 10^-3 unlikely). I can think of a few ways this could happen (e.g., a country gives an autonomous AI control over its nuclear arsenal and the AI decides to nuke a bunch of cities), but they seem much less likely than an SAI deciding to completely extinguish humanity.

I estimated a probability of human extinction before 2100 due to an AI malfunction of 0.004 %.

Even if you put 99% credence in this model, surely P(extinction) will be dominated by other models? Even within the model, P(extinction) should be higher than that based on uncertainty about the value of the alpha parameter.

Vasco Grilo🔸 @ 2024-11-20T17:42 (+3)

Tell me if I'm understanding this correctly:

  1. My (rough) numbers suggest a 6% chance that 100% of people die
  2. According to a fitted power law, that implies a 239% chance that 10% of people die

On 1, yes, and over the next 10 years or so (20 % chance of superintelligent AI over the next 10 years, times 30 % chance of extinction quickly after superintelligent AI)? On 2, yes, for a power law with a tail index of 1.60, which is the mean tail index of the power laws fitted to battle deaths per war here.

I think it's very unlikely that an AI catastrophe kills 10% of the population in the next 10 years (not 10^-6 unlikely, more like 10^-3 unlikely).

I meant to ask about the probability of the human population becoming less than (not around) 90 % as large as it is now over the next 10 years, which has to be higher than the probability of human extinction. Since 10^-3 << 6 %, I guess your probability of a population loss of 10 % or more is just slightly higher than your probability of human extinction.

Even if you put 99% credence in this model, surely P(extinction) will be dominated by other models? Even within the model, P(extinction) should be higher than that based on uncertainty about the value of the alpha parameter.

I think using a power law will tend to overestimate the probability of human extinction, as my sense is that tail distributions usually start to decay faster as severity increases. This is the case for annual conflict deaths as a fraction of the global population, and arguably for annual epidemic/pandemic deaths as a fraction of the global population. The reason is that the tail distribution has to reach 0 for a 100 % population loss, whereas a power law treats the (impossible) step from 8 billion to 16 billion deaths just like the step from 4 billion to 8 billion, reducing the probability by the same factor rather than dropping to 0.
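
A quick sketch of the arithmetic behind the 2.39 ("239 %") figure discussed above, reconstructed from the numbers in this thread rather than taken from either commenter's actual calculation:

```python
# Sketch only: my reconstruction of the extrapolation being discussed.
# A power-law (Pareto) tail has P(loss fraction >= x) proportional to x**(-alpha).

alpha = 1.60           # mean tail index fitted to battle deaths per war (cited above)
p_100_pct_loss = 0.06  # ~6% chance of a 100% population loss (20% x 30%)

def p_loss_at_least(x, anchor_x=1.0, anchor_p=p_100_pct_loss):
    """Extrapolated P(population loss >= x) under a pure power-law tail
    anchored at P(loss >= anchor_x) = anchor_p."""
    return anchor_p * (x / anchor_x) ** (-alpha)

print(round(p_loss_at_least(0.10), 2))  # 2.39, i.e. the "239 %" above
# 2.39 is not a valid probability, which is the point of the exchange: a 6%
# extinction probability is too heavy a tail to sit on a pure power law, and
# a power law fitted to moderate severities would conversely imply a much
# lower extinction probability, since real tails must decay faster as losses
# approach 100%.
```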

Kenneth_Diao @ 2024-11-20T17:39 (+5) in response to Donation Election Discussion Thread

The donation election post (meet the candidates) and the actual voting platform need to be cross-checked. I saw that Animetrics was included in the vote but not in the post, while Giving Green was included in the post and not in the vote. There may be other errors which I missed.

Kenneth_Diao @ 2024-11-20T17:37 (+3) in response to Donation Election Discussion Thread

I voted for mainly animal welfare/rights charities first, particularly ones which focused on highly neglected, large-scale populations like insects, shrimps, and fishes. I also voted highly for PauseAI because I believe in creating greater public pressure to slow AI progress and shifting the Overton Window, even if I am agnostic about pausing AI progress itself. After these, I voted for some of the meta/mixed organizations which I thought were especially promising, including Rethink Priorities and the Unjournal. Then I voted for mental health/resilience interventions. Then I voted for GCR initiatives. I did not vote for any human welfare interventions which I expected to cause net harm to animals. I did not vote for any other AI organizations because I did not trust that they were sufficiently decelerationist.

SummaryBot @ 2024-11-20T17:34 (+1) in response to Quantum, China, & Tech bifurcation; Why it Matters

Executive summary: The growing bifurcation between China and the West in quantum technology development poses significant risks for responsible governance and global security, requiring urgent attention to establish dialogue and coordination mechanisms before technological divergence becomes entrenched.

Key points:

  1. China is a near-peer to the US in quantum technology, with particular strength in quantum communications, making it a crucial player in the field's development.
  2. Current trends in export controls, visa restrictions, and push for technological self-sufficiency are leading to bifurcated supply chains and reduced international collaboration in quantum tech.
  3. Bifurcation creates serious risks: asymmetric capability gaps, difficulties in regulating dangerous applications, challenges in establishing safety dialogues, and potential destabilizing effects of secret quantum computing breakthroughs.
  4. Recommended actions include limiting bifurcation through open-source initiatives and standardization, establishing international governance frameworks, and increasing transparency between Chinese and Western quantum developments.
  5. Career opportunities exist in prioritization research, technical research (including studying quantum science in China), facilitating international dialogue, and working on policy/governance frameworks.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-11-20T17:32 (+1) in response to Why I Think All The Species Of Significantly Debated Consciousness Are Conscious And Suffer Intensely

Executive summary: Evidence strongly suggests that all species of debated consciousness (including fish, crustaceans, and insects) are likely conscious and capable of experiencing intense suffering, based on behavioral, evolutionary, and neurological evidence.

Key points:

  1. Five key arguments support widespread consciousness: evolutionary benefit, behavioral evidence, probabilistic reasoning, theoretical coherence (under both dualism and physicalism), and historical trend of underestimating animal consciousness.
  2. Evidence across species shows consistent markers of consciousness: response to painkillers, wound-tending, pain-reward tradeoffs, physiological pain responses, learned avoidance, anxiety symptoms, individual personalities, and information integration.
  3. Simple creatures may experience more intense pain than complex ones, as they need stronger signals to learn and their entire consciousness may be occupied by pain when experiencing it.
  4. Research consistently finds evidence supporting consciousness in disputed species, while evidence against consciousness is largely absent or based on outdated assumptions.
  5. Practical implication: We should take insect and crustacean suffering seriously and support organizations working to reduce their suffering.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.