Latest comments on the EA Forum

Comments on 2024-07-24

Owen Cotton-Barratt @ 2024-07-24T19:17 (+8) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

I'm confused about how to relate to speaking about these issues. I feel like I can speak to several but not all of the questions you raise (as well as some things you don't directly ask about). I'm not sure there's anything too surprising there, but I'd feel generically good about the EA community having more information.

But -- this is a topic which invites drama, in a way that I fear is sometimes disproportionate. And while I'm okay (to a fault) sharing information which invites drama for me personally, I'd feel bad about potentially stirring it up for other people.

That makes me hesitant. And I'm not sure how much my speaking would really help (of course I can't speak with anything like the authority of an external investigation). So my default is not to speak, at least yet (maybe in another year or two?).

Open to hearing arguments or opinions that that's the wrong meta-level orientation. (An additional complication is that some-but-not-all of the information I have came via my role on the board of EV, which makes me think it's not properly my information to choose whether to share. But this can be regarded as a choice about the information which does feel like it's mine to choose what to do with.)

richard_ngo @ 2024-07-24T23:49 (+2)

This seems like the wrong meta-level orientation to me. A meta-level orientation that seems better to me is something like "Truth and transparency have strong global benefits, but often don't happen enough because they're locally aversive. So assume that sharing information is useful even when you're not concretely sure how it'll help, and assume by default that power structures (including boards, social networks, etc) are creating negative externalities insofar as they erect barriers to you sharing information".

The specific tradeoff between causing drama and sharing useful information will of course be situation-dependent, but in this situation the magnitude of the issues involved feels like it should significantly outweigh concerns about "stirring up drama", at least if you make attempts to avoid phrasing the information in particularly-provocative or careless ways.

Joseph Lemien @ 2024-07-24T23:30 (+2) in response to Evidence of Poor Cross-Cultural Interactions in the EA community

One thing that seems notable to me about these cross-cultural communication/norms issues is how often they are simply a result of ignorance. If I live in a country for several years I'm probably going to learn that people view it as rude to do some things, and I'll see how the Romans do it. But if I am only visiting for a short period of time, I will probably be profoundly ignorant of how people there view various behaviors. If I haven't previously spent time living in, thinking about, or reading about different cultures, I might not even be aware that people have different norms.[1]

Before reading this, I didn't know that it was a norm in Malaysia to not greet people in an elevator or a corridor at an apartment. I almost certainly would be guilty of violating this norm if I were to visit Malaysia.

  1. ^

    Or at most, I would be aware of relatively obvious artifacts and things that are easy to describe, such as how some cultures tend to take showers in the evening/morning, people eat using forks/chopsticks, or greeting a person involves a handshake/hug/kiss/two kisses/three kisses. But it is much harder to describe underlying assumptions and values (relationships to parents, happiness with conformity, desire for uniqueness, etc.). I find Edgar Schein's three-level model of organizational culture very simple, but useful for starting to think about these things. It distinguishes artifacts (the visible constructed environment of an organization, including its architecture, technology, office layout, dress code, and public documents), espoused values (the reasons and/or rationalizations for why members behave the way they do in an organization), and underlying assumptions (unconscious beliefs that determine how group members perceive, think, and feel).

Ben Millwood @ 2024-07-24T21:48 (+6) in response to The Drowning Child Argument Is Simply Correct

I'm not clear on whether you think the drowning child argument is browbeating by nature, or whether you think that just this particular presentation of it is browbeating. (Your remark about retiring the drowning child implies the former, but another of your comments elsewhere implies that you can use the drowning child argument without browbeating people with it?)

Anyway, I don't think it's time to retire the argument, I still feel like I hear a lot of people cite it as insightful for them.

Karthik Tadepalli @ 2024-07-24T23:20 (+2)

Maybe it was an exaggeration to say it should be retired. It was an important source of insight for me as well. But I think it is used in a browbeating way very often, and this post is a strong example of that. I think the drowning child argument is best used as a way to provoke people to introspect about the inconsistency in their values, not to tell them how immoral all of their actions are.

jacquesthibs @ 2024-07-24T23:17 (+4) in response to jacquesthibs's Quick takes

Hey everyone, in collaboration with Apart Research, I'm helping organize a hackathon this weekend to build tools for accelerating alignment research. This hackathon is very much related to my effort in building an "Alignment Research Assistant."

Here's the announcement post:

2 days until we revolutionize AI alignment research at the Research Augmentation Hackathon!

As AI safety researchers, we pour countless hours into crucial work. It's time we built tools to accelerate our efforts! Join us in creating AI assistants that could supercharge the very research we're passionate about.

Date: July 26th to 28th, online and in-person
Prizes: $2,000 in prizes

Why join?

* Build tools that matter for the future of AI
* Learn from top minds in AI alignment
* Boost your skills and portfolio

We've got a Hackbook with an exciting project waiting for you to work on! No advanced AI knowledge required - just bring your creativity!

Register now: Sign up on the website here, and don't miss this chance to shape the future of AI research!

MichaelDickens @ 2024-07-24T23:15 (+2) in response to It's OK to kill and eat animals - but don't get caught slapping one.

I agree that this is kind of absurd but I expect that public concern for small-scale animal suffering weakly increases potential future concern for large-scale animal suffering, rather than funging against it. I think it weakly helps by propagating the meme of "animal suffering is a problem worth taking seriously".

I wouldn't promote concern for Olympic horses as an effective cause area, but I wouldn't fight against it, either.

Joseph Lemien @ 2024-07-24T23:04 (+2) in response to Non-Western EAs’ perception of cross cultural interactions they had with Western EAs

It can be tricky to explore some of these topics that overlap between cultural background, nationality, how others perceive us, differing norms, assumptions, and communication styles. It can be hard to parse the gradient between reasonable and unreasonable assumptions (such as predicting that a Black American in Chengdu probably doesn't speak much Mandarin, as opposed to confidently assuming that an Asian of unknown nationality in Chicago couldn't possibly have grown up speaking English).

Nonetheless, I also really like that there are people in this community who notice and who are aware of subtle things. 

I'm glad to read explorations of these kinds of things, and I'm glad that you've spent all this time and effort exploring it and sharing some of your findings. Thank you.

Rainbow Affect @ 2024-07-24T20:52 (+3) in response to Notes on impostor syndrome

Thanks a lot for your feedback!

Why do you think that data poisoning, scaling and water scarcity are a distraction from issues like AI alignment and safety? Am I missing something obvious? Did conflicts over water happen too few times (or not at all)? Can we easily deal with data poisoning and model scaling? Are AI alignment and safety that much bigger issues?

Emrik @ 2024-07-24T22:54 (+2)

To clarify, I'm mainly just sceptical that water-scarcity is a significant consideration wrt the trajectory of transformative AI. I'm not here arguing against water-scarcity (or data poisoning) as an important cause to focus altruistic efforts on.

Hunches/reasons that I'm sceptical of water as a consideration for transformative AI:

  • I doubt water will be a bottleneck to scaling
    • My doubt here mainly just stems from a poorly-argued & uncertain intuition about other factors being more relevant. If I were to look into this more, I would try to find some basic numbers about:
      • How much water goes into the maintenance of data centers relative to other things fungible water-sources are used for?
      • What proportion of a data center's total expenditures are used to purchase water?
      • I'm not sure how these things work, so don't take my own scepticism as grounds to distrust your own (perhaps-better-informed) model of these things.
  • Assuming scaling is bottlenecked by water, I think great-power conflicts are unlikely to be caused by it
  • Assuming conflicts do happen due to a water bottleneck, I don't think this will significantly influence the long-term outcome of transformative AI

Note: I'll read if you respond, but I'm unlikely to respond in turn, since I'm trying to prioritize other things atm. Either way, thanks for an idea I hadn't considered before! : )

OscarD🔸 @ 2024-07-24T22:00 (+5) in response to Peter Singer AMA (July 30th)

Is there a principled place to disembark the crazy train?

To elaborate, if we take EV-maximization seriously, this appears to have non-intuitive implications about e.g. small animals being of overwhelming moral importance in aggregate, the astronomical value of X-risk reduction, the possibility of infinite amounts of (dis)value, suffering in fundamental physics (in roughly ascending order of intuitive craziness to me).

But rejecting EV maximization also seems problematic.

David T @ 2024-07-24T19:02 (+1) in response to The Drowning Child Argument Is Simply Correct

This argument is understandably unpopular because it's inconsistent with core principles of EA. 

But the principle of reciprocity (and adjacent kin selection arguments) absolutely is the most plausible argument for why the human species evolved to behave in an apparently altruistic[1] manner and value it in others in the first place, long before we started on abstract value systems like utilitarianism, and in many cases people still value or practice some behaviours that appear altruistic despite indifference to or active disavowal of utilitarian or deontological arguments for improving others' welfare.

  1. ^

    there's an entire literature on "reciprocal altruism"

Ben Millwood @ 2024-07-24T21:53 (+8)

I don't know if you're even implying this, but the causal mechanism for altruism arising in humans doesn't need to hold any moral force over us. Just because kin selection caused us to be altruistic, doesn't mean we need to think "what would kin selection want?" when deciding how to be altruistic in future. We can replace the causal origin with our own moral foundations, and follow those instead.

OscarD🔸 @ 2024-07-24T21:45 (+2) in response to Forum update: User database, card view, and more (Jul 2024)

Nice! I didn't actually know we had access to our author stats, cool. What is the difference between 'views' and 'reads'? Also, how 'true' do you think these numbers are? They seem rather surprisingly high to me, could there just be a bunch of bots racking up numbers?

OscarD🔸 @ 2024-07-24T21:53 (+2)

(answered my own question - 'read' means stayed on page >30s).

However, one of my posts has a negative bounce rate, seems like a bug! Or maybe my post was just that engaging ;)
 

Karthik Tadepalli @ 2024-07-24T09:12 (+9) in response to The Drowning Child Argument Is Simply Correct

I am not receptive to browbeating. I suspect most people in the world are not, either. I don't know what you intend to accomplish by telling people that every single one of their valued life choices is morally equivalent to letting a child die.

If your answer is "I think people will be receptive to this", I have completely different intuitions. If your answer is "I want to highlight true and important arguments even if nobody is receptive to them", you're welcome to do that, but that has basically no impact on the audience of this forum.

The drowning child motivated a lot of people to be more thoughtful about helping people far away from them. But the EA project has evolved much further beyond that. We have institutions to manage, careers to create, money to spend, regulatory agendas to advance, causes to explore. I think it's time to retire the drowning child, and send it the way of the paperclip maximizer.

Ben Millwood @ 2024-07-24T21:48 (+6)

I'm not clear on whether you think the drowning child argument is browbeating by nature, or whether you think that just this particular presentation of it is browbeating. (Your remark about retiring the drowning child implies the former, but another of your comments elsewhere implies that you can use the drowning child argument without browbeating people with it?)

Anyway, I don't think it's time to retire the argument, I still feel like I hear a lot of people cite it as insightful for them.

OscarD🔸 @ 2024-07-24T21:45 (+2) in response to Forum update: User database, card view, and more (Jul 2024)

Nice! I didn't actually know we had access to our author stats, cool. What is the difference between 'views' and 'reads'? Also, how 'true' do you think these numbers are? They seem rather surprisingly high to me, could there just be a bunch of bots racking up numbers?

Chris Leong @ 2024-07-24T15:50 (+4) in response to The last era of human mistakes

It would be useful to have a term along the lines of outcome lock-in to describe situations where the future is out of human hands.

That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.

Nonetheless, this seems like a useful concept for thinking about what the future might look like.

Ben Millwood @ 2024-07-24T21:39 (+2)

I think the word "lock-in" can be confusing here. I usually think of "lock-in" as worrying about a future where things stop improving, or a particular value system or set of goals gets permanent supremacy. If this is what we mean, then I don't think "the future is out of human hands" is sufficient for lock-in, because the future could continue to be dynamic or uncertain or getting better or worse, with AIs facing new and unique challenges and rising to them or failing to rise to them. Whatever story humans have set in motion is "locked in" in the sense that we can no longer influence it, but not in the sense that it'll necessarily have a stable state of affairs persist for those who exist in it. Maybe it's clearer to think of humans being "locked out" here, while AIs continue to have influence.

Vasco Grilo🔸 @ 2024-07-24T21:36 (+4) in response to The Precipice Revisited

Thanks for the update, Toby. I used to defer to you a lot. I no longer do. After investigating the risks myself in decent depth, I consistently arrived at estimates of the risk of human extinction orders of magnitude lower than your existential risk estimates. For example, I understand you assumed in The Precipice an annual existential risk for:

  • Nuclear war of around 5*10^-6 (= 0.5*10^-3/100), which is 843 k (= 5*10^-6/(5.93*10^-12)) times mine.
  • Volcanoes of around 5*10^-7 (= 0.5*10^-4/100), which is 14.8 M (= 5*10^-7/(3.38*10^-14)) times mine.

In addition, I think the existential risk linked to the above is lower than their extinction risk. The worst nuclear winter of Xia et al. 2022 involves an injection of soot into the stratosphere of 150 Tg, which is just 1 % of the 15 Pg of the Cretaceous–Paleogene extinction event. Moreover, I think this would only be existential with a chance of 0.0513 % (= e^(-10^9/(132*10^6))), assuming the following (a short numerical sketch follows the list):

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
    • An exponential distribution with a mean of 66 M years describes the time between:
      • 2 consecutive such catastrophes.
      • i) and ii) if there are no such catastrophes.
    • Given the above, i) and ii) are equally likely. So the probability of an intelligent sentient species evolving after human extinction in such a catastrophe is 50 % (= 1/2).
    • Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.
  • An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
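
A minimal Python sketch of the arithmetic above, taking the exponential-distribution parameters and the annual risk figures as given (they are Vasco's stated assumptions, not independently sourced):

    import math

    # Vasco's stated assumptions (not independently sourced)
    mean_reevolution_time = 132e6      # years; 2 * 66 M years, per the reasoning above
    habitable_window = 1e9             # years left before the Earth becomes uninhabitable

    # P(no intelligent sentient species re-evolves within the window),
    # with the waiting time modelled as Exponential(mean = 132 M years)
    p_no_reevolution = math.exp(-habitable_window / mean_reevolution_time)
    print(f"{p_no_reevolution:.4%}")   # ~0.0513 %, the figure quoted above

    # Ratios of the assumed annual existential risks to Vasco's own estimates
    print(5e-6 / 5.93e-12)             # nuclear war: ~843 k
    print(5e-7 / 3.38e-14)             # volcanoes: ~14.8 M
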
Henri Thunberg @ 2024-07-24T21:09 (+1) in response to Forum update: User database, card view, and more (Jul 2024)

Thank you for the work that you do – the forum is one of the best products I use that just WORKS! :)

Denis @ 2024-07-24T21:08 (+3) in response to The Drowning Child Argument Is Simply Correct

I agree fully with the sentiment, but IMHO as a logical argument it fails, as so many arguments do, not in the details but in making a flawed assumption at the start. 

You write: "Clearly, in such a case, even though it would cost significant money, you’d be obligated to jump into the pond to save the child."

But this is simply not true. 

For two reasons:

  1. If we are obligated, it is by social pressure rather than ethics. If we thought people would find out about it, of course we'd feel obligated. But if not, maybe we would walk past. The proof of this is in exactly what you're trying to discourage - the fact that when faced with a similar situation without the same social pressure, most people do not feel obligated. 
  2. The scenario you describe isn't realistic. None of us wear $5000 suits. For someone who wears a $5000 suit, you're probably right. But for most of us, our mental picture of "I don't want to ruin my clothes" does not translate to "I am not willing to give up $5000." I'm not sure what the equivalent realistic scenario is. But in real cases of people drowning, choking or needing to be resuscitated, many people struggle even to overcome their own timidity to act in public. We see people stabbed and murdered in public places and bystanders not intervening. I do not see compelling evidence that most strangers feel morally compelled to take major personal risks or make sacrifices to save a stranger's life. To give a very tangible example, how many people feel obligated to donate a kidney while they're alive to save the life of a stranger? It is something that many of us could do, but almost nobody does. I know that is probably worth more than $5000, but it's closer in order of magnitude than ruining our clothes is.

    Absolutely, it would be a better world for all of us if people did feel obliged to help strangers to the tune of $5000, but we don't live in that world ... yet.

The drowning child analogy is a great way to help people to understand why they should donate to charities like AMF, why they should take the pledge. 

But if you present it as a rigorous proof, then it must meet the standards of rigorous proof in order to convince people to change their minds. 


Additionally, my sense is that presenting it as an obligation rather than a free, generous act is not helpful. You risk taking the pleasure and satisfaction out of it for many people, and replacing that with guilt. This might convince some people, but might just cause others to resist and become defensive. There is so much evidence of this, where there are immensely compelling reasons to do things that don't even cost us anything (e.g. vote against Trump), and they still do not change most people's behaviour. I think we humans have developed very thick skins and do not get forced into doing things by logical reasoning if we don't want to be.

Ben Millwood @ 2024-07-24T20:34 (+6) in response to Evidence of Poor Cross-Cultural Interactions in the EA community

FWIW it strikes me as odd / surprising to say that e.g. Black African Americans are not Westerners purely by virtue of not being white or mixed race.

Ben Millwood @ 2024-07-24T21:07 (+4)

I should also say I'm grateful to you for what is obviously a pretty significant amount of work, done thoughtfully and with measured conclusions.

(also I think there are some links missing from the Recommendations section?)

jackva @ 2024-07-24T19:03 (+8) in response to The Precipice Revisited

Apologies if my comment was triggering the sense that I am questioning published climate science. I don't. I think / hope we are mostly misunderstanding each other.

With "politicized" here I do not mean that the report says inaccurate things, but merely that the selection of what is shown and how things are being framed in the SPM is a highly political result.
And the climate scientists here are political agents as well, so comparing it with prior versions would not provide counter-evidence.

To make clear what I mean by "politicized":
1. I do not think it is a coincidence that the graphic on climate impacts only shows very subtly that it assumes no adaptation.
2. And that the graph on higher impacts at lower levels of warming does not mention that, since the last update of the IPCC report, we now also have expectations of much lower warming.

These kinds of things are presentational choices, omissions one would not make if the goal were to maximally clarify the situation, because those choices are always made in ways that justify more action. This is what I mean by "politicized": selectively presented and framed evidence.

 [EDIT: This is a good reference from very respected IPCC authors that discusses the politicized process with many examples]

ClimateDoc @ 2024-07-24T21:03 (+1)

Yeah I think that it's just that, to me at least, "politicized" has strong connotations of a process being captured by a particular non-broad political constituency or where the outcomes are closely related to alignment with certain political groups or similar. The term "political", as in "the IPCC SPMs are political documents", seems not to give such an impression. "Value-laden" is perhaps another possibility. The article you link to also seems to use "political" to refer to IPCC processes rather than "politicized" - it's a subtle difference but there you go. (Edit - though I do notice I said not to use "political" in my previous comment. I don't know, maybe it depends on how it's written too. It doesn't seem like an unreasonable word to use to me now.)

Re point 1 - I guess we can't know the intentions of the authors re the decision to not discuss climate adaptation there.

Re 2 - I'm not aware of the IPCC concluding that "we also have now expectations of much lower warming". So a plausible reason for it not being in the SPM is that it's not in the main report. As I understand it, there's not a consensus that we can place likelihoods on future emissions scenarios and hence on future warming, and then there's not a way to have consensus about future expectations about that. One line of thought seems to be that it's emission scenario designers' and the IPCC's job to say what is required to meet certain scenarios and what the implications of doing so are, and then the likelihood of the emissions scenarios are determined by governments' choices. Then, a plausible reason why the IPCC did not report on changes in expectations of warming is that it's largely about reporting consensus positions, and there isn't one here. The choice to report consensus positions and not to put likelihoods on emissions scenarios is political in a sense, but not in a way that a priori seems to favour arguments for action over those against. (Though the IPCC did go as far as to say we are likely to exceed 1.5C warming, but didn't comment further as far as I'm aware.)

So I don't think we could be very confident that it is politicized/political in the way you say, in that there seem to be other plausible explanations.

Furthermore, if the IPCC wanted to motivate action better, it could make clear the full range of risks and not just focus so much on "likely" ranges etc.! So if it's aiming to present evidence in a way to motivate more action, it doesn't seem that competent at it! (Though I do agree that in a lot of other places in the SYR SPM, the presentational choices do seem to be encouraging of taking greater action.)

Emrik @ 2024-07-24T18:35 (+2) in response to Notes on impostor syndrome

I think this is 100% wrong, but 100% the correct[1] way to reason about it!

I'm pretty sure water scarcity is a distraction wrt modelling AI futures; but it's best to just assert a model to begin with, and take it seriously as a generator of your own plans/actions, just so you have something to iterate on. If you don't have an evidentially-sensitive thing inside your head that actually generates your behaviours relevant to X, then you can't learn to generate better behaviours wrt X.

Similarly: To do binary search, you must start by planting your flag at the exact middle of the possibility-range.

  1. ^

    One plausible process-level critique is that… perhaps this was not actually your best effort, even within the constraints of producing a quick comment? It's important to be willing to risk thinking&saying dumb things, but it's also important that the mistakes are honest consequences of your best effort.

    A failure-mode I've commonly inhabited in the past is to semi-consciously handicap myself with visible excuses-to-fail, so that if I fail or end up thinking/saying/doing something dumb, I always have the backup-plan of relying on the excuse / crutch. Eg,

    • While playing chess, I would be extremely eager to sacrifice material in order to create open tactical games; and when I lost, I reminded myself that "ah well, I only lost because I deliberately have an unusual playstyle; not because I'm bad or anything."
Rainbow Affect @ 2024-07-24T20:52 (+3)

Thanks a lot for your feedback!

Why do you think that data poisoning, scaling and water scarcity are a distraction from issues like AI alignment and safety? Am I missing something obvious? Did conflicts over water happen too few times (or not at all)? Can we easily deal with data poisoning and model scaling? Are AI alignment and safety that much bigger issues?

Ben Millwood @ 2024-07-24T20:34 (+6) in response to Evidence of Poor Cross-Cultural Interactions in the EA community

FWIW it strikes me as odd / surprising to say that e.g. Black African Americans are not Westerners purely by virtue of not being white or mixed race.

richard_ngo @ 2024-07-24T19:42 (+4) in response to JWS's Quick takes

I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape american politics, and that twitter is a leading indicator.

Ryan Greenblatt @ 2024-07-24T20:34 (+1)

I think that the political activation of Silicon Valley is the sort of thing which could reshape american politics, and that twitter is a leading indicator.

I don't disagree with this statement, but also think the original comment is reading into twitter way too much.

MichaelDickens @ 2024-07-24T20:24 (+2) in response to Dialogue on Donation Splitting

Is the amount of current donation splitting plus correlation enough that in practice "EA should" donation split more?

I don't understand this sentence. If donation splitting is already happening to some degree, doesn't that make correlation less important, which weakens the case for donation splitting on the margin? But the context seems to suggest that JP thinks it strengthens the case for donation splitting.

huw @ 2024-07-24T12:16 (+12) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

This is clearly an outstanding issue for a non-negligible proportion of the community. It doesn't matter if some people consider the issue closed, or the investigation superfluous; this investigation would bring that closure to the rest of EA. Everyone here should be interested in the unity that would come from this.

Ben Millwood @ 2024-07-24T20:23 (+6)

this investigation would bring that closure to the rest of EA.

I think how much closure the investigation brings will depend significantly on what it includes and what it concludes, and I think different people will have different standards about what will satisfy them. While I am in favour of more investigation, I would guess that realistically feasible investigations will not be able to close all relevant questions or really settle everything relevant in the collective mind of the community.

Dave Cortright @ 2024-07-24T19:47 (+1) in response to Dave Cortright's Quick takes

Mental health org in India that follows the paraprofessional model
https://reasonstobecheerful.world/maanasi-mental-health-care-women/

#mental-health-cause-area

Ryan Greenblatt @ 2024-07-23T20:30 (+11) in response to JWS's Quick takes

Once again, if you disagree, I'd love to actually hear why.

I think you're reading into twitter way too much.

richard_ngo @ 2024-07-24T19:42 (+4)

I disagree FWIW. I think that the political activation of Silicon Valley is the sort of thing which could reshape american politics, and that twitter is a leading indicator.

Chris Leong @ 2024-07-24T15:50 (+4) in response to The last era of human mistakes

It would be useful to have a term along the lines of outcome lock-in to describe situations where the future is out of human hands.

That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.

Nonetheless, this seems like a useful concept for thinking about what the future might look like.

Owen Cotton-Barratt @ 2024-07-24T19:27 (+2)

I think there's maybe a useful distinction to make between future-out-of-human-hands (what this post was about, where human incompetence no longer matters) and future-out-of-human-control (where humans can no longer in any meaningful sense choose what happens).

Owen Cotton-Barratt @ 2024-07-24T19:17 (+8) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

I'm confused about how to relate to speaking about these issues. I feel like I can speak to several but not all of the questions you raise (as well as some things you don't directly ask about). I'm not sure there's anything too surprising there, but I'd feel generically good about the EA community having more information.

But -- this is a topic which invites drama, in a way that I fear is sometimes disproportionate. And while I'm okay (to a fault) sharing information which invites drama for me personally, I'd feel bad about potentially stirring it up for other people.

That makes me hesitant. And I'm not sure how much my speaking would really help (of course I can't speak with anything like the authority of an external investigation). So my default is not to speak, at least yet (maybe in another year or two?).

Open to hearing arguments or opinions that that's the wrong meta-level orientation. (An additional complication is that some-but-not-all of the information I have came via my role on the board of EV, which makes me think it's not properly my information to choose whether to share. But this can be regarded as a choice about the information which does feel like it's mine to choose what to do with.)

ClimateDoc @ 2024-07-23T19:50 (+2) in response to The Precipice Revisited

Whilst policymakers have a substantial role in drafting the SPM, I've not generally heard scientists complain about political interference in writing it. Some heavy fossil fuel-producing countries have tried removing text they don't like, but didn't come close to succeeding. The SPM has to be based on the underlying report, so there's quite a bit of constraint. I don't see anything to suggest the SPM differs substantially from researchers' consensus. The initial drafts by scientists should be available online, so it could be checked what changes were made by the rounds of review.

When people say things are "politicized", it indicates to me that they have been made inaccurate. I think it's a term that should be used with great care re the IPCC, since giving people the impression that the reports are inaccurate or political gives people reason to disregard them.

I can believe the no adaptation thing does reflect the literature, because impacts studies do very often assume no adaptation, and there could well be too few studies that credibly account for adaptation to do a synthesis. The thing to do would be to check the full report to see if there is a discrepancy before presuming political influence. Maybe you think the WGII authors are politicised - that I have no particular knowledge of, but again climate impacts researchers I know don't seem concerned by it.

jackva @ 2024-07-24T19:03 (+8)

Apologies if my comment was triggering the sense that I am questioning published climate science. I don't. I think / hope we are mostly misunderstanding each other.

With "politicized" here I do not mean that the report says inaccurate things, but merely that the selection of what is shown and how things are being framed in the SPM is a highly political result.
And the climate scientists here are political agents as well, so comparing it with prior versions would not provide counter-evidence.

To make clear what I mean by "politicized":
1. I do not think it is a coincidence that the graphic on climate impacts only shows very subtly that it assumes no adaptation.
2. And that the graph on higher impacts at lower levels of warming does not mention that, since the last update of the IPCC report, we now also have expectations of much lower warming.

These kinds of things are presentational choices, omissions one would not make if the goal were to maximally clarify the situation, because those choices are always made in ways that justify more action. This is what I mean by "politicized": selectively presented and framed evidence.

 [EDIT: This is a good reference from very respected IPCC authors that discusses the politicized process with many examples]

William the Kiwi @ 2024-07-24T08:18 (+2) in response to The Drowning Child Argument Is Simply Correct

The strongest counterargument to the Drowning Child argument is "reciprocity".

If a person saves a nearby drowning child, there is a probability that the saved child then goes on to provide positive utility for the rescuer or their family/tribe/nation. A child who is greatly geographically distant, or is unwilling to provide positive utility to others, is less likely to provide positive utility for the rescuer or their family/tribe/nation. This is an evolutionary explanation of why people are more inclined to save children who are nearby; however, the argument also applies to ethical egoists.

David T @ 2024-07-24T19:02 (+1)

This argument is understandably unpopular because it's inconsistent with core principles of EA. 

But the principle of reciprocity (and adjacent kin selection arguments) absolutely is the most plausible argument for why the human species evolved to behave in an apparently altruistic[1] manner and value it in others in the first place, long before we started on abstract value systems like utilitarianism, and in many cases people still value or practice some behaviours that appear altruistic despite indifference to or active disavowal of utilitarian or deontological arguments for improving others' welfare.

  1. ^

    there's an entire literature on "reciprocal altruism"

AnonymousEAForumAccount @ 2024-07-24T18:48 (+2) in response to Why hasn't EA done an SBF investigation and postmortem?

I’ve written a post adding my own call for an independent investigation, which also outlines new information in support of that position. Specifically, my post documents important issues where EA leaders have not been forthcoming in their communications, troublesome discrepancies between leaders’ communications and credible media reports, and claims that leaders have made about post-FTX reforms that appear misleading.

D0TheMath @ 2024-07-24T16:25 (+7) in response to The Drowning Child Argument Is Simply Correct

I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies, and smoothing them out. I at least value fairness, which is a complicated concept but one that is actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, friendship, etc are at odds with this fairness value in many circumstances.

But I think it's still useful to connect the drowning child argument with the parts of me which resonate with it, and think about how much I actually care about those parts of me over other parts in such circumstances.

Human morality is complicated, and I would prefer more people 'round these parts do moral reflection by doing & feeling rather than thinking, but I don't think there's no place for argument in moral reflection.

David T @ 2024-07-24T18:47 (+1)

I think there's plenty of place for argument in moral reflection, but part of that argument includes accepting that things aren't necessarily "obvious" or "irrefutable" because they're intuitively appealing. Personally I think the drowning child experiment is pretty useful as thought experiments go, but human morality in practice is so complicated that even Peter Singer doesn't act consistently with it, and I don't think it's because he doesn't care.

Rainbow Affect @ 2024-07-24T17:38 (+7) in response to Notes on impostor syndrome

Sounds good! I'll try it!

Ahem. What if AGI won't be developed with current ML techniques? Data poisoning is a thing. AI models need a lot of data, and AI-generated content sits on the internet, and when AI models are trained on that data they begin to perform worse. There's also an issue with scaling. To make an AI model marginally better you need to scale it exponentially. AI models sit in data centers that need to be cooled with water. To build microprocessors you need to use a lot of water in mining metals and in their production. Such water is scarce and is also needed in agriculture, and once water has been used for producing microprocessors it can't really be used for other stuff. This means there might be resource constraints on building better AI models, especially if AI becomes monopolized by a few big tech companies (open source models seem smaller and you can develop one on a PC). Maybe AI won't be a big issue, unless wealthy countries wage wars over water availability in poorer countries. But I didn't put any effort into writing this comment, so I'm wrong with a probability of 95% +- 5%. Here you have it, I just wrote a dumb comment. Yay for dumb space!

Emrik @ 2024-07-24T18:35 (+2)

I think this is 100% wrong, but 100% the correct[1] way to reason about it!

I'm pretty sure water scarcity is a distraction wrt modelling AI futures; but it's best to just assert a model to begin with, and take it seriously as a generator of your own plans/actions, just so you have something to iterate on. If you don't have an evidentially-sensitive thing inside your head that actually generates your behaviours relevant to X, then you can't learn to generate better behaviours wrt X.

Similarly: To do binary search, you must start by planting your flag at the exact middle of the possibility-range.

  1. ^

    One plausible process-level critique is that… perhaps this was not actually your best effort, even within the constraints of producing a quick comment? It's important to be willing to risk thinking&saying dumb things, but it's also important that the mistakes are honest consequences of your best effort.

    A failure-mode I've commonly inhabited in the past is to semi-consciously handicap myself with visible excuses-to-fail, so that if I fail or end up thinking/saying/doing something dumb, I always have the backup-plan of relying on the excuse / crutch. Eg,

    • While playing chess, I would be extremely eager to sacrifice material in order to create open tactical games; and when I lost, I reminded myself that "ah well, I only lost because I deliberately have an unusual playstyle; not because I'm bad or anything."
M_Allcock @ 2024-07-23T11:57 (+38) in response to Peter Singer AMA (July 30th)

Please could you outline your views on moral realism? In particular your recent-ish transition from anti-realist to realist. What triggered this? Has it had any impacts on the way you live your life?

Daniel Birnbaum @ 2024-07-24T18:33 (+10)

He did a whole interview on this that can be found here: 

Daniel Birnbaum @ 2024-07-24T18:32 (+4) in response to Peter Singer AMA (July 30th)

How do you generally respond to evolutionary debunking arguments and the epistemological problem for moral realism (how we acquire facts about the moral truth), especially considering that, unlike mathematics, there are no empirical feedback loops to work off of (i.e. you can't go out and check if the facts fit with the external world)? It seems to me like we wouldn't trust our mathematical intuitions if 1) we didn't have the empirical feedback loops or 2) the world told us that math didn't work sometimes.

Emrik @ 2022-06-06T19:22 (+19) in response to Notes on impostor syndrome

This is excellent. Personally, (3) does everything for me. I don't need to think I'm especially clever if I think I'm ok being dumb. I'm not causing harm if I express my thoughts, as long as I give people the opportunity to ignore or reject me if they think I don't actually have any value to offer them. Here are some assorted personal notes on how being dumb is ok, so you don't need to be smart in order not to worry about it.

Exhibit A: Be conspicuously dumb as an act of altruism!

It must be ok to be dumber than average in a community, otherwise it will iteratively evaporate half its members until only one person remains. If a community is hostile to the left half of the curve, the whole community suffers. And the people who are safely in the top 10% are only "safe" because the dumber people stick around.

So if you're worried about being too dumb for the community... consider that maybe you're actually just contributing to lowering the debilitating pressure felt by the community as a whole. Perhaps even think of yourself as a hero, shouldering the burden of being dumber-than-average so that people smarter than you don't have to. Be conspicuously safe in your own stupidity, and you're helping others realise that they can be safe too. ^^

Exhibit B: Naive kindness perpetuates shame

Self-fulfilling norm tragedies. When the naive mechanism by which good people try to make something better, makes it worse instead.

1. No one wants intelligence to be the sole measure of a human's worth. Everyone affirms that "all humans are created equal."

2. Everyone worries that other people think dumb people are worth less because they're dumb.

3. So everyone also worries that other people will think they think that dumb people are worth less. They don't want to be seen as offensive, nor do they want to accidentally cause offense. They want to be good and be seen as good.

4. That's why they're overly cautious about even speaking about dumbness, to the point of pretending it doesn't even exist. (Remember, this follows from their kind motivations.)

5. But by being overly cautious about speaking about dumbness, and by pretending it doesn't exist, they're also unwittingly reinforcing the impression that dumbness is shamefwl. Heck, it's so shamefwl that people won't even talk about it!

You can find similar self-reinforcing patterns for other kinds of discrimination/prejudices. All of it seems to share a common solution: break down barriers to talking openly about so-called "shamefwl" things. I didn't say it was easy.

Exhibit C: Why I use the word "dumb"

I'm in favour of using the word "dumb" as a non-derogatory antonym of "smart".

The way society is right now you'd think the sole measure of human worth is how smart you are. My goal here is to make it feel alright to be dumb. And a large part of the problem is that no one is willing to point at the thing (dumbness) and treat it as a completely normal, mundane, and innocuous part of everyday life.

Every time you use an obvious euphemism for it like "less smart" or "specialises in other things", you are making it clear to everyone that being dumb is something so shamefwl that we need to pretend it doesn't exist. And sure, when you use the word "dumb" instead, someone might misunderstand and conclude that you think dumb people are bad in some way. But euphemisms *guarantee* that people learn the negative association.

Compare it to how children learn social norms. The way to teach your child that being dumb is ok is to actually behave as if that's true, and euphemisms are doing the exact opposite. We don't use "not-blue" to refer to brown eyes, but if we did you can be sure your children will try to pretend their eyes are blue.

Exhibit D: You need a space where you can be dumb

Where's the space in which you can speak freely, ask dumb questions, reveal your ignorance, display your true stupidity? You definitely need a space like that. And where's the space in which you must speak with care, try to seem smarter and more knowledgeable than you are, and impress professionals? Unfortunately, this too becomes necessary at times.

Wherever those spaces are, keep them separate. And may the gods have mercy on your soul if you only have the latter.

Rainbow Affect @ 2024-07-24T17:38 (+7)

Sounds good! I'll try it!

Ahem. What if AGI won't be developed with current ML techniques? Data poisoning is a thing. AI models need a lot of data, and AI-generated content sits on the internet, and when AI models are trained on that data they begin to perform worse. There's also an issue with scaling. To make an AI model marginally better you need to scale it exponentially. AI models sit in data centers that need to be cooled with water. To build microprocessors you need to use a lot of water in mining metals and in their production. Such water is scarce and is also needed in agriculture, and once water has been used for producing microprocessors it can't really be used for other stuff. This means there might be resource constraints on building better AI models, especially if AI becomes monopolized by a few big tech companies (open source models seem smaller and you can develop one on a PC). Maybe AI won't be a big issue, unless wealthy countries wage wars over water availability in poorer countries. But I didn't put any effort into writing this comment, so I'm wrong with a probability of 95% +- 5%. Here you have it, I just wrote a dumb comment. Yay for dumb space!

alx @ 2024-07-24T15:45 (+1) in response to The Drowning Child Argument Is Simply Correct

Firstly: all hypotheticals such as this can be valuable as philosophical thought experiments, but not for making moral value judgements about broad populations. Frankly, this is a flawed argument at several levels, because there are always consequences to actions and we must consider the aggregate impact of action and consequence.

Obviously I think we're all in favor of doing what good we find to be reasonable; however, you may as well take the hypothetical a step or two further: suppose you now have to sacrifice your entire life savings while also risking some probability of losing your own life in the process of saving that child. Now let's assume that you're a single parent and have several children of your own to care for, who may face starvation if you die.

My point is not to be some argumentative pedant. My point is that these moral hypotheticals are rarely black and white. There is always nuance to be considered.

Omnizoid @ 2024-07-24T16:58 (+2)

I address that in the article. 

JWS 🔸 @ 2024-07-24T09:41 (+2) in response to JWS's Quick takes

a) r.e. Twitter, almost tautologically true I'm sure. I think it is a bit of signal though, just very noisy. And one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though obviously very prone to error, and perhaps not worth doing at all.

b) I haven't seen those comments,[1] could you point me to them or where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article - which I haven't seen good pushback to. Again, welcome to being wrong on this. 

  1. ^

    Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.

Ryan Greenblatt @ 2024-07-24T16:48 (+1)

I haven't seen those comments

Scroll down to see comments.

elena @ 2024-07-24T16:31 (+3) in response to Peter Singer AMA (July 30th)

What strategies do you think are most effective for animal liberation? Which charities do you donate to and why! Thanks for all your work.

Joseph Miller @ 2024-07-24T08:15 (+9) in response to The Drowning Child Argument Is Simply Correct

proximity [...] is obviously not morally important

People often claim that you have a greater obligation to those in your own country than to foreigners. I’m doubtful of this

imagining drowning children that there are a bunch of nearby assholes ignoring the child as he drowns. Does that eliminate your reason to save the child? No, obviously not


Your argument seems to be roughly an appeal to the intuition that moral principles should be simple - consistent across space and time, without weird edge cases, not specific to the circumstances of the event. But why should they be?

Imo this is the mistake that people make when they haven't internalized reductionism and naturalism. In other words, they are moral realists or otherwise confused. When you realize that "morality" is just "preferences" with a bunch of pointless religious, mystical and philosophical baggage, the situation becomes clearer.

Because preferences are properties of human brains, not physical laws, there is no particular reason to expect them to have low Kolmogorov complexity. And to say that you "should" actually be consistent about moral principles is an empty assertion that entirely rests on a hazy and unnatural definition of "should".

D0TheMath @ 2024-07-24T16:25 (+7)

I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies, and smoothing them out. I at least value fairness, which is a complicated concept but one that is actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, friendship, etc are at odds with this fairness value in many circumstances.

But I think it's still useful to connect the drowning child argument with the parts of me which resonate with it, and think about how much I actually care about those parts of me over other parts in such circumstances.

Human morality is complicated, and I would prefer more people 'round these parts do moral reflection by doing & feeling rather than thinking, but I don't think there's no place for argument in moral reflection.

Karthik Tadepalli @ 2024-07-24T09:12 (+9) in response to The Drowning Child Argument Is Simply Correct

I am not receptive to browbeating. I suspect most people in the world are not, either. I don't know what you intend to accomplish by telling people that every single one of their valued life choices is morally equivalent to letting a child die.

If your answer is "I think people will be receptive to this", I have completely different intuitions. If your answer is "I want to highlight true and important arguments even if nobody is receptive to them", you're welcome to do that, but that has basically no impact on the audience of this forum.

The drowning child motivated a lot of people to be more thoughtful about helping people far away from them. But the EA project has evolved much further beyond that. We have institutions to manage, careers to create, money to spend, regulatory agendas to advance, causes to explore. I think it's time to retire the drowning child, and send it the way of the paperclip maximizer.

D0TheMath @ 2024-07-24T16:16 (+2)

Even if most aren't receptive to the argument, the argument may still be correct. In which case it's still valuable to argue for and write about.

Hauke Hillebrandt @ 2024-07-22T14:50 (+3) in response to Peter Singer AMA (July 30th)

What's your production function?

henryj @ 2024-07-24T16:13 (+3)

I think this paragraph from the linked article captures the gist:

Near the end of most episodes, Tyler asks some version of this question to his guests: "What is your production function?". For those without an economics background, a "production function" is a mathematical equation that explains how to get outputs from inputs. For example, the relationship between the weather in Florida and the number of oranges produced could be explained by a production function. In this case, Tyler is tongue-in-cheek asking his guests what factors drive their success.

Not to anchor Singer too much, but it looks like other people seem to say things like "saying yes to new experiences," "reading a lot," and "being disciplined."

Denis @ 2024-07-24T16:02 (+1) in response to Introducing Mieux Donner: A new effective giving initiative in France

Formidable !! 

Great work Jen and Romain !

If you're desperate enough, I'm pretty good at BOTEC, and my French, while not great, isn't as bad as some other people's in the cohort, according to Romain ... 

Let me know if I can help!
 

Chris Leong @ 2024-07-24T15:50 (+4) in response to The last era of human mistakes

It would be useful to have a term along the lines of outcome lock-in to describe situations where the future is out of human hands.

That said, this is more of a spectrum than a dichotomy. As we outsource more decisions to AI, outcomes become more locked in and, as you note, we may never completely eliminate the human in the loop.

Nonetheless, this seems like a useful concept for thinking about what the future might look like.

Denkenberger @ 2024-07-20T21:51 (+6) in response to New 80k problem profile: Nuclear weapons

Existential catastrophe, annual | 0.30% | 20.04% | David Denkenberger, 2018
Existential catastrophe, annual | 0.10% | 3.85% | Anders Sandberg, 2018

You mentioned how some of the risks in the table were for extinction, rather than existential risk. However, the above two were for the reduction in long-term future potential, which could include trajectory changes that do not qualify as existential risk, such as slightly worse values ending up in locked-in AI. Another source by this definition was the 30% reduction in long-term potential from 80,000 Hours' earlier version of this profile. By the way, the source attributed to me was based on a poll of GCR researchers - my own estimate is lower.

Vasco Grilo🔸 @ 2024-07-24T15:48 (+2)

Hi David,

Existential catastrophe, annual | 0.30% | 20.04% | David Denkenberger, 2018
Existential catastrophe, annual | 0.10% | 3.85% | Anders Sandberg, 2018

Based on my adjustments to CEARCH's analysis of nuclear and volcanic winter, the expected annual mortality of nuclear winter as a fraction of the global population is 7.32*10^-6. I estimated the deaths from the climatic effects would be 1.16 times as large as the ones from direct effects. In this case, the expected annual mortality of nuclear war as a fraction of the global population would be 1.86 (= 1 + 1/1.16) times the expected annual mortality of nuclear winter as a fraction of the global population, i.e. 0.00136 % (= 1.86*7.32*10^-6). So the annual losses in future potential mentioned in the table above are 221 (= 0.0030/(1.36*10^-5)) and 73.5 (= 0.0010/(1.36*10^-5)) times my expected annual death toll, whereas I would have expected the annual loss in future potential to be much lower than the expected annual death toll.
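
A minimal Python sketch of this arithmetic, taking the 7.32*10^-6 expected annual mortality and the 1.16 ratio of climatic to direct deaths as given (both are Vasco's stated estimates from his adjusted CEARCH analysis, not independently sourced):

    # Vasco's stated estimates (not independently sourced)
    winter_mortality = 7.32e-6          # expected annual mortality from nuclear winter, as a fraction of population
    climatic_per_direct = 1.16          # climatic-effect deaths per direct-effect death

    # Total war mortality = direct + climatic = (1 + 1/1.16) * climatic-effect mortality
    war_mortality = (1 + 1 / climatic_per_direct) * winter_mortality
    print(f"{war_mortality:.5%}")       # ~0.00136 %

    # Ratios of the table's annual losses in future potential to this expected death toll
    print(0.0030 / war_mortality)       # ~220 (matches the ~221 above, up to rounding)
    print(0.0010 / war_mortality)       # ~73.4 (matches the ~73.5 above, up to rounding)
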

alx @ 2024-07-24T15:45 (+1) in response to The Drowning Child Argument Is Simply Correct

Firstly: all hypotheticals such as this can be valuable as philosophical thought experiments, but not for making moral value judgements about broad populations. Frankly, this is a flawed argument at several levels, because there are always consequences to actions, and we must consider the aggregate impact of action and consequence.

Obviously I think we're all in favor of doing what good we find to be reasonable; however, you may as well take the hypothetical a step or two further: suppose you now have to sacrifice your entire life savings while also risking some probability of losing your own life in the process of saving that child. Now let's assume that you're a single parent with several children of your own to care for, who may face starvation if you die.

My point is not to be some argumentative pedant. My point is that these moral hypotheticals are rarely black and white. There is always nuance to be considered.

Ozzie Gooen @ 2024-07-24T15:40 (+36) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

I've been contemplating writing a post about my side of the issue. I wasn't particularly close, but did get a chance to talk to some of the people involved.

Here's my rough take, at this point:
1. I don't think any EA group outside of FTX would take responsibility for having done a lot ($60k+ worth) of due-diligence and investigation of FTX. My impression is that OP considered this not to be their job, and CEA was not at all in a position to do this (too biased, since it was getting funded by FTX). In general, I think that our community doesn't have strong measures in place to investigate funders. For example, I doubt that EA orgs have allocated $60k+ to investigate Dustin Moskovitz (and I imagine he might complain if others did!).
My overall impression was that this was just a large gap that the EA bureaucracy failed at. I similarly think that the "EA bureaucracy" is much weaker / less powerful than I think many imagine it being, and expect that there are several gaps like this. Note that OP/CEA/80k/etc are fairly limited organizations with specific agendas and areas of ownership. 

2. I think there were some orange/red flags around, but that it would have taken some real investigation to figure out how dangerous FTX was. I have uncertainty about how difficult it would have been to notice that fraud or similar was happening (I previously assumed this would be near impossible, but am less sure now, after discussions with one EA in finance). I think that the evidence / flags around then were probably not enough to easily justify dramatically different actions at the time, without investigation - other than the potential action of doing a lengthy investigation - but again, doing one would have been really tough, given the actors involved.

Note that actually pulling off a significant investigation, and then taking corresponding actions, against an actor as powerful as SBF, would be very tough and require a great deal of financial independence.

3. My impression is that being a board member at CEA was incredibly stressful/intense in the months following the FTX collapse. My quick guess is that most of the fallout from the board was driven by things like, "I just don't want to have to deal with this anymore" rather than by particular disagreements with the organizations. I didn't get the impression that Rebecca's viewpoints/criticisms were very common among other board members/execs, though I'd be curious to get their takes.

4. I think that OP / CEA board members haven't particularly focused on / cared about being open and transparent with the EA community. Some of the immediate reason here was that I assume lawyers recommended against speaking up then - but even without that, it's kind of telling how little discussion there has been in the last year or so.

I suggest reading Dustin Moskovitz's comments for some specific examples. Basically, I think that many people in authority (though to be honest, basically anyone who's not a major EA poster/commenter) find "posting to the EA forum and responding to comments" to be pretty taxing/intense, and don't do it much.

Remember that OP staff members are mainly accountable to their managers, not the EA community or others. CEA is mostly funded by OP, so is basically similarly accountable to high-level OP people. ("Accountable" here means "being employed/paid by".)


5. In terms of power, I think there's a pretty huge power gap between the funders and the rest of the EA community. I don't think that OP really regards themselves as responsible for or accountable to the EA community. My impression is that they fund EA efforts opportunistically, in situations where it seems to help both parties, but don't want to be seen as having any long-term obligations or such. We don't really have strong non-OP funding sources to fund things like "serious investigations into what happened." Personally, I find this situation highly frustrating, and think it gets under-appreciated.
 

6. My rough impression is that from the standpoint of OP / CEA leaders, there's not a great mystery around the FTX situation, and they also don't see it happening again. So I think there's not that much interest here into a deep investigation.
 


So, in summary, my take is less, "there was some conspiracy where a few organizations did malicious things," and more, "the EA bureaucracy has some significant weaknesses that were highlighted here." 


Note: Some of my thinking on this comes from my time at the reform group. We spent some time coming up with a list of potential reform projects, including having better investigative abilities. My impression is that there generally hasn't been much concern/interest in this space.
 

huw @ 2024-07-24T12:16 (+12) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

This is clearly an outstanding issue for a non-negligible proportion of the community. It doesn't matter if some people consider the issue closed, or the investigation superfluous; this investigation would bring that closure to the rest of EA. Everyone here should be interested in the unity that would come from this.

AnonymousEAForumAccount @ 2024-07-24T15:35 (+14)

Indeed. And if EA leaders do believe that the issue is closed or that an investigation would be superfluous (which seems to be a common, if not the default, leadership position), they should make the case for that position explicitly and publicly. As things stand, the clearest articulation I’ve seen as to why there hasn’t been an independent investigation comes from Rob Bensinger’s account of what an unidentified “EA who was involved in EA’s response to the FTX implosion” told him based on information that dated from ~April 2023 and “might be out of date”.

Matt Boyd @ 2024-07-20T21:50 (+3) in response to New 80k problem profile: Nuclear weapons

Similarly to Owen's comment, I also think that AI and nuclear interact in important ways (various pathways to destabilisation that do not necessarily depend on AGI). It seems that many (most?) pathways from AI risk to extinction lead via other GCRs eg pandemic, nuclear war, great power war, global infrastructure failure, catastrophic food production failure, etc. So I'd suggest quite a bit more hedging with focus on these risks, rather than putting all resources into 'solving AI' in case that fails and we need to deal with these other risks. 

Vasco Grilo🔸 @ 2024-07-24T15:22 (+2)

Great points, Matt.

I think essentially all (not just many) pathways from AI risk will have to flow through other more concrete pathways. AI is a general purpose technology, so I feel like directly comparing AI risk with other lower level pathways of risk, as 80 k seems to be doing somewhat when they describe the scale of their problems, is a little confusing. To be fair, 80 k tries to account for this by talking about the indirect risk of specific risks, which they often set to 10 times the direct risk, but these adjustments seem very arbitrary to me.

In general, one can get higher risk estimates by describing risk at a higher level. So the existential risk from LLMs is smaller than the risk from AI, which is smaller than the risk from computers, which is smaller than the risk from e.g. subatomic particles. However, this should only update one towards e.g. prioritising "computer risk" over "LLM risk" to the extent that the ratio between the cost-effectiveness of "computer risk interventions" and "LLM risk interventions" is proportional to the ratio between the scale of "computer risk" and "LLM risk", which is quite unclear given the ambiguity and vagueness of the 4 terms involved[1].
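
As a toy illustration of this proportionality point, here is a short Python sketch; every number in it is hypothetical and not taken from 80 k or this comment:

```python
# Toy illustration with entirely hypothetical numbers: a broader risk
# category can have 10x the scale while its interventions are less
# cost-effective per dollar, so scale comparisons alone can mislead.

scale_computer_risk = 1e-2   # hypothetical existential risk from "computers" broadly
scale_llm_risk = 1e-3        # hypothetical existential risk from LLMs specifically

# Hypothetical fraction of each risk removed per $1M spent on the
# corresponding class of interventions.
fraction_removed_computer = 1e-5
fraction_removed_llm = 5e-4

# Absolute risk reduction per $1M (a simple cost-effectiveness proxy).
ce_computer = scale_computer_risk * fraction_removed_computer  # 1e-7
ce_llm = scale_llm_risk * fraction_removed_llm                 # 5e-7

print(f"Scale ratio (computer/LLM): {scale_computer_risk / scale_llm_risk:.0f}x")
print(f"Cost-effectiveness ratio (computer/LLM): {ce_computer / ce_llm:.2f}x")
# Output: scale ratio 10x, cost-effectiveness ratio 0.20x, i.e. the
# narrower "LLM risk" interventions come out 5x more cost-effective here.
```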

To get more clarity, I believe it is better to prioritise at a lower level, assessing the cost-effectiveness of specific classes of interventions, as Ambitious Impact (AIM), Animal Charity Evaluators (ACE), the Centre for Exploratory Altruism Research (CEARCH), and GiveWell do.

  1. ^

    "Computer risk", "LLM risk", "computer risk interventions" and "LLM risk interventions".

Lostelly @ 2024-07-24T15:09 (+1) in response to Pilot Results: High-Impact Psychology/ Mental Health (HIPsy)

This is fascinating stuff! The demand for mental health-related activities is clearly high. It's great to see that mentoring, meetups, and workshops are top priorities. I think the idea of hiring a project manager for a year is a smart move. Having someone dedicated to building and organizing these resources can make a big difference. Plus, it will help maintain momentum and ensure these initiatives are sustainable. Does anyone know if there are similar projects in other cause areas that we could learn from?

Lostelly @ 2024-07-24T15:01 (+1) in response to Introducing Mieux Donner: A new effective giving initiative in France

This sounds really cool! It's great to see something new like Mieux Donner focused on effective giving in France and Switzerland. I love your goals and how clear your plan is. The 10% Pledge pilot project sounds interesting—how will you get people involved and keep them motivated? Also, I’m curious about how you’ll work alongside Don Efficace since you both have different focuses. It seems like there’s a lot of potential here!

Aaron Graifman @ 2024-07-24T14:59 (+2) in response to Peter Singer AMA (July 30th)

Motivations behind question: Novel. I'm curious to hear what Peter Singer thinks about arguments that explain away free will due to prior causality, and how this is reconciled with the Drowning Child argument. I still want to do good, and believe the argument cannot be falsified, but I'm curious to hear his thinking. For me, I believe doing good is right for a number of reasons, and whether or not free will exists, it doesn't matter to me (choice or not), because I will donate, and share EA, and buy into the argument.

Whether I had any choice in the matter... well who knows?


I would love to hear what Peter thinks about the free will debate and the ideas posed by Robert Sapolsky in Determined.

Epistemic Status of Paraphrase below:  Read Sam Harris' Free Will essay, and listened to a number of podcasts on free will, as well as this one mentioned partially. 

For those who don't know, Sapolsky is claiming a hard deterministic stance, and explains why downward causation still does not account for the idea of free will, because for this common idea to exist, the constituents would need to somehow become different. For example, wetness is an emergent property of water because wetness only exists with many water molecules involved... but this doesn't mean that somehow the water molecules become O2H instead of H2O when they become wet. 
But this is what is being claimed in free will debates. Our consciousness doesn't magically exhibit structural changes bearing free will. The feeling of free will arises but not some structural change. 

Anyway, that's my paraphrase of what I heard in the conversation between Sam Harris and Sapolsky recently. Figured it was worth a shot posting this question, but I understand it is somewhat irrelevant and respect if it is passed over. 

Cheers,
and I do truly hope this finds you well

Chris Leong @ 2024-07-24T14:53 (+3) in response to Subtle Acts of Exclusion <> Microaggression and Internalised Racism

And because microaggression and internalised racism (MIR) may come across as “culture war” loaded terms (despite them also being academic terms)

 

You seem to be assuming that just because something is an academic term, it isn't culture-war loaded, despite the fact that some of these fields don't actually see objectivity as having any value.

(I actually upvoted this post because it is very well written and I appreciate you taking all of this time to define a key term).

Devin Lam @ 2024-07-24T13:12 (+1) in response to The Drowning Child Argument Is Simply Correct

My understanding is that this (blog) post is a restating of the drowning child thought experiment in OP's voice, with their confident personal writing style. I'm not certain about their intentions behind the article.

In terms of using the drowning child argument in general, particularly when explaining what EA is to people who have never heard of it before, I do still think it's useful; people understand the general meaning behind it even when only half-explained in 45 seconds by non-philosophers.

Karthik Tadepalli @ 2024-07-24T14:45 (0)

That's fair, if it's more of an expository exercise for OP's own sake, I can respect that. But

people understand the general meaning behind it even when only half-explained in 45 seconds by non-philosophers.

is exactly why I'm not a fan of using it to browbeat people. It is simple and makes its point clear without you needing to tell people how immoral they are.

manueins @ 2024-07-24T14:23 (+1) in response to The Precipice Revisited

Thank you for sharing this. Your reflections are insightful and provide a hopeful perspective on our collective future. It is good news that climate risks are declining and that global leaders and institutions are starting to take existential risks seriously. You emphasize the importance of immediate action and long-term stability, and it is encouraging to see the UN and other influential figures prioritizing these issues. This is inspiring to see a way forward in the face of such a monumental challenge.

Ozzie Gooen @ 2024-07-23T13:07 (+2) in response to My Current Claims and Cruxes on LLM Forecasting & Epistemics

Wait - is this written by Claude or ChatGPT? I'm not sure if you intended it as such, but it has a writing style that seems almost exactly what I'd expect from LLMs. 

manueins @ 2024-07-24T14:12 (+1)

Hi, thanks for replying! AI helped me a lot to understand and write this, but before commenting I rewrote it in my own words. I'm sorry if it comes across as an AI writing style or reads badly. But what do you think about humanising AI text?

SummaryBot @ 2024-07-24T13:35 (+1) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

Executive summary: The post argues that EA leadership has not been sufficiently transparent about their relationships with Sam Bankman-Fried (SBF) and FTX, and calls for an independent investigation into how EA leaders handled the situation before and after FTX's collapse.

Key points:

  1. EA leaders have not fully disclosed important facts about SBF's involvement with EA organizations, including his role as a major donor and board member.
  2. There are discrepancies between EA leaders' statements and credible media reports regarding warnings about SBF's behavior and ethics.
  3. EA leadership has not adequately addressed reports of internal investigations into SBF's conduct at Alameda Research.
  4. Claims about post-FTX reforms by EA leaders may be misleading or overstated.
  5. Many questions remain unanswered about due diligence, awareness of red flags, and actions taken by EA leaders regarding FTX.
  6. An independent investigation is needed to clarify these issues and ensure accountability within the EA community.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-07-24T13:33 (+1) in response to Positive visions for AI

Executive summary: AI has the potential to bring tremendous benefits to humanity, including automating mundane work, lowering coordination costs, spreading intelligence, accelerating technological progress, and enabling greater self-actualization, but also carries serious risks that must be carefully managed.

Key points:

  1. AI could automate mundane mental and physical tasks, freeing humans for more meaningful pursuits.
  2. AI may dramatically lower coordination costs at all scales, from job matching to geopolitics.
  3. AI could spread the benefits of intelligence more widely through AI advisors and tutoring.
  4. As a meta-technology, AI has the potential to greatly accelerate technological progress across all fields.
  5. Increased wealth and energy from AI advances correlate strongly with improved human wellbeing.
  6. In a post-scarcity world enabled by AI, humans may have greater opportunity for self-actualization.
  7. Serious existential risks from advanced AI must be mitigated to realize these potential benefits safely.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-07-24T13:32 (+1) in response to Subtle Acts of Exclusion <> Microaggression and Internalised Racism

Executive summary: Microaggressions and internalized racism (MIR) are subtle forms of discrimination that can occur between Westerners and non-Westerners, with examples and non-examples provided to aid understanding.

Key points:

  1. Subtle acts of exclusion (SAE) is used as an alternative term for microaggressions to avoid loaded language.
  2. Microaggressions are subtle, exclusionary acts that are prejudicial or unjust, while internalized racism involves accepting negative messages about one's own group.
  3. Examples of MIR include assuming inferiority, unfair treatment, and overvaluing Western norms and people.
  4. Identifying MIR requires understanding intention and context, making it challenging to definitively label behaviors.
  5. Non-examples are provided to distinguish MIR from general meanness, genuine surprise, or practical choices.
  6. The author acknowledges the complexity of the topic and potential for misinterpretation or over-correction.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-07-24T13:30 (+1) in response to The Drowning Child Argument Is Simply Correct

Executive summary: The drowning child argument demonstrates that we have a moral obligation to donate significantly to effective charities, as failing to do so is equivalent to ignoring a drowning child we could easily save.

Key points:

  1. The drowning child scenario shows we should sacrifice money to save a life when the cost is comparatively small.
  2. Donating to effective charities that save lives for a few thousand dollars each is morally equivalent to saving a drowning child.
  3. Common objections like proximity, special obligations, or others' inaction do not negate our duty to save lives through charity.
  4. While we may not be obligated to give everything, we should make charitable giving a significant part of our lives.
  5. Spending on luxuries is hard to justify when that money could save children's lives.
  6. Recommended action: Take the Giving What We Can pledge or donate to GiveWell charities.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-07-24T13:28 (+1) in response to Some low-confidence takes about cross-cultural interactions between Western EAs and non-Western EAs

Executive summary: Cross-cultural interactions (CCIs) in the EA community can lead to minor but common issues for non-Western EAs, and the author provides low-confidence suggestions for improving these interactions.

Key points:

  1. Meta-conversations can help deconfuse uncomfortable CCIs by discussing the interaction itself.
  2. Avoid jokes or backhanded compliments about names or language skills, as they can be subtle acts of exclusion.
  3. Western EAs should be mindful of norm hijacking in non-Western settings and adapt to local customs when appropriate.
  4. Non-Western EA organizers should design and enforce norms that balance comfort for their target audience with program goals.
  5. When addressing norm violations or cultural conflicts, consider private conversations and seek advice from community health resources if needed.
  6. EA professional norms and codes of conduct should take precedence over potentially conflicting local cultural norms.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-07-24T13:27 (+1) in response to The last era of human mistakes

Executive summary: As AI capabilities advance, we are approaching a final era where human mistakes matter greatly before entering an era where AI systems prevent most consequential human errors, raising important questions about how to navigate this transition period.

Key points:

  1. An era is coming where AI will advise on most important decisions, preventing many human errors.
  2. The transition period before this era - the "last era of human mistakes" - will be critical and challenging to navigate.
  3. Key challenges will include setting up the "gameboard" well (players, power distribution, social equilibrium, technology).
  4. Potential strategies to help from our current vantage point: 
    a) Deepening understanding of foundational matters 
    b) Power-seeking on behalf of desirable values (with caution) 
    c) Differential technological development
  5. This framing highlights how strange the future may be, but doesn't provide clear actionable guidance.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

mikbp @ 2024-07-24T13:25 (+1) in response to On the Value of Advancing Progress

I really like the paper and I appreciate the effort to put it together and make it easy to understand. And I particularly appreciate the effort put into raising attention to this problem. But I am extremely surprised/puzzled that this was not common understanding! This is what lies beneath the AGI and even degrowth discourses, for example, no? This is why one has to first make sure AGI is safe before putting it out there. What am I missing?

Karthik Tadepalli @ 2024-07-24T09:12 (+9) in response to The Drowning Child Argument Is Simply Correct

I am not receptive to browbeating. I suspect most people in the world are not, either. I don't know what you intend to accomplish by telling people that every single one of their valued life choices is morally equivalent to letting a child die.

If your answer is "I think people will be receptive to this", I have completely different intuitions. If your answer is "I want to highlight true and important arguments even if nobody is receptive to them", you're welcome to do that, but that has basically no impact on the audience of this forum.

The drowning child motivated a lot of people to be more thoughtful about helping people far away from them. But the EA project has evolved much further beyond that. We have institutions to manage, careers to create, money to spend, regulatory agendas to advance, causes to explore. I think it's time to retire the drowning child, and send it the way of the paperclip maximizer.

Devin Lam @ 2024-07-24T13:12 (+1)

My understanding is that this (blog) post is a restating of the drowning child thought experiment in OP's voice, with their confident personal writing style. I'm not certain about their intentions behind the article.

In terms of using the drowning child argument in general, particularly when explaining what EA is to people who have never heard of it before, I do still think it's useful; people understand the general meaning behind it even when only half-explained in 45 seconds by non-philosophers.

H. E. Baber @ 2024-07-24T12:47 (+5) in response to Peter Singer AMA (July 30th)

Why shouldn't one be a moral satisficer? I'm a satisficer in most things. I'd do better on the piano if I practiced longer and more regularly, but I'm happy with late intermediate/early advanced, etc. And I'm satisfied with the results of my satisficing in most things. And I'm satisfied with roughly a B- goodness rating--which given grade inflation is about average. Why should being moral be any different from working at the piano or anything else in this regard? Or do you agree that moral satisficing is satisfactory?

huw @ 2024-07-24T12:16 (+12) in response to We need an independent investigation into how EA leadership has handled SBF and FTX

This is clearly an outstanding issue for a non-negligible proportion of the community. It doesn't matter if some people consider the issue closed, or the investigation superfluous; this investigation would bring that closure to the rest of EA. Everyone here should be interested in the unity that would come from this.

lauren_mee @ 2024-07-24T11:49 (+1) in response to Peter Singer AMA (July 30th)

Thanks for everything you do! We wouldn't be here without you.
 

What do you think is the most neglected, potentially high-impact career opportunity that could make significant progress for farmed animals?

Felipe Camargo @ 2024-07-24T11:17 (+1) in response to Why I find longtermism hard, and what keeps me motivated

I find it quite hard to talk about longtermism. It always seems like I'm getting into a conspiracy theory group. Especially since most of my friends are hardcore leftists who believe global inequality, climate change and factory farming are by far the most pressing problems. Telling a person like that that you consider AI-related catastrophes more relevant than people dying today from preventable diseases makes me feel like a complete as***le and a conspiracy theorist. On the other hand, longtermists are right and I want to tell people that and share my beliefs and decisions. Besides that, nice piece.

Ryan Greenblatt @ 2024-07-23T20:23 (+7) in response to JWS's Quick takes

absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr - AI Safety people, engage with 1a3orn more!)

There are many (edit: 2) comments responding and offering to talk. 1a3orn doesn't appear to have replied to any of these comments. (To be clear, I'm not saying they're under any obligation here, just that there isn't an absence of attempted engagement, and thus you shouldn't update in the direction you seem to be updating here.)

JWS 🔸 @ 2024-07-24T09:41 (+2)

a) r.e. Twitter, almost tautologically true I'm sure. I think it is a bit of signal though, just very noisy. And one of the few ways for non-Bay people such as myself to try to get a sense of the pulse of the Bay, though obviously very prone to error, and perhaps not worth doing at all.

b) I haven't seen those comments,[1] could you point me to them or where they happened? I know there was a bunch of discussion around their concerns about the Biorisk paper, but I'm particularly concerned with the "Many AI Safety Orgs Have Tried to Criminalize Currently-Existing Open-Source AI" article - which I haven't seen good pushback to. Again, I'm open to being wrong on this.

  1. ^

Ok, I've seen Ladish and Kokotajlo offer to talk, which is good; I would have liked 1a3orn to take them up on that offer for sure.

Conor Barnes @ 2024-07-05T19:43 (+8) in response to 80,000 hours should remove OpenAI from the Job Board (and similar EA orgs should do similarly)

Update: We've changed the language in our top-level disclaimers: example. Thanks again for flagging! We're now thinking about how to best minimize the possibility of implying endorsement.

Linda Linsefors @ 2024-07-24T09:20 (+1)

I can't find the disclaimer. Not saying it isn't there. But it should be obvious from just skimming the page, since that is what most people will do. 

Lostelly @ 2024-07-24T09:19 (+1) in response to AMA: Beast Philanthropy's Darren Margolias

Love the work you’re doing with Beast Philanthropy. Can you share a bit about what goes into selecting the projects you guys work on? Like, what criteria do you use to decide which causes to support? Thanks!

Joseph Miller @ 2024-07-24T08:15 (+9) in response to The Drowning Child Argument Is Simply Correct

proximity [...] is obviously not morally important

People often claim that you have a greater obligation to those in your own country than to foreigners. I’m doubtful of this

imagining drowning children that there are a bunch of nearby assholes ignoring the child as he drowns. Does that eliminate your reason to save the child? No, obviously not


Your argument seems to be roughly an appeal to the intuition that moral principles should be simple - consistent across space and time, without weird edge cases, not specific to the circumstances of the event. But why should they be?

Imo this is the mistake that people make when they haven't internalized reductionism and naturalism. In other words, they are moral realists or otherwise confused. When you realize that "morality" is just "preferences" with a bunch of pointless religious, mystical and philosophical baggage, the situation becomes clearer.

Because preferences are properties of human brains, not physical laws, there is no particular reason to expect them to have low Kolmogorov complexity. And to say that you "should" actually be consistent about moral principles is an empty assertion that entirely rests on a hazy and unnatural definition of "should".

Karthik Tadepalli @ 2024-07-24T09:16 (–7)

OP is full of claims that are "obvious" and "clear", which fail the intellectual Turing test. Most people who are unconvinced by the drowning child thought experiment will not agree with the core premises, making this whole exercise a bit pointless.

Karthik Tadepalli @ 2024-07-24T09:12 (+9) in response to The Drowning Child Argument Is Simply Correct

I am not receptive to browbeating. I suspect most people in the world are not, either. I don't know what you intend to accomplish by telling people that every single one of their valued life choices is morally equivalent to letting a child die.

If your answer is "I think people will be receptive to this", I have completely different intuitions. If your answer is "I want to highlight true and important arguments even if nobody is receptive to them", you're welcome to do that, but that has basically no impact on the audience of this forum.

The drowning child motivated a lot of people to be more thoughtful about helping people far away from them. But the EA project has evolved much further beyond that. We have institutions to manage, careers to create, money to spend, regulatory agendas to advance, causes to explore. I think it's time to retire the drowning child, and send it the way of the paperclip maximizer.

Marta_Krzeminska @ 2024-07-24T08:21 (+2) in response to EA Fuck Up Night | Celebrating Failures

To everyone who joined the first FUN (F*ck Up Night) Session 💕 Thank you for open sharing, active listening, and co-creating a welcoming atmosphere! 

I have two things for you 👉 one ask and one gift. 

🙋‍♀️ Ask. Did you enjoy the session? Did you think it was a disaster? This anonymous feedback form is your chance to let me know! It takes just 2 min and will really help me improve future events of similar kind!

🎁 A gift of bonus links. Some resources that my attention has selectively caught before the event:

William the Kiwi @ 2024-07-24T08:18 (+2) in response to The Drowning Child Argument Is Simply Correct

The strongest counterargument to the Drowning Child argument is "reciprocity". 


If a person saves a nearby drowning child, there is a probability that the saved child then goes on to provide positive utility for the rescuer or their family/tribe/nation. A child who is greatly geographically distant, or is unwilling to provide positive utility to others, is less likely to provide positive utility for the rescuer or their family/tribe/nation. This is an evolutionary explanation of why people are more inclined to save children who are nearby; however, the argument also applies to ethical egoists. 

Joseph Miller @ 2024-07-24T08:15 (+9) in response to The Drowning Child Argument Is Simply Correct

proximity [...] is obviously not morally important

People often claim that you have a greater obligation to those in your own country than to foreigners. I’m doubtful of this

imagining drowning children that there are a bunch of nearby assholes ignoring the child as he drowns. Does that eliminate your reason to save the child? No, obviously not


Your argument seems to be roughly an appeal to the intuition that moral principles should be simple - consistent across space and time, without weird edge cases, not specific to the circumstances of the event. But why should they be?

Imo this is the mistake that people make when they haven't internalized reductionism and naturalism. In other words, they are moral realists or otherwise confused. When you realize that "morality" is just "preferences" with a bunch of pointless religious, mystical and philosophical baggage, the situation becomes clearer.

Because preferences are properties of human brains, not physical laws, there is no particular reason to expect them to have low Kolmogorov complexity. And to say that you "should" actually be consistent about moral principles is an empty assertion that entirely rests on a hazy and unnatural definition of "should".

indrekk @ 2024-07-20T06:34 (+12) in response to AMA: Beast Philanthropy's Darren Margolias

Can someone share a link to the interview? I can't find it anywhere.

Toby Tremlett @ 2024-07-24T08:14 (+5)

It'll be up soon, we'll post about it or update this post. Just needs to be trimmed etc. Stay tuned!

Fai @ 2024-07-24T07:51 (+2) in response to Peter Singer AMA (July 30th)

written about my Richard Chappell 

Minor stuff: Is this meant to be "written about by Richard Chappell "?

Toby Tremlett @ 2024-07-24T08:04 (+2)

Haha yes, thanks!

Fai @ 2024-07-24T07:51 (+2) in response to Peter Singer AMA (July 30th)

written about my Richard Chappell 

Minor stuff: Is this meant to be "written about by Richard Chappell "?

PabloAMC @ 2024-07-24T06:17 (+5) in response to Introducing Mieux Donner: A new effective giving initiative in France

Hi there! Some minor feedback for the webpage: instead of starting with the causes, I'd argue you should start with the value proposition: "your euro goes further", or something along those lines. You may want to check ayudaefectiva.org for an example. Congratulations on the new org!

AnonymousEAForumAccount @ 2024-07-24T04:37 (+4) in response to Quick Update on Leaving the Board of EV

I’m very grateful that Rebecca had the integrity to resign her board seat and to share the reason why. I’ve published a new post that shares evidence supporting her allegations that EA leaders made mistakes around FTX and don’t seem interested in helping the community learn the appropriate lessons, and echoes her call for an independent investigation. My post documents important issues where EA leaders have not been forthcoming in their communications, troublesome discrepancies between leaders’ communications and credible media reports, and claims that leaders have made about post-FTX reforms that appear misleading.

Douglas Knight @ 2024-07-24T00:24 (0) in response to Warren Buffett changes giving plans (for the worse)

Gates has more money than he knows what to do with. If he wants to spend another hundred billion, he could just donate it himself. He doesn't have quite as much money outside the foundation as Buffett, but almost. Donating to Gates has zero value. Maybe spending on militias has negative value, but so do most of these foundations.

Donating to Gates was a bad idea 20 years ago. Maybe there was some option value that he would think of a way to spend the money, but he didn't. Gates should have tried to convince Buffett to donate not his money but his time, his expertise in management. Maybe he tried, but he failed 20 years ago and today changes nothing.

Rakefet Cohen Ben-Arye @ 2024-07-24T00:08 (+3) in response to Peter Singer AMA (July 30th)

Hi, thanks so much for being here! Could you please talk me through the rationale for assigning moral value to non-human animals?



Comments on 2024-07-23

huw @ 2024-07-23T23:00 (+9) in response to Peter Singer AMA (July 30th)

What beings are inside and outside of your moral circle these days? If your views (e.g. on insects) have meaningfully changed recently, why?

JWS 🔸 @ 2024-07-23T12:34 (+3) in response to JWS's Quick takes

Folding in Responses here

@thoth hermes (or https://x.com/thoth_iv; if someone who's Twitter friends with them can get this to them, then pls go ahead).[1] I'm responding to this thread here - I am not saying "that EA is losing the memetic war because of its high epistemic standards", in fact quite the opposite r.e. AI Safety, and maybe because of misunderstanding of how politics work/not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think, and also hear your thoughts more. But I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point-of-view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking 'why is that? what are we getting wrong?' rather than doubling down into lowest-common denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though.

@Iyngkarran Kumar - Thanks for sharing your thoughts, but I must say that I disagree with them. I don't think that the epistemic standards are working against us by being too polite, quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc get increasingly frustrated by the AI Safety field). This is on the field of AI Safety imo for not being more persuasive. If the AI Safety field was right, the arguments would have been more convincing. I think, while it's good for Eliezer to say what he thinks accurately, the 'bomb the datacenters'[3] piece has probably been harmful for AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to say that it's a claim of 'what we believe', as I don't really agree with Eliezer on much.

(r.e. inside vs outside game, see this post from Holly Elmore)

@anormative/ @David Mathers - Yeah it's difficult to manage the exact hypothesis here, especially for falsified preferences. I'm pretty sure SV is 'liberal' overall, but I wouldn't be surprised if the Trump % is greater than in '16 and '20, and it definitely seems to be a lot more open this time, e.g. a16z and Musk openly endorsing Trump, Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be paid attention to.

-    -    -    -    -    -    -    -    -    -    -    -    

Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to publicly, then by all means please send a DM :)

  1. ^

    I don't have Twitter and think it'd be harmful for my epistemic & mental health if I did get an account and become immersed in 'The Discourse'

  2. ^

This piece from @1a3orn is excellent, and the absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr - AI Safety people, engage with 1a3orn more!)

  3. ^

    I know that's not what it literally says but it's what people know it as

Ryan Greenblatt @ 2024-07-23T20:30 (+11)

Once again, if you disagree, I'd love to actually hear why.

I think you're reading into twitter way too much.

JWS 🔸 @ 2024-07-23T12:34 (+3) in response to JWS's Quick takes

Folding in Responses here

@thoth hermes (or https://x.com/thoth_iv; if someone who's Twitter friends with them can get this to them, then pls go ahead).[1] I'm responding to this thread here - I am not saying "that EA is losing the memetic war because of its high epistemic standards", in fact quite the opposite r.e. AI Safety, and maybe because of misunderstanding of how politics work/not caring about the social perception of the movement. My reply to Iyngkarran below fleshes it out a bit more, but if there's a way for you to get in touch directly, I'd love to clarify what I think, and also hear your thoughts more. But I think I was trying to come from a similar place to Richard Ngo, and many of his comments on the LessWrong thread here very much chime with my own point-of-view. What I am trying to push for is the AI Safety movement reflecting on losing ground memetically and then asking 'why is that? what are we getting wrong?' rather than doubling down into lowest-common denominator communication. I think we actually agree here? Maybe I didn't make that clear enough in my OP though.

@Iyngkarran Kumar - Thanks for sharing your thoughts, but I must say that I disagree with them. I don't think that the epistemic standards are working against us by being too polite, quite the opposite. I think the epistemic standards in AI Safety have been too low relative to the attempts to wield power. If you are potentially going to criminalise existing Open-Source models,[2] you better bring the epistemic goods. And for many people in the AI Safety field, the goods have not been brought (which is why I see people like Jeremy Howard, Sara Hooker, Rohit Krishnan etc get increasingly frustrated by the AI Safety field). This is on the field of AI Safety imo for not being more persuasive. If the AI Safety field was right, the arguments would have been more convincing. I think, while it's good for Eliezer to say what he thinks accurately, the 'bomb the datacenters'[3] piece has probably been harmful for AI Safety's cause, and things like it are very liable to turn people away from supporting AI Safety. I also don't think it's good to say that it's a claim of 'what we believe', as I don't really agree with Eliezer on much.

(r.e. inside vs outside game, see this post from Holly Elmore)

@anormative/ @David Mathers - Yeah it's difficult to manage the exact hypothesis here, especially for falsified preferences. I'm pretty sure SV is 'liberal' overall, but I wouldn't be surprised if the Trump % is greater than in '16 and '20, and it definitely seems to be a lot more open this time, e.g. a16z and Musk openly endorsing Trump, Sequoia Capital partners claiming that Biden dropping out was worse than the Jan 6th riot. Things seem very different this time around, different enough to be paid attention to.

-    -    -    -    -    -    -    -    -    -    -    -    

Once again, if you disagree, I'd love to actually hear why. Up/down voting is a crude feedback tool, and discussion of ideas leads to much quicker sharing of knowledge. If you want to respond but don't want to publicly, then by all means please send a DM :)

  1. ^

    I don't have Twitter and think it'd be harmful for my epistemic & mental health if I did get an account and become immersed in 'The Discourse'

  2. ^

This piece from @1a3orn is excellent, and the absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr - AI Safety people, engage with 1a3orn more!)

  3. ^

    I know that's not what it literally says but it's what people know it as

Ryan Greenblatt @ 2024-07-23T20:23 (+7)

absence of evidence of good arguments against it is evidence of the absence of said arguments. (tl;dr - AI Safety people, engage with 1a3orn more!)

There are many (edit: 2) comments responding and offering to talk. 1a3orn doesn't appear to have replied to any of these comments. (To be clear, I'm not saying they're under any obligation here, just that there isn't a absence of attempted engagement and thus you shouldn't update in the direction you seem to be updating here.)

Seth Herd @ 2024-07-23T19:52 (+3) in response to Caring about excellence

Yes, but pursuing excellence also costs time that could be spent elsewhere, and time/results tradeoffs are often highly nonlinear. 

The perfect is the enemy of the good. It seems to me that the most common LW/EA personality already pursues excellence more than is optimal.

For more, see my LW comment

jackva @ 2024-07-23T18:46 (+5) in response to The Precipice Revisited

It is true that this does not hold for the long-form summary of the science.

What I mean is that this graphic is out of the "Summary for Policymakers", which is approved by policymakers and a fairly political document. 

Less formalistically, all of the infographics in the Summary for Policymakers are carefully chosen and one goal of the Summary for Policymakers is clearly to give ammunition for action (e.g. the infographic right above the cited one displays impacts in scenarios without any additional adaptation by end of century, which seems like a very implausible assumption as a default and one that makes a lot more sense when the goal is to display gravity of climate impacts rather than making a best guess of climate impacts).

ClimateDoc @ 2024-07-23T19:50 (+2)

Whilst policymakers have a substantial role in drafting the SPM, I've not generally heard scientists complain about political interference in writing it. Some heavy fossil fuel-producing countries have tried removing text they don't like, but didn't come close to succeeding. The SPM has to be based on the underlying report, so there's quite a bit of constraint. I don't see anything to suggest the SPM differs substantially from researchers' consensus. The initial drafts by scientists should be available online, so it could be checked what changes were made by the rounds of review.

When people say things are "politicized", it indicates to me that they have been made inaccurate. I think it's a term that should be used with great care re the IPCC, since giving people the impression that the reports are inaccurate or political gives people reason to disregard them.

I can believe the no adaptation thing does reflect the literature, because impacts studies do very often assume no adaptation, and there could well be too few studies that credibly account for adaptation to do a synthesis. The thing to do would be to check the full report to see if there is a discrepancy before presuming political influence. Maybe you think the WGII authors are politicised - that I have no particular knowledge of, but again climate impacts researchers I know don't seem concerned by it.

Toby_Ord @ 2024-07-12T16:02 (+29) in response to The Precipice Revisited

I hadn't seen that and I agree that it looks like a serious negative update (though I don't know what exactly it is measuring). Thanks for drawing it to my attention. I'm also increasingly worried about the continued unprecedentedly hot stretch we are in. I'd been assuming it was just one of these cases of a randomly hot year that will regress back to the previous trend, but as it drags on the hypothesis of there being something new happening does grow in plausibility.

Overall, 'mixed' might be a better summary of Climate.

jackva @ 2024-07-23T18:52 (+3)

Sorry for the delay!

Here is a good summary of whether or not the recent warming should make us worried more: https://www.carbonbrief.org/factcheck-why-the-recent-acceleration-in-global-warming-is-what-scientists-expect/

It is nuanced, but I think the TLDR is that recent observations are within the expected range (the trend observed since 2009 is within the range expected by climate models, though the observations are noisy and uncertain, as are the models).

 

Tyler Kolota @ 2024-07-23T18:47 (+1) in response to AMA: Beast Philanthropy's Darren Margolias

When & where can I watch the video?

ClimateDoc @ 2024-07-20T20:06 (+6) in response to The Precipice Revisited

"IPCC reports are famously politicized documents"

Why do you say that? It's not my impression when it comes to physical changes and impacts. (Not so sure about the economics and mitigation side.)

Though I find the "burning embers" diagrams like the one you show hard to interpret as what "high" risk/impact means doesn't seem well-defined and it's not clear to me it's being kept consistent between reports (though most others seem to love them for some reason...).

jackva @ 2024-07-23T18:46 (+5)

It is true that this does not hold for the long-form summary of the science.

What I mean is that this graphic is out of the "Summary for Policymakers", which is approved by policymakers and a fairly political document. 

Less formalistically, all of the infographics in the Summary for Policymakers are carefully chosen and one goal of the Summary for Policymakers is clearly to give ammunition for action (e.g. the infographic right above the cited one displays impacts in scenarios without any additional adaptation by end of century, which seems like a very implausible assumption as a default and one that makes a lot more sense when the goal is to display gravity of climate impacts rather than making a best guess of climate impacts).

JWS 🔸 @ 2024-07-23T16:55 (+14) in response to defun's Quick takes

It's an unfortunate naming clash, there are different ARC Challenges:

ARC-AGI (Chollet et al) - https://github.com/fchollet/ARC-AGI

ARC (AI2 Reasoning Challenge) - https://allenai.org/data/arc

These benchmarks are reporting the second of the two.

LLMs (at least without scaffolding) still do badly on ARC, and I'd wager Llama 405B still doesn't do well on the ARC-AGI challenge, and it's telling that all the big labs release the 95%+ number they get on AI2-ARC, and not whatever default result they get with ARC-AGI...

(Or in general, reporting benchmarks where they can go OMG SOTA!!!! and not helpfully advance the general understanding of what models can do and how far they generalise. Basically, traditional benchmark cards should be seen as the AI equivalent of "IN MICE")

EJT @ 2024-07-23T18:15 (0)

Thanks!

EJT @ 2024-07-23T16:48 (+9) in response to defun's Quick takes

Wait, all the LLMs get 90+ on ARC? I thought LLMs were supposed to do badly on ARC.

JWS 🔸 @ 2024-07-23T16:55 (+14)

It's an unfortunate naming clash, there are different ARC Challenges:

ARC-AGI (Chollet et al) - https://github.com/fchollet/ARC-AGI

ARC (AI2 Reasoning Challenge) - https://allenai.org/data/arc

These benchmarks are reporting the second of the two.

LLMs (at least without scaffolding) still do badly on ARC, and I'd wager Llama 405B still doesn't do well on the ARC-AGI challenge, and it's telling that all the big labs release the 95%+ number they get on AI2-ARC, and not whatever default result they get with ARC-AGI...

(Or in general, reporting benchmarks where they can go OMG SOTA!!!! and not helpfully advance the general understanding of what models can do and how far they generalise. Basically, traditional benchmark cards should be seen as the AI equivalent of "IN MICE")

Nathan Young @ 2024-07-23T16:52 (+3) in response to Nathan Young's Quick takes

I want to say thanks to people involved in the EA endeavour. I know things can be tough at times. You didn't have to care about this stuff, but you do. Thank you, it means a lot to me. Let's make the world better!

defun @ 2024-07-23T15:22 (+14) in response to defun's Quick takes

Meta has just released Llama 3.1 405B. It's open-source and in many benchmarks it beats GPT-4o and Claude 3.5 Sonnet:

Zuck's letter "Open Source AI Is the Path Forward".

EJT @ 2024-07-23T16:48 (+9)

Wait, all the LLMs get 90+ on ARC? I thought LLMs were supposed to do badly on ARC.

defun @ 2024-07-23T15:22 (+14) in response to defun's Quick takes

Meta has just released Llama 3.1 405B. It's open-source and in many benchmarks it beats GPT-4o and Claude 3.5 Sonnet:

Zuck's letter "Open Source AI Is the Path Forward".

SummaryBot @ 2024-07-23T15:01 (+1) in response to Vida Plena’s 2023 Impact Report: Measuring Progress and Looking Ahead

Executive summary: Vida Plena's 2023 Impact Report shows promising results in providing mental health care to vulnerable communities in Ecuador, with participants experiencing significant reductions in depression symptoms, though challenges and areas for improvement remain.

Key points:

  1. 434 participants received group therapy, with 68% showing clinically significant improvement in depression symptoms.
  2. Program reached vulnerable groups, including those experiencing food insecurity, female heads of households, and migrants/refugees.
  3. Challenges include improving retention rates and increasing the average reduction in PHQ-9 scores.
  4. Limitations of the report include lack of a control group and potential biases in data collection.
  5. Future plans involve expanding to rural areas, enhancing monitoring and evaluation systems, and adapting the therapy model to local contexts.
  6. Organization seeks feedback, connections in global mental health, and donations to support their mission.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-07-23T14:59 (+1) in response to How Canada's first EAGx may lead to change

Key points:

  1. EAGxToronto aims to provide networking, workshops, and presentations for 350-400 attendees focused on addressing global challenges.
  2. The conference's theory of change involves bringing together EA-familiar individuals for high-quality content and networking opportunities.
  3. Three key leverage points identified for maximizing impact: 
    a) Fostering a culture of playful self-organization 
    b) Facilitating excellent small group dialogues 
    c) Enabling participants to see the big picture and connect the dots
  4. These leverage points are mutually supportive and can be enhanced through specific strategies and participant behaviors.
  5. The author suggests that organizers and participants can actively support these dynamics to improve the conference's outcomes.
  6. The post encourages EA organizers to collaboratively explore and refine mental models for event planning and community building.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Vasco Grilo🔸 @ 2024-07-23T14:33 (+2) in response to EA Forum feature suggestion thread

Here is an example with text in a table aligned to the left (select all text -> cell properties -> table cell text alignment).

Statistic | Annual epidemic/pandemic deaths as a fraction of the global population
Mean | 0.236 %
Minimum | 0
5th percentile | 1.19*10^-6
10th percentile | 3.60*10^-6
Median | 0.0276 %
90th percentile | 0.414 %
95th percentile | 0.684 %
Maximum | 10.3 %
Will Howard @ 2024-07-23T14:46 (+3)

Ah thanks, I didn't know we had that feature. In that case we should be able to fix this when importing, I'll get back to you when it's done

OllieBase @ 2024-07-23T14:44 (+10) in response to How Canada's first EAGx may lead to change

Thanks for writing this! I'm excited to see organisers and advisors really dive into theories of change for EAGx.

That said, I think the models here might benefit from looking at the existing data. Reading this post, you might think this will be the first EA Global or EAGx event to take place—but there have been many, and we have lots of data on what people find useful.

The most useful post is probably this one

I also gathered data about what part of the event was most valuable from 2023 events (from follow-up surveys, so asking people what was valuable several months after the event). Sharing below.