Latest comments on the EA Forum

Comments on 2024-10-23

lroberts @ 2024-10-23T07:32 (+1) in response to Announcing: biosecurity.world

Great work!

Chris Leong @ 2024-10-20T14:51 (+4) in response to Chris Leong's Quick takes

There is a world that needs to be saved. Saving the world is a team sport.  All we can do is to contribute our part of the puzzle, whatever that may be and no matter how small, and trust in our companions to handle the rest. There is honor in that, no matter how things turn out in the end.

yanni kyriacos @ 2024-10-23T05:07 (+2)

hear hear 👍🏼👍🏼

Wes Reisen @ 2024-10-23T00:46 (+1) in response to (Continuously developing article) It's time world leaders get their values aligned. Here's how we can align them this year:

If you know about psychology or world leaders, please let me know how true this might be. If it isn't true, we'd have to work out how we might handle a world where only some people have their morals aligned. My first thought on this is that:

  1. a world where more world leaders have their values aligned is probably AT LEAST better than the current status quo.
  2. Over time, these people might be phased out, and maybe one strategy is to try to phase them out faster.
Wes Reisen @ 2024-10-23T00:51 (+1)

Presumably, a more morally aligned global order might try to make itself more morally aligned. We only need this to work enough for it to sort itself out.

Wes Reisen @ 2024-10-23T00:46 (+1) in response to (Continuously developing article) It's time world leaders get their values aligned. Here's how we can align them this year:

If you know about psychology or world leaders, please let me know how true this might be. If it isn't true, we'd have to work out how we might handle a world where only some people have their morals aligned. My first thought on this is that:

  1. a world where more world leaders have their values aligned is probably AT LEAST better than the current status quo.
  2. Over time, these people might be phased out, and maybe one strategy is to try to phase them out faster.
Wes Reisen @ 2024-10-23T00:50 (+1)

Maybe replacing the keys to power?

Wes Reisen @ 2024-10-20T12:55 (+1) in response to (Continuously developing article) It's time world leaders get their values aligned. Here's how we can align them this year:

A world leader's goals are probably adjustable one way or another. In the case where a world leader is committed to values that depend on something else (e.g., whatever is seen as "patriotic", or whatever their religion says, though this only applies to some religions), changing those things changes their values. That might be very difficult for some value systems, but luckily [a commitment to the values of something that can easily change] has plenty of good logical arguments against it (https://youtu.be/wRHBwxC8b8I), so presenting those arguments could be a better strategy for changing someone's mind when the commitment itself is hard to change directly.

Wes Reisen @ 2024-10-23T00:46 (+1)

If you know about psychology or world leaders, please let me know how true this might be. If it isn't true, we'd have to work out how we might handle a world where only some people have their morals aligned. My first thought on this is that:

  1. a world where more world leaders have their values aligned is probably AT LEAST better than the current status quo.
  2. Over time, these people might be phased out, and maybe one strategy is to try to phase them out faster.
Wes Reisen @ 2024-10-23T00:38 (+1) in response to (Continuously developing article) It's time world leaders get their values aligned. Here's how we can align them this year:

One way to advertise this idea is to frame it as a reminder of what the UN/UN Charter was for, and as an improvement upon it.



Comments on 2024-10-22

Wes Reisen @ 2024-10-22T22:58 (+3) in response to Tomorrow we fight for the future of one billion chickens.

Is there anything I can do to help?

Arepo @ 2024-10-19T03:32 (+2) in response to What wheels do EAs and rationalists reinvent?

The extent to which you think they're the same is going to depend heavily on 

  1. your long term moral discounting rate (if it's high, then you're going to be equally concerned between highly destructive events that very likely won't kill everyone and comparably destructive events that might),
  2. your priors on specific events leading to human extinction (which, given the lack of data, will have a strong impact on your conclusion), and
  3. your change in credence of civilisation flourishing post-catastrophe.

Given the high uncertainty behind each of those considerations (arguably excluding the first), I think it's too strong to say they're 'not the same at all'. I don't know what you mean by fields only looking into regional disasters: how are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?

Robi Rahman @ 2024-10-22T20:15 (+2)

I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.

But one of the key differences between EA/LT and these fields is that we're almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn't be very high. Under that assumption, the work done is indeed very different in what it accomplishes.

I don't know what you mean by fields only looking into regional disasters: how are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?

I'm skeptical that the insurance industry isn't bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don't agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I'm not aware of, but I doubt it.)

Charlie_Guthmann @ 2024-10-18T04:47 (+9) in response to Help with an ordinary career

A lot of the advice for young EAs is reasonably targeted at folks much more impressive than I am, for good reasons I think

You may well have already, but it's still probably worth skimming the earning-to-give sections of 80k/Probably Good and looking at their job boards.

Hard to say more without knowing your interests and how "mediocre" you are. You can sign up for career coaching with 80k and/or Probably Good. Also, it could be useful to reach out to other EA students studying math at other universities and talk to them. You can query the link below by career stage.

 

https://forum.effectivealtruism.org/people-directory?utm_source=ea_hub&utm_medium=website

Probably Good @ 2024-10-22T19:24 (+4)

Just wanted to chime in and say we'd be happy to chat more about your specific situation and collaborate on finding impactful career options. Probably Good's aim is to help more people make a difference within their unique circumstances. It totally makes sense to feel some uncertainty/discouragement, but as others have mentioned, you could be underselling yourself – especially since perceived 'prestige' doesn't always correlate with how qualified you are. If you're interested, feel free to reach out directly or apply for advising. We'd be happy to help!

NickLaing @ 2024-10-22T18:46 (+5) in response to Tomorrow we fight for the future of one billion chickens.

This is one of the most exciting things I've seen on the forum all year, amazing work.

It also re-affirms for me that court cases can be one of the most effective ways to both make progress and raise awareness. Many climate wins in the last few years have been through the law.

On that note, apropos of everything while also being completely off topic...

...Can someone please sue OpenAI?

christian @ 2024-10-22T18:29 (+1) in response to Predictions as Public Works Project — What Metaculus Is Building Next

Hey Ozzie, I'll add that it's also a brand new post. But yes, your feedback is/was definitely appreciated. 

Ozzie Gooen @ 2024-10-22T18:42 (+2)

Ah, I didn't quite notice that at the time - that wasn't obvious from the UI (you need to hover over the date to see the time of it being posted).

Anyway, happy this was resolved! Also, separately, kudos for writing this up, I'm looking forward to seeing where Metaculus goes this next year +.

Ozzie Gooen @ 2024-10-22T18:22 (+2) in response to Predictions as Public Works Project — What Metaculus Is Building Next

(The opening line was removed)

christian @ 2024-10-22T18:29 (+1)

Hey Ozzie, I'll add that it's also a brand new post. But yes, your feedback is/was definitely appreciated. 

Ozzie Gooen @ 2024-10-22T18:10 (+2) in response to Predictions as Public Works Project — What Metaculus Is Building Next

I feel like the bulk of this is interesting, but the title and opening come off as more grandiose than necessary. 

Ozzie Gooen @ 2024-10-22T18:22 (+2)

(The opening line was removed)

Ozzie Gooen @ 2024-10-22T18:10 (+2) in response to Predictions as Public Works Project — What Metaculus Is Building Next

I feel like the bulk of this is interesting, but the title and opening come off as more grandiose than necessary. 

BenWilliamson @ 2023-10-31T01:18 (+3) in response to Want to help animals? Focus on corporate decisions, not peopleā€™s plates.

This post is far too assertive considering the weak evidence given in the post. 

The blog points out that it may be hard to measure the success of campaigns directed at customers. Unfortunately it makes a large unjustified logical leap to assume this means that these campaigns are unsuccessful. As referred to in the EA [Hits-based giving] post, it "arrogantly" places one solution over another shortly after the very experts they consulted say "there's no real answer" and "we need more research". In fact, there's no hard evidence given to support the post's headline at all.

Some of the links seem tenuous too; here's an example. The point is made that "60 percent of Americans who say they're vegetarian on surveys also say that they've eaten meat in the past 24 hours". This links to a BusinessInsider post, which links to a PsychologyToday post, which quotes a CNN poll that I can't find anywhere (feel free to link below if you find it). The PsychologyToday post also cites a 20-year-old study in which a small number of respondents claim to eat <10g of meat per day in an initial poll, but not in a follow-up 3-10 days later. While the study does not claim that this is due to dishonesty or bias, the linking posts claim both lies and social desirability bias.

This example shows how the author is writing their own narrative onto a tertiarily linked study, and it greatly lowers the confidence I can have in the links I didn't check (Brandolini's law).

I'd really like to see some sort of justification of having this low-quality post linked among the otherwise well-written blogs on this forum.

amaury lorin @ 2024-10-22T17:32 (+1)

Also, I don't see where it justifies the claim that targeting corporations works.
Is it more effective to convert high-profile individuals in animal slaughter-related corporations, to pass regulations for these corporations, to become experts working to create welfare standards from within, or what? It doesn't tell me much about where to orient my animal welfare charity.

Ozzie Gooen @ 2024-10-22T17:27 (+4) in response to How Likely Are Various Precursors of Existential Risk?

This is neat to see!

Obviously, some of these items are much more likely than others to cost 100M+ lives.

WW3 seems like a big wild card to me. I'd be curious whether there are any/many existing attempts to estimate what it would look like and how bad it would be.

arvomm @ 2024-10-22T16:54 (+4) in response to Bargaining among worldviews

I agree with you Oscar, and we've highlighted this in the summary table, where I borrowed your 'contrasting project preferences' terminology. Still, I think it could be worth drawing the conceptual distinctions because they might help identify places where bargains can occur.

I liked your example too! We tried to add a few (a GCR-focused agent believes AI advances are imminent while a GHD agent is skeptical; an AI safety view borrows resources from a global health view to fund urgent AI research; the meat-eater problem; an agent supporting gun rights and another supporting gun control both fund a neutral charity like Oxfam...), but we could have done better in highlighting them. I've also added these to the table.

I found your last mathematical note a bit confusing because I originally read A,B,C as projects they might each support. But if it's outcomes (i.e. pairs of projects they would each support), then I think I'm with you!

OscarD🔸 @ 2024-10-22T17:14 (+2)

Nice!

Hmm, yes actually I think my notation wasn't very helpful. Maybe the simpler framing is that if the agents have opposite preference rankings, but convex ratings such that the middling option is more than halfway between the best and worst options, then a bargain is in order.

FelixM @ 2024-10-22T17:02 (+3) in response to Tomorrow we fight for the future of one billion chickens.

This seems like a great campaign with a chance of at least raising awareness, or of leading to legal changes if it succeeds - and I hope you get a good result at the trial. I had a question about something else the Humane League was involved in, though: in this article, which mentions the League, Defra's policy on chickens being carried by their feet was described as a new Labour policy decision, but in this article discussing the issue a few months ago, also mentioning the League, it was presented as a decision by the Conservative government. It seems like they can't both have created this policy - so is it true that both major parties in the UK are against basic animal rights, or is this previous decision more nuanced than that?

OscarD🔸 @ 2024-10-19T19:37 (+8) in response to Bargaining among worldviews

Nice, I liked the examples you gave (e.g. the meat-eater problem) and I think the post would be stronger if each type had a practical example. E.g. another example I thought of is that a climate change worldview might make a bet about the amount of fossil fuels used in some future year, not because their empirical views are different, but because money would be more valuable to them in slower-decarbonising worlds (this would be 'insurance' in your taxonomy I think).

Compromises and trades seem structurally the same to me. The key feature is that the two worldviews have contrasting but not inverse preferences, where there is some 'middle' choice that is more than halfway between the worst choice and the best choice from the POV of both worldviews. It doesn't seem to matter greatly whether the worst choice is neutral or negative according to each worldview. Mathematically, we could say if one worldview's utility function across options is U and the other worldview's is V, then we are talking about cases where U(A) > U(B) > U(C) and V(A) < V(B) < V(C) and U(B) + V(B) > max(U(A) + V(A), U(C) + V(C)).
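
A minimal Python sketch of this condition, with made-up option names and utility values (not taken from the comment); the middling option B is more than halfway between worst and best for both worldviews, so the inequality holds:

    # Two worldviews with opposite rankings over options A, B, C (illustrative numbers).
    U = {"A": 1.0, "B": 0.7, "C": 0.0}  # worldview 1 prefers A > B > C
    V = {"A": 0.0, "B": 0.7, "C": 1.0}  # worldview 2 prefers C > B > A

    def middle_option_bargain(U, V, middle="B"):
        """True if the 'middle' option beats both extremes on summed utility,
        i.e. U(B) + V(B) > max(U(A) + V(A), U(C) + V(C))."""
        others = [k for k in U if k != middle]
        return U[middle] + V[middle] > max(U[k] + V[k] for k in others)

    print(middle_option_bargain(U, V))  # True: 0.7 + 0.7 = 1.4 beats 1.0, so both gain by settling on B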

arvomm @ 2024-10-22T16:54 (+4)

I agree with you Oscar, and we've highlighted this in the summary table, where I borrowed your 'contrasting project preferences' terminology. Still, I think it could be worth drawing the conceptual distinctions because they might help identify places where bargains can occur.

I liked your example too! We tried to add a few (a GCR-focused agent believes AI advances are imminent while a GHD agent is skeptical; an AI safety view borrows resources from a global health view to fund urgent AI research; the meat-eater problem; an agent supporting gun rights and another supporting gun control both fund a neutral charity like Oxfam...), but we could have done better in highlighting them. I've also added these to the table.

I found your last mathematical note a bit confusing because I originally read A,B,C as projects they might each support. But if it's outcomes (i.e. pairs of projects they would each support), then I think I'm with you!

Gloria Mogoi @ 2024-10-22T16:45 (+1) in response to Exercise for 'Putting it into Practice'

  1. I think climate change, AI, biotechnology, and bioweapons pose a bigger risk. The reason is that most of these problems are human-made, and we think the risks won't be that high, but in the long run we get affected big time.

    1. For AI, we are not 100% sure of what it is capable of.

    2. We might be aware of how to mitigate climate crises, but we are waiting for the problem instead of solving the root cause.

  2. I will do more research and reading on the same.

    I can give back to the community by organizing charity events where we donate food, clothes, sanitary products, and medicine.

SummaryBot @ 2024-10-22T16:45 (+2) in response to Three journeys for effective altruism

Executive summary: The CEO of CEA outlines three key journeys for effective altruism: combining individual and institutional strengths, improving internal and external communications, and continuing to engage with core EA principles.

Key points:

  1. EA needs to build up trustworthy institutions while maintaining the power of individual stories and connections.
  2. As EA grows, it must improve both internal community communications and external messaging to the wider world.
  3. Engaging with core EA principles (e.g. scope sensitivity, impartiality) remains crucial alongside cause-specific work.
  4. CEA is committed to a principles-first approach to EA, while recognizing the value of cause-specific efforts.
  5. AI safety is expected to remain the most featured cause, but other major EA causes will continue to have meaningful representation.
  6. The CEO acknowledges uncertainty in EA's future path and the need for ongoing adaptation.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-22T16:43 (+1) in response to Distinguishing ways AI can be "concentrated"

Executive summary: The concept of AI concentration needs to be clarified by distinguishing between three dimensions: development, service provisioning, and control, each of which can vary independently and has different implications for AI risks and governance.

Key points:

  1. Three distinct dimensions of AI concentration: development (who creates AI), service provisioning (who provides AI services), and control (who directs AI systems)
  2. Current trends show concentration in AI development and moderate concentration in service provisioning, but more diffuse control
  3. Distinguishing these dimensions is crucial for accurately assessing AI risks, particularly misalignment concerns
  4. Decentralized control over AI systems may reduce the risk of a unified, misaligned super-agent
  5. More precise language is needed when discussing AI concentration to avoid miscommunication and better inform policy decisions

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-22T16:42 (+1) in response to Tomorrow we fight for the future of one billion chickens.

Executive summary: The Humane League UK is challenging the legality of fast-growing chicken breeds ("Frankenchickens") in the UK High Court, aiming to improve the lives of one billion chickens raised for food annually.

Key points:

  1. The legal battle against the Department for Environment, Food & Rural Affairs (Defra) has been ongoing for three years, with an appeal hearing on October 23-24, 2024.
  2. "Frankenchickens" are bred to grow unnaturally fast, leading to severe health issues and suffering.
  3. The case argues that fast-growing breeds violate the Welfare of Farmed Animals Regulations 2007.
  4. A favorable ruling could force Defra to create new policies discouraging or banning fast-growing chicken breeds.
  5. Even if unsuccessful, the case raises public awareness about the issue of fast-growing chicken breeds.
  6. The Humane League UK is seeking donations and support for their ongoing animal welfare efforts.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Jason @ 2024-10-22T15:39 (+2) in response to What wheels do EAs and rationalists reinvent?

I don't think most development economists would endorse the idea that a viable pathway exists for LDCs to escape the poverty trap based on ~$600-800MM/year in EA funding (even assuming you could concentrate all GH&D funding on a single project) and near-zero relevant political influence, either. And those are the resources that GH&D EA has on the table right now in my estimation.

To fund something at even the early stages, one needs either the ability to execute any resulting project or the ability to persuade those who do. The type of projects you're implying are very likely to require boatloads of cash, widespread and painful-to-some changes in the LDCs, or both. Even conditioned on a consensus within development economics, I am skeptical that EA has that much ability to get Western foreign aid departments and LDC politicians to do what the development economists say they should be doing. 

Matrice Jacobine @ 2024-10-22T16:32 (+1)

Okay, so why is the faction of EA with ostensibly the most funds the one with "near-zero relevant political influence" while one of the animalist faction's top projects is creating an animalist movement in East Asia from scratch, and the longtermist faction has the president of RAND? That seems like a choice to divide influence that way in the first place.

Elliot Billingsley @ 2024-10-22T16:32 (+1) in response to Retrospective on EA Nigeria Summit: Our Successes and Learnings

Congrats EA Nigeria!

Is there any place to see more about who spoke about what? Any recorded talks?

Matrice Jacobine @ 2024-10-22T12:28 (+5) in response to What wheels do EAs and rationalists reinvent?

The concept was coined by Singer, who is an EA, but he coined it in 1981 and it has been a term of mainstream moral philosophy for a while.

Ulrik Horn @ 2024-10-22T16:20 (+2)

Ah that might explain it - it is coming from philosophy not psychology!

defun @ 2024-10-22T16:09 (+9) in response to defun's Quick takes

Anthropic has just launched "computer use". "developers can direct Claude to use computers the way people do".

https://www.anthropic.com/news/3-5-models-and-computer-use

Jason @ 2024-10-22T15:54 (+4) in response to Author of Much-Discussed Forum Piece Defending Eugenics Works for a Magazine with Far-Right Links

Trigger warning on the linkposted article: contains a quotation from someone -- who I emphasize is not D.F. -- which I would characterize as defending sexual violence against children.

Matrice Jacobine @ 2024-10-22T12:31 (+1) in response to What wheels do EAs and rationalists reinvent?

GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable.

Again, that is exactly what I am calling "constantly retreading the streetlight-illuminated ground". I do not think most institutional development economists would endorse the idea that LDCs can escape the poverty trap through short-term health interventions alone.

Jason @ 2024-10-22T15:39 (+2)

I don't think most development economists would endorse the idea that a viable pathway exists for LDCs to escape the poverty trap based on ~$600-800MM/year in EA funding (even assuming you could concentrate all GH&D funding on a single project) and near-zero relevant political influence, either. And those are the resources that GH&D EA has on the table right now in my estimation.

To fund something at even the early stages, one needs either the ability to execute any resulting project or the ability to persuade those who do. The type of projects you're implying are very likely to require boatloads of cash, widespread and painful-to-some changes in the LDCs, or both. Even conditioned on a consensus within development economics, I am skeptical that EA has that much ability to get Western foreign aid departments and LDC politicians to do what the development economists say they should be doing. 

Agnes Stenlund @ 2024-10-22T15:36 (+13) in response to Agnes Stenlund's Quick takes

Me and a working group at CEA have started scoping out improvements for effectivealtruism.org. Our main goals are:

  1. Improve understanding of what EA is (clarify and simplify messaging, better address common misconceptions, showcase more tangible examples of impact, people, and projects)
  2. Improve perception of EA (show more of the altruistic and other-directedness parts of EA alongside the effective, pragmatic, results-driven parts, feature more testimonials and impact stories from a broader range of people, make it feel more human and up-to-date)
  3. Increase high-value actions (improve navigation, increase newsletter and VP signups, make it easier to find actionable info)

For the first couple of weeks, I'll be testing how the current site performs against these goals, then move on to the redesign, which I'll user-test against the same goals.

If you've visited the current site and have opinions, I'd love to hear them. Some prompts that might help:

  • Do you remember what your first impression was?
  • Have you ever struggled to find specific info on the site?
  • Is there anything that annoys you?
  • What do you think could be confusing to someone who hasn't heard about EA before?
  • What's been most helpful to you? What do you like?

If you prefer to write your thoughts anonymously you can do so here, although I'd encourage you to comment on this quick take so others can agree or disagree vote (and I can get a sense of how much the feedback resonates).

Ben Millwood🔸 @ 2024-10-20T15:35 (+9) in response to Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent

obviously there's not really any objective way to settle the matter, but I disagree that criticizers acquire more social capital than doers. When I think of the people who seem to me most prestigious in EA, it's all people who got there by doing things, not by criticising anything.

I do agree that some people with a lot of social capital are seemingly oblivious to how that capital affects the weight of what they say, and I think it's good to point out when this is happening, but the examples I can think of are still people who got that capital by doing things.

ethai @ 2024-10-22T14:48 (+1)

This could be true; I don't have a good sense of who's most prestigious in EA aside from the obvious*. My claim is more that I've seen this happen in specific examples and that it would be bad if it were happening all the time, but I am not attuned enough to broad EA social dynamics to know whether it is.

*The obvious ones are the ones who are prestigious because they Did Something a long time ago, which I think doesn't really count as a counterexample to the critical tendency as it manifests now.

MichaelStJules @ 2024-10-21T23:19 (+26) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

FWIW, when I have a weighted factor model to build, I think about how I can turn it into a BOTEC, and try to get it close(r) to a BOTEC. I did this for my career comparison and a geographic weighted factor model.

MichaelStJules @ 2024-10-22T13:11 (+4)

And I think this usually means some factors, in their units, like scale (e.g. number of individuals, years of life, DALYs, amount of suffering) and probability of success (%), should be multiplied. And usually not weighted at all, except when you want to calculate a factor multiple ways and average them. Otherwise, you'll typically get weird units.

And what is the unit conversion between DALYs and a % chance of success, say? This doesn't make much sense, and probably neither will any weights in a weighted sum. Adding factors with different units together doesn't make much sense if you want to interpret the final results in a scope-sensitive way.

This all makes most sense if you only have one effect you're estimating, e.g. one direct effect and no indirect effects. Different effects should be added. A more complete model could then be the sum of multiplicative models, one multiplicative model for each effect.

EDIT: But also BOTECs and multiplicative models may be more sensitive to their factors, and more sensitive to errors in factor values when ranking. So, it may be best to do sensitivity analysis, with a range of values for the factors. But that's more work.
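
A small illustrative sketch of the contrast, using made-up numbers (the intervention names, DALY figures, and weights are assumptions, not from the comment): a weighted sum that adds DALYs to a probability, versus a BOTEC-style product that yields expected DALYs:

    # Made-up numbers contrasting a weighted sum of mismatched units with a BOTEC-style product.
    interventions = {
        "A": {"scale_dalys": 1_000_000, "p_success": 0.01},
        "B": {"scale_dalys": 50_000, "p_success": 0.50},
    }
    weights = {"scale_dalys": 0.5, "p_success": 0.5}  # unit-less mix used by the weighted sum

    for name, factors in interventions.items():
        weighted_sum = sum(weights[k] * factors[k] for k in factors)  # adds DALYs to a probability
        botec = factors["scale_dalys"] * factors["p_success"]         # expected DALYs; units make sense
        print(name, round(weighted_sum, 2), botec)

    # The weighted sum ranks A far ahead only because raw DALY numbers dwarf probabilities;
    # the multiplicative BOTEC gives 10,000 vs 25,000 expected DALYs and ranks B first.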

NickLaing @ 2024-10-22T12:38 (+8) in response to New Open Access Book: Weighing Animal Welfare, edited by Bob Fischer

Thanks so much this is great, so happy you turned this mountain of impressive work into a book.

Am thinking about what kind of person to recommend it to. After a bit of a look, it seems great for my philosopher friend, but maybe on the slightly-too-heavy side for a couple of animal rights workers I know?

What do you think about who might enjoy this, or who it might be most helpful for?

Bob Fischer @ 2024-10-22T12:51 (+6)

Thanks for asking, Nick! Although we tried to make it as accessible as possible, it's still pitched to academics first and foremost. For those who just want the big picture, this podcast episode is probably the best option right now. We're also working on an article-length overview, but it may be a few months before that's available. I'll share it here when it is!

NickLaing @ 2024-10-22T12:38 (+8) in response to New Open Access Book: Weighing Animal Welfare, edited by Bob Fischer

Thanks so much this is great, so happy you turned this mountain of impressive work into a book.

Am thinking about what kind of person to recommend it to. After a bit of a look, it seems great for my philosopher friend, but maybe on the slightly-too-heavy side for a couple of animal rights workers I know?

What do you think about who might enjoy this, or who it might be most helpful for?

Jason @ 2024-10-22T01:09 (+4) in response to What wheels do EAs and rationalists reinvent?

The academic fields most relevant to GH&D work are fairly mature. Because of that, it's reasonable for GH&D to focus less on producing stuff that is more like basic research / theory generation (academia is often strong in this and had a big head start) and devote its resources more toward setting up a tractable implementation of something (which is often not academia's comparative advantage for various reasons).

GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable. You haven't identified any specific proposed area to study, but my suspicion is that most of them would require sustained political commitment over many years in the LDC and/or large cash infusions beyond the bankroll of EA GH&D to potentially work.

Matrice Jacobine @ 2024-10-22T12:31 (+1)

GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable.

Again, that is exactly what I am calling "constantly retreading the streetlight-illuminated ground". I do not think most institutional development economists would endorse the idea that LDCs can escape the poverty trap through short-term health interventions alone.

Ulrik Horn @ 2024-10-22T11:28 (+3) in response to What wheels do EAs and rationalists reinvent?

Moral circle. There are so many frameworks from psychology on morality, empathy etc. But maybe I am missing some nuance that makes moral circle distinct from all of these but to date I have not seen it.

Matrice Jacobine @ 2024-10-22T12:28 (+5)

The concept was coined by Singer, who is an EA, but he coined it in 1981 and it has been a term of mainstream moral philosophy for a while.

Moritz Stumpe 🔸 @ 2024-10-22T12:18 (+2) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

Thanks for writing this up and for highlighting this weakness in our prioritisation report (example 1).

Since the publication of this report (which was quite an early piece of research for me), I've built a lot more of these models and strongly agree that it's important to not just blindly use a weighted average. (Didn't change anything about our research outcomes in this case, but it could have important effects elsewhere.) Geometric mean is important. I also sometimes use completely different scoring tools (e.g., multiplication, more BOTEC style, as MichaelStJules has commented). It's always helpful from my experience to experiment with different methods/perspectives.
 

Jan-Ole Hesselberg @ 2024-10-22T11:45 (+1) in response to Two directions for research on forecasting and decision making

Thank you for this insightful review @Paal.

Ulrik Horn @ 2024-10-22T11:28 (+3) in response to What wheels do EAs and rationalists reinvent?

Moral circle. There are so many frameworks from psychology on morality, empathy etc. But maybe I am missing some nuance that makes moral circle distinct from all of these but to date I have not seen it.

Joel Tan🔸 @ 2024-10-22T04:48 (+13) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

I generally agree, and CEARCH uses geomeans for our geographic prioritisation WFMs, but I would also express caution - multiplicative WFMs are also more sensitive to errors in individual parameters, so if your data is poor you might prefer the additive model.

Also general comment on geomeans vs normal means - I think of geomeans as useful when you have different estimates of some true value, and the differences reflect methodological differences (vs cases where you are looking to average different estimates that reflect real actual differences, like strength of preference or whatever)
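
A quick illustrative check of that sensitivity point, with made-up scores on a 1-10 scale (not from the comment), showing how much each aggregation moves when a single factor is badly mis-scored:

    # Made-up scores: how much does each aggregation move if one factor is mis-scored?
    from statistics import fmean, geometric_mean

    true_scores = [8, 7, 6, 5]
    noisy_scores = [8, 7, 6, 1]  # one factor badly under-scored

    for label, scores in [("true", true_scores), ("noisy", noisy_scores)]:
        print(label, round(fmean(scores), 2), round(geometric_mean(scores), 2))

    # Arithmetic mean falls 6.5 -> 5.5 (about 15%), geometric mean falls about 6.4 -> 4.3
    # (about 33%): the multiplicative aggregation punishes the single bad score much harder.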

Soemano Zeijlmans @ 2024-10-22T09:59 (+1)

Good point on the error sensitivity. The geometric mean penalizes low scores more, so it increases the probability of a false negative/type II error: an alternative that should be prioritised is not prioritised.

MathiasKB🔸 @ 2024-10-22T07:48 (+2) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

Naively, is there a case for using the average of the two?

Joel Tan🔸 @ 2024-10-22T07:53 (+2)

I don't see any strong theoretical reason to do so, but I might be wrong. In a way it doesn't matter, because you can always rejig your weights to penalize/boost one estimate over another.

Joel Tan🔸 @ 2024-10-22T04:48 (+13) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

I generally agree, and CEARCH uses geomeans for our geographic prioritisation WFMs, but I would also express caution - multiplicative WFMs are also more sensitive to errors in individual parameters, so if your data is poor you might prefer the additive model.

Also general comment on geomeans vs normal means - I think of geomeans as useful when you have different estimates of some true value, and the differences reflect methodological differences (vs cases where you are looking to average different estimates that reflect real actual differences, like strength of preference or whatever)

MathiasKB🔸 @ 2024-10-22T07:48 (+2)

Naively, is there a case for using the average of the two?

Karthik Tadepalli @ 2024-10-21T16:37 (+9) in response to What wheels do EAs and rationalists reinvent?

This is a restatement of the law of iterated expectations. LIE says E[X] = E[E[X|Y]]. Replace X with an indicator variable for whether some hypothesis H is true, and interpret Y as an indicator for binary evidence E about H. Then this immediately gives you a conservation of expected evidence: if P(H|E) > P(H), then P(H|¬E) < P(H), since P(H) is a probability-weighted average of the two of them so it must be in between them.

You could argue this is just an intuitive connection of the LIE to problems of decisionmaking, rather than a reinvention. But there's no acknowledgement of the LIE anywhere in the original post or comments. In fact, it's treated as a consequence of Bayesianism, when it follows from probability axioms. (Though one comment does point this out.)

To see it formulated in a context explicitly about beliefs, see Box 1 in these macroeconomics lecture notes.
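
A small numerical illustration of the identity, with made-up probabilities; the prior is the probability-weighted average of the two posteriors, so it must lie between them:

    # Made-up probabilities: the prior is the expectation of the posterior (law of iterated expectations).
    p_E = 0.3              # probability of observing the evidence
    p_H_given_E = 0.9      # posterior if E is observed
    p_H_given_not_E = 0.2  # posterior if E is not observed

    p_H = p_E * p_H_given_E + (1 - p_E) * p_H_given_not_E
    print(p_H)  # 0.41, strictly between 0.2 and 0.9

    # Because the prior is a weighted average of the two posteriors, evidence that would
    # raise your credence implies its absence must lower it; both cannot move it the same way.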

Arepo @ 2024-10-22T05:24 (+2)

Thanks - agree or disagree with it, this is a really nice example of what I was hoping for.

Vasco Grilo🔸 @ 2024-10-21T11:17 (+2) in response to Why Stop AI is barricading OpenAI

I agree it makes sense to model corporations as maximising profit, to a 1st approximation. However, since humans ultimately want to be happy, not to increase gross world product, I assume people will tend to pay more for AIs which are optimising for human welfare instead of economic growth. So I expect corporations developing AIs optimising for something closer to human welfare will be more successful/profitable than ones developing AIs which maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (instead of the growth of the AI economy too), I guess optimising for economic growth will lead to better outcomes for humans, because this has historically been the case.

Remmelt @ 2024-10-22T04:55 (+4)

There are a bunch of crucial considerations here. I'm afraid it would take too much time to unpack them.

Happy though to have had this chat!

Joel Tan🔸 @ 2024-10-22T04:48 (+13) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

I generally agree, and CEARCH uses geomeans for our geographic prioritisation WFMs, but I would also express caution - multiplicative WFMs are also more sensitive to errors in individual parameters, so if your data is poor you might prefer the additive model.

Also general comment on geomeans vs normal means - I think of geomeans as useful when you have different estimates of some true value, and the differences reflect methodological differences (vs cases where you are looking to average different estimates that reflect real actual differences, like strength of preference or whatever)

CB🔸 @ 2023-06-21T15:51 (+1) in response to Successif: helping mid-career and senior professionals have impactful careers

Great post! I like the focus on already experienced people.


By the way, the bit about "a number of people need permission" was fascinating! I think I'll check out transactional analysis.

Richard_Leyba_Tejada @ 2024-10-22T02:02 (+1)

Curious what you learned about transactional analysis. I too am fascinated by this post and "giving permission".

Thank you Claire!

Jackson Wagner @ 2024-10-22T01:27 (+2) in response to A Rocket–Interpretability Analogy

Cross-posting a lesswrong comment where I argue (in response to another commenter) that not only did NASA's work on rocketry probably benefit military missile/ICBM technology, but its work on satellites/spacecraft also likely contributed to military capabilities:

Satellites were also plausibly a very important military technology.  Since the 1960s, some applications have panned out, while others haven't.  Some of the things that have worked out:

  • GPS satellites were designed by the air force in the 1980s for guiding precision weapons like JDAMs, and only later incidentally became integral to the world economy.  They still do a great job guiding JDAMs, powering the style of "precision warfare" that has given the USA a decisive military advantage since 1991's first Iraq war.
  • Spy satellites were very important for gathering information on enemy superpowers, tracking army movements and etc.  They were especially good for helping both nations feel more confident that their counterpart was complying with arms agreements about the number of missile silos, etc.  The Cuban Missile Crisis was kicked off by U-2 spy-plane flights photographing partially-assembled missiles in Cuba.  For a while, planes and satellites were both in contention as the most useful spy-photography tool, but eventually even the U-2's successor, the incredible SR-71 blackbird, lost out to the greater utility of spy satellites.
  • Systems for instantly detecting the characteristic gamma-ray flashes of nuclear detonations that go off anywhere in the world (I think such systems are included on GPS satellites), and giving early warning by tracking ballistic missile launches during their boost phase (the Soviet version of this system famously misfired and almost caused a nuclear war in 1983, which was fortunately forestalled by one Lieutenant colonel Stanislav Petrov).

Some of the stuff that hasn't:

  • The air force initially had dreams of sending soldiers into orbit, maybe even operating a military base on the moon, but could never figure out a good use for this.  The Soviets even test-fired a machine-gun built into one of their Salyut space stations: "Due to the potential shaking of the station, in-orbit tests of the weapon with cosmonauts in the station were ruled out.  The gun was fixed to the station in such a way that the only way to aim would have been to change the orientation of the entire station.  Following the last crewed mission to the station, the gun was commanded by the ground to be fired; some sources say it was fired to depletion".
  • Despite some effort in the 1980s, we were unable to figure out how to make "Star Wars" missile defence systems work anywhere near well enough to defend us against a full-scale nuclear attack.
  • Fortunately we've never found out if in-orbit nuclear weapons, including fractional orbit bombardment weapons, are any use, because they were banned by the Outer Space Treaty. But nowadays maybe Russia is developing a modern space-based nuclear weapon as a tool to destroy satellites in low-earth orbit.

Overall, lots of NASA activities that developed satellite / spacecraft technology seem like they had a dual-use effect advancing various military capabilities.  So it wasn't just the missiles.  Of course, in retrospect, the entire human-spaceflight component of the Apollo program (spacesuits, life support systems, etc) turned out to be pretty useless from a military perspective. But even that wouldn't have been clear at the time!

Matrice Jacobine @ 2024-10-21T12:23 (+1) in response to What wheels do EAs and rationalists reinvent?

I don't know how to make it clearer. Longtermist nonprofits get to research world problems and their possible solutions without having to immediately show a randomized controlled trial following the ITN framework on policies that don't exist yet. Why is the same thing seemingly impossible for dealing with global poverty?

Jason @ 2024-10-22T01:09 (+4)

The academic fields most relevant to GH&D work are fairly mature. Because of that, it's reasonable for GH&D to focus less on producing stuff that is more like basic research / theory generation (academia is often strong in this and had a big head start) and devote its resources more toward setting up a tractable implementation of something (which is often not academia's comparative advantage for various reasons).

GH&D also has a clearly successful baseline with near-infinite room for more funding, and so more speculative projects need to clear that baseline before they become viable. You haven't identified any specific proposed area to study, but my suspicion is that most of them would require sustained political commitment over many years in the LDC and/or large cash infusions beyond the bankroll of EA GH&D to potentially work.

Martin (Huge) Vlach @ 2024-10-19T08:10 (+2) in response to Martin (Huge) Vlach's Quick takes

Is it obvious that (and how) massages reduce stress?
Are studies like https://www.semanticscholar.org/paper/Effects-of-Scalp-Massage-on-Physiological-and-Shimada-Tsuchida/9e3a7bc9745469a9333ebe493e79a44220111d0c and https://www.semanticscholar.org/paper/The-Effect-of-Self-Scalp-Massage-on-Adult-Stress-Kim-Choi/99d1999aa8d8776e55461882cc06c06905ca77b1 rare and mostly ignored?
What actions would measurably promote their conclusions? (I mean more like: what strategies would promote more massaging for more wellbeing?)

Joseph Lemien @ 2024-10-22T00:23 (+2)

I don't know about medical professionals, but my informal impression is that the majority of adults in developed countries know that massage reduces stress.

Personal perspective, not grounded in research: Similar to yoga or walking, I think the main issue is the counterfactual. Studies tend to show that massage is better than nothing for stress reduction, but is nothing really the baseline we want to use?

Here is a research summary from Elicit:

Research suggests that massage therapy can be effective in reducing stress levels. Multiple studies have found that massage can significantly decrease self-reported stress and anxiety (Françoise Labrique-Walusis et al., 2010; C. Heard et al., 2012; Bost & Wallis, 2006). Even brief interventions, such as a 5-minute hand or foot massage or a 15-minute weekly massage, can lower perceived stress levels (Françoise Labrique-Walusis et al., 2010; Bost & Wallis, 2006). Mechanical massage chairs have also shown promise in reducing stress for individuals with serious mental illness (C. Heard et al., 2012). While some studies have observed single-treatment reductions in physiological stress markers like salivary cortisol and heart rate, evidence for sustained physiological effects is limited (Moraska et al., 2008). Despite the need for more rigorous research, the existing literature suggests that massage therapy can be a beneficial tool for stress management, particularly in healthcare settings (Françoise Labrique-Walusis et al., 2010; Bost & Wallis, 2006).

Cipolla @ 2024-10-21T07:55 (+1) in response to Cipolla's Quick takes

I noticed that the most successful people I meet at work, in the sense of advancing their careers and publishing papers, have a certain belief in themselves. What is striking is that, no matter their age or career stage, it is as if they already take for granted their success and where they are going in the future.

I also noticed this is something that people from non-working-class backgrounds manage to do.

Second point: they are good at finishing projects and delivering results on time.

I noticed that this was somehow independent of how smart someone is.

While I am very good at single tasks, I have always struggled with long-term academic performance. I know it is true for some other people too.

What kind of knowledge/mentality am I missing? Because I feel stuck.

Joseph Lemien @ 2024-10-22T00:15 (+2)

Practice is helpful. Is there a way you can repeatedly practice finishing projects? Having the right tools/frameworks is also helpful. Maybe reading about personal productivity and breaking large tasks down into smaller pieces would help? I also find kanban boards to be very helpful, and you can set one up in a program like Asana, or you can do it on your wall with sticky notes.

Perhaps you could describe a bit more how your failures have happened with longer-term efforts? That might allow people to give you more tailored recommendations.

Joseph Lemien @ 2024-10-22T00:07 (+6) in response to Joseph Lemien's Quick takes

I've previously written a little bit about recognition in relation to maintenance/prevention, and this passage from Everybody Matters: The Extraordinary Power of Caring for Your People Like Family stood out to me as a nice reminder:

We tell the story in our class about the time our CIO Craig Hergenroether's daughter was working in another organization, and she said, "We're taking our IT team to happy hour tonight because we got this big e-mail virus, but they did a great job cleaning it up."

Our CIO thought, "We never got the virus. We put all the disciplines and practices in place to ensure that we never got it. Shouldn't we celebrate that?"

What we choose to hold up and celebrate gets emulated. Therefore it is important to consider how those decisions impact the culture. Instead of firefighting behaviors, we recognize and celebrate sustained excellence: people who consistently distinguish themselves through their actions. We celebrate people who do their jobs very well every day with little drama. Craig, the CIO, took his team out to happy hour and said, "Congratulations, we did not get the e-mail virus that took out most of the companies in St. Louis and Tampa Bay."

Overall, Everybody Matters is the kind of book that could have been an article. I wouldn't recommend spending the time to read it if you are already superficially familiar with the fact that an organization can choose to treat people well (although maybe that would be revelatory for some people). It was on my to-read list due to its mention in the TED Talk Why good leaders make you feel safe.



Comments on 2024-10-21

MichaelStJules @ 2024-10-21T23:26 (+9) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

Note that the logarithm of a positive weighted geometric mean is the weighted arithmetic mean of the logarithms:

log(x_1^(w_1) * x_2^(w_2) * ... * x_n^(w_n)) = w_1 * log(x_1) + w_2 * log(x_2) + ... + w_n * log(x_n)

So, instead of switching to the weighted geometric mean, you could just take the logarithm of your factors.

EDIT: Well, the weighted geometric mean is easier than taking logarithms, but it can be useful to remember this equivalence.
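
A quick numerical check of this equivalence, with made-up factor scores and weights (weights summing to 1):

    # Made-up factor scores and weights.
    import math

    x = [4.0, 9.0, 2.0]
    w = [0.5, 0.3, 0.2]

    weighted_geomean = math.prod(xi ** wi for xi, wi in zip(x, w))
    log_of_geomean = math.log(weighted_geomean)
    mean_of_logs = sum(wi * math.log(xi) for xi, wi in zip(x, w))

    print(abs(log_of_geomean - mean_of_logs) < 1e-12)  # True: log(geomean) equals the weighted mean of logs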

MichaelStJules @ 2024-10-21T23:19 (+26) in response to Weighted Factor Models: Consider using the geometric mean instead of the arithmetic mean

FWIW, when I have a weighted factor model to build, I think about how I can turn it into a BOTEC, and try to get it close(r) to a BOTEC. I did this for my career comparison and a geographic weighted factor model.

Jeff Kaufman @ 2024-10-18T10:34 (+18) in response to Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations

The survey question was, in Dutch:

Imagine the worst suffering a [dog, bird, fish (for example a salmon), shrimp, fly] can experience. Try to compare this suffering with the worst suffering a human can experience. How intense or severe do you think is the worst suffering of a [dog, bird, fish, shrimp, fly] for an hour compared to the worst suffering of a human for an hour?

The detailed results are here, including a histogram for birds:

Whether the answers to this question imply moral equivalence between humans and birds, though, depends on the assumption that the respondents are something close to hedonistic utilitarians, and I doubt they are? For example, if the survey had instead given questions specifically about moral weight ("how many birds would you need to be saving from an hour of intense suffering before you'd prioritize that over doing the same for a human", etc) you'd have seen different answers.

Stijn @ 2024-10-21T21:42 (+4)

I agree. It strongly depends on the framing of the questions. For example, I asked people how strongly they value animal welfare compared to human welfare. Average: 70%. So in one interpretation, that means 1 chicken = 0.7 humans. But there is a huge difference between saving and not harming, and between 'animal' and 'chicken'. Asking people how many bird or human lives to save gives a very different answer than asking them how many birds or humans to harm. People could say that saving 1 human is the equivalent of saving a million birds, but that harming one human is the equivalent of harming only a few birds. And when they realize the bird is a chicken used for food, people get stuck and their answers go weird. Or ask people about their maximum willingness to pay to avoid an hour of human or chicken suffering, versus their minimum willingness to accept to add an hour of suffering: huge differences. (I conducted some unpublished surveys about this, and one published: https://www.tandfonline.com/doi/abs/10.1080/21606544.2022.2138980.) In short: in this area you can easily show that people give highly inconsistent answers depending on the formulation of the questions.

Vasco Grilo🔸 @ 2024-10-21T21:37 (+2) in response to Cost-effectiveness of Shrimp Welfare Project's Humane Slaughter Initiative

My estimate for the past cost-effectiveness of HSI is:

  • 28.7 times my estimate for the cost-effectiveness of corporate campaigns for chicken welfare of 15.0 DALY/$.
  • 43.4 k times my estimate for the cost-effectiveness of GiveWell's top charities of 0.00994 DALY/$.

For HSI to be as cost-effective as GiveWell's top charities, for example, one of the following would have to happen:

  • All my pain intensities becoming 0.00230 % (= 1/(43.4*10^3)) as high.
  • The welfare range of shrimp becoming 0.00230 % as high, i.e. 7.13*10^-7 (= 0.031*2.30*10^-5).
  • All my pain intensities and the welfare range of shrimp each becoming 0.480 % (= (2.30*10^-5)^0.5) as high.
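
A quick sketch re-deriving those break-even figures from the numbers quoted above (the 43.4 k multiple and the 0.031 welfare range come from the comment; variable names are illustrative):

    # Re-deriving the break-even figures from the estimates quoted above.
    hsi_vs_givewell = 43.4e3        # HSI cost-effectiveness as a multiple of GiveWell top charities
    shrimp_welfare_range = 0.031    # welfare range of shrimp used in the estimate

    scaling = 1 / hsi_vs_givewell
    print(scaling)                         # ~2.30e-05, i.e. 0.00230% (pain intensities or welfare range)
    print(shrimp_welfare_range * scaling)  # ~7.13e-07, the implied welfare range of shrimp
    print(scaling ** 0.5)                  # ~4.80e-03, i.e. 0.480% if both factors shrink equally
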
Isaac King @ 2024-10-15T17:08 (+4) in response to Value lock-in is happening *now*

The same would have been said about reusable rockets and human-level AI 15 years ago. I don't understand how one can look at a billion dollar company with concrete plans to colonize Mars and the technology to do so, and conclude that the probability of this happening is so low it can be dismissed.

David T @ 2024-10-21T21:13 (+1)

The presence of a company worth a few tens of billions whose founder talks about colonizing Mars (amongst many other bold claims) and has concrete plans in the subset of Mars colonization problems that involve actually getting there feels very compatible with the original suggestion that the plausible near term consequence is a small number of astronauts hanging out in a dome and some cracking TV footage, not an epoch-defining social transformation

Looked at from another angle, fifty years ago the colonization of space wasn't driven by half of one billionaire's fortune,[1] it was driven by a significant fraction of the GDP of both the world's superpowers locked in a race, and the transition over the preceding 20 years had been from nothing in space to lunar landings, space stations, and deep space probes, not from expensive launches and big satellites to cheaper launches and a lot more small satellites. So you had better arguments for imminent space cities half a century ago.

  1. ^

    the part he isn't spending on his social media habit, anyway...

Isaac King @ 2024-10-18T17:21 (+1) in response to Value lock-in is happening *now*

A location doesn't need to be "better" for it to contribute to the economy. Some countries are almost strictly worse than others in terms of natural resources and climate for living and growing things, but people still live there.

David T @ 2024-10-21T21:08 (+1)

If you're doing a comparison with anywhere on Earth, the obvious one would be Antarctica. There absolutely are permanent settlements there even though it's barely livable, but really only for relatively short term visitors to do scientific research and/or enjoy the experience of being one of the few people to travel there. It absolutely isn't a functioning economy that runs at a profit. (Some places inside the Arctic Circle, maybe, but that wouldn't be the case if shipping the exploitable resources back to somewhere that felt more like home cost spaceflight prices per kg). The profitable segment of space is the orbital plane around earth, ideally without the complications of people in the equation, and that's what SpaceX has actually spent the last decade focused on.

Antarctica is also an interesting comparison point for the social and legal systems, since it's also small numbers of people from different missions living on extraterritorial land. I mean, they're not really particularly well sorted out; it just turns out they involve far too few people and far too little competition to be particularly problematic.

Shaan Shaikh @ 2024-10-21T19:05 (+1) in response to Should you pursue an MA degree in International Relations?

Good point. I prefer some ambiguity over a longer title, but welcome alternatives that are both clear and concise.

OscarD🔸 @ 2024-10-21T20:30 (+2)

Maybe 'Value of an MA in IR: my experience'

Ben MillwoodšŸ”ø @ 2024-10-21T11:33 (+4) in response to Should you pursue an MA degree in International Relations?

[edit: it's been changed I think?]

FWIW when I saw the title of this post I assumed you were going to be asking for advice rather than offering it. Something like "My advice on whether it's worth [...]" would be less ambiguous, though a bit clumsier ā€“ obv this is partly a stylistic thing and I won't tell you what style is right for you :)

Shaan Shaikh @ 2024-10-21T19:05 (+1)

Good point. I prefer some ambiguity over a longer title, but welcome alternatives that are both clear and concise.

Karthik Tadepalli @ 2024-10-21T16:31 (+4) in response to What wheels do EAs and rationalists reinvent?

Conservation of expected evidence

Karthik Tadepalli @ 2024-10-21T16:37 (+9)

This is a restatement of the law of iterated expectations. LIE says E[E[X | Y]] = E[X]. Replace X with an indicator variable for whether some hypothesis H is true, and interpret Y as an indicator for binary evidence E about H. Then this immediately gives you conservation of expected evidence: if P(H | E) > P(H), then P(H | ¬E) < P(H), since P(H) is an average of the two of them so it must be in between them.

You could argue this is just an intuitive connection of the LIE to problems of decisionmaking, rather than a reinvention. But there's no acknowledgement of the LIE anywhere in the original post or comments. In fact, it's treated as a consequence of Bayesianism, when it follows from probability axioms. (Though one comment does point this out.)

To see it formulated in a context explicitly about beliefs, see Box 1 in these macroeconomics lecture notes.
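
As a minimal numeric sketch of that identity (toy numbers assumed, not from the comment), the prior is the evidence-weighted average of the two posteriors, so the posteriors cannot both sit on the same side of it:

    # Toy joint distribution over a hypothesis H and binary evidence E (all numbers assumed).
    p_H = 0.3             # prior P(H)
    p_E_given_H = 0.8     # P(E | H)
    p_E_given_notH = 0.4  # P(E | not H)

    p_E = p_H * p_E_given_H + (1 - p_H) * p_E_given_notH
    p_H_given_E = p_H * p_E_given_H / p_E
    p_H_given_notE = p_H * (1 - p_E_given_H) / (1 - p_E)

    # Law of iterated expectations: the prior is the evidence-weighted average of the posteriors.
    recovered_prior = p_E * p_H_given_E + (1 - p_E) * p_H_given_notE
    assert abs(recovered_prior - p_H) < 1e-12
    print(p_H_given_E, p_H, p_H_given_notE)  # 0.4615... > 0.3 > 0.125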

Arepo @ 2024-10-17T06:07 (+2) in response to What wheels do EAs and rationalists reinvent?

Do you have a citation for coordination traps specifically? Coordination games seem pretty closely related, but Googling for the former I find only casual/informal references to it being a game (possibly a coordination game specifically) with multiple equilibria, some worse than others, such that players might get trapped in a suboptimal equilibrium.

Karthik Tadepalli @ 2024-10-21T16:35 (+2)

I agree with Linch that the idea that "a game can have multiple equilibria that are Pareto-rankable" is trivial. Then the existence of multiple equilibria automatically means players can get trapped in a suboptimal equilibrium ā€“ after all, that's what an equilibrium is.

What specific element of "coordination traps" goes beyond that core idea?

Karthik Tadepalli @ 2024-10-21T16:31 (+4) in response to What wheels do EAs and rationalists reinvent?

Conservation of expected evidence

Neel Nanda @ 2024-10-11T19:22 (+2) in response to How much I'm paying for AI productivity software (and the future of AI use)

What does reclaim give you? I've never heard of it, and the website is fairly uninformative

Jonas Hallgren @ 2024-10-21T14:55 (+3)

Sorry for not noticing the comment earlier! 

Here's the Claude distillation based on my reasoning on why to use it:

Reclaim is useful because it lets you assign different priorities to tasks and meetings, automatically scheduling recurring meetings to fit your existing commitments while protecting time for important activities. 

For example, you can set exercising three times per week as a priority 3 task, which will override priority 2 meetings, ensuring those exercise timeblocks can't be scheduled over. It also automatically books recurrent meetings so they fit into your existing schedule, like for team members or mentors/mentees. 

This significantly reduces the time and effort spent on scheduling, as you can easily add new commitments without overlapping more important tasks. The main advantage is the ability to set varying priorities for different tasks, which streamlines the process of planning weekly and monthly calls, resulting in almost no overhead for meeting planning and making it simple to accommodate additional commitments without conflicting with higher-priority tasks.

SummaryBot @ 2024-10-21T14:54 (+1) in response to Bargaining among worldviews

Executive summary: Worldview diversification in effective altruism can lead to complex bargaining dynamics between worldviews, potentially resulting in resource allocations that differ significantly from initial credence-based distributions.

Key points:

  1. Bargaining between worldviews can take various forms: compromises, trades, wagers, loans, and common cause coordination.
  2. Compromises and trades require specific circumstances to be mutually beneficial, while wagers and loans are more flexible but riskier.
  3. Common cause incentives arise from worldviews' shared association within the EA movement.
  4. Bargaining allows for more flexibility in resource allocation but requires understanding each worldview's self-interest.
  5. This approach differs from top-down prioritization methods, respecting worldviews' autonomy in decision-making.
  6. Practical challenges include ensuring compliance with agreements and managing changing circumstances over time.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-21T14:53 (+2) in response to A brief theory of why we think things are good or bad

Executive summary: Our fundamental moral beliefs about good and bad may arise from motivated reasoning rather than evidence, with implications for how we view moral judgments and the potential for AI systems to have good or bad experiences.

Key points:

  1. Basic moral judgments like "pain is bad" seem to stem from desires rather than evidence-based reasoning.
  2. This theory elegantly explains the universal belief in pain's badness as motivated by our desire to avoid pain.
  3. If moral beliefs arise from motivated reasoning, it raises questions about their truth status and validity.
  4. Language models may be capable of good/bad experiences if they engage in motivated reasoning about preferences.
  5. Consistent judgments may be necessary for beliefs about goodness/badness, creating uncertainty about whether current AI systems truly have such experiences.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-21T14:52 (+1) in response to Safety tax functions

Executive summary: The concept of a "safety tax function" provides a framework for analyzing the relationship between technological capability and safety investment requirements, reconciling the ideas of "solving" safety problems and paying ongoing safety costs.

Key points:

  1. Safety tax functions can represent both "once-and-done" and ongoing safety problems, as well as hybrid cases.
  2. Graphing safety requirements vs. capability levels on log-log axes allows for analysis of safety tax dynamics across different technological eras.
  3. Key factors in safety coordination include peak tax requirement, suddenness and duration of peaks, and asymptotic tax level.
  4. Safety is not binary; contours represent different risk tolerance levels as capabilities scale.
  5. The model could be extended to account for world-leading vs. minimum safety standards, non-scalar capabilities/safety, and sequencing effects.
  6. This framework may help provide an intuitive grasp of strategic dynamics in AI safety and other potentially dangerous technologies.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-21T14:51 (+1) in response to The consequences of large-scale blackouts

Executive summary: A prolonged, large-scale blackout would have devastating consequences across multiple sectors of society, with communication, transportation, water, food, and healthcare systems rapidly breaking down, though some mitigation measures are possible.

Key points:

  1. Communication systems would fail quickly, severely hampering crisis response and public information.
  2. Transportation would be disrupted, with electric modes halting and fuel shortages limiting road travel.
  3. Water systems would cease functioning, though emergency wells could provide limited supply.
  4. Food distribution would be challenging due to transportation and refrigeration issues.
  5. Healthcare would be severely impaired within days, with most critical care impossible after a week.
  6. Potential mitigation strategies include developing microgrids, decentralizing resource storage, and improving emergency planning.
  7. More research and modeling is needed to better understand and prepare for large-scale blackout scenarios.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-21T14:50 (+3) in response to What AI companies should do: Some rough ideas

Executive summary: AI companies developing powerful AI systems should prioritize specific safety actions, including achieving extreme security optionality, preventing AI scheming and misuse, planning for AGI development, conducting safety research, and engaging responsibly with policymakers and the public.

Key points:

  1. Develop extreme security optionality for model weights and code by 2027, with a clear roadmap and validation.
  2. Implement robust control measures to prevent AI scheming and escape during internal deployment.
  3. Mitigate risks of external misuse through careful deployment strategies and capability evaluations.
  4. Create a comprehensive plan for AGI development, including government cooperation and nonproliferation efforts.
  5. Conduct and share safety research, boost external research, and provide deeper model access to safety researchers.
  6. Engage responsibly with policymakers and the public about AI progress, risks, and safety measures.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-10-21T14:48 (+3) in response to Should you pursue an MA degree in International Relations?

Executive summary: Pursuing an MA in International Relations can be worthwhile depending on individual circumstances, but prospective students should carefully weigh the costs and benefits, have clear career goals, and ideally have some work experience before enrolling.

Key points:

  1. Good reasons to pursue an IR MA include receiving government fellowships, earning full scholarships, or pivoting to a new career in policy.
  2. Major costs include high tuition, opportunity costs of not working, and potentially unnecessary coursework.
  3. Benefits include unique experiences, connections with accomplished professors and peers, and specialized knowledge acquisition.
  4. Work experience (2-4 years) before enrolling is highly recommended to clarify goals and strengthen applications.
  5. Students should develop a clear mission statement for how the degree supports their career objectives.
  6. When choosing between top programs, funding should be a primary consideration, as differences in quality are often minimal.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

Moritz Stumpe šŸ”ø @ 2024-10-21T13:45 (+1) in response to Transitioning from Battery Cages: How Can Farmers Access Support for Humane Poultry Systems?

Thank you for your willingness to transition! I strongly recommend you reach out to the One Health and Development Initiative. They are working with farmers in Nigeria on this topic. If you don't receive a response from them, please let me know / contact us at Animal Advocacy Africa and we can make an introduction.

GV @ 2024-10-21T07:57 (+1) in response to Announcing: biosecurity.world

Thanks a lot for the hard work! This will certainly be useful to people interested in biosecurity careers in our group!

Swan @ 2024-10-21T13:37 (+1)

Glad to hear this!

MountainPath @ 2024-10-21T12:45 (+5) in response to Pausing for what?

The strongest reason for pausing and AI safety I can think of: in order to build a truth-seeking superintelligence, one that does not just maximise paperclips but also tries to understand the nature of the universe, you need to align it to that goal. And we have not accomplished this yet or figured out how to do so. Hence, regardless of whether you believe in the inherent value of humanity or not, AI safety is still important, and pausing probably is too. Otherwise we won't be able to create a truth-seeking ASI.

Ian Turner @ 2024-10-21T03:57 (+1) in response to What wheels do EAs and rationalists reinvent?

Is it possible we're talking past each other? "Institutional reforms" isn't something a donor can spend money on or donate to. But EA global health efforts are open to working on policy change; an example is the Lead Exposure Elimination Project.

I still feel that you haven't really answered the question: what do you think GiveWell should recommend, which they currently aren't?

Matrice Jacobine @ 2024-10-21T12:23 (+1)

I don't know how to make it clearer. Longtermist nonprofits get to research world problems and their possible solutions without having to immediately show a randomized controlled trial following the ITN framework on policies that don't exist yet. Why is the same thing seemingly impossible for dealing with global poverty?

David T @ 2024-10-21T07:58 (+1) in response to The Marginal $100m Would Be Far Better Spent on Animal Welfare Than Global Health

If the slow death involves no pain, of course it's credible. (The electric shock is, incidentally, generally insufficient to kill; they generally solve the problem of the fish reviving with immersion in ice slurry.) It's also credible that neither is remotely as painful as a two-week malaria infection or a few years of malaria infections, which is (much of) what sits on the other side of the trade here.

MichaelStJules @ 2024-10-21T12:07 (+5)

My understanding from conversation with SWP is that for shrimp, the electric stunning also just kills the shrimp, and it's all over very quickly.

It might be different for fish.

JackM @ 2024-10-21T09:24 (+4) in response to The Marginal $100m Would Be Far Better Spent on Animal Welfare Than Global Health

Conditional on fish actually being able to feel pain, it seems a bit far-fetched to me that a slow death in ice wouldn't be painful.

MichaelStJules @ 2024-10-21T12:04 (+3)

This is less clear for shrimp, though. I don't know if they find the cold painful at all, and it might sedate them or even render them unconscious. But I imagine that takes time, and they're being crushed by each other and by the ice in the ice slurry.

Isaac King @ 2024-10-18T17:19 (+2) in response to Value lock-in is happening *now*

Yes, I'm conditioning on no singularity here.

JordanStone @ 2024-10-21T11:49 (+1)

Interestingly, the singularity could actually have the opposite effect. Whereas human exploration of the Solar System was originally decades away, extremely intelligent AI could speed up technology to the point where it's all possible within a decade.

The space policy landscape is not ready for that at all. There is no international framework for governing the use of space resources, and human exploration is still technically illegal on Mars due to contamination of the surface (and the moon! Yes we still care a lot). 

So I lean more towards superintelligent AI being a reason to care more about space, not less. Will MacAskill discusses it in more detail here.

Ben MillwoodšŸ”ø @ 2024-10-21T11:33 (+4) in response to Should you pursue an MA degree in International Relations?

[edit: it's been changed I think?]

FWIW when I saw the title of this post I assumed you were going to be asking for advice rather than offering it. Something like "My advice on whether it's worth [...]" would be less ambiguous, though a bit clumsier ā€“ obv this is partly a stylistic thing and I won't tell you what style is right for you :)

Remmelt @ 2024-10-21T10:37 (+4) in response to Why Stop AI is barricading OpenAI

As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

The problem here is that AI corporations are increasingly making decisions for us. 
See this chapter.

Corporations produce and market products to increase profit (including by replacing their fussy expensive human parts with cheaper faster machines that do good-enough work.)

To do that they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.

Vasco GrilošŸ”ø @ 2024-10-21T11:17 (+2)

I agree it makes sense to model corporations as maximising profit, to a 1st approximation. However, since humans ultimately want to be happy, not to increase gross world product, I assume people will tend to pay more for AIs which are optimising for human welfare instead of economic growth. So I expect corporations developing AIs optimising for something closer to human welfare will be more successful/profitable than ones developing AIs which maximally increase economic growth. That being said, if economic growth refers to the growth of the human economy (instead of the growth of the AI economy too), I guess optimising for economic growth will lead to better outcomes for humans, because this has historically been the case.

Jim Buhler @ 2024-10-21T08:59 (+3) in response to Who would you like to see speak at EA Global?

Nice, thanks for sharing, I'll actually give you a different answer than last time after thinking about this a bit more (and maybe understanding your questions better). :)

> Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?

Not sure that's what you meant, but I don't think the effects of these decay in the sense that they have big short-term impact and negligible longterm impact (this is known as the "ripple in a pond" objection to cluelessness [1]). I think their longterm impact is substantial but that we just have no clue if it's good or bad because that depends on so many longterm factors the people carrying out these short-term interventions ignore and/or can't possibly estimate in an informative non-arbitrary way.

So I don't know how to respond to your first question because it seems it implicitly assumes something I find impossible and goes against how causality works in our complex World (?)

> Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall.

Answering the second question:
1. Yes, one could argue that. 
2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on what kinds of wellbeing matter. Maybe we're neglecting a crucial consideration such as "there have been more people with cluster headaches with the population increasing and these are so bad that they outweigh all the good stuff". Maybe we're totally missing a similar kind of crucial consideration I can't think of.
3. Maybe most importantly, in the real World outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless because I could ignore humans' impact on aliens and other non-humans.

And to develop on 1:

> Do we have examples where the posterior counterfactual effect was positive at 1st, but then became negative instead of decaying to 0?

- Some AI behaved very well at first and did great things and then there's some distributional shift and it does bad things.
- Technological development arguably improved everyone's life at first and then it caused things like the manufacture of torture instruments and widespread animal farming.
- Humans were incidentally reducing wild animal suffering by deforesting but then they started becoming environmentalists and rewilding.
- Alice's life seemed wonderful at first but she eventually came down with severe chronic mental illness. 
- Some pill helped people like Alice at first but then made their lives worse.
- The Smokey Bear campaign reduced wildfires at first and then it turned out it increased them.

[1] See e.g. James Lenman's and Hilary Greaves' work on cluelessness for rejections of this argument.

Vasco GrilošŸ”ø @ 2024-10-21T11:01 (+2)

Thanks for following up, Jim.

big short-term impact and negligible longterm impact

If these were not so for global health and development interventions, I would expect to see interventions whose posterior effect size increases as time goes by, whereas this is not observed as far as I know.

2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on what kinds of wellbeing matter. Maybe we're neglecting a crucial consideration such as "there have been more people with cluster headaches with the population increasing and these are so bad that they outweigh all the good stuff". Maybe we're totally missing a similar kind of crucial consideration I can't think of.

I think welfare per human-year has increased in the last few hundred years. However, even if one is clueless about that, one could still conclude human welfare has increased due to population growth, as long as one agrees humans have positive lives?

3. Maybe most importantly, in the real World outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless because I could ignore humans' impact on aliens and other non-humans.

I agree there is lots of uncertainty about whether wild and farmed animals have positive or negative lives, and about the impact of humans on animal and alien welfare. However, I think there are still robustly positive interventions, like Shrimp Welfare Project's Humane Slaughter Initiative, which I estimate is way more cost-effective than GiveWell's top charities, and arguably barely changes the number of farmed and wild animals. I understand improved slaughter will tend to increase the cost of shrimp, and therefore decrease the consumption of shrimp, which could be bad if shrimp have positive lives, but I think the increase in welfare from the less painful slaughter is the driver of the overall effect.

- Some AI behaved very well at first and did great things and then there's some distributional shift and it does bad things.

Not all AI development is good, but I would say it has generally been good at least so far and for humans.

- Technological development arguably improved everyone's life at first and then it caused things like the confection of torture instruments and widespread animal farming.

Fair. However, cluelessness about whether technological development has been good/bad does not imply cluelessness about what to do, which is what matters. For example, one could abstain from supporting technological development more closely linked to wars and factory-farming if one does not think it has generally been beneficial in those areas.

- Humans were incidentally reducing wild animal suffering by deforesting but then they started becoming environmentalists and rewilding.

I think it is very unclear whether wild animals have positive/negative lives, so I would focus on efforts trying to improve their lives instead of increasing/decreasing the number of lives.

- Alice's life seemed wonderful at first but she eventually came down with severe chronic mental illness. 

I agree there are many examples where the welfare of a human decreases. However, we are far from clueless about improving human welfare. Even if welfare per human-year has not been increasing, welfare per human life has been increasing due to increases in life expectancy.

- Some pill helped people like Alice at first but then made their lives worse.

There are always counterexamples, but I suppose taking pills recommended by doctors still improves welfare in expectation (although I guess less than people imagine).

- The Smokey Bear campaign reduced wildfires at first and then it turned out it increased them.

It is unclear to me whether this intervention was positive at 1st, because I do not know whether wild animals have positive or negative lives, and I expect the effects on these are the major driver of the overall effect.

Vasco GrilošŸ”ø @ 2024-10-21T10:02 (+2) in response to Why Stop AI is barricading OpenAI

Thanks for clarifying!

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

Fair! I did not look into that. However, the rate of automation (not the share of automated tasks) is linked to economic growth, and this used to be much lower in the past. According to Table 1 (2) of Hanson 2000, the global economy used to double once every 230 k (224 k) years in the hunting and gathering period of human history. Today it doubles once every 20 years or so[1]. Despite a much higher growth rate, and therefore a way higher rate of automation, the unemployment rate is still relatively low (5.3 % globally in 2022). So I still think it is very unlikely that faster automation in the next few years would lead to massive unemployment.

Longer term, over decades to centuries, I can see AI coming to perform the vast majority of economically valuable tasks. However, I believe humans will only allow this to happen if they get to benefit. As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

  1. ^

    The doubling time for 3 % annual growth is 23.4 years (= LN(2)/LN(1.03)).
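
A small sketch of the growth-rate comparison in the comment above (the 230 k-year doubling time and the 3 % annual growth rate are the figures quoted there; the rest is just arithmetic, not part of the original comment):

    import math

    def annual_growth_from_doubling_time(t_years):
        # (1 + g)^t = 2  =>  g = 2^(1/t) - 1
        return 2 ** (1 / t_years) - 1

    print(annual_growth_from_doubling_time(230e3))  # ~3.0e-6, hunting-and-gathering era
    print(math.log(2) / math.log(1.03))             # ~23.4 years, doubling time at 3 % annual growth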

Remmelt @ 2024-10-21T10:37 (+4)

As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

The problem here is that AI corporations are increasingly making decisions for us. 
See this chapter.

Corporations produce and market products to increase profit (including by replacing their fussy expensive human parts with cheaper faster machines that do good-enough work.)

To do that they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.

Remmelt @ 2024-10-21T02:21 (+10) in response to Why Stop AI is barricading OpenAI

I am open to a bet similar to this one.

I would bet on both, on your side.
 

Potentially relatedly, I think massive increases in unemployment are very unlikely.

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

AI Safety inside views are wrong for various reasons in my opinion. I agree with many of Thorstad's views you cited (e.g. critiquing how fast take-off, the orthogonality thesis, and instrumental convergence rely on overly simplistic toy models, missing the hard parts about machinery coherently navigating an environment that's more complex than just the machinery itself).

There are arguments that you are still unaware of, which mostly come from outside of the community. They're less flashy, involving longer timelines. For example, one argument considers why the standardisation of hardware and code allows for extractive corporate-automation feedback loops.

To learn about why superintelligent AI disempowering humanity would be the lead-up to the extinction of all current living species, I suggest digging into substrate-needs convergence.

I gave a short summary in this post:

  • AGI is artificial. The reason why AGI would outperform humans at economically valuable work in the first place is because of how virtualisable its code is, which in turn derives from how standardisable its hardware is. Hardware parts can be standardised because their substrate stays relatively stable and compartmentalised. Hardware is made out of hard materials, like the silicon from rocks. Their molecular configurations are chemically inert and physically robust under human living temperatures and pressures. This allows hardware to keep operating the same way, and for interchangeable parts to be produced in different places. Meanwhile, human "wetware" operates much more messily. Inside each of us is a soup of bouncing and continuously reacting organic molecules. Our substrate is fundamentally different.
  • The population of artificial components that constitutes AGI implicitly has different needs than us (for maintaining components, producing components, and/or potentiating newly connected functionality for both). Extreme temperature ranges, diverse chemicals ā€“ and many other unknown/subtler/more complex conditions ā€“ are needed that happen to be lethal to humans. These conditions are in conflict with our needs for survival as more physically fragile humans.
  • These connected/nested components are in effect ā€œvariantsā€ ā€“ varying code gets learned from inputs, that are copied over subtly varying hardware produced through noisy assembly processes (and redesigned using learned code).
  • Variants get evolutionarily selected for how they function across the various contexts they encounter over time. They are selected to express environmental effects that are needed for their own survival and production. The variants that replicate more, exist more. Their existence is selected for.
  • The artificial population therefore converges on fulfilling their own expanding needs. Since (by 4.) control mechanisms cannot contain this convergence on wide-ranging degrees and directivity in effects that are lethal to us, human extinction results.
Vasco GrilošŸ”ø @ 2024-10-21T10:02 (+2)

Thanks for clarifying!

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

Fair! I did not look into that. However, the rate of automation (not the share of automated tasks) is linked to economic growth, and this used to be much lower in the past. According to Table 1 (2) of Hanson 2000, the global economy used to double once every 230 k (224 k) years in the hunting and gathering period of human history. Today it doubles once every 20 years or so[1]. Despite a much higher growth rate, and therefore a way higher rate of automation, the unemployment rate is still relatively low (5.3 % globally in 2022). So I still think it is very unlikely that faster automation in the next few years would lead to massive unemployment.

Longer term, over decades to centuries, I can see AI coming to perform the vast majority of economically valuable tasks. However, I believe humans will only allow this to happen if they get to benefit. As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

  1. ^

    The doubling time for 3 % annual growth is 23.4 years (= LN(2)/LN(1.03)).

David T @ 2024-10-21T07:58 (+1) in response to The Marginal $100m Would Be Far Better Spent on Animal Welfare Than Global Health

If the slow death involves no pain, of course it's credible. (The electric shock is, incidentally, generally insufficient to kill; they generally solve the problem of the fish reviving with immersion in ice slurry.) It's also credible that neither is remotely as painful as a two-week malaria infection or a few years of malaria infections, which is (much of) what sits on the other side of the trade here.

JackM @ 2024-10-21T09:24 (+4)

Conditional on fish actually being able to feel pain, it seems a bit far-fetched to me that a slow death in ice wouldn't be painful.

CBšŸ”ø @ 2024-10-19T16:13 (+1) in response to Are Organically Farmed Animals Already Living a Net-Positive Life?

This is an interesting question, and I'd really like to know the answer.

From what I got, chickens have a lot of trouble forming hierarchies when there are a lot of them (e.g. thousands), and it's something important for them, so this might be a negative element. But frankly I'm not sure.

Christoph Hartmann šŸ”ø @ 2024-10-21T09:15 (+2)

Yes I heard the same. I had a brief look at their regulation and saw that "No more than 3,000 laying hens may be kept in any one shed" which seems pretty high even if they have more space per hen than with other regulations.

I'll see if I can talk to some experts and get their thoughts on these questions.

Vasco GrilošŸ”ø @ 2024-10-20T15:44 (+2) in response to Who would you like to see speak at EA Global?

Hi Jim,

I had already shared the below with you before, but I am reposting it here in case others find it relevant.

Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?

Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall. Do we have examples where the posterior counterfactual effect was positive at 1st, but then became negative instead of decaying to 0?

Jim Buhler @ 2024-10-21T08:59 (+3)

Nice, thanks for sharing, I'll actually give you a different answer than last time after thinking about this a bit more (and maybe understanding your questions better). :)

> Would you still be clueless if the vast majority of the posterior counterfactual effect of our actions (e.g. in terms of increasing expected total hedonistic utility) was realised in at most a few decades to a century? Maybe this is the case based on the quickly decaying effect size of interventions whose effects can be more easily measured, like ones in global health and development?

Not sure that's what you meant, but I don't think the effects of these decay in the sense that they have big short-term impact and negligible longterm impact (this is known as the "ripple in a pond" objection to cluelessness [1]). I think their longterm impact is substantial but that we just have no clue if it's good or bad because that depends on so many longterm factors the people carrying out these short-term interventions ignore and/or can't possibly estimate in an informative non-arbitrary way.

So I don't know how to respond to your first question because it seems it implicitly assumes something I find impossible and goes against how causality works in our complex World (?)

> Do you think global human wellbeing has been increasing in the last few decades? If so, would you agree past actions have generally been good considering just a time horizon of a few decades after such actions? One could still argue past actions had positive effects over a few decades (i.e. welfare a few decades after the actions would be lower without such actions), but negative and significant longterm effects, such that it is unclear whether they were good overall.

Answering the second question:
1. Yes, one could argue that. 
2. One could also argue we're wrong to assume human wellbeing has been improving to begin with. Maybe we have a very flawed definition of what wellbeing is, which seems likely given how much people disagree on what kinds of wellbeing matter. Maybe we're neglecting a crucial consideration such as "there have been more people with cluster headaches with the population increasing and these are so bad that they outweigh all the good stuff". Maybe we're totally missing a similar kind of crucial consideration I can't think of.
3. Maybe most importantly, in the real World outside of this thought experiment, I don't care only about humans. If I cared only about them, I'd be less clueless because I could ignore humans' impact on aliens and other non-humans.

And to develop on 1:

> Do we have examples where the posterior counterfactual effect was positive at 1st, but then became negative instead of decaying to 0?

- Some AI behaved very well at first and did great things and then there's some distributional shift and it does bad things.
- Technological development arguably improved everyone's life at first and then it caused things like the manufacture of torture instruments and widespread animal farming.
- Humans were incidentally reducing wild animal suffering by deforesting but then they started becoming environmentalists and rewilding.
- Alice's life seemed wonderful at first but she eventually came down with severe chronic mental illness. 
- Some pill helped people like Alice at first but then made their lives worse.
- The Smokey Bear campaign reduced wildfires at first and then it turned out it increased them.

[1] See e.g. James Lenman's and Hilary Greaves' work on cluelessness for rejections of this argument.

Chris Leong @ 2024-10-21T08:44 (+1) in response to Announcing: biosecurity.world

Very exciting! I would love to see folk create versions for other cause areas as well.

JackM @ 2024-10-20T20:50 (+4) in response to The Marginal $100m Would Be Far Better Spent on Animal Welfare Than Global Health

I was trying to question you on the duration aspect specifically. If electric shock lasts a split second is it really credible that it could be worse than a slow death through some other method?

David T @ 2024-10-21T07:58 (+1)

If the slow death involves no pain, of course it's credible. (The electric shock is, incidentally, generally insufficient to kill; they generally solve the problem of the fish reviving with immersion in ice slurry.) It's also credible that neither is remotely as painful as a two-week malaria infection or a few years of malaria infections, which is (much of) what sits on the other side of the trade here.

GV @ 2024-10-21T07:57 (+1) in response to Announcing: biosecurity.world

Thanks a lot for the hard work! This will certainly be useful to people interested in biosecurity careers in our group!

ScienceMon @ 2024-08-26T12:59 (+7) in response to Career FOMO. How to exit?

It sounds like we're around the same age, both a few years out of a PhD (mine was in bio). I'm happy to email/talk 1-on-1 with you about this:

  1. Do not continue down the academic path. Your mind and body are clearly telling you to stop! Instead, start applying for jobs as you wind down your ongoing projects.
  2. Probably don't start a business right now. Not unless you have a major technical edge in a lucrative area and a suite of business-relevant skills (I couldn't infer this from your post).
  3. If your internships were at large/established companies, I would be unsurprised that they went poorly. There are stark cultural differences between academia and the corporate world.
  4. Consider joining a startup. Culturally, this will be a smoother transition than to big corporate. You will be paid much more than you are as a postdoc. The people around you will generally be happier than your academic colleagues. And you'll build skills that are relevant to starting your own business one day.

Academia is wonderful in many ways, but it teaches people that life is linear, which is a damn lie. Life isn't linear! You have decades ahead of you that will be filled with personal growth and bringing happiness to other people.

Cipolla @ 2024-10-21T07:56 (+1)

Thanks a lot! I will consider it :)

Cipolla @ 2024-10-21T07:55 (+1) in response to Cipolla's Quick takes

I've noticed that the most successful people I meet at work, in the sense of advancing their careers and publishing papers, have a certain belief in themselves. What is striking is that, no matter their age or career stage, it is as if they already take their success, and where they are going in the future, as certain.

 

I also noticed this is something that people from non-working class backgrounds manage to do.

 

Second point: they are good at finishing projects and delivering results on time.

 

I noticed that this was somehow independent of how smart someone is.

 

While I am very good at single tasks, I have always struggled with long-term academic performance. I know this is true for some other people too.

 

What kind of knowledge/mentality am I missing? Because I feel stuck.

tobycrisford šŸ”ø @ 2024-10-21T06:36 (+1) in response to Fungal diseases: Health burden, neglectedness, and potential interventions

This is a fascinating summary!

I have a bit of a nitpicky question on the use of the phrase 'confidence intervals' throughout the report. Are these really supposed to be interpreted as confidence intervals? Rather than the Bayesian alternative, 'credible intervals'..?

My understanding was that the phrase 'confidence interval' has a very particular and subtle definition, coming from frequentist statistics:

  • 80% Confidence Interval: For any possible value of the unknown parameter, there is an 80% chance that your data-collection and estimation process would produce an interval which contained that value.
  • 80% Credible interval: Given the data you actually have, there is an 80% chance that the unknown parameter is contained in the interval.

From my reading of the estimation procedure, it sounds a lot more like these CIs are supposed to be interpreted as the latter rather than the former? Or is that wrong?

Appreciate this is a bit of a pedantic question, that the same terms can have different definitions in different fields, and that discussions about the definitions of terms aren't the most interesting discussions to have anyway. But the term jumped out at me when reading and so thought I would ask the question!
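
To illustrate the distinction with a toy example (a hedged sketch, not from the report or the comment: the Bernoulli model, the Beta(1, 1) prior, and all numbers below are assumptions made purely for illustration):

    import math
    import random

    random.seed(0)
    true_p, n = 0.3, 100  # assumed true parameter and sample size

    def wald_80_ci(k, n, z=1.2816):  # z is the 90th percentile of the standard normal
        p_hat = k / n
        se = math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - z * se, p_hat + z * se

    # Frequentist reading: over repeated experiments, about 80 % of the intervals cover the true p.
    covered = 0
    for _ in range(2000):
        k = sum(random.random() < true_p for _ in range(n))
        lo, hi = wald_80_ci(k, n)
        covered += (lo <= true_p <= hi)
    print("empirical coverage:", covered / 2000)  # roughly 0.8

    # Bayesian reading: given one observed dataset and a Beta(1, 1) prior, an 80 % credible
    # interval holds 80 % of the posterior mass (approximated here by posterior sampling).
    k_obs = sum(random.random() < true_p for _ in range(n))
    draws = sorted(random.betavariate(1 + k_obs, 1 + n - k_obs) for _ in range(10000))
    print("80 % credible interval:", (round(draws[1000], 3), round(draws[9000], 3)))

Both are often reported as "80 % intervals", but only the second is a probability statement about the parameter given the data actually observed.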

yanni kyriacos @ 2024-10-19T22:11 (+2) in response to Whose transparency can we celebrate?

Remember: everything has opportunity costs. So before looking at transparent things and assuming they have a positive cost/benefit, consider that in order to be transparent, the person or org didn't do something else.

For example, I could list on my website that my major funder is the LTFF, but honestly that is not in my top 30 tasks.

Let's not justify things just because they feel good. Which is exactly the same trap EAs fall into about giving criticism!

leillustrationsšŸ”ø @ 2024-10-21T05:38 (+1)

I think we should celebrate doing things which are better than not doing that thing, even if we don't know what the counterfactual would have been. For example:

  • When a friend donates to charity, I show appreciation, not ask him how sure he is that it was the best possible use of his money
  • When my relative gets a good grade, I congratulate her - I don't start questioning if she really prioritised studying for the right subject
  • When a server is nice to me, I thank them - I don't ask them why they're talking to me instead of serving someone else

I appreciate that transparency might never be on the top of your to do list, and that might be the correct decision. But when an organisation is transparent, that's a public good - it helps me and the community make better decisions about how I want to do good, and I want them to know it helped me. 

Public goods have this slightly annoying feature of being disincentivised, because they help everyone, often at the cost of those providing the good. In an ideal world EAs would all do it anyway because we're perfect altruists, but we still respond to incentives like everyone else. This is why I don't think we need to go around asking e.g. who has sent the best funding applications, even though that can often be more important than being transparent.

I'd love to talk about other important public goods that we should celebrate!

Ulrik Horn @ 2024-10-21T04:52 (+4) in response to Start an Upper-Room UV Installation Company?

Hi Jeff, I have been pondering a similar question: Why is there not more uptake of UV? I suspect it could be down to targeting the wrong initial, beachhead market. While this analogy will fail in many aspects, there was social media before Facebook. However, they did not try to start on elite college campuses, and thus make it "cool" to have social media. Similarly, I am not sure UV sufficiently targets a desperate market. It seems UV companies target cleanrooms, hospitals, etc. but these already have tried and tested methods, especially via air filtration, for achieving low contamination. There might be some cost savings from UV, but it is not clear cut - filters are extremely cheap as they last for years. And there is industry inertia connected with doing things differently. And cleanrooms, ORs and the like have a lot of regulation one quickly gets stuck in.

Coming at this from another and admittedly subjective angle, as a parent and talking to others, I am intrigued by the possibility of using UV as well as other disease-fighting tech in nurseries/pre-schools (I think you are a parent too). This user group is certifiably desperate to spend less time at home with sick kids (and also to not constantly feel tired and low-energy). I just wanted to put this out there as I would be keen to support anything in this direction as long as I have availability. It should not be too expensive to test out, and at least in certain jurisdictions there is little in the way of legislation stopping something like this. On the contrary, here in Sweden, where the government pays parents staying at home with sick kids, there is a push to reduce sickness in this sector of society.

Matrice Jacobine @ 2024-10-20T15:26 (+1) in response to What wheels do EAs and rationalists reinvent?

Why are the animalist and longtermist wings of EA the only wings that consider policy change an intervention?

Ian Turner @ 2024-10-21T03:57 (+1)

Is it possible we're talking past each other? "Institutional reforms" isn't something a donor can spend money on or donate to. But EA global health efforts are open to working on policy change; an example is the Lead Exposure Elimination Project.

I still feel that you haven't really answered the question: what do you think GiveWell should recommend, which they currently aren't?

nathanhb @ 2024-10-19T15:38 (+3) in response to Discussion thread: Animal Welfare vs. Global Health Debate Week

Ok, I just read this post and the discussion on it (again, great insights from MichaelStJules). https://forum.effectivealtruism.org/posts/AvubGwD2xkCD4tGtd/only-mammals-and-birds-are-sentient-according-to Ipsundrum is the concept I haven't had a word for, of the self-modeling feedback loops in the brain.

So now I can say that my viewpoint is somewhat that of a Gradualist over the quantity/quality of ipsundrum across species.

Also, I have an intuition around qualitative distinctions that emerge from different quantities/qualities/interpretations of experiences. Thus, that a stubbed toe and a lifetime of torture seem like qualitatively different things, even if their component pieces are the same.

MichaelStJules @ 2024-10-21T03:04 (+2)

Also this thread (and maybe especially my response) may be useful.

Vasco GrilošŸ”ø @ 2024-10-20T16:44 (+4) in response to Why Stop AI is barricading OpenAI

I wrote a book for educated laypeople explaining how AI corporations would cause increasing harms leading eventually  to machine destruction of our society and ecosystem.

Curious for your own thoughts here.

Thanks for sharing! You may want to publish a post with a summary of the book. Potentially relatedly, I think massive increases in unemployment are very unlikely. If you or anyone you know are into bets, and guess the unemployment rate in the United States will reach tens of % in the next few years, I am open to a bet similar to this one.

Remmelt @ 2024-10-21T02:21 (+10)

I am open to a bet similar to this one.

I would bet on both, on your side.
 

Potentially relatedly, I think massive increases in unemployment are very unlikely.

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

AI Safety inside views are wrong for various reasons in my opinion. I agree with many of Thorstad's views you cited (e.g. critiquing how fast take-off, the orthogonality thesis, and instrumental convergence rely on overly simplistic toy models, missing the hard parts about machinery coherently navigating an environment that's more complex than just the machinery itself).

There are arguments that you are still unaware of, which mostly come from outside of the community. They're less flashy, involving longer timelines. For example, one argument considers why the standardisation of hardware and code allows for extractive corporate-automation feedback loops.

To learn about why superintelligent AI disempowering humanity would be the lead-up to the extinction of all current living species, I suggest digging into substrate-needs convergence.

I gave a short summary in this post:

  • AGI is artificial. The reason why AGI would outperform humans at economically valuable work in the first place is because of how virtualisable its code is, which in turn derives from how standardisable its hardware is. Hardware parts can be standardised because their substrate stays relatively stable and compartmentalised. Hardware is made out of hard materials, like the silicon from rocks. Their molecular configurations are chemically inert and physically robust under human living temperatures and pressures. This allows hardware to keep operating the same way, and for interchangeable parts to be produced in different places. Meanwhile, human "wetware" operates much more messily. Inside each of us is a soup of bouncing and continuously reacting organic molecules. Our substrate is fundamentally different.
  • The population of artificial components that constitutes AGI implicitly has different needs than us (for maintaining components, producing components, and/or potentiating newly connected functionality for both). Extreme temperature ranges, diverse chemicals ā€“ and many other unknown/subtler/more complex conditions ā€“ are needed that happen to be lethal to humans. These conditions are in conflict with our needs for survival as more physically fragile humans.
  • These connected/nested components are in effect ā€œvariantsā€ ā€“ varying code gets learned from inputs, that are copied over subtly varying hardware produced through noisy assembly processes (and redesigned using learned code).
  • Variants get evolutionarily selected for how they function across the various contexts they encounter over time. They are selected to express environmental effects that are needed for their own survival and production. The variants that replicate more, exist more. Their existence is selected for.
  • The artificial population therefore converges on fulfilling their own expanding needs. Since (by 4.) control mechanisms cannot contain this convergence on wide-ranging degrees and directivity in effects that are lethal to us, human extinction results.
Remmelt @ 2024-10-21T01:40 (+2) in response to Remmelt's Quick takes

Donation opportunities for restricting AI companies:

In my pipeline:  

  • funding a 'horror documentary' against AI by an award-winning documentary maker (got a speculation grant of $50k)
  • funding lawyers in the EU for some high-profile lawsuits and targeted consultations with the EU AI Office.
     

If you're a donor, I can give you details on their current activities. I worked with staff in each of these organisations. DM me.

Nithin Ravi @ 2024-10-19T23:15 (+1) in response to Criticism is sanctified in EA, but, like any intervention, criticism needs to pay rent

Thanks Yanni! I've been on the path of nondual meditation for about a year now, and have slowly watched the benefits manifest.

yanni kyriacos @ 2024-10-21T01:10 (+2)

Nice! Have you had any breakthroughs yet? 

mhendricšŸ”ø @ 2024-10-20T09:20 (+5) in response to Whose transparency can we celebrate?

I don't find this convincing. It seems to me that updating that one line on your website should not take longer than e.g. writing this comment. Why would you think it has a significant tradeoff?

yanni kyriacos @ 2024-10-21T01:09 (0)

There's probably 100 things that sit in the "not urgent" space when running a startup.

If you open yourself to those 100 things then you don't work on the most important ones.

If you haven't run or worked in a small startup, I don't expect this to be intuitive.



Comments on 2024-10-20

Linch @ 2024-10-18T23:40 (+4) in response to EA "Worldviews" Need Rethinking

You'd either want to stop focusing on infant mortality, or start interventions to increase fertility. (Depending on whether population growth is a priority.)

I'm not sure I buy this disjunctive claim. Many people over humanity's history have worked on reducing infant mortality (in technology, in policy, in direct aid, and in direct actions that prevent their own children/relatives' children from dying). While some people worked on this because they primarily intrinsically value reducing infant mortality, I think many others were inspired by the indirect effects. And taking the long view, reducing infant mortality clearly had long-run benefits that are different from (and likely better than) equivalent levels of population growth while keeping infant mortality rates constant. 

Rohin Shah @ 2024-10-20T21:51 (+6)

I agree reductions in infant mortality likely have better long-run effects on capacity growth than equivalent levels of population growth while keeping infant mortality rates constant, which could mean that you still want to focus on infant mortality while not prioritizing increasing fertility.

I would just be surprised if the decision from the global capacity growth perspective ended up being "continue putting tons of resources into reducing infant mortality, but not much into increasing fertility" (which I understand to be the status quo for GHD), because:

  • Probably the dominant consideration for importance is how good / bad it is to grow the population, and it is unlikely that the differential effects from reducing infant mortality vs increasing fertility end up changing the decision
  • Probably it is easier / cheaper to increase fertility than to reduce infant mortality, because very little effort has been put into increasing fertility (to my knowledge)

That said, it's been many years since I closely followed the GHD space, and I could easily be wrong about a lot of this.