Latest comments on the EA Forum

Comments on 2023-09-29

ElliotJDavies @ 2023-09-28T21:37 (+2) in response to Our tofu book has launched!! (Upvote on Amazon)

Amazing! Bought the UK version

George Stiffman @ 2023-09-29T02:51 (+1)

Thanks for supporting! I'm not sure if Amazon has dropped the price yet... hopefully they will today or tomorrow.

quinn @ 2023-09-28T20:56 (+2) in response to Our tofu book has launched!! (Upvote on Amazon)

Huh. When I was in Singapore I felt like I was getting a deeper view of Chinese cuisine than any knowledge I had acquired in the States, but I still didn't get into game-changingly new ways of viewing tofu in particular.

George Stiffman @ 2023-09-29T02:51 (+1)

That's interesting. I wonder how Singapore compares to China for tofu? 

My impression is that Singaporean food overlaps most with Southeastern (Fujian and Cantonese) Chinese cooking, but those two cuisines use fewer tofu varieties than other regions of China. Granted, I've never been, so this could be very wrong! Does anyone have a better sense?

Jeff Kaufman @ 2023-09-28T03:33 (+14) in response to Net global welfare may be negative and declining

There are a ton of judgement calls in coming up with moral weights. I'm worried about a dynamic where the people most interested in getting deep into these questions are people who already intuitively care pretty strongly about animals, and so the best weights available end up pretty biased.

RedStateBlueState @ 2023-09-29T02:13 (+1)

I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much you think attributes like intelligence and self-awareness have to do with sentience. They're applied to animals, but I think they're really more neuroscience/philosophy intuitions. The people who have the strongest/most out-of-the-ordinary intuitions are MIRI folk, not animal lovers.

ASB @ 2023-09-29T01:08 (+2) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Because you've been a public servant who took on the responsibility of shutting down the Soviet bioweapons program, securing loose nuclear material, and kickstarting a wildly successful early career program while at the DoD, I need to know: is it ever difficult being so awesome?

And, what would your advice be for younger folks aiming to follow in your footsteps?

Charlie_Guthmann @ 2023-09-29T00:00 (+1) in response to Weighing Animal Worth

Do people here think there is a correct answer to this question?



Comments on 2023-09-28

Florian Habermacher @ 2023-09-25T21:31 (+2) in response to AI Pause Will Likely Backfire

Enjoyed the post, thanks! But it starts with an invalid deduction:

Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium. We should only advocate for such heavy-handed government action if it’s clear that the benefits of doing so would significantly outweigh the costs.

(I added the emphasis)

Instead, it seems more reasonable to simply advocate for such action exactly if, in expectation, the benefits seem to [even just about] outweigh the costs. Of course, we have to take into account all types of costs, as you advocate in your post; maybe that includes even some unknown unknowns in terms of risks from an imposed pause. Still, in the end, we should be even-handed. That we don't impose pauses on most technologies is surely not a strong reason to the contrary: we might (i) fail, for bad reasons, to impose pauses in other cases too, or, maybe more clearly, (ii) simply not see many other technologies with a potential downside large enough to make a pause a major need. After all, that's why we have started this debate about this particular new technology, AI.

This is just a point about stringency in the motivation you provide for the work; changing that beginning of your article would IMHO avoid an unnecessarily tendentious passage.

Matthew_Barnett @ 2023-09-28T23:41 (+2)

Instead, it seems more reasonable to simply advocate for such action exactly if, in expectation, the benefits seem to [even just about] outweigh the costs.

I agree in theory, but disagree in practice. In theory, utilitarians only care about the costs and benefits of policy. But in practice, utilitarians should generally be constrained by heuristics and should be skeptical of relying heavily on explicit cost-benefit calculations.

Consider the following thought experiment:

You're the leader of a nation and are currently deciding whether to censor a radical professor for speech considered perverse. You're very confident that the professor's views are meritless. You ask your advisor to run an analysis on the costs and benefits of censorship in this particular case, and they come back with a report concluding that there is slightly more social benefit from censoring the professor than harm. Should you censor the professor?

Personally, my first reaction would be to say that the analysis probably left out second order effects from censoring the professor. For example, if we censor the professor, there will be a chilling effect on other professors in the future, whose views might not be meritless. So, let's make the dilemma a little harder. Let's say the advisor insists they attempted to calculate second order effects. You check and can't immediately find any flaws in their analysis. Now, should you censor the professor?

In these cases, I think it often makes sense to override cost-benefit calculations. The analysis only shows a slight net-benefit, and so unless we're extremely confident in its methodology, it is reasonable to fall back on the general heuristic that professors shouldn't be censored. (Which is not to say we should never violate the principle of freedom of speech. If we learned much more about the situation, we might eventually decide that the cost-benefit calculation was indeed correct.)

Likewise, I think it makes sense to have a general heuristic like, "We shouldn't ban new technologies because of abstract arguments about their potential harm" and only override the heuristic because of strong evidence about the technology, or after very long examination, rather than after the benefits of a ban merely seem to barely outweigh the costs.

Dhruv Makwana @ 2023-09-21T22:25 (+1) in response to Change my mind: Veganism entails trade-offs, and health is one of the axes

Before I reply, I'd like to acknowledge that my original comment from 3 months ago, much before our recent, cordial and respectful exchange elsewhere on this post, was probably a 6-6.5/10 in terms of tone and clarity, and could have been made more conducive to discussion: sorry. 

I'd also like to say upfront that I am very reluctantly spending 150+ minutes getting nerdsniped into writing this comment during a week when I'm aiming to address a sleep deficit. As I said in my other comment, "For the sake of my time, this should hopefully be my last comment on this post" - but this time for real.

I realise making a point and walking away can come off as frustrating/rude, but that's not my intention here; it's just self-preservation. If that's objectionable, you may ignore the rest of this comment.

But to your basic point - my point is not that "people are wrong about their feelings of hunger" (which, off the top of my head and in my experience, I think they can be - for example mistaking stress/discomfort/boredom for hunger - but this is beside the point).

My point is about the primary attribution of the cause of the subjective feeling of hunger to a not easily perceptible thing such as protein. My intuition comes from subjective wellbeing research (e.g. Stumbling on Happiness by Dan Gilbert) and also perception/embodied cognition research (e.g. the rubber hand illusion). The attribution is an empirical claim, and that's what I was (very poorly) getting at.

As part of this empirical attribution, there are two different concepts at play here: satiation and satiety (yes, silly naming). Satiation is how much of a food can be consumed in one sitting. Satiety is how much a given food will delay or decrease calorie intake at the next meal.

From the post:

But there isn’t a satisfying plant product that is as rich in as many things as meat, dairy, and especially eggs. Every “what about X?” has an answer, but if you add up all the foods you would need to meet every need, for people who aren’t gifted at digestion, it’s far too many calories and still fairly restrictive.

From the comment:

Because protein is the hardest and most valuable macronutrient for most diets, and because it's correlated with the subset of vitamins that's richer in meat sources.

It looks like there are three aspects to this: (a) judging plants using meat as the standard, (b) an implicit assumption that protein is important, and (c) an implicit assumption that protein especially drives "satisfying"-ness, i.e. satiation and satiety.

[Tangent: I think others have pointed out that (a) is a little unfair - meat lacks many health-promoting things plants have, like Vit C, fibre, antioxidants, easier-to-regulate absorption of nutrients, lack of cholesterol, less saturated fat, etc.]

In response to (b), the first video, about the very low protein requirement for humans, I think covers the major aspects (babies need the most protein, and human breast milk is 1% protein by weight, 5-7% by calories; adults need around 0.8g/kg of body weight, up to a maximum of 1.5-1.8g/kg for strength training, etc.).

In response to (c), a whole host of other factors influence satiation and satiety (as mentioned in video 2 and elsewhere[1]).

  • Calorie density, influenced mostly by lack of fats and increased water. Quoting Dr. Greger, "When dozens of common foods were pitted head-to-head for their ability to satiate appetites for hours, the characteristic most predictive was not how little fat or how much protein it had, but how much water it had" (https://pubmed.ncbi.nlm.nih.gov/7498104/). Whole fresh fruit & veg generally fall in at < 100 calories per cup, whereas meats are 300-600 calories per cup.
  • Fibre. Fibre limits the absorption of calories, meaning that you can eat more food but still absorb the same number of calories.
  • Absorbability. Conversely, processing (e.g. turning peanuts into peanut butter) separates the peanut's calories from its fibrous cell walls, making them vastly more absorbable. Animal products do not have fibrous cell walls, meaning they are absorbable off the bat. This means lower satiation per calorie - you eat more calories per stomach-full.
  • Thylakoids. The thing that makes leaves green slows down fat absorption in the gut. Slowing down fat absorption means that un-absorbed calories can reach the end of the intestine (ileum). When this is detected, appetite is decreased dramatically.
  • Hardness of food. The same food presented hard rather than soft (e.g. carrots) leads to fewer calories being consumed, with no extra calories eaten as compensation at the next meal.

To be clear, I am willing to grant the premise that "protein > carb > fat" in terms of satiation. But this ordinal ranking would not be the end of the matter, because the magnitudes matter too. I don't want to spend an hour digging for numbers at this stage, but I can illustrate what I mean with an example:

  • Chicken/beef roughly 45% calories from protein (rest from fat).
  • Chickpeas roughly 22% calories from protein (rest from carbs incl. fibre).
  • Dry soya chunks roughly 57% calories from protein (rest from carbs incl. fibre).

Based only on macros (let's say you blitzed the chickpeas into ultra-fine hummus and equalised the water content), which of these is more satisfying will depend on how much less satisfying carbs and fat are (per calorie) compared to protein. It's not clear to me, based solely on protein being most important, that meat has a slam-dunk advantage here. A toy calculation below illustrates the point.
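To make that concrete, here is a minimal sketch in Python of the weighted comparison I have in mind. The per-calorie satiation weights are purely hypothetical placeholders, chosen only to respect the assumed ordering "protein > carb > fat"; they are not empirical values.

```python
# A toy satiation comparison. The weights below are hypothetical
# placeholders chosen only to respect the assumed ordering
# protein > carb > fat; they are not empirical values.
WEIGHTS = {"protein": 1.0, "carb": 0.7, "fat": 0.5}

# Rough macro splits (fraction of calories) from the examples above.
FOODS = {
    "chicken/beef": {"protein": 0.45, "carb": 0.00, "fat": 0.55},
    "chickpeas": {"protein": 0.22, "carb": 0.78, "fat": 0.00},
    "dry soya chunks": {"protein": 0.57, "carb": 0.43, "fat": 0.00},
}

def satiation_score(macros):
    """Per-calorie satiation score: a weighted sum of macro fractions."""
    return sum(WEIGHTS[m] * frac for m, frac in macros.items())

for food, macros in FOODS.items():
    print(f"{food}: {satiation_score(macros):.3f}")
```

With these placeholder weights, both plant foods score above chicken/beef even though protein is weighted highest; raising the protein weight to 2.0 (holding the others fixed) puts chicken/beef above chickpeas but still below soya chunks, which have the highest protein fraction. Where the true ratios sit is exactly the empirical question.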

Anyways, as I said in the other comment, I'm going to signpost Chris MacAskill as a source of information and potential collaborator. Toodles!

  1. ^

    Chris MacAskill's video https://youtu.be/zOAapJo9cE0?feature=shared comparing a high-protein, animal keto diet vs a low-fat, plant-based diet, especially the section on satiety vs satiation, which is the source of my information in this comment.

Elizabeth @ 2023-09-28T23:36 (+2)

First, I want to apologize. I didn't realize you were the same commenter I'd been talking to and had asked to bow out. I'm not sure what the right way to handle this was, but I should have at least acknowledged it.

I have some disagreements with some of your claims here, but mostly they feel irrelevant to my claims. This feels like an argument against a heavily meat-based diet, not against small amounts of meat in an otherwise plant-based one.

Vee @ 2023-09-28T09:31 (+3) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

What role should international organizations and treaties play in regulating emerging biotechnologies to prevent their misuse for bioweapons development?

How can we strike a balance between scientific research and security concerns in the field of biotechnology to prevent the accidental or deliberate creation of bioweapons?

Andy Weber @ 2023-09-28T23:00 (+1)
  1. International organizations and treaties have a vital role in preventing bioweapons development. We need to redouble our efforts to strengthen the BWC. There also needs to be stronger global governance to prevent accidents and misuse. Kazakhstan's President Tokayev has proposed establishing an International Biosafety Agency. This and other similar concepts to strengthen biosecurity should be actively promoted.

  2. We definitely need to do more on the security side of this equation.

lukasj10 @ 2023-09-28T13:25 (+1) in response to Resources: Pursuing a career in animal advocacy (even if you're a longtermist!)

Thanks for putting this together, Ren! 

Double link for Animal Advocacy in the Age of AI (EA Forum), I think.

Ren Springlea @ 2023-09-28T22:57 (+1)

Thanks, fixed :)

mhint199 @ 2023-09-28T09:39 (+3) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Hindsight is 20/20, but do you think it was a net good for Ukraine to give up its nukes? I know it didn't have the C2 capabilities to actually use them at the time and was economically kind of strongarmed into it, and all else equal I know it's better if fewer countries have them, but maybe keeping them would have prevented this current war, which has significant escalation potential.

Andy Weber @ 2023-09-28T22:51 (+3)

Removing nuclear weapons from Kazakhstan, Belarus, and Ukraine was an extremely important success. Had Ukraine tried to retain nuclear weapons, I believe an armed conflict with Russia would have broken out in the 1990s.
There are many other things that could have been done to prevent Russia’s unprovoked, illegal attack on Ukraine. Ukraine keeping nuclear weapons is not one of them.

Prometheus @ 2023-09-26T23:51 (+6) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Should the US start mass-producing hazmat suits? So that, in the event of an engineered pandemic, the spread of the disease can be prevented, while still being able to maintain critical infrastructure/delivery of basic necessities.

Andy Weber @ 2023-09-28T22:45 (+3)

Physical protection works, so this would be our best defense until medical countermeasures are developed and distributed. We need better and cheaper masks and suits, and they should be widely available in a crisis.

NickLaing @ 2023-09-27T11:43 (+6) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

After what happened in Iraq, do you think the USA is likely to take unilateral action against nuclear/chemical/biological threats if they emerge in the near future? If not, then what might their approach be to such threats?

Andy Weber @ 2023-09-28T22:40 (+3)

Unilateral action should be the last resort. The Iraqi BW program was successfully destroyed by 1996, and it was never reconstituted. Rolf Ekeus wrote a very good book on UNSCOM’s successful efforts. The only place today that such unilateral action would even be considered is Syria’s rump chemical weapons program. The only other countries that have biological and chemical weapons also have nuclear weapons, so unilateral action would not be considered unless it were part of a larger direct conflict.
Our approach should be to do everything we can to strengthen the norm against developing and using biological and chemical weapons. Thankfully very few countries pursue these prohibited weapons. I am also a strong believer in deterrence by denial, and the Council on Strategic Risks has written about this. We and our Allies and partners should have a visible, greatly expanded biodefense effort to deter bio attacks and deny our adversaries the mass casualty effects of such weapons. The U.S. Department of Defense spends less than 1/5 of one percent of its budget on chemical and biological defense. This needs to change. The recent Biodefense Posture Review is a good step in the right direction.

Jeremy Klemin @ 2023-09-28T22:27 (+1) in response to Should you leave bad reviews on low quality vegan restaurants?

If you are the type of person to leave restaurant reviews, then I'd say yeah, you probably should! Lots of non-vegan people eat at these places. Some are flexitarians, while others have been dragged along by friends. If a non-veg's first experience at a vegan restaurant goes poorly, that's probably a bad thing overall. In theory, negative reviews help mitigate that. 

Madhav Malhotra @ 2023-09-27T18:34 (+6) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Are there any misconceptions, stereotypes, or tropes that you commonly see in academic literature around nuclear security or biosecurity that you could correct given your perspective inside government?

Andy Weber @ 2023-09-28T22:12 (+3)

Some of our amazing former Council on Strategic Risks Ending Bioweapons Fellows wrote this outstanding paper debunking common misconceptions about biological weapons: https://councilonstrategicrisks.org/wp-content/uploads/2020/12/Common-Misconceptions-About-Biological-Weapons_BRIEFER-12_2020_12_7.pdf

JoshuaBlake @ 2023-09-27T12:28 (+7) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

What are you more concerned about in the biological weapons space: states, terrorist groups, or lone wolves? Why (if you can share the information)?

Andy Weber @ 2023-09-28T22:06 (+1)

I’m deeply concerned about all of these. Thankfully, only a handful of countries are actively pursuing biological weapons. That said, the few countries that have offensive BW programs are very dangerous. Given the expanding access to knowledge and BW capabilities, I also worry a lot about terrorist groups and lone wolves. They represent a very difficult intelligence and law enforcement challenge.

Daniel Greene @ 2023-09-28T02:42 (+7) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Do you think that raising life scientists' awareness about the potential dual-use risks of their work is net-positive, because they can mitigate those risks, or net-negative, because they will draw the attention of bad actors?

Andy Weber @ 2023-09-28T22:02 (+3)

Definitely net-positive. It is actually shocking how little of this is included in the education of life scientists. We teach bioethics but rarely biosecurity.

Greg_Colbourn @ 2023-09-26T14:00 (+9) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

How seriously are national security people taking the threat of AI today (in particular, extinction risk)? Can we expect meaningful action to create a kill switch soon?

Andy Weber @ 2023-09-28T22:01 (+1)

Lately it is quite high on the national security agenda. The upcoming UK summit demonstrates the importance some leaders attach to it. A lot of this resides in the private sector, so governments will have to work in close partnership with private stakeholders to take meaningful action. I don’t know enough to answer your specific “kill switch” question.

JoelMcGuire @ 2023-09-28T21:59 (+3) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

The prospect of a nuclear conflict is so terrifying I sometimes think we should be willing to pay almost any price to prevent such a possibility. 

But when I think of withdrawing support for Ukraine or Taiwan to reduce the likelihood of nuclear war, that doesn't seem right either -- as it'd signal that we could be threatened into any concession if nuclear threats were sufficiently credible.

How would you suggest policymakers navigate such terrible tradeoffs?

JoelMcGuire @ 2023-09-28T21:54 (+3) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

How much do you think the risk of nuclear war would increase over the century if Iran acquired nuclear weapons? And what measures, if any, do you think are appropriate to attempt to prevent this or other examples of nuclear proliferation?

Ben_West @ 2023-09-27T18:59 (+9) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Do you think research into game theory has increased, decreased, or had no effect on the risks from nuclear weapons? Should this tell us anything about the value of research into the theoretical basis of conflict in the future?

Andy Weber @ 2023-09-28T21:52 (+3)

It is one somewhat useful tool to try to assess and potentially mitigate risks. With nuclear weapons the data on use and near misses are extremely limited. Untested theories can be helpful, but we can’t rely on them too much because the stakes of getting it wrong are so high.

Davidmanheim @ 2023-09-26T11:35 (+10) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

Very happy to see you doing this, and hope you're doing well.

Question: What is your view on catastrophic risks from communicable versus noncommunicable biological or chemical threats - for example, biotoxins, anthrax, or chemical weapons, as opposed to possible communicable disease bioweapons like smallpox? Specifically, do you see a justification for considering [noncommunicable threats] global catastrophic risks? [Edited to clarify.]

(I'm interested in hearing your views on lots of topics, but since I'm not going to ask fifty questions here, I wanted to pick something I think you may disagree with the "EA consensus" about.)

Andy Weber @ 2023-09-28T21:44 (+3)

David, great to hear from you, and I look forward to your other forty-nine questions the next time we meet in person. Communicable biological weapons represent an existential or omnicidal risk. Non-communicable biological weapons could also be catastrophic. In my opinion several million dead is catastrophic, even if it is not existential. Thankfully, much that we can do to prevent the worst case will also reduce the lesser included case. Also, some non-communicable BW agents like antibiotic- and vaccine-resistant anthrax are more probable. So if there is an “EA consensus” to ignore toxins and anthrax, I would disagree.

Tristan Williams @ 2023-09-28T21:39 (+14) in response to From Passion to Depression and Pessimism: My Journey with Effective Altruism

What do you feel like the community could best do for you going forward?

ElliotJDavies @ 2023-09-28T21:37 (+2) in response to Our tofu book has launched!! (Upvote on Amazon)

Amazing! Bought the UK version

Ben_West @ 2023-09-27T18:56 (+11) in response to AMA: Andy Weber (U.S. Assistant Secretary of Defense from 2009-2014)

If, in the next 15 years, there is a human caused biological global catastrophe (say, kills >1% of global population), what credence would you give that artificial intelligence was somehow involved?

Andy Weber @ 2023-09-28T21:32 (+5)

AI is increasing the BW threat in at least two ways. It is expanding “recipe” access to more players, like the internet did. For the last thirty years there were terrorist groups with intent to deploy BW, but they were either interdicted or not very capable. AI will expand access to capability. The second concern is that sophisticated actors will use AI-enabled bioengineering to make enhanced pathogens. I’m pleased that responsible AI companies are working feverishly to put in place guardrails to mitigate both of these risks. To answer your specific question, I would not be at all surprised if in 15 years a global biological catastrophe is AI-enabled.

Larks @ 2023-09-28T17:43 (+4) in response to Weighing Animal Worth

I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

This seems maybe true for animals vs AMF but not for animals vs x-risk.

Peter Wildeford @ 2023-09-28T21:31 (+9)

We're working on animals vs xrisk next!

Larks @ 2023-09-28T17:43 (+4) in response to Weighing Animal Worth

I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

This seems maybe true for animals vs AMF but not for animals vs x-risk.

MichaelStJules @ 2023-09-28T21:25 (+1)

This could depend on your population ethics and indirect considerations. I'll assume some kind of expectational utilitarianism.

The strongest case for extinction and existential risk reduction is on a (relatively) symmetric total view. On such a view, it all seems dominated by far future moral patients, especially artificial minds, in expectation. Farmed animal welfare might tell us something about whether artificial minds are likely to have net positive or net negative aggregate welfare, and moral weights for animals can inform moral weights for different artificial minds and especially those with limited agency. But it's relatively weak evidence. If you expect future welfare to be positive, then extinction risk reduction looks good and (far) better in expectation even with very low probabilities of making a difference, but could be Pascalian, especially for an individual (https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/). The Pascalian concerns could also apply to other population ethics.

If you have narrow person-affecting views, then cost-effective farmed animal interventions don’t generally help animals alive now, so won't do much good. If death is also bad on such views, extinction risk reduction would be better, but not necessarily better than GiveWell recommendations. If death isn't bad, then you'd pick work to improve human welfare, which could include saving the lives of children for the benefit of the parents and other family, not the children saved.

If you have asymmetric or wide person-affecting views, then animal welfare could look better than extinction risk reduction depending on human vs nonhuman moral weights and expected current lives saved by x-risk reduction, but worse than far future quality improvements or s-risk reduction (e.g. https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12927, but maybe animal welfare work counts for those, too, and either may be Pascalian). Still, on some asymmetric or wide views, extinction risk reduction could look better than animal welfare, in case good lives offset the bad ones (https://onlinelibrary.wiley.com/doi/full/10.1111/phpr.12927). Also, maybe extinction risk reduction could look better for indirect reasons, e.g. replacing alien descendants with our happier ones, or because the work also improves the quality of the far future conditional on not going extinct.

EDIT: Or, if the people alive today aren't killed (whether through a catastrophic event or anything else, like malaria), there's a chance they'll live very very long lives through technological advancement, and so saving them could at least beat the near-term effects of animal welfare if dying earlier is worse on a given person-affecting view.

That being said, all the above variants of expectational utilitarianism are irrational, because unbounded utility functions are irrational (e.g. can be money pumped, https://onlinelibrary.wiley.com/doi/abs/10.1111/phpr.12704), so the standard x-risk argument seems based on jointly irrational premises. And x-risk reduction might not follow from stochastic dominance or expected utility maximization on all bounded increasing utility functions of total welfare (https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/ and https://arxiv.org/abs/1807.10895; the argument for riskier bets here also depends on wide background value uncertainty, which would be lower with lower moral weights for nonhuman animals; stochastic dominance is equivalent to higher expected utility on all bounded increasing utility functions consistent with the (pre)order in deterministic cases).
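For reference, the equivalence invoked in the final parenthetical is the standard decision-theoretic characterization of first-order stochastic dominance, stated here in general form (my gloss, not a quote from the linked papers):

```latex
% First-order stochastic dominance and expected utility:
% A dominates B iff A yields at least as high expected utility
% for every bounded, increasing utility function u.
\[
  A \succeq_{\mathrm{FOSD}} B
  \iff
  \mathbb{E}[u(A)] \ge \mathbb{E}[u(B)]
  \quad \text{for all bounded, increasing } u .
\]
```

This is why dominance-based arguments can sidestep the unboundedness problems raised above: they hold under every bounded utility function consistent with the outcome ordering, rather than committing to any particular one.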

SummaryBot @ 2023-09-28T21:07 (+3) in response to [Linkpost] John Wesley's surprisingly EA-aligned views on the use of money

Executive summary: John Wesley, an 18th century Christian preacher, advocated an "earning to give" approach with radical and EA-aligned views on money, including maximizing income ethically, avoiding unnecessary spending, and donating all surplus to effectively help others.

Key points:

  1. Wesley believed money was a tool for good that should be gained ethically, saved by avoiding excess, and given generously.
  2. He took an earning to give approach, saying to maximize income ethically through hard work while minimizing unnecessary expenses.
  3. Wesley advocated total impartiality in giving after basic needs are met, aiding strangers before indulging oneself and family.
  4. He was very radical, arguing to give away all surplus, not just a percentage, to do the most good.
  5. Wesley instructed assessing spending via questions of stewardship, obedience, sacrifice, and eternal reward.
  6. He lived simply himself, limiting expenses to donate large proportions of his income.
  7. Wesley diverged from EA in stronger limits on careers and giving priority to proximal needs first.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

JordanStone @ 2023-09-28T21:05 (+4) in response to JordanStone's Quick takes

I am a researcher in the space community and I recently wrote a post introducing the links between outer space and existential risk. I'm thinking about developing this into a sequence of posts on the topic. I plan to cover:

  1. Cosmic threats - what are they, how are they currently managed, and what work is needed in this area. Cosmic threats include asteroid impacts, solar flares, supernovae, gamma-ray bursts, aliens, rogue planets, pulsar beams, and the Kessler Syndrome. I think it would be useful to provide a summary of how cosmic threats are handled, and determine their importance relative to other existential threats.
  2. Lessons learned from the space community. The space community has been very open with data sharing - the utility of this for tackling climate change, nuclear threats, ecological collapse, animal welfare, and global health and development cannot be overstated. I may include perspective shifts here, provided by views of Earth from above and the limitless potential that space shows us.
  3. How to access the space community's expertise, technology, and resources to tackle existential threats. 
  4. The role of the space community in global politics. Space has a big role in preventing great power conflicts and building international institutions and connections. With the space community growing a lot recently, I'd like to provide a briefing on the role of space internationally to help people who are working on policy and war. 

Would a sequence of posts on space and existential risk be something that people would be interested in? (Please agree- or disagree-vote on this post.) I haven't seen much on space on the forum (apart from on space governance), so it would be something new.

George Stiffman @ 2023-09-28T17:07 (+3) in response to Our tofu book has launched!! (Upvote on Amazon)

Thanks Jessica! I'm so with you on the Chinese alt protein scene... would love to see more folks promoting these foods abroad!

Ooh, thanks for catching the international e-book pricing - just messaged Amazon and they'll correct that today or tomorrow.

I think this is a pretty open question. I'm more skeptical of plant-based meats than a lot of folks, largely because I think "narratives" matter more than "taste" for food selection. Narratives scale, whereas taste is extremely individualized. But dominant food narratives in China (and in the US!) ascribe a lot more value to things like being local, natural, farm-to-table, and culturally rooted than to the things that PBM are good at.

quinn @ 2023-09-28T20:56 (+2)

Huh. When I was in Singapore I felt like I was getting a deeper view of Chinese cuisine than any knowledge I had acquired in the States, but I still didn't get into game-changingly new ways of viewing tofu in particular.

Sean_o_h @ 2023-09-26T13:51 (+17) in response to Ben_West's Quick takes

Yeah, unfortunately I suspect that "he claimed to be an altruist doing good! As part of this weird framework/community!" is going to be substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than "he was doing criminal things in crypto" (which I suspect is just not that interesting on its own at this point, even at such a large scale).

joshcmorrison @ 2023-09-28T20:48 (+8)

Agree with this and also with the point below that the EA angle is kind of too complicated to be super compelling for a broad audience. I thought this New Yorker piece's discussion (which involved EA a decent amount, in a way I thought was quite fair - https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble) might give a sense of magnitude (though the NYer audience is going to be more interested in these sorts of nuances than most).

The other factors I think are: 1. to what extent there are vivid new tidbits or revelations in Lewis's book that relate to EA and 2. the drama around Caroline Ellison and other witnesses at trial and the extent to which that is connected to EA; my guess is the drama around the cooperating witnesses will seem very interesting on a human level, though I don't necessarily think that will point towards the effective altruism community specifically.

Tobias Häberli @ 2023-09-26T06:04 (+23) in response to Ben_West's Quick takes

My hope and expectation is that neither will be focused on EA

I'd be surprised [p<0.1] if EA was not a significant focus of the Michael Lewis book – but agree that it's unlikely to be the major topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a story-telling element. There are Manifold prediction markets on whether the book will mention 80,000 Hours (74%), Open Philanthropy (74%), and GiveWell (80%), but these markets aren't traded a lot and are not very informative.[1]

This video titled The Fake Genius: A $30 BILLION Fraud (2.8 million views, posted 3 weeks ago) might give a glimpse of how EA could be handled. The video touches on EA but isn't centred on it. It discusses the role EAs played in motivating SBF to do earning to give, and in starting Alameda Research and FTX. It also points out that, after the fallout at Alameda Research, 'higher-ups' at CEA were warned about SBF but supposedly ignored the warnings. Overall, the video is mainly interested in the mechanisms of how the suspected fraud happened, where EA is only one piece of the puzzle. One can equally get a sense of "EA led SBF to do fraud" as "SBF used EA as a front to do fraud".

ETA:
The book description[2] mentions "philanthropy", makes it clear that it's mainly about SBF and not FTX as a firm, and describes the book as partly a psychological portrait.

  1. ^

    I also created a similar market for CEA, but with 2 mentions as the resolving criterion. One mention is very likely as SBF worked briefly for them.

  2. ^

    "In Going Infinite Lewis sets out to answer this question, taking readers into the mind of Bankman-Fried, whose rise and fall offers an education in high-frequency trading, cryptocurrencies, philanthropy, bankruptcy, and the justice system. Both psychological portrait and financial roller-coaster ride, Going Infinite is Michael Lewis at the top of his game, tracing the mind-bending trajectory of a character who never liked the rules and was allowed to live by his own—until it all came undone."

Tristan Williams @ 2023-09-28T20:47 (+9)

Update: the court ruled SBF can't make reference to his philanthropy

Dawn Drescher @ 2023-09-28T14:16 (+11) in response to From Passion to Depression and Pessimism: My Journey with Effective Altruism

Yeah… I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

Only Nonlinear attempted something similar in the EA community. (But I condemn exploitative treatment of employees of course!) Open Phil picked up an AI safety prize contest, and I might be missing a few cases. I was very disappointed by how little of this sort happened. Then again, I could've tried to start such an effort myself. I don't have the network, so I'm pretty sure I would've failed. I was also in bed with Covid for the first month.

I suppose it really makes more sense to model EA not as a community but as a scientific discipline. I have a degree in CS, but I wasn't disappointed that the CS community didn't support its own after the FTX collapse, because I never had the expectation that that was something that could happen. EA, it seems to me, is better understood within that reference class. (Unfortunately – not because there's something wrong with scientific disciplines, but because I would've loved to be part of a real community too.)

quinn @ 2023-09-28T20:35 (+2)

Open Phil did some lost-wages stuff after the FTXsplosion, but I think it was evaluated case by case, and some people may have been left behind.

Steven Byrnes @ 2023-09-24T19:06 (+15) in response to Protest against Meta's irreversible proliferation (Sept 29, San Francisco)

In your hypothetical, if Meta says “OK you win, you're right, we'll henceforth take steps to actually cure cancer”, onlookers would assume that this is a sensible response, i.e. that Meta is responding appropriately to the complaint. If the protester then gets back on the news the following week and says “no no no this is making things even worse”, I think onlookers would be very confused and say “what the heck is wrong with that protester?”

Holly_Elmore @ 2023-09-28T20:32 (+6)

It was a difficult point to make and we ended up removing it where we could.

michel @ 2023-09-28T20:29 (+3) in response to New page on animal welfare on Our World in Data

Thank you for making this page

Niyorurema Pacifique @ 2023-07-18T05:45 (+23) in response to Open Thread: July - September 2023

Hello everyone,

I am Pacifique Niyorurema from Rwanda. I was introduced to the EA movement last year (2022). I did the introductory program and felt overwhelmed by the content: the 80k hours podcast, Slack communities, local groups, and literature. Having a background in economics, and with the mission aligning with my values and beliefs, I felt I had found my place. I am pretty excited to be in this community. With time, I plan to engage more in the communities and contribute as an active member. I tend to lean more towards meta EA, effective giving and governance, and poverty reduction.

Best.

Evan_Gaensbauer @ 2023-09-28T20:21 (+2)

Welcome to the EA Forum! Thanks for sharing!

Corentin Biteau @ 2023-09-28T20:15 (+1) in response to A Primer for Insect Sentience and Welfare (as of Sept 2023)

Thanks! This is useful.

Peter Wildeford @ 2023-09-28T14:41 (+23) in response to Weighing Animal Worth

I want to add that personally, before this RP "capacity for welfare" project, I started with an intuition that a human year was worth about 100-1000 times more than a chicken year (mean ~300x) conditional on chickens being sentient. But after reading the RP "capacity for welfare" reports thoroughly, I have now switched roughly to the RP moral weights, valuing a human year at about 3x a chicken year conditional on chickens being sentient (which I think is highly likely, but that is handled in a different calculation). This conclusion came as a large surprise to me.

Obviously me changing my views to match RP research is to be expected given that I am the co-CEO of RP. But I want to be clear that, contra your suspicions, it is not the case (at least for me personally) that I started out with an insanely high moral value on chickens and then helped generate moral weights that maintained it (though note that my involvement in the project was very minimal and I didn't do any of the actual research). I suspect this is also the case for other RP team members.

That being said, I agree that the tremendous uncertainty involved in these calculations is important to recognize, plus there likely will be some potentially large interpersonal variation based on having different philosophical assumptions (e.g., not hedonism) as well as different fundamental values (given moral anti-realism which I take to be true).

Linch @ 2023-09-28T20:10 (+4)

I think my median is mostly still at priors (which started off not very different from yours). Though I guess I have more uncertainty now, so if forced to pick, the average is closer to the project's results than I previously thought, simply because of how averages work.

BruceF @ 2023-09-24T15:18 (+10) in response to Price-, Taste-, and Convenience-Competitive Plant-Based Meat Would Not Currently Replace Meat

Thanks for your response, Jacob - 

Here’s my/GFI’s principal thesis on this topic: 

Taste and price are essential to the success of plant-based and cultivated meat, and it’s going to be very hard to reach taste and price parity for either product. So we think it makes sense to focus on those two factors. But that doesn’t mean that once we’ve solved those two factors, we’re done.

As noted in a previous post, we have added nutrition as a third critical factor, mostly in the face of negative messaging around ultra processing and the critical role of early adopters (i.e. people who will sacrifice on taste, price, or both - but only if they see nutrition benefits). See, e.g., gfi.org/nutrition. 

The two quotes you add from me are not (I don’t think) different from what I said in my previous post, and they don’t discuss (let alone defend) “strong form PTC” theory. These are examples of me focusing on the things I think are most critical; strong PTC does not come up, and I don't defend it.

In the first case, “even if you think that is not sufficient, I would contend that that is absolutely necessary if we're going to change the massive [upward] trajectory through 2050” - this is GFI’s view, and it’s quite different from strong PTC theory.

And in the second case, since we’re at 1% plant-based meat right now and 0% cultivated meat, my statements that “we can have many times the penetration that we have right now if we can get to price and taste parity” and “if you can get to price and taste parity, you can make a huge, huge dent”: 1) don’t mean that nothing else is required; and also 2) don’t mean that we magically reach 50%+.

Aside: It feels curious to me that you continue to claim I believe something that I am telling you explicitly that I don’t believe; you are essentially saying “you believe this and you’re wrong,” and I’m saying “I agree that’s wrong, and I don’t believe it.” This feels very odd, since we do have a few actual disagreements that feel important. Specifically:

We still appear to have substantial disagreement w/r/t the importance of price- & taste-competitive alt meats to our shared desire to see industrial meat production levels fall - I continue to think that alt proteins offer our only real hope of that happening globally, and so I'll be curious to learn what your alternatives are and why you see them as viable.

With regard to your four specific critiques: I think the overwhelming evidence of the importance of taste and price (including in the three sections from your paper) is a strong response to specific critiques about specific studies; i.e., the overwhelming preponderance of the evidence indicates the importance of taste and price to food choice.

Finally and perhaps most importantly, IMO: I’ll be extremely interested to read what you think might decrease industrial animal agriculture globally, how big you think that difference could be (and why), and how you see that theory working in, e.g., developing economies where growth in meat consumption will be greatest over the next few decades. 

While I’m certainly enthused about the value of “defaults, labeling, classroom education, shifting social norms, and non-analog plant-based options,” two things: 1) those are the strategies of the past 50+ years; they work to a point and are absolutely worthwhile (they’re why I’m doing this work, e.g.), but they have not (so far) even decreased per capita meat consumption in the U.S.; and 2) I’m not sure how they scale. One especially promising aspect of alt proteins (IMO) is that science anywhere can result in more competitive products everywhere (same as solar/wind energy, electric vehicles, etc.). 

In the end, I think we need a both/and approach, but I think that alt proteins are the only approach that has a shot at slashing the global consumption of industrial animal meat.

Jacob_Peacock @ 2023-09-28T19:44 (+1)

they don’t discuss (let alone defend) “strong form PTC” theory.

I suppose we simply disagree here. The first quote I cite states "the products need to taste the same or better and cost the same or less." The next sentence strongly implies that "the market can kick in and take it from there, just shoot us up the S-curve," with "necessary but not sufficient" relegated to a "quibble." In conjunction with the Q&A, I think a reasonable audience member would infer that your statements mean roughly "if price and taste parity were met, a majority of consumers would soon switch." Conversely, it's hard to imagine audience members construing "up the S-curve," "huge, huge dent," and "change the massive trajectory" to mean, for example, 20% of people switching over two decades.

And in the second case, since we’re at 1% plant-based meat right now and 0% cultivated meat, my statements that “we can have many times the penetration that we have right now if we can get to price and taste parity” and “if you can get to price and taste parity, you can make a huge, huge dent”: 1) don’t mean that nothing else is required; and also 2) don’t mean that we magically reach 50%+.

Can you clarify roughly what numbers you intended "many times the penetration" and "huge, huge dent" to refer to here?

It feels curious to me that you continue to claim I believe something that I am telling you explicitly that I don’t believe; you are essentially saying “you believe this and you’re wrong,” and I’m saying “I agree that’s wrong, and I don’t believe it.”

I don't think you believe this given you're clearly saying you do not. Instead, as I wrote, "I’d contend that you (and GFI) have prominently promoted and supported the strong PTC hypothesis. Or, at the very least, made statements that reasonable people interpret to support the strong PTC hypothesis." The situation to me begins to resemble a motte-and-bailey fallacy, with the strong PTC hypothesis as the bailey and the weak as the motte.

With regard to your four specific critiques: I think the overwhelming evidence of the importance of taste and price (including in the three sections from your paper) are a strong response to specific critiques about specific studies. i.e., the overwhelming preponderance of the evidence indicates the importance of taste and price to food choice.

You're simply reasserting your disagreement and declining to engage with the critiques, despite being asked multiple times now (1, 2). In fact, none of the studies you cited address all four of the issues, and studies simply repeating the issues do not make for overwhelming evidence. I don't follow your argument against "specific critiques about specific studies"; presumably vague critiques of unspecified studies would be unhelpful. For a third time, I'd ask: are you able to address these critiques, especially in those studies that predate 2015, when you started claiming price and taste as the most important factors in food choice?

In the end, I think we need a both/and approach, but I think that alt proteins are the only approach that has a shot at slashing the global consumption of industrial animal meat.

This seems self-contradictory: why would you support another solution, if you think alternative proteins are "the only approach that has a shot"? By assumption, that other solution would not have a shot.

I look forward to your comments on my forthcoming work on other strategies to reduce meat usage. I'll let you have the last word here.

Larks @ 2023-09-28T18:15 (+3) in response to The Bulwark's Article On Effective Altruism Is a Short Circuit

Do you think there is a symmetrical obligation for people writing positive things about vaccines? If vaccines were in fact not safe or effective then promoting them would also be very harmful.

ElliotJDavies @ 2023-09-28T19:21 (+2)

You're trying to reason from first principles without factoring in people's cognitive biases. In a world without cognitive biases, a symmetrical obligation of "communication seriousness" makes sense.

However, I suspect when I give the concrete example of vaccines above, you actually agree with the statement, because your brain is factoring in the negativity bias that exists towards vaccines. 

Just to make this more concrete: an incredible amount of work is done to ensure vaccines are safe, and that people trust them. But a handful of viral social media posts can erode people's confidence. In this world, a symmetrical obligation does not make sense. 

Edit: added last paragraph.

kyle_fish @ 2023-09-28T18:51 (+4) in response to Weighing Animal Worth

Hm, I'm not sure how I would have read this if it had been your original wording, but in context it still feels like an effort to slightly spin my claims to make them more convenient for your critique. So for now I'm just gonna reference back to my original post—the language therein (including the title) is what I currently endorse.

Jeff Kaufman @ 2023-09-28T19:15 (+2)

Hmm, sorry! Not trying to spin your claims, and I would like to have something here that you'd be happy with. Would you agree that your post views "currently negative and getting more negative" as more likely than not?

(I'm not happy with "may" because it's ambiguous between indicating uncertainty vs possibility. more)

Ulrik Horn @ 2023-09-28T14:12 (+2) in response to New FLI Podcast on tackling climate change

Hi Johannes, I really enjoyed the structure of the interview and your detailed and careful answers. This made it easier to pinpoint a part of the interview where I think you might have too much confidence. 

This part is around (3), on variance around outcomes. If I understand correctly, the argument put forward in the podcast is more or less that if we had given nuclear the same social backing as wind and solar, nuclear would have been highly likely to follow similar cost reductions. I think we disagree, but to clarify it might be helpful for you to put some numbers on it? Perhaps something like an X% chance of reducing the LCoE of SMRs by more than 50% from the only built SMR for which I could find cost data, whose LCoE I very roughly calculated at $127/MWh (I might be low-balling, and there might be hidden costs, as Russia is not known for transparency). However, I take it from your statements that you think there might be a ~70% chance of SMRs having become competitive with wind and solar if we had decided to support that technology similarly. Wind and solar have middle-of-the-range LCoEs at around $50/MWh and $60/MWh respectively, eyeballing Lazard's charts.

I think this is overly optimistic but not impossible. So my disagreement is more about the strength of your claim, not that it is impossible in all possible alternative worlds. I would, very provisionally, put something like a 30% chance that SMRs with a deployment in MW similar to wind or solar would end up below $80/MWh, and maybe a 5% chance of getting closer to the $50-$60 range of solar and wind.

One main reason for this is that nuclear is not modular. Moore's law, solar cost decreases and also the Carlson curve are all dependent on massive scale, factory manufacturing, etc. Professor Bent Flyvbjerg at Oxford covers this quite well (especially solar, but also touching on nuclear and wind) and bases it on extensive data his team has collected. As an example that I think I have used on the forum before, the HTR-PM's first reactor took ~10 years to build. I doubt the first solar panels or wind turbines took that long.

I will stop writing as this comment is already long but would be happy to have a conversation about this. Perhaps there is something I am missing. I am coming from a project management and engineering background so I could be biased towards being somewhat dismissive of social and political influences.
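For readers less familiar with the metric: the LCoE ($/MWh) figures traded back and forth in this thread are levelized costs of energy. The standard textbook definition (my gloss, not specific to anything Ulrik or Johannes computed) is discounted lifetime costs over discounted lifetime generation:

```latex
% Levelized cost of energy over a plant lifetime of T years:
% I_t = investment, O_t = operations & maintenance, F_t = fuel cost,
% E_t = electricity generated in year t, r = discount rate.
\[
  \mathrm{LCoE}
  = \frac{\displaystyle\sum_{t=0}^{T} \frac{I_t + O_t + F_t}{(1+r)^t}}
         {\displaystyle\sum_{t=0}^{T} \frac{E_t}{(1+r)^t}}
\]
```

Lower capital and operating costs, cheaper financing (lower r), faster builds (so costs are discounted less before generation starts), and higher lifetime output all push this number down, which is why build times and learning curves dominate the disagreement above.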

jackva @ 2023-09-28T19:07 (+2)

Hi Ulrik,

thanks for your comment and for engaging!

I think there is a mix of (1) looking at things at (a) different time-scales and (b) different geographical levels (~ differences in perspective), and (2) misunderstandings of what my view is here, so let me try to clarify:

(1) (a)
When I speak about social choice as the primary driver of techno-economic outcomes, I am taking a multi-decadal view on the level of the energy system at large, which is quite different from the perspective of a project manager and engineer in the short-term. It is certainly true that right now, as I discuss in the pod, it is easy and fast to build renewables and it is slow and difficult to build nuclear.

All I am saying is that the fact that this is so is the result of long processes of differential societal commitment: in the case of solar and wind, policy support since the 1970s aimed at getting renewables cheap; in the case of nuclear, similarly long efforts by large constituencies to make nuclear expensive and hard to build (+ other factors, as we discussed here).

(1)(b)
It is also important not to conflate project delivery times with energy system transformation at the system level. At the same time as renewables are cheap and easy to add to the grid today, France was much faster and more complete in decarbonizing the grid in the 1970s and 1980s with nuclear than anything that has been achieved with renewables to date (as I also discuss in the pod, the circumstances of France in the 1970s are not replicable).

In this context, it is also important to point out that the value proposition of SMRs or any other clean firm power option does not lie in meeting the LCOE of solar but rather in providing an energy system function that is distinct from the one that solar is providing. If one wants to choose an adequate cost analogue it would be solar LCOE + cost of seasonal storage + transmission in most contexts, i.e. marginal solar can be cheap, with more expensive clean firm power options in terms of LCOE still being quite valuable.

For (1)(a) and (b): you can observe all of the things you mention and what I say can still be true, i.e. these data points are the result of long-term societal choices, and what matters is the ease and speed of energy system transformation, not individual projects. What I would say is that, philanthropically speaking, shaping the long-run system-level picture is causally more relevant.


(2)
I think your comment sometimes mixes arguments about SMRs and traditional nuclear and I think about them as quite distinct:

Large-scale nuclear: As I discuss in the pod, we see a 5x difference in building time for reactors when we compare France in the 1970s to Western countries today. All of this difference is the result of social choice; it is not that large-scale nuclear is a new technology. It is not all affectable social choice (i.e. we could not induce the conditions that triggered 1970s France), but the variance is clearly not inherently technical. If we had had a pro-nuclear environmental movement that got concerned about climate change in the 1990s, this could look very different.

SMRs: As I have said on various occasions, I am not confident we will get cheap SMRs; I think about it as a bet worth making. But for SMRs we should in principle expect learning curves similar to those of other modular technologies - that is the whole idea behind SMRs. They don't need to reach the LCOE of solar to be a valuable addition to the energy system as long as transmission + seasonal storage remain relevant barriers to a 100% intermittent renewable grid. And, here, at the same time as we have spent hundreds of billions on making renewables cheap, even in the US we still have a Nuclear Regulatory Commission that makes nuclear innovation harder than it should be. Given that the primary reason renewables are cheap today is the efforts of actively anti-nuclear jurisdictions (Germany, California, Denmark), and that no country has made a bet on SMRs parallel to the bets these jurisdictions have made on solar and wind, it seems quite plausible that similar learning curves and cost reductions could have been induced for SMRs as well, had they enjoyed support similar to what renewables received.
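For intuition on the learning-curve bet, here is a minimal Wright's-law sketch; the 20% learning rate and the starting cost are assumptions for illustration only, not projections for any real SMR design:

```python
import math

def unit_cost(initial_cost, initial_units, cumulative_units, learning_rate=0.20):
    """Cost falls by `learning_rate` with each doubling of cumulative production."""
    doublings = math.log2(cumulative_units / initial_units)
    return initial_cost * (1 - learning_rate) ** doublings

# Hypothetical: first-of-a-kind SMR power at $120/MWh after 10 units built.
for n in (10, 100, 1000):
    print(f"{n:>5} units -> ~${unit_cost(120, 10, n):.0f}/MWh")
# 10 -> $120, 100 -> ~$57, 1000 -> ~$27 (illustrative; real learning rates vary widely).
```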
 

RayTaylor @ 2023-09-28T19:06 (+1) in response to EA is a global community - but should it be?

I think it's wise to separate the FTX and due diligence issue from the broader thesis. Here I'm just commenting on due diligence with donors.

Who was/is responsible for checking the probity or criminality of ...

 (a) FTX and Alameda?

 (b) donors to a given charity like CEA? (I put some links on this below)

(a) First it's their own board/customers/investors, but presumably supervisory responsibility is or should also be with central bank regulators, FBI, etc. If the CEO of a company is a member of Rotary, donates to Oxfam, invests in a football team, it doesn't suddenly become the primary responsibility of all of those entities (ahead of board, FBI etc) to check out his business and ethics and views, unless (and this is important) he's going to donate big and then have influence on programmes, membership etc. 

(See the links below on how both due diligence and reputational considerations* can matter a lot to the recipient charity. If there is some room for doubt about the donor, but it doesn't reach a threshold, it may be possible to create a layer of distance or supervision, e.g. create a trust with its own board, which does the donating.)

(b) Plenty of charities accepted donations from Enron, Bernie Madoff and others. 

Traditionally, their job is to do their job, not evaluate the probity of all their donors. However, there has been a change of mood since oil industry disinvestment campaigns and the opioid crisis (with the donations from the Sackler family, here in the UK at least**). Political parties are required to do checks on donors. 

Marshall Rosenberg turned down lots of people who wanted to fund NVC and the cNVC nonprofit, because he felt that taking money put him into relationship with them, and some companies he just didn't want to be in relationship with. This worked well for him, and made sure there was no pressure to shift focus, but it did frustrate his staff team quite often.

It might be possible as a matter of routine policy to ask large donors if they are happy to have their main income checked, especially if they want to be publicly associated with a particular project, or to go more discreetly to ratings agencies and so on. A central repository of donor checks could be maintained, to minimise costs. This wouldn't be perfect, but a due diligence process, ideally open and transparent, would sometimes be a good defence if problems arise later? 

These are the (more minimal) UK Charity Commission guidelines on checking out your donors, and even this might have helped if it had been done rigorously:

https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/550694/Tool_6.pdf 

Here's a plain English version where the overall advice is "be reasonable":

https://manchestercommunitycentral.org/sites/manchestercommunitycentral.co.uk/files/Ethical%20fundraising%20-%20how%20to%20conduct%20due%20diligence%20on%20potential%20donors_0.pdf 

**This is for bigger donations:
https://www.nao.org.uk/wp-content/uploads/2017/08/Due-diligence-processes-for-potential-donations.pdf  

*This is about how things went wrong for Prince Charles's charities:
https://www.charitytoday.co.uk/due-diligence-for-charities-ensuring-transparency-and-trustworthiness/ 

Jeff Kaufman @ 2023-09-28T15:44 (+2) in response to Weighing Animal Worth

Sorry for all the noise on this! I've now added "likely" to show that this is uncertain; does that work?

kyle_fish @ 2023-09-28T18:51 (+4)

Hm, I'm not sure how I would have read this if it had been your original wording, but in context it still feels like an effort to slightly spin my claims to make them more convenient for your critique. So for now I'm just gonna reference back to my original post—the language therein (including the title) is what I currently endorse.

Jeff Kaufman @ 2023-09-28T03:33 (+14) in response to Net global welfare may be negative and declining

There are a ton of judgement calls in coming up with moral weights. I'm worried about a dynamic where the people most interested in getting deep into these questions are people who already intuitively care pretty strongly about animals, and so the best weights available end up pretty biased

kyle_fish @ 2023-09-28T18:42 (+9)

I'm concerned about that dynamic too and think it's important to keep in mind, especially in the general case of researchers' intuitions tending to bias their work, even when attempting objectivity. However, I'm also concerned about the dismissal of results like RP's welfare ranges on the basis of speculation about the researchers' priors and/or the counterintuitive conclusions, rather than on the merits of the analyses themselves.

Lorenzo Pascal @ 2023-09-28T18:23 (+1) in response to Career advice the Probably Good team would give our younger selves

Good tips, thanks!

Davidmanheim @ 2022-11-18T11:50 (+14) in response to EA is a global community - but should it be?

I don't mean either (1) or (2), but I'm not sure it's a single argument. 

First, I think it's epistemically and socially healthy for people to separate giving to their community from altruism. To explain a bit more, it's good to view your community as a valid place to invest effort independent of eventual value. Without that, I think people often end up being exploitative, pushing people to do things instead of treating them respectfully, or being dismissive of others, for example, telling people they shouldn't be in EA because they aren't making the right choices. If your community isn't just about the eventual altruistic value they will create, those failure modes are less likely.

Second, it's easy to lose sight of eventual goals when focused on instrumental ones, and get stuck in a mode where you are goodharting community size, or dollars being donated - both community size and total dollars seem like unfortunately easy attractors for this failure.

Third, relatedly, I think that people should be careful not to build models of impact that are too indirect, because they often fail at unexpected places. The simpler your path to impact is, the fewer failure points exist. Community building is many steps removed from the objective, and we should certainly be cautious about doing naïve EV calculations about increasing community size!

RayTaylor @ 2023-09-28T18:21 (+1)

> healthy for people to separate giving to their community from altruism.

Is this realistically achievable, with the community we have now? How?


(I imagine it would take a comms team with a social psychology genius and a huge budget, and it still would only work partially, and would require very strong buy-in from current power players and a revision of how EA is presented and introduced - but perhaps you think another, leaner and more viable approach is possible?)

>The simpler your path to impact is, the fewer failure points exist

That's not always true. 

Some extreme counter-examples:

a. Programmes on infant stunting keep failing, partly because an overly simple approach has been adopted (intensive infant feeding, Plumpy Nuts etc, with insufficient attention to maternal nutrition, aflatoxin removal, treating parasites in pregnancy, adolescent nutrition, conditional cash transfers etc)

b. A critical path plan was used for Apollo, and worked much better than the simpler Soviet approach, despite being much more complicated. 

c. The Brexit Leave campaign SEEMED simple but was actually formed through practice on previous campaigns, and was very sophisticated "under the hood", which made it hard to oppose.

Jeff Kaufman @ 2023-09-28T17:56 (+4) in response to Net global welfare may be negative and declining

I don't think that thought experiment works for me: what would it even mean for a human to experience a year of chicken life?

emre kaplan @ 2023-09-28T18:19 (+3)

Yeah, I agree that it is not the most natural and straightforward thought experiment. Unfortunately, hedonic comparisons make the most sense to me when I can ask "would I prefer experience A or B", and asking this question is much more difficult when you try to compare experiences across animals.

But at least it should be physically imaginable to get me lobotomised to have mental capacities equivalent to that of a chicken. I'm much less likely to care about what happens to future me if my mental capacities were altered to be similar to that of an ant. But if my brain was altered to be similar to a chicken brain, I'm much more afraid of getting boiled alive, being crammed in a cage etc.

Catherine Low @ 2023-09-20T19:53 (+62) in response to Sharing Information About Nonlinear

Some confidentiality constraints have been lifted in the last few days, so I’m now able to share more information from the Community Health and Special Projects team to give people a sense of how this case went from our perspective, and how we think about these things. 
 

Previous updates:

To give a picture of how things happened over time:

  • Starting mid last year, our team heard about many of the concerns mentioned in this post.
  • At the time of our initial conversations with former staff/associates of Nonlinear, they were understandably reluctant for us to do anything that would let on to Nonlinear that they were raising complaints. This limited our ability to hear Nonlinear’s side of the story, though members of our team did have some conversations with Kat that touched on some of these topics. It also meant that the former staff/associates did not give permission at that time for us to take some steps that we suggested. They also suggested some steps that we didn’t see as feasible for us. 
  • At one point we discussed the possibility of the ex-staff writing a public post of some kind, but at that time they were understandably unwilling to do this. Our impression is that the impetus for that eventually coming together was Ben being willing to put in a lot of work.
  • Over time, confidentiality became less of a constraint. The people raising the concerns became more willing to have information shared, and some people made public comments, meaning we were able to take some more actions without compromising confidentiality. We were then able to take some steps including what we describe here, and pointing various people to the publicly available claims, to reduce the risk of other people ending up in bad situations.
  • We had been considering taking more steps when we heard Ben was working with the former staff/associates on a public post. We felt that this public post might make some of those steps less necessary. We kept collecting information about Nonlinear, but did not do as much as we might have done had Ben not been working on this.
  • We continued to track Nonlinear and were ready to prioritise the case more highly if it seemed that the risk to others in the community was rising. 
AnonymousEAForumAccount @ 2023-09-28T18:19 (+8)

They also suggested some steps that we didn’t see as feasible for us. 

Can you disclose the specifics of some or all of these steps and the reasons why you didn't think they were feasible?

Grumpy Squid @ 2023-09-26T14:42 (+16) in response to Sharing Information About Nonlinear

In your earlier post, you write:

Nonlinear has not been invited or permitted to run sessions or give talks relating to their work, or host a recruiting table at EAG and EAGx conferences this year.

And

Kat ran a session on a personal topic at EAG Bay Area 2023 in February. EDIT: Kat, Emerson and Drew also had a community office hour slot at that conference.

Community office hours are an event that organizers invite you to sign up for (not all EAG attendees can sign up). While not as prominent as a recruiting table or talk, they still signal status to the attendees.

Given that public comments were made as early as November, it seems that there was sufficient time to ensure they were disinvited from the event in February. Additionally, even if you don't table at EAG, you can still actively recruit via 1-1 meetings.

I think the lack of acknowledgement or explanation of how this choice happened - and whether CHT sees this as a mistake - worries me, especially now that the anonymity constraints have been lifted.

AnonymousEAForumAccount @ 2023-09-28T18:16 (+14)

I agree with all of this, and hope the CH team responds. I'd also add that the video of Kat's talk has a prominent spot on the EAG 2023 playlist on CEA's official youtube channel. That video has nearly 600 views.

ElliotJDavies @ 2023-09-28T18:03 (+2) in response to The Bulwark's Article On Effective Altruism Is a Short Circuit

I can see your perspective, and I recognise it's context-dependent.

However, if a journalist is writing, or publishing, about deleterious effects from vaccines, they should be very careful to ensure what they're writing is accurate, because we have a track record of such output being wrong and the harm being irreversible. [1]

I suspect I could make a similar argument for a philosopher writing about an ethical or moral movement. It might take more time, but conclude in a similar place.

  1. ^

    https://en.wikipedia.org/wiki/MMR_vaccine_and_autism#Media_role

Larks @ 2023-09-28T18:15 (+3)

Do you think there is a symmetrical obligation for people writing positive things about vaccines? If vaccines were in fact not safe or effective then promoting them would also be very harmful.

Davidmanheim @ 2022-11-19T15:56 (+11) in response to EA is a global community - but should it be?

"What is missing to me is an explanation of exactly how your suggestions would prevent a future SBF situation."

1. The community is unhealthy in various ways. 
2. You're suggesting centralizing around high trust, without a mechanism to build that trust.

I don't think that the EA community could have stopped SBF, but they absolutely could have been independent of him in ways that mean EA as a community didn't expect a random person most of us had never  heard of before this to automatically be a trusted member of the community. Calling people out is far harder when they are a member of your trusted community, and the people who said they had concerns didn't say it loudly because they feared community censure. That's a big problem.

RayTaylor @ 2023-09-28T18:07 (+1)

It's also hard to call people out when a lot of you are applying to him/them for funding, and are mostly focused on trying to explain how great and deserving your project is.

One good principle here is "be picky about your funders". Smaller amounts from steady, responsible, principled and competent funders, who both do and submit to due diligence, are better than large amounts from others. 

This doesn't mean you HAVE to agree with their politics or everything they say in public - it's more about having proper governance in place, and funders being separate from boards and boards being separate from executive, so that undue influence and conflicts of interest don't arise, and decisions are made objectively, for the good of the project and the stated goals, not to please an individual funder or get kudos from EAs.

I've written more about donor due diligence in the main thread, with links.

Linch @ 2023-09-28T00:04 (+8) in response to The Bulwark's Article On Effective Altruism Is a Short Circuit

Just want to quickly register that I disagree with your comment (and disagree-voted). This proposed policy reminds me too much of the original meaning of "political correctness" and "party line." My guess is that we should not have a higher bar for critical voices than complimentary ones, no matter how righteous our cause areas might be. 

ElliotJDavies @ 2023-09-28T18:03 (+2)

I can see your perspective, and I recognise it's context-dependent.

However, if a journalist is writing, or publishing, about deleterious effects from vaccines, they should be very careful to ensure what they're writing is accurate, because we have a track record of such output being wrong and the harm being irreversible. [1]

I suspect I could make a similar argument for a philosopher writing about an ethical or moral movement. It might take more time, but conclude in a similar place.

  1. ^

    https://en.wikipedia.org/wiki/MMR_vaccine_and_autism#Media_role

Davidmanheim @ 2022-11-20T08:25 (+7) in response to EA is a global community - but should it be?

I think that social ties are useful, yet having a sprawling global community is not. I think that you're attacking a bit of a straw man, one which claims that we should have no relationships or community whatsoever.

I also think that there is an unfair binary you're assuming, where on one side you have "unpaid, ad-hoc community organising" and on the other you have the current abundance of funding for community building. Especially in EA hubs like London, the Bay Area, and DC, the local community can certainly afford to pay for events and event managers without needing central funding, and I'd be happy for CEA to continue to do community building - albeit with the expectation that communities do their own thing and pay for events, which would be a very significant change from the current environment. Oh, and I also don't live in an EA hub, and have never attended an EAG - but I do travel occasionally, and have significant social interaction with both EAs and non-EAs working in pandemic preparedness, remotely.  The central support might be useful, but it's far from the only way to have EA continue.

RayTaylor @ 2023-09-28T18:01 (+1)

Both of you now seem to be focusing specifically on funding for community building, whereas the original post was much broader:

... maybe if those broader issues were addressed, the question of which community-building to fund would then be easier to work out?

RayTaylor @ 2023-09-28T17:57 (+1) in response to EA is a global community - but should it be?

Hi David, I think I follow your thinking, but I'm not hopeful that there is a viable route to "ending the community" or "ending community-building" or ending people "identifying as EAs", even if a slight majority agreed it was desirable, which seems unlikely.

On the other hand, I very much agree that a single Oxford- or US-based organisation can't "own" and control the whole of effective altruism, and aiming not for a "perfect supertanker" but a varied "fleet" or "regatta" of EA entities would be preferable, and much more viable. Then supervision and gatekeeping and checks could be done in a single timezone, and the size of EA entities and groups could be kept at 150 or less. Also, different EA regions or countries or groups could develop different strengths.

We'd end up with a confederation, rather like Oxfam, the Red Cross, Save the Children etc. (It's not an accident that the federated movements often have a head office in the Netherlands or Switzerland, where the laws on what NGOs/charities can and can't do are more flexible than in the UK or USA, which is kinda helpful for an 'unusual' movement like EA.)

Oxfam themselves also formed INTRAC as a training entity, and one could imagine CEA doing something similar, offering training in (for example)
- lessons learned
- bringing in M&E trainers for evaluation training
- PLA trainers for participatory budgeting etc.

emre kaplan @ 2023-09-28T17:48 (+4) in response to Net global welfare may be negative and declining

I think the question "would you rather see one additional human life-year or 3 chicken life-years" conflates the hedonic comparison with special obligations to help human beings. One might prefer human experiences over non-human experiences even when they are hedonically equivalent because of special obligations. If we're exclusively interested in welfare, I think a better thought experiment would be how you would feel about having these experiences yourself.

If God offered you an opportunity to have an extra year of average human life, and on top of that, 1 year of average layer hen life, 1 year of average broiler chicken life, 10 years of average farmed fish life, and 100 years of farmed shrimp life, would you accept that offer? Of course that experiment is too artificial, but people go through extreme illnesses that cause them to have mental capacities similar to a chicken's. I sometimes think about how afraid I would be of being reincarnated after my death, going through mental changes that make my capacities equivalent to those of a chicken, and then going through all the average chicken experiences. I personally wouldn't take that risk in exchange for one additional year of human life.

Jeff Kaufman @ 2023-09-28T17:56 (+4)

I don't think that thought experiment works for me: what would it even mean for a human to experience a year of chicken life?

Larks @ 2023-09-28T17:43 (+4) in response to Weighing Animal Worth

I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

This seems maybe true for animals vs AMF but not for animals vs x-risk.

Ariel Simnegar @ 2023-09-28T17:56 (+3)

Yes, I agree with that caveat.

Jeff Kaufman @ 2023-09-27T20:22 (+45) in response to Net global welfare may be negative and declining

This sort of work is very sensitive to your choices for moral weights, and while I do appreciate you showing your input weights clearly in a table I think it's worth emphasizing up front how unusual they are. For example, I'd predict an overwhelming majority of humans would rather see an extra year of good life for one human than four chickens, twelve carp, or thirty three shrimp. And, eyeballing your calculations, if you used more conventional moral weights your bottom-line conclusion would be that net global welfare was positive and increasing.

emre kaplan @ 2023-09-28T17:48 (+4)

I think the question "would you rather see one additional human life-year or 3 chicken life-years" conflates the hedonic comparison with special obligations to help human beings. One might prefer human experiences over non-human experiences even when they are hedonically equivalent because of special obligations. If we're exclusively interested in welfare, I think a better thought experiment would be how you would feel about having these experiences yourself.

If God offered you an opportunity to have an extra year of average human life, and on top of that, 1 year of average layer hen life, 1 year of average broiler chicken life, 10 years of average farmed fish life, and 100 years of farmed shrimp life, would you accept that offer? Of course that experiment is too artificial, but people go through extreme illnesses that cause them to have mental capacities similar to a chicken's. I sometimes think about how afraid I would be of being reincarnated after my death, going through mental changes that make my capacities equivalent to those of a chicken, and then going through all the average chicken experiences. I personally wouldn't take that risk in exchange for one additional year of human life.

Ariel Simnegar @ 2023-09-28T15:08 (+8) in response to Weighing Animal Worth

(Disclaimer: I take RP's moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)

Specifically with respect to cause prioritization between global health and animal welfare, do you think the evidence we've seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?

In "Worldview Diversification" (2016), Holden Karnofsky wrote that "If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF]." In 2023, Vasco Grilo replicated this finding by using the RP weighs to find corporate campaigns 1.7k times as effective.

Let's say RP's moral weights are wrong by an order of magnitude, and chickens' experiences actually only have 3% of the moral weight of human experiences. Let's say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.
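Spelling out that arithmetic (a minimal sketch; the 1,700x figure is Grilo's estimate cited above, and the two discounts are the stipulations in this paragraph):

```python
baseline = 1700        # campaigns vs. top global health charities, RP weights (Grilo, 2023)
weight_discount = 0.1  # suppose RP's chicken weights are 10x too high
hedonic_share = 0.1    # suppose hedonic goods/bads are only 10% of welfare

adjusted = baseline * weight_discount * hedonic_share
print(f"~{adjusted:.0f}x")  # ~17x: still roughly an order of magnitude advantage
```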

While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

Larks @ 2023-09-28T17:43 (+4)

I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

This seems maybe true for animals vs AMF but not for animals vs x-risk.

Fai @ 2023-09-28T17:38 (+3) in response to Weighing Animal Worth

Thank you for the post!

What concerns me is that I suspect people rarely get deeply interested in the moral weight of animals unless they come in with an unusually high initial intuitive view.

I also suspect this, and have concerns about it, but in a very different way than you do, I speculate. More particularly, I am concerned by the "people rarely get deeply interested in the moral weight of animals" part. This is problematic because many human actions have consequences for animals (in many cases, huge consequences), and to act ethically, even for some non-consequentialists, it is essential to at least have some views about the moral weights of animals.

But the issue isn't only that most people aren't interested in investigating the "moral weights" of animals; it's that people who don't bother to investigate also don't use an acknowledgement of uncertainty (and tools for dealing with uncertainty) to guide their actions - they assign, with complete confidence, 1 to each human and 0 to almost everyone else.

The above analysis, if I am even roughly correct, is crucial to our thinking about which direction it would be correct to move people's views in. If most people are already assigning animals virtual 0s, where else can we go? Presumably moral weights can't go negative, so animals' moral weights have only one place to go - unless most people were right that all animals have moral weights of virtually 0.

"I would expect working as a junior person in a community of people who value animals highly would exert a large influence in that direction regardless of what the underlying truth."

For the reasons above, I am extremely skeptical this is worthy of worry. I think unless it happens to be true that all animals have moral weights of virtually 0, it seems to me that "a community of people who value animals highly exerting a large influence in that direction regardless of what the underlying truth" is something that we should exactly hope for, rationally and ethically speaking. (emphasis on "regardless of what the underlying truth" is mine)

 

P.S. A potential pushback is that a very significant number of people clearly care about some animals, such as their companion animals. But I think we have to also look at actions with larger stakes. Most people, and even more so collections of people (such as families, companies, governments, charities, and movements), judging from their actions (eating animals, driving, animal experiments, large-scale constructions) and reluctance to adjust their view regarding these actions, clearly assign a virtual 0 to the moral weights of most animals - they just choose a few species, maybe just a few individual animals, to rise to within one order of magnitude of difference in moral weight with humans. Also, even for common companion animals such as cats and dogs, many people are shown to assign much less moral weight to them when they are put into situations where they have to choose these animals against (sometimes trivial) human interests.

Davidmanheim @ 2023-09-28T04:02 (+9) in response to Net global welfare may be negative and declining

When doing rough analysis, there are virtues to having simple models simply laid out, so I commend this - but step 2 is looking at which analytic and other choices the simple model is most sensitive to, and laying that out, and  I think this post suffers from not doing that.

In this case, there are plausible moral and analytic assumptions that lead to almost any conclusion you'd like. A few examples: 

  • Include declining total numbers of net-negative lives among wild animals. 
  • Reject total utilitarianism for average utilitarianism across species. 
  • Change your time scale to longer than 10m years, and humanity is plausibly the only way any species on earth survives. 
  • Project species welfare on a per-species basis instead of an aggregate, and it may be improving, and this may be Simpson's paradox. 
  • Change the baseline zero-level for species welfare, and the answer could reverse.

And other than the first, none of these is even considered in your future directions - even though the assumptions being made are, it seems, far too strong given the types of uncertainties involved. So I applaud flagging that this is uncertain, but I don't think it's actually useful to make any directional claim, nor would further modeling do that much to change this.

Finally, I'm struggling to see how and where this is decision relevant for people or organizations - but that's an entirely different set of complaints about how to do analyses.

sbehmer @ 2023-09-28T17:19 (+1)

Finally, I'm struggling to see how and where this is decision relevant for people or organizations - but that's an entirely different set of complaints about how to do analyses.

One way in which it's decision-relevant is for people considering how much to prioritize extinction risk mitigation. Arguments for extinction risk mitigation being overwhelmingly important often rely on the assumption that the expected value of the future is positive (and astronomically large). A seemingly sensible way to get evidence on whether the future is likely to be good is to look at whether the present is good and whether the trend is positive. I think this is why multiple people have tried to look into those questions (see Holden Karnofsky's blog, which is linked already in the main post, and Chapter 9 of What We Owe the Future).

In fact, in WWOTF, MacAskill does almost the same exercise as the one in this post, except he uses neuron counts as measures of moral weight instead of Rethink Priorities' weights. My memory is that he comes to the conclusion that the welfare of animals hardly makes an impact on total welfare. I think this post makes a very nice contribution in showing that MacAskill's conclusion isn't robust to using alternative (and plausible) moral weights.
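Structurally, both exercises are the same moral-weighted sum. Here is a minimal sketch in which every population, welfare score, and weight is a made-up placeholder, included only to show how the choice of weights can flip the sign of the total:

```python
species = {
    # name: (population, average welfare per life-year in [-1, 1]) - all assumed
    "humans":   (8e9,   +0.5),
    "chickens": (25e9,  -0.4),
    "shrimp":   (300e9, -0.2),
}

weight_schemes = {
    # both weight sets are illustrative stand-ins, not the published figures
    "RP-style":     {"humans": 1.0, "chickens": 0.33,  "shrimp": 0.03},
    "neuron-count": {"humans": 1.0, "chickens": 0.002, "shrimp": 1e-5},
}

for scheme, w in weight_schemes.items():
    total = sum(pop * welfare * w[name] for name, (pop, welfare) in species.items())
    print(f"{scheme:>12}: net welfare ~ {total:+.2e}")
# With these placeholders: RP-style weights give a negative total,
# neuron-count weights a positive one - the disagreement in a nutshell.
```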
 

Note: there could be plenty of other arguments for X-risk being overwhelmingly important that don't rely on the claim that the expected value of the future is positive.

Denis @ 2023-09-28T17:10 (+2) in response to How could a moratorium fail?

I fully support a pause, however that is enacted, until we find a way to ensure safety. 

I think part of the reason so many people do not consider a pause not only reasonable but actually self-evidently the right thing to do relates to the specific experience of some of the people on the forum.

A lot of people engaging in this debate naturally come from an AI or tech background. Or they've followed the fortunes of Facebook, Amazon, and Microsoft from a distance and seen how they've been allowed to do pretty much whatever they want. Any proposal to limit tech innovation may seem shocking, because tech has had an almost regulation-free ride until now. And other groups in the public eye, such as banks and investment firms, have paid off enough people in Congress to eliminate most of their regulations too.

But this is very much NOT the norm. 

If you look at, say, the S&P 500, you'll see maybe 30 tech companies or banks, and a few others, which face very little regulation - but many more companies that are very used to being strictly regulated.

  • Pharma companies are used to discovering miracle drugs but still having to go through decades (literally!) of safety testing before they can make them available to the public, and even then they still need FDA audits to prove that they are producing exactly what they said, how they said they would. Any change can take another few years to get approved. 
  • Engineers and Architects know that every major design they create needs to be reviewed by countless bodies who effectively have a right to deny approval - and the burden of proof is always on the side of those who want to go ahead. 
  • If you try to get a new chemical approved for use in food, it is such a long and costly process that most companies just don't bother even trying. 

This is how the world works. There is this mentality among tech people that they somehow have the right to innovate and put out products with no restrictions, as if this were everyone's god-given right. But it's not.

So maybe people within tech have a can't-do attitude (as Katja Grace called it) towards a pause, thinking it cannot work. But the world knows how to do pauses, how to define safety criteria, and how to make sure they are met before a product is released. Sure, the details for AI will be different from those for Pharma, but is AI fundamentally more complex than the interactions of a new, complex chemical with a human body? It isn't obviously so.

The FDA and others have found ways to keep drugs safe while still allowing phenomenal progress. It is frustrating as hell in the short term, but in the long run it works best for everyone - when you buy a drug, it is highly unlikely to do you harm in unexpected ways, and typically any harm it might do has been analysed and communicated to the medical community, so that you and your doctor know what the risks are.

It feels wrong for the AI community to believe that they deserve to be free of regulation when the risks are even greater than those from Pharma. And it feels like a can't-do attitude for us to believe that a pause cannot happen or cannot be effective.



 

Fai @ 2023-09-28T16:58 (+4) in response to Net global welfare may be negative and declining

Ah, interesting! I like both the terminology and the idea of "adversarial collaboration". For instance, I think incorporating debates into this research might actually move us closer to the truth.

But I am also wary that if we use a classical way of deciding who wins a debate, the losing side would almost always be the group who assigned higher (even just slightly higher than average) "moral weights" to animals (not relative to humans, but relative to the debate opponent). So I think if we use debate as a way to push closer to the truth, we probably shouldn't use the classical ways of deciding debates.

Jeff Kaufman @ 2023-09-28T17:09 (+2)

if we use a classical way of deciding who wins debate

Can you say more about what you mean by that?

Bella @ 2023-09-28T16:16 (+2) in response to Our tofu book has launched!! (Upvote on Amazon)

Awesome, I got the UK ebook! I'm so excited to see this launched and I hope people love the book!

George Stiffman @ 2023-09-28T17:08 (+1)

Thanks Bella!! I hope so too!

Jessica Wen @ 2023-09-28T09:27 (+2) in response to Our tofu book has launched!! (Upvote on Amazon)

Congratulations George! After my recent visit to China (my first since I went vegan) I was truly blown away by how convincing a lot of the tofu meat substitutes are. Even my meat-eating brother was surprised. I'm glad that you're trying to bring this to Western audiences!

I will note that the UK e-book is not free, but I coughed up the £1.63 for it anyway ;)

Perhaps this isn't the place to discuss this, but despite the excellent fake meats in China, they really haven't taken off and meat consumption and production are at an all-time high. How much better do meat substitutes have to be for people to choose them over animal meat?

George Stiffman @ 2023-09-28T17:07 (+3)

Thanks Jessica! I'm so with you on the Chinese alt protein scene... would love to see more folks promoting these foods abroad!

Ooh, thanks for catching the international e-book pricing - just messaged Amazon and they'll correct that today or tomorrow.

I think this is a pretty open question. I'm more skeptical of plant-based meats than a lot of folks, largely because I think "narratives" matter more than "taste" for food selection. Narratives scale, whereas taste is extremely individualized. But dominant food narratives in China (and in the US!) ascribe a lot more value to things like local, natural, farm-to-table, and cultural tradition than to the things that PBM are good at.

Jeff Kaufman @ 2023-09-28T13:52 (+6) in response to Net global welfare may be negative and declining

Sure! I'd love to see a group of people who don't start out caring about animals much more than average try to tackle this research problem. And then maybe an adversarial collaboration?

I just wrote up more on this here: Weighing Animal Worth.

Fai @ 2023-09-28T16:58 (+4)

Ah, interesting! I like both the terminology and the idea of "adversarial collaboration". For instance, I think incorporating debates into this research might actually move us closer to the truth.

But I am also wary that if we use a classical way of deciding who wins a debate, the losing side would almost always be the group who assigned higher (even just slightly higher than average) "moral weights" to animals (not relative to humans, but relative to the debate opponent). So I think if we use debate as a way to push closer to the truth, we probably shouldn't use the classical ways of deciding debates.

Bob Fischer @ 2023-09-28T16:37 (+42) in response to Weighing Animal Worth

Hi Jeff. Thanks for engaging. Three quick notes. (Edit: I see that Peter has made the first already.)

First, and less importantly, our numbers don't represent the relative value of individuals, but instead the relative possible intensities of valenced states at a single time. If you want the whole animal's capacity for welfare, you have to adjust for lifespan. When you do that, you'll end up with lower numbers for animals - though, of course, not OOMs lower.

Second, I should say that, as people who work on animals go, I'm fairly sympathetic to views that most would regard as animal-unfriendly. I wrote a book criticizing arguments for veganism. I've got another forthcoming that defends hierarchicalism. I've argued for hybrid views in ethics, where different rules apply to humans and animals. Etc. Still, I think that conditional on hedonism it's hard to get MWs for animals that are super low. It's easier, though still not easy, on other views of welfare. But if you think that welfare is all that matters, you're probably going to get pretty animal-friendly numbers. You have to invoke other kinds of reasons to really change the calculus (partiality, rights, whatever).

Third, I've been trying to figure out what it would look like to generate MWs for animals that don't assume welfarism (i.e., the view that welfare is all that matters morally). But then you end up with all the familiar problems of moral uncertainty. I wish I knew how to navigate those, but I don't. However, I also think it's sufficiently important to be transparent about human/animal tradeoffs that I should keep trying. So, I'm going to keep mulling it over.

Bella @ 2023-09-28T16:16 (+2) in response to Our tofu book has launched!! (Upvote on Amazon)

Awesome, I got the UK ebook! I'm so excited to see this launched and I hope people love the book!

pseudonym @ 2023-09-28T15:55 (+2) in response to Weighing Animal Worth

Given the multiple pushbacks by different people, is there a reason you didn't just take Kyle's suggested text "concludes that things may be bad and getting worse"?

Jeff Kaufman @ 2023-09-28T16:15 (+4)

"May" is compatible with something like "my overall view is that things are good but I think there's a 15% chance things are bad, which is too small to ignore" while Kyle's analysis (as I read it) is stronger, more like "my current best guess is that things are bad, though there's a ton of uncertainty".

Nathan Young @ 2023-09-27T16:47 (+5) in response to My life would be much harder without the Community Health Team. I think yours might too.

Do you think you'll do another of these?

Julia_Wise @ 2023-09-28T16:11 (+2)

We don't have immediate plans to do another one, but do think it would be valuable to do at some point.

Jeff Kaufman @ 2023-09-28T15:44 (+2) in response to Weighing Animal Worth

Sorry for all the noise on this! I've now added "likely" to show that this is uncertain; does that work?

pseudonym @ 2023-09-28T15:55 (+2)

Given the multiple pushbacks by different people, is there a reason you didn't just take Kyle's suggested text "concludes that things may be bad and getting worse"?

Jeff Kaufman @ 2023-09-28T15:24 (+4) in response to Weighing Animal Worth

As Kyle has already said, his analysis might imply that human extinction is highly undesirable. For example, if animal welfare is significantly net negative now then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn't also kill off all other sentient life).

But his analysis doesn't say that? He considers two quantities in determining net welfare: human experience, and the experience of animals humans raise for food. Human extinction would bring both of these to zero.

I think maybe you're thinking his analysis includes wild animal suffering?

Jack Malde @ 2023-09-28T15:47 (+2)

Fair point, but I would still disagree that his analysis implies human extinction would be good. He discusses digital sentience and how, on our current trajectory, we may develop digital sentience with negative welfare. An implication isn't necessarily that we should go extinct, but perhaps that we should instead try to alter this trajectory so that we create digital sentience that flourishes.

So it's far too simple to say that his analysis "concludes that human extinction would be a very good thing". It is also inaccurate because, quite literally, he doesn't conclude that. 

So I agree with your choice to remove that wording.

kyle_fish @ 2023-09-28T15:42 (+4) in response to Weighing Animal Worth

Thanks, Jeff! This helps a lot, though ideally a summary of my conclusions would acknowledge the tentativeness/uncertainty thereof, as I aim to do in the post (perhaps, "concludes that things may be bad and getting worse"). 

Jeff Kaufman @ 2023-09-28T15:44 (+2)

Sorry for all the noise on this! I've now added "likely" to show that this is uncertain; does that work?

Jeff Kaufman @ 2023-09-28T15:00 (+5) in response to Weighing Animal Worth

Sorry! After Peter pointed that out I edited it to "concludes that human extinction would be a big welfare improvement". Does that wording address your concerns?

EDIT: changed again, to "concludes that things are bad and getting worse, which suggests human extinction would be beneficial".

EDIT: and again, to "concludes that things are bad and getting worse, which suggests efforts to reduce the risks of human extinction are misguided".

EDIT: and again, to just "concludes that things are bad and getting worse".

EDIT: and again, to "concludes that things are likely bad and getting worse".

kyle_fish @ 2023-09-28T15:42 (+4)

Thanks, Jeff! This helps a lot, though ideally a summary of my conclusions would acknowledge the tentativeness/uncertainty thereof, as I aim to do in the post (perhaps, "concludes that things may be bad and getting worse"). 

Jack Malde @ 2023-09-28T15:22 (+2) in response to Weighing Animal Worth

If I were you I would remove that part altogether. As Kyle has already said, his analysis might imply that human extinction is highly undesirable.

For example, if animal welfare is significantly net negative now then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn’t also kill off all other sentient life).

Just because total welfare may be net negative now and may have been decreasing over time doesn’t mean that this will always be the case. Maybe we can do something about it and have a flourishing future.

Jeff Kaufman @ 2023-09-28T15:27 (+6)

If I were you I would remove that part altogether.

Yeah, this seems like it's raising the stakes too much and distracting from the main argument; removed.

Jack Malde @ 2023-09-28T15:22 (+2) in response to Weighing Animal Worth

If I were you I would remove that part altogether. As Kyle has already said, his analysis might imply that human extinction is highly undesirable.

For example, if animal welfare is significantly net negative now then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn’t also kill off all other sentient life).

Just because total welfare may be net negative now and may have been decreasing over time doesn’t mean that this will always be the case. Maybe we can do something about it and have a flourishing future.

Jeff Kaufman @ 2023-09-28T15:24 (+4)

As Kyle has already said, his analysis might imply that human extinction is highly undesirable. For example, if animal welfare is significantly net negative now then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn't also kill off all other sentient life).

But his analysis doesn't say that? He considers two quantities in determining net welfare: human experience, and the experience of animals humans raise for food. Human extinction would bring both of these to zero.

I think maybe you're thinking his analysis includes wild animal suffering?

Jeff Kaufman @ 2023-09-28T15:00 (+5) in response to Weighing Animal Worth

Sorry! After Peter pointed that out I edited it to "concludes that human extinction would be a big welfare improvement". Does that wording address your concerns?

EDIT: changed again, to "concludes that things are bad and getting worse, which suggests human extinction would be beneficial".

EDIT: and again, to "concludes that things are bad and getting worse, which suggests efforts to reduce the risks of human extinction are misguided".

EDIT: and again, to just "concludes that things are bad and getting worse".

EDIT: and again, to "concludes that things are likely bad and getting worse".

Jack Malde @ 2023-09-28T15:22 (+2)

If I were you I would remove that part altogether. As Kyle has already said, his analysis might imply that human extinction is highly undesirable.

For example, if animal welfare is significantly net negative now then human extinction removes our ability to help these animals, and they may just suffer for the rest of time (assuming whatever killed us off didn’t also kill off all other sentient life).

Just because total welfare may be net negative now and may have been decreasing over time doesn’t mean that this will always be the case. Maybe we can do something about it and have a flourishing future.

Larks @ 2023-09-28T15:17 (+4) in response to Weighing Animal Worth

It is bizarre to me that people would disagree-vote this as it seems to be a true description of the edit you made. If people think the edit is bad they should downvote, not disagree-vote.

Jeff Kaufman @ 2023-09-28T15:20 (+4)

Eh; I interpret this upvote+disagree to mean "I think it's good that you posted that you made the edit, but the edit doesn't fix the problem"

Jeff Kaufman @ 2023-09-28T15:15 (+6) in response to Weighing Animal Worth

the history of animal farmers or slaughterhouse workers becoming convinced animal killing is wrong through directly engaging in it

Is this actually much evidence in that direction? I agree that there are many examples of former farmers and slaughterhouse workers that have decided that animals matter much more than most people think, but is that the direction the experience tends to move people? I'm not sure what I'd predict. Maybe yes for slaughterhouse workers but no for farmers, or variable based on activity? For example, I think many conventional factory farming activities like suffocating baby chicks or castrating pigs are things the typical person would think "that's horrible" but the typical person exposed to that work daily would think "eh, that's just how we do it".

Rockwell @ 2023-09-28T15:20 (+4)

I agree and I point to that more so as evidence that even in environments that are likely to foster a moral disconnect (in contrast to researchers steeped in moral analysis) increased concern for animals is still a common enough outcome that it's an observable phenomenon.

(I'm not sure if there's good data on how likely working on an animal farm or in a slaughterhouse is to convince you that killing animals is bad. I would be interested in research that shows how these experiences reshape people's views and I would expect increased cognitive dissonance and emotional detachment to be a more common outcome.)

MichaelStJules @ 2023-09-28T15:18 (+17) in response to Weighing Animal Worth

EDIT: Looks like Ariel beat me to this point by a few minutes.

(Not speaking on behalf of RP. I don't work there now.)

FWIW, corporate chicken welfare campaigns have looked better than GiveWell recommendations on their direct welfare impacts if you weigh chicken welfare per year up to ~5,000x less than human welfare per year. Quoting Fischer, Shriver and myself, 2022 citing others:

Open Philanthropy once estimated that, “if you value chicken life-years equally to human life-years… [then] corporate campaigns do about 10,000x as much good per dollar as top [global health] charities.” Two more recent estimates—which we haven’t investigated and aren’t necessarily endorsing—agree that corporate campaigns are much better. If we assign equal weights to human and chicken welfare in the model that Grilo, 2022 uses, corporate campaigns are roughly 5,000x better than the best global health charities. If we do the same thing in the model that Clare and Goth, 2020 employ, corporate campaigns are 30,000 to 45,000x better.[8]
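The break-even logic behind "up to ~5,000x less" can be sketched directly; the multipliers below are the model estimates quoted above, and everything else is arithmetic:

```python
def campaigns_beat_global_health(equal_weight_multiplier, chicken_discount):
    """True if campaigns stay ahead after discounting chicken welfare by `chicken_discount`."""
    return equal_weight_multiplier / chicken_discount > 1

for k in (5_000, 30_000, 45_000):  # Grilo 2022; Clare and Goth 2020 (range)
    print(k, campaigns_beat_global_health(k, chicken_discount=5_000))
# At a 5,000x discount, the Grilo 2022 estimate sits exactly at break-even;
# the Clare and Goth 2020 figures leave campaigns ahead by 6-9x.
```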

About Open Phil's own estimate in that 2016 piece, Holden wrote in a footnote:

Bayesian adjustments should attenuate this difference to some degree, though it’s unclear how much, if you believe – as I do – that both estimates are fairly informed and reasonable though far from precise or reliable. I will put this consideration aside here.

My understanding is that they've continued to be very cost-effective since then. See this comment by Saulius, who estimated their impact, and this section of Grilo, 2022.

Duffy, 2023 for RP also recently found a handful of US ballot initiatives from 2008-2018 for farm animal welfare to be similarly cost-effective, making, in my view, relatively conservative assumptions.

Jeff Kaufman @ 2023-09-28T14:57 (+4) in response to Weighing Animal Worth

Thanks; edited to change those to "here's a selection from their bottom-line point estimates comparing animals to humans" and [EDIT: see above].

Larks @ 2023-09-28T15:17 (+4)

It is bizarre to me that people would disagree-vote this as it seems to be a true description of the edit you made. If people think the edit is bad they should downvote, not disagree-vote.

Rockwell @ 2023-09-28T15:10 (+29) in response to Weighing Animal Worth

If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I'd predict they'd end up with far less animal-friendly results.

I think this is a possible outcome, but not guaranteed. Most people have been heavily socialized to not care about most animals, either through active disdain or more mundane cognitive dissonance. Being "forced" to really think about other animals and consider their moral weight may swing researchers who are baseline "animal neutral" or even "anti-animal" more than you'd think. Adjacent evidence might be the history of animal farmers or slaughterhouse workers becoming convinced animal killing is wrong through directly engaging in it.

I also want to note that most people would be less surprised if a heavy moral weight is assigned to the species humans are encouraged to form the closest relationships with (dogs, cats). Our baseline discounting of most species is often born from not having relationships with them, not intuitively understanding how they operate because we don't have those relationships, and/or objectifying them as products. If we lived in a society where beloved companion chickens and carps were the norm, the median moral weight intuition would likely be dramatically different.

Jeff Kaufman @ 2023-09-28T15:15 (+6)

the history of animal farmers or slaughterhouse workers becoming convinced animal killing is wrong through directly engaging in it

Is this actually much evidence in that direction? I agree that there are many examples of former farmers and slaughterhouse workers that have decided that animals matter much more than most people think, but is that the direction the experience tends to move people? I'm not sure what I'd predict. Maybe yes for slaughterhouse workers but no for farmers, or variable based on activity? For example, I think many conventional factory farming activities like suffocating baby chicks or castrating pigs are things the typical person would think "that's horrible" but the typical person exposed to that work daily would think "eh, that's just how we do it".

Peter Wildeford @ 2023-09-28T14:59 (+8) in response to Weighing Animal Worth

concludes that human extinction would be a big welfare improvement

I don't think he concludes that either, nor do I know if he agrees with that. Maybe he implies that? Maybe he concludes that if our current trajectory is maintained / locked-in then human extinction would be a big welfare improvement? Though Kyle is also clear to emphasize the uncertainty and tentativeness of his analysis.

Larks @ 2023-09-28T15:13 (+8)

Though Kyle is also clear to emphasize the uncertainty and tentativeness of his analysis.

I think if you want to emphasize uncertainty and tentativeness it is a good idea to include something like error bars, and to highlight that one of the key assumptions involves fixing a parameter (the weight on hedonism) at the maximally unfavourable value (100%).

Rockwell @ 2023-09-28T15:10 (+29) in response to Weighing Animal Worth

If you somehow could convince a research group, not selected for caring a lot about animals, to pursue this question in isolation, I'd predict they'd end up with far less animal-friendly results.

I think this is a possible outcome, but not guaranteed. Most people have been heavily socialized to not care about most animals, either through active disdain or more mundane cognitive dissonance. Being "forced" to really think about other animals and consider their moral weight may swing researchers who are baseline "animal neutral" or even "anti-animal" more than you'd think. Adjacent evidence might be the history of animal farmers or slaughterhouse workers becoming convinced animal killing is wrong through directly engaging in it.

I also want to note that most people would be less surprised if a heavy moral weight is assigned to the species humans are encouraged to form the closest relationships with (dogs, cats). Our baseline discounting of most species is often born from not having relationships with them, not intuitively understanding how they operate because we don't have those relationships, and/or objectifying them as products. If we lived in a society where beloved companion chickens and carps were the norm, the median moral weight intuition would likely be dramatically different.

Ariel Simnegar @ 2023-09-28T15:08 (+8) in response to Weighing Animal Worth

(Disclaimer: I take RP's moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)

Specifically with respect to cause prioritization between global health and animal welfare, do you think the evidence we've seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?

In "Worldview Diversification" (2016), Holden Karnofsky wrote that "If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF]." In 2023, Vasco Grilo replicated this finding by using the RP weighs to find corporate campaigns 1.7k times as effective.

Let's say RP's moral weights are wrong by an order of magnitude, and chickens' experiences actually only have 3% of the moral weight of human experiences. Let's say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.
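
Spelled out as a quick back-of-the-envelope check (a sketch assuming the ~1.7k multiplier from Vasco Grilo's replication as the starting point):

```python
# Back-of-the-envelope check of the argument above, assuming the ~1,700x
# multiplier from Vasco Grilo's replication as the starting point.
baseline_multiplier = 1700    # corporate campaigns vs. best global health
moral_weight_discount = 0.1   # suppose RP's weights are 10x too animal-friendly
hedonic_share = 0.1           # suppose hedonic goods/bads are only 10% of welfare

adjusted = baseline_multiplier * moral_weight_discount * hedonic_share
print(f"adjusted multiplier: {adjusted:.0f}x")  # 17x -- still over an order of magnitude
```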

While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

Jeff Kaufman @ 2023-09-28T15:10 (+4)

Instead of going that direction, would you say that efforts to prevent AI from wiping out all life on earth and neutralizing all value in the universe are counterproductive?

Ariel Simnegar @ 2023-09-28T15:08 (+8) in response to Weighing Animal Worth

(Disclaimer: I take RP's moral weights at face value, and am thus inclined to defend what I consider to be their logical implications.)

Specifically with respect to cause prioritization between global health and animal welfare, do you think the evidence we've seen so far is enough to conclude that animal welfare interventions should most likely be prioritized over global health?

In "Worldview Diversification" (2016), Holden Karnofsky wrote that "If one values humans 10-100x as much [as chickens], this still implies that corporate campaigns are a far better use of funds (100-1,000x) [than AMF]." In 2023, Vasco Grilo replicated this finding by using the RP weighs to find corporate campaigns 1.7k times as effective.

Let's say RP's moral weights are wrong by an order of magnitude, and chickens' experiences actually only have 3% of the moral weight of human experiences. Let's say further that some remarkably non-hedonic preference view is true, where hedonic goods/bads only account for 10% of welfare. Still, corporate campaigns would be an order of magnitude more effective than the best global health interventions.

While I agree with you that it would be premature to conclude with high confidence that global welfare is negative, I think the conclusions of RP's research with respect to cause prioritization still hold up after incorporating the arguments you've enumerated in your post.

Peter Wildeford @ 2023-09-28T14:59 (+8) in response to Weighing Animal Worth

concludes that human extinction would be a big welfare improvement

I don't think he concludes that either, nor do I know if he agrees with that. Maybe he implies that? Maybe he concludes that if our current trajectory is maintained / locked-in then human extinction would be a big welfare improvement? Though Kyle is also clear to emphasize the uncertainty and tentativeness of his analysis.

Jeff Kaufman @ 2023-09-28T15:02 (–2)

Edited again; see above.

kyle_fish @ 2023-09-28T14:58 (+27) in response to Weighing Animal Worth

I strongly object to the (Edit: previous) statement that my post "concludes that human extinction would be a very good thing". I do not endorse this claim and think it's a grave misconstrual of my analysis. My findings are highly uncertain, and, as Peter mentions, there are many potential reasons for believing human extinction would be bad even if my conclusions in the post were much more robust (e.g. lock-in effects, to name a particularly salient one). 

Jeff Kaufman @ 2023-09-28T15:00 (+5)

Sorry! After Peter pointed that out I edited it to "concludes that human extinction would be a big welfare improvement". Does that wording address your concerns?

EDIT: changed again, to "concludes that things are bad and getting worse, which suggests human extinction would be beneficial".

EDIT: and again, to "concludes that things are bad and getting worse, which suggests efforts to reduce the risks of human extinction are misguided".

EDIT: and again, to just "concludes that things are bad and getting worse".

EDIT: and again, to "concludes that things are likely bad and getting worse".

Jeff Kaufman @ 2023-09-28T14:57 (+4) in response to Weighing Animal Worth

Thanks; edited to change those to "here's a selection from their bottom-line point estimates comparing animals to humans" and [EDIT: see above].

Peter Wildeford @ 2023-09-28T14:59 (+8)

concludes that human extinction would be a big welfare improvement

I don't think he concludes that either, nor do I know if he agrees with that. Maybe he implies that? Maybe he concludes that if our current trajectory is maintained / locked-in then human extinction would be a big welfare improvement? Though Kyle is also clear to emphasize the uncertainty and tentativeness of his analysis.

kyle_fish @ 2023-09-28T14:58 (+27) in response to Weighing Animal Worth

I strongly object to the (Edit: previous) statement that my post "concludes that human extinction would be a very good thing". I do not endorse this claim and think it's a grave misconstrual of my analysis. My findings are highly uncertain, and, as Peter mentions, there are many potential reasons for believing human extinction would be bad even if my conclusions in the post were much more robust (e.g. lock-in effects, to name a particularly salient one). 

Peter Wildeford @ 2023-09-28T14:46 (+5) in response to Weighing Animal Worth

Two nitpicks:

Here's a selection from their bottom-line point estimates for how many animals of a given species are morally equivalent to one human:

The chart actually shows estimates of how many animal life years of a given species are morally equivalent to one human life year. Though you do get the comparison correct in the paragraph after the table.

~

The post weighs the increasing welfare of humanity over time against the increasing suffering of livestock, and concludes that human extinction would be a very good thing.

You'd have to ask Kyle Fish, but it's not necessarily the case that he endorses this conclusion about human extinction, and he certainly didn't actually say it (note that if you CTRL+F for "extinction" it is nowhere to be found in the report). I think there are lots of reasons to think that human extinction is very bad even given our hellish perpetuation of factory farming.

Jeff Kaufman @ 2023-09-28T14:57 (+4)

Thanks; edited to change those to "here's a selection from their bottom-line point estimates comparing animals to humans" and [EDIT: see above].

Peter Wildeford @ 2023-09-28T14:41 (+23) in response to Weighing Animal Worth

I want to add that personally, before this RP "capacity for welfare" project, I started with an intuition that a human year was worth about 100-1000 times more than a chicken year (mean ~300x), conditional on chickens being sentient. But after reading the RP "capacity for welfare" reports thoroughly I have now switched roughly to the RP moral weights, valuing a human year at about 3x a chicken year conditional on chickens being sentient (which I think is highly likely, but that is handled in a different calculation). This report's conclusion came as a large surprise to me.

Obviously me changing my views to match RP research is to be expected given that I am the co-CEO of RP. But I want to be clear that, contra your suspicions, it is not the case, at least for me personally, that I started out with an insanely high moral value on chickens and then helped generate moral weights that maintained it (though note that my involvement in the project was very minimal and I didn't do any of the actual research). I suspect this is also the case for other RP team members.

That being said, I agree that the tremendous uncertainty involved in these calculations is important to recognize, plus there likely will be some potentially large interpersonal variation based on having different philosophical assumptions (e.g., not hedonism) as well as different fundamental values (given moral anti-realism which I take to be true).

Jeff Kaufman @ 2023-09-28T14:53 (+4)

it is not the case at least for me personally that I started out with insanely high moral value on chickens and then helped generate moral weights that maintained my insanely high moral value

Mmm, good point, I'm not trying to say that. I would predict that most people looking into the question deeply would shift in the direction of weighting animals more heavily than they did at the outset. But it also sounds like you did start with, for a human, unusually pro-animal views?

Peter Wildeford @ 2023-09-28T14:46 (+5) in response to Weighing Animal Worth

Two nitpicks:

Here's a selection from their bottom-line point estimates for how many animals of a given species are morally equivalent to one human:

The chart actually shows estimates of how many animal life years of a given species are morally equivalent to one human life year. Though you do get the comparison correct in the paragraph after the table.

~

The post weighs the increasing welfare of humanity over time against the increasing suffering of livestock, and concludes that human extinction would be a very good thing.

You'd have to ask Kyle Fish, but it's not necessarily the case that he endorses this conclusion about human extinction, and he certainly didn't actually say it (note that if you CTRL+F for "extinction" it is nowhere to be found in the report). I think there are lots of reasons to think that human extinction is very bad even given our hellish perpetuation of factory farming.

Dawn Drescher @ 2023-09-28T14:16 (+11) in response to From Passion to Depression and Pessimism: My Journey with Effective Altruism

Yeah… I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

Only Nonlinear attempted something similar in the EA community. (But I condemn exploitative treatment of employees, of course!) Open Phil picked up an AI safety prize contest, and I might be missing a few cases. I was very disappointed by how little of this sort happened. Then again, I could've tried to start such an effort myself. I don't have the network, so I'm pretty sure I would've failed. I was also in bed with Covid for the first month.

I suppose it really makes more sense to model EA not as a community but as a scientific discipline. I have a degree in CS, but I wasn't disappointed that the CS community didn't support its own after the FTX collapse, because I never had the expectation that that was something that could happen. EA, it seems to me, is better understood within that reference class. (Unfortunately – not because there's something wrong with scientific disciplines but because I would've loved to be part of a real community too.)

Jeff Kaufman @ 2023-09-28T14:43 (+16)

I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community spang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

I think if this happened with, say, a conference you would see this kind of response within EA. A group of people stuck in a specific place is very different from the FTX collapse.

Peter Wildeford @ 2023-09-28T14:41 (+23) in response to Weighing Animal Worth

I want to add that personally, before this RP "capacity for welfare" project, I started with an intuition that a human year was worth about 100-1000 times more than a chicken year (mean ~300x), conditional on chickens being sentient. But after reading the RP "capacity for welfare" reports thoroughly I have now switched roughly to the RP moral weights, valuing a human year at about 3x a chicken year conditional on chickens being sentient (which I think is highly likely, but that is handled in a different calculation). This report's conclusion came as a large surprise to me.

Obviously me changing my views to match RP research is to be expected given that I am the co-CEO of RP. But I want to be clear that, contra your suspicions, it is not the case, at least for me personally, that I started out with an insanely high moral value on chickens and then helped generate moral weights that maintained it (though note that my involvement in the project was very minimal and I didn't do any of the actual research). I suspect this is also the case for other RP team members.

That being said, I agree that the tremendous uncertainty involved in these calculations is important to recognize, plus there likely will be some potentially large interpersonal variation based on having different philosophical assumptions (e.g., not hedonism) as well as different fundamental values (given moral anti-realism which I take to be true).

Angelina Li @ 2023-09-26T17:34 (+1) in response to Shrimp: The animals most commonly used and killed for food production

Where did you get the 35 trillion number from? Did you mean something closer to 27T (the median estimate for the "Total number of shrimp (farmed and wild-caught, 2020)")?

Peter Hickman @ 2023-09-28T14:39 (+2)

Good point. I'm not sure now where I got 35T. I just now edited the original post.

Dawn Drescher @ 2023-09-28T14:16 (+11) in response to From Passion to Depression and Pessimism: My Journey with Effective Altruism

Yeah… I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

Only Nonlinear attempted something similar in the EA community. (But I condemn exploitative treatment of employees, of course!) Open Phil picked up an AI safety prize contest, and I might be missing a few cases. I was very disappointed by how little of this sort happened. Then again, I could've tried to start such an effort myself. I don't have the network, so I'm pretty sure I would've failed. I was also in bed with Covid for the first month.

I suppose it really makes more sense to model EA not as a community but as a scientific discipline. I have a degree in CS, but I wasn't disappointed that the CS community didn't support its own after the FTX collapse, because I never had the expectation that that was something that could happen. EA, it seems to me, is better understood within that reference class. (Unfortunately – not because there's something wrong with scientific disciplines but because I would've loved to be part of a real community too.)

Ulrik Horn @ 2023-09-28T14:12 (+2) in response to New FLI Podcast on tackling climate change

Hi Johannes, I really enjoyed the structure of the interview and your detailed and careful answers. This made it easier to pinpoint a part of the interview where I think you might have too much confidence. 

This part is around (3), on variance around outcomes. If I understand correctly, the argument put forward in the podcast is more or less that if we had given nuclear the same social backing as wind and solar, nuclear would have been highly likely to follow similar cost reductions. I think we disagree, but to clarify it might be helpful for you to put some numbers on it? Perhaps something like an X% chance of reducing the LCoE of SMRs by more than 50% from the only built SMR where I could find some cost data, for which I very roughly calculated an LCoE of $127/MWh (I might be low-balling it, and there might be hidden costs, as Russia is not known for transparency). However, I take it from your statements that you think there might be a ~70% chance of SMRs having become competitive with wind and solar if we had decided to support that technology similarly. Wind and solar have middle-of-the-range LCoEs at around $50/MWh and $60/MWh respectively, eyeballing Lazard's charts.
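
For context, LCoE (levelized cost of electricity) is discounted lifetime cost divided by discounted lifetime generation. Here is a minimal sketch with made-up inputs; none of these numbers are the HTR-PM's (or any real plant's) actual figures:

```python
# Minimal LCoE sketch; all inputs are made up for illustration and are not
# the HTR-PM's (or any real plant's) actual figures.
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Discounted lifetime costs divided by discounted lifetime generation ($/MWh)."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical small modular reactor: $1.8B capex, $50M/yr opex, 200 MW at an
# 85% capacity factor, 40-year life, 7% discount rate.
annual_mwh = 200 * 0.85 * 8760
print(f"LCoE: ${lcoe(1.8e9, 50e6, annual_mwh, 40, 0.07):.0f}/MWh")  # roughly $124/MWh
```

With these stipulated inputs the result lands in the same ballpark as the ~$127/MWh figure above; the sketch is only meant to show how the metric is computed, not to validate the number.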

I think this is overly optimistic but not impossible. So my disagreement is more about the strength of your claim, not that it is impossible in all possible alternative worlds. As a very initial estimate, I would put something like a 30% chance that SMRs, at a deployment in MW similar to wind or solar, would end up below $80/MWh, and maybe a 5% chance of getting closer to the $50-$60 range of solar and wind.

One main reason for this is that nuclear is not modular. Moore's law, solar cost decreases, and the Carlson curve are all dependent on massive scale, factory manufacturing, etc. Professor Bent Flyvbjerg at Oxford covers this quite well (especially solar, but also touching on nuclear and wind) and bases it on extensive data his team has collected. As an example that I think I have used on the forum before, the HTR-PM's first reactor took ~10 years to build. I doubt the first solar panels or wind turbines took that long.

I will stop writing as this comment is already long but would be happy to have a conversation about this. Perhaps there is something I am missing. I am coming from a project management and engineering background so I could be biased towards being somewhat dismissive of social and political influences.

Fai @ 2023-09-28T10:54 (+23) in response to Net global welfare may be negative and declining

There are a ton of judgement calls in coming up with moral weights. I'm worried about a dynamic where the people most interested in getting deep into these questions are people who already intuitively care pretty strongly about animals, and so the best weights available end up pretty biased

I agree there's such a problem. But I think it is important to also point out that there is the same problem for people who tend to think they "do not make judgement calls about moral weights", but who have nonetheless effectively come up with their own judgement calls in living their daily lives, which "by the way" affect animals (eating animals, living in buildings whose construction kills millions of animals, gardening that harms and gives rise to many animals, etc.).

Also, I think it is equally, maybe more, important to recognize that the people who make such judgement calls without explicitly thinking about moral weights, let alone going into tedious research projects, are people who intuitively care pretty little about animals, and so the "effective intuition about moral weights" backing up their actions (intuitive because they didn't want to use research to back it up) ends up pretty biased.

I think I intuitively worry more about the bias of those who do not feel particularly strongly about animals' suffering (even suffering caused by them) than about the bias of those who care pretty strongly about animals. And of course, disclaimer: I think I lie within the latter group.

Jeff Kaufman @ 2023-09-28T13:52 (+6)

Sure! I'd love to see a group of people who don't start out caring about animals much more than average try to tackle this research problem. And then maybe an adversarial collaboration?

I just wrote up more on this here: Weighing Animal Worth.