defun's Quick takes

By defun @ 2024-04-03T10:15 (+2)

defun @ 2024-04-03T10:15 (+48)

The meat-eater problem is under-discussed.

I've spent more than 500 hours consuming EA content and I had never encountered the meat-eater problem until today.

https://forum.effectivealtruism.org/topics/meat-eater-problem

(I had sometimes thought about the problem, but I didn't even know it had a name)

saulius @ 2024-04-03T14:48 (+39)

I think the reason is that it doesn't really have a target audience. Animal advocacy interventions are hundreds of times more cost-effective than global poverty interventions. It only makes sense to work on global poverty if you think that animal suffering doesn't matter nearly as much as human suffering. But if you think that, then you won't be convinced to stop working on global poverty because of its effects on animals. Maybe it's relevant for some risk-averse people. 

saulius @ 2024-04-03T16:24 (+16)

I wonder if Open Philanthropy thinks about it because they fund both animal advocacy and global poverty/health. Their animal advocacy funding probably easily offsets the negative effects of their global poverty funding on animals. It takes thousands of dollars to save a human life with global health interventions, and that human might consume thousands of animals in her lifetime. Chicken welfare reforms can halve the suffering of thousands of animals for tens of dollars. However, I don't like this sort of reasoning that much, because we may not always have interventions as cost-effective as chicken welfare reforms.
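
A very rough back-of-envelope of that offsetting logic might look like the sketch below. Every number in it is an illustrative placeholder of mine, not a figure from this comment.

```python
# Illustrative back-of-envelope for the offsetting point above.
# All numbers are made-up placeholders, not estimates from the comment.

cost_per_life_saved = 5_000      # dollars per life saved via global health (order of magnitude)
animals_eaten_per_life = 2_000   # farmed animals that person might consume over a lifetime
reform_cost = 50                 # dollars spent on chicken welfare reforms
animals_helped = 2_000           # animals whose suffering that spending roughly halves

# Farmed-animal suffering created per life saved (1 unit per animal consumed):
suffering_created = animals_eaten_per_life * 1.0
# Suffering averted per dollar of reforms (halving counts as 0.5 units per animal):
suffering_averted_per_dollar = (animals_helped * 0.5) / reform_cost

offset_cost = suffering_created / suffering_averted_per_dollar
print(f"~${offset_cost:.0f} of reforms offsets a ${cost_per_life_saved} life-saving donation")
# ~$100 vs $5,000 -- cheap, but only while reforms stay this cost-effective,
# which is exactly the caveat in the last sentence above.
```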

Ben Millwood @ 2024-04-06T13:55 (+4)

Yeah, perhaps if you care about animal welfare, the main problem with giving money to poverty causes is that you didn't give it to animal welfare instead, and the increased consumption of meat is a relative side issue.

Jeff Kaufman @ 2024-04-08T03:14 (+2)

One potential audience is people open to moral trade. Say Pat doesn't care much about animals and is on the fence between global poverty interventions with different animal impacts, and Alex cares a lot about animals and normally donates to animal welfare efforts. Alex could agree with Pat to donate some amount to the better-for-animals global poverty charity if Pat will agree to send all their donations there.

Except if you do the math on it, I think you'll find that it's really hard to come out with a set of charities, values, and impacts that make this work. Pat would have to be so close to indifferent between the two options.
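
To make the "do the math" point concrete, here is a toy sketch; the donation size, valuations, and break-even condition are illustrative assumptions of mine, not figures from the comment.

```python
# Toy model of the moral-trade arithmetic; all numbers are illustrative.

D = 10_000              # Pat's annual donation, in dollars
v_A, v_B = 1.00, 0.95   # Pat's value per dollar for charity A vs the better-for-animals charity B

# Smallest top-up Alex must add to charity B so Pat loses nothing by switching:
X = D * (v_A / v_B - 1)
print(f"Alex's minimum top-up: ${X:.0f}")  # ~$530 for a 5% gap, ~$100 for a 1% gap

# Alex pays that top-up out of money that would otherwise go to animal charities
# Alex thinks are far more effective per dollar, so the trade only looks good to
# Alex when v_A / v_B is very close to 1, i.e. when Pat is nearly indifferent.
```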

(And even if you figure that out, there are also all the normal reasons why moral trade is challenging in practice.)

saulius @ 2024-04-03T15:03 (+22)

Also, you can argue against the poor meat eater problem by pointing out that it's very unclear whether increased animal production is good or bad for animals. In short, the argument would be that there are way more wild animals than farmed animals, and animal product consumption might substantially decrease wild animal populations. Decreasing wild animal populations could be good because wild animals suffer a lot, mostly due to natural causes. See https://forum.effectivealtruism.org/topics/logic-of-the-larder. I think this issue is also very under-discussed.

BrownHairedEevee @ 2024-04-04T04:44 (+13)

I've been thinking about the meat eater problem a lot lately, and while I think it's worth discussing, I've realized that poverty reduction isn't to blame for farmed animal suffering.

(Content note: dense math incoming)

Assume that humans' utility as a function of income is $u(y) = \ln y$ (i.e. isoelastic utility with $\eta = 1$), and the demand for meat is $D(y) = y^\varepsilon$, where $\varepsilon$ is the income elasticity of demand. Per Engel's law, $\varepsilon$ is typically between 0 and 1. As long as $\varepsilon > 0$, the benefit-to-harm ratio of an extra dollar falls with income: the gain to the human dominates at low incomes, and the harm to animals looms larger at high incomes.

For simplicity, I am assuming that the animal welfare impact of meat production is negative and proportional to meat demand $D(y)$. (As saulius points out, it's unclear whether meat production is net positive or net negative for animals as a whole. Also, animal welfare regulations and alternative protein technologies are more common in high-income regions like the EU and US, so this assumption may not apply at the high end.) If this is true, then increasing a person or country's income is most valuable when that person/country is in extreme poverty, and least valuable at the high end of the income spectrum.
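
A small numerical check of the model above (the elasticity value and income levels are illustrative choices of mine, not values from the comment):

```python
# Marginal benefit to the human vs marginal harm to animals, per extra dollar.
# Model from the comment: u(y) = ln(y), meat demand D(y) = y**eps, harm ~ D(y).

eps = 0.5  # illustrative income elasticity of meat demand (Engel's law: 0 < eps < 1)

def marginal_utility(y):
    return 1 / y                 # d/dy ln(y)

def marginal_meat_demand(y):
    return eps * y ** (eps - 1)  # d/dy y**eps

for y in [500, 5_000, 50_000]:   # illustrative annual incomes in dollars
    ratio = marginal_utility(y) / marginal_meat_demand(y)
    print(f"income ${y:>6}: benefit/harm ratio = {ratio:.4f}")

# The ratio scales like y**(-eps), so it falls as income rises: an extra dollar
# buys the most human utility per unit of extra meat demand in extreme poverty.
```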

The upshot: the framing of the meat eater problem as being about poverty obscures the fact that the worst offenders of factory farming are rich countries like the United States, not poor ones, and that increasing the income of a rich person is worse for animal welfare than increasing that of a poor one (as long as both of them are non-vegan). I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.

Ben Millwood @ 2024-04-06T13:27 (+10)

> I feel like it's hypocritical for animal advocates and EAs from rich countries to blame poor countries for the suffering caused by factory farming.

I don't think this is what the meat-eater problem does. You could imagine a world in which the West is responsible for inventing the entire machinery of factory farming, or even running all the factory farms, and still believe that lifting additional people out of poverty would help the Western factory farmers sell more produce. It's not about blame, just about consequences.

I realise this isn't your main point, and I haven't processed your main argument yet. It would make a lot of sense to me if transferring money from a first-world meat eater to a third-world meat eater resulted in less meat being eaten, but I'd imagine that the people most concerned with this issue are thinking about their own money, and already don't consume meat themselves?

defun @ 2024-08-06T11:13 (+22)

John Schulman (OpenAI co-founder) has left OpenAI to work on AI alignment at Anthropic.

https://x.com/johnschulman2/status/1820610863499509855

defun @ 2024-09-04T14:28 (+20)

Ilya's Safe Superintelligence Inc. has raised $1B.

huw @ 2024-09-05T13:06 (+17)

I guess one thing worth noting here is that they raised from a16z, whose leaders are notoriously critical of AI safety. Not sure how they square that circle, but I doubt it involves their investors having changed their perspectives on that issue.

yanni kyriacos @ 2024-09-04T23:47 (+9)

Just in case anyone is reading this, I too would like a billion dollars.

NickLaing @ 2024-09-05T18:09 (+4)

The way people downvote jokes on this forum... At least I appreciated it :)

yanni kyriacos @ 2024-09-06T10:21 (+4)

Don’t worry Nick, I’ll never stop.

NickLaing @ 2024-09-08T18:25 (+2)

I'll try a bit more too. 23 votes and 6 karma now - looks like the forum is split on the low effort humor front ;).

yanni kyriacos @ 2024-09-09T06:18 (+3)

lol someone has to write a post "How to make an upvoted joke on the forum that isn't cringe"

calebp @ 2024-09-05T18:24 (+3)

simply become one of the most successful and influential ML researchers 🤷‍♂️

NickLaing @ 2024-09-05T18:15 (+4)

Maybe a silly question, but does "one shot" for safe AGI mean they aren't going to release models along the way and will only try to reach the superintelligence bar? I would have thought investors wouldn't have been into that...

Or are they basically just like other AI companies and will release commercial products along the way but with a compelling pitch?

defun @ 2024-05-27T12:00 (+19)

I highly recommend the book "How to Launch A High-Impact Nonprofit" to everyone.

I've been EtG for many years and I thought this book wasn't relevant to me, but I'm learning a lot and I'm really enjoying it.

Neel Nanda @ 2024-05-27T20:18 (+6)

Cool! What kind of things are you learning from it?

defun @ 2024-05-28T11:38 (+4)

After years of donating to established organizations (top GiveWell charities), I want to start directing a portion of my donations to new/small charities (e.g. "Presenting nine new charities"). I think this book is helping me better understand which new charities might have more potential.

I also really liked "Part II. Making good decisions", which covers many tools that can be useful for personal and professional decision-making (rationality, the scientific method, EA, Weighted Factor Modelling, etc.).

Alfredo_Parra @ 2024-05-27T13:53 (+4)

(The link isn't working for me.)

defun @ 2024-05-27T14:02 (+1)

Fixed. Thanks!

yanni kyriacos @ 2024-05-31T22:53 (+1)

I second this recommendation!

defun @ 2024-07-23T15:22 (+14)

Meta has just released Llama 3.1 405B. It's open-source, and on many benchmarks it beats GPT-4o and Claude 3.5 Sonnet.

Zuck's letter "Open Source AI Is the Path Forward".

EJT @ 2024-07-23T16:48 (+11)

Wait, all the LLMs get 90+ on ARC? I thought LLMs were supposed to do badly on ARC.

JWS 🔸 @ 2024-07-23T16:55 (+16)

It's an unfortunate naming clash, there are different ARC Challenges:

ARC-AGI (Chollet et al) - https://github.com/fchollet/ARC-AGI

ARC (AI2 Reasoning Challenge) - https://allenai.org/data/arc

The reported numbers are from the second of the two.

LLMs (at least without scaffolding) still do badly on ARC-AGI, and I'd wager Llama 405B still doesn't do well on it. It's telling that all the big labs release the 95%+ number they get on AI2-ARC, and not whatever default result they get with ARC-AGI...

(Or in general, reporting benchmarks where they can go OMG SOTA!!!! and not helpfully advance the general understanding of what models can do and how far they generalise. Basically, traditional benchmark cards should be seen as the AI equivalent of "IN MICE")