Reasons to have hope
By Jordan Pieters 🔸 @ 2023-04-20T10:19 (+53)
This is a very short post mentioning some recent developments that make me hopeful about the future of AI safety work. These mostly relate to increased attention to AI safety concerns. I think this is likely to be good, but you might disagree.
- Eliezer Yudkowsky was invited to give a TED talk and received a standing ovation
- The NSF announced a $20 million request for proposals for empirical AI safety research.
- 46% of Americans are concerned about extinction from AI; 69% support a six-month pause in AI development
- AI Safety concerns have received increased media coverage
- ~700 people applied for AGI Safety Fundamentals in January
- FLI’s open letter has received 27,572 signatures to date
Remember – The world is awful. The world is much better. The world can be much better.
Fai @ 2023-04-21T11:35 (+8)
Remember – The world is awful. The world is much better. The world can be much better.
If we include nonhuman animals, the world is not clearly much better now. It might be much worse now than before, or much better, depending on your view on wild animal suffering.[1] But unfortunately the article entirely disregards nonhuman animals.
So it is entirely possible that "The world is awful. The world is much worse. The world can be much worse."
But to end on a slightly more positive note, let's say "The world is awful. The world is probably much worse. The world can be much better."
[1]
A noteworthy point is that even if factory farming really does reduce the amount of suffering, humans deserve little praise for it, because:
1. We clearly did not do it to reduce wild animal suffering - wild animal suffering was likely not on the minds of the people who developed factory farming at all. But it is fair to say that farmed animals' suffering was on their minds.
2. There are very likely other ways to achieve a greater reduction in wild animal suffering than cutting down rainforests (which reduces the number of wild animals), growing crops on that land, and then feeding those crops to animals we abuse, thereby introducing another source of suffering.
Vasco Grilo @ 2023-04-27T07:53 (+2)
Hi Fai,
I agree on the point about non-human animals, but I guess we should account for future beings too. 1k years ago, we had not fully realised how much humanity (or post-humanity) could flourish, because it was not clear that settling the galaxy etc. was possible, so I think the utility of the future has increased a lot in expectation (as long as you think the future is positive). If we decrease existential risk, we can increase it even further. In other words:
- The world is awful, because existential risk is quite high.
- The world is much better, because we have realised vast flourishing is possible.
- The world can be much better, because we can decrease existential risk.
Fai @ 2023-04-27T13:44 (+4)
Thank you for your reply!
I think the future is not clearly positive if we also consider non-human animals (and digital people and animals). I think actually colonising the galaxy could be a bad thing: for example, it could spread wild animal suffering and factory farming.
Vasco Grilo @ 2023-04-27T16:13 (+2)
Fair enough! I agree with Saulius that digital minds might be much more important than WAW in the future. I see you wrote about why the expected number of farmed animals in the far future might be huge. I have only read the summary of your piece (added it to my reading list now), but I agree that "digital people will, presumably, have very few incentive to raise animals for food, or even other purposes". In addition, I think there are scenarios in which digital beings dominate total utility by many, many orders of magnitude (OOMs), whereas I find it hard to imagine wild animal welfare dominating by much more than the current 8 OOMs (roughly my best guess).