More to explore on 'What do you think?'
By EA Handbook @ 2022-07-09T23:00
- Effectiveness is a conjunction of multipliers (5 mins.) - one take on why it matters so much to think carefully and critically about which of the above perspectives is right.
Types of criticism
- Disagreeing about what’s effective isn’t disagreeing with effective altruism - Rob Wiblin distinguishes critiques of effective altruism as a concept from critiques of the ways EAs attempt to apply that concept. (5 mins.)
- Four categories of effective altruism critiques (4 mins.)
Systemic change
- Response to Effective Altruism, Iason Gabriel (1 min.)
- Effective altruists love systemic change - Robert Wiblin argues that EA does not, in fact, neglect systemic change. (13 mins.)
- Beware Systemic Change (15 mins.)
  - A critique of pursuing systemic change. How hard is it to figure out which systemic changes will make things better?
  - This is partly an expression of disagreement with others in EA who have embraced systemic change, which was itself partly a response to criticisms like those in the Boston Review.
Is effective altruism a question or an ideology, or both?
- Effective Altruism is a Question (not an ideology) (5 mins.)
- Effective Altruism is an Ideology, not (just) a Question (24 mins.)
General criticisms of effective altruism
- Notes on Effective Altruism (20 mins.)
- The Centre for Effective Altruism’s responses to some common objections (10 mins.)
- Responses to The Logic of Effective Altruism (~20 mins., pick a few to read). Note that these critiques are from 2015.
  - Recommended excerpts:
    - Daron Acemoglu
    - Angus Deaton
    - Jennifer Rubenstein
    - Iason Gabriel
    - Peter Singer’s response
  - How to view these: click the names under “Responses” at the bottom of the original article.
- Towards Ineffective Altruism (15 mins.)
- A critique of effective altruism (11 mins.)
- Another Critique of Effective Altruism (5 mins.)
- The motivated reasoning critique of effective altruism (34 mins.)
- Making decisions under moral uncertainty - Placing credence in multiple ethical systems raises questions of moral uncertainty when those systems disagree. This post summarizes the problem and suggests ways to resolve such issues. (16 mins.)
- Some blindspots in rationality and effective altruism - An EA Forum post that discusses some common pitfalls for rationalists and effective altruists, as well as some meta-considerations. (12 mins.)
- 80,000 Hours’ anonymous answers on flaws in EA
- Critiques of EA that I want to read (16 mins.)
- Effective Altruism: Not Effective and Not Altruistic (27 mins.)
- Stop the Robot Apocalypse - Amia Srinivasan (15 mins.)
- EA is about maximization, and maximization is perilous (8 mins.)
- Reflecting on the last year – Lessons for EA (20 mins.)
Deference and forming inside views
- Some thoughts on deference and inside view models (14 mins.)
- A sketch of good communication (4 mins.)
- How I formed my own views on AI safety (21 mins.)
- Deference Culture in EA (8 mins.)
- Bad Omens in Current Community Building (27 mins.)
Criticism of EA methods
- A philosophical review of Open Philanthropy’s Cause Prioritisation Framework (42 mins.)
- Evidence, Cluelessness, and the Long Term - Hilary Greaves (30 mins.)
- Why we can’t take expected value estimates literally (even when they’re unbiased) - Holden Karnofsky explains why he takes issue with using expected value estimates of impact. (35 mins. - skimmable)
- Summary review of ITN critiques (8 mins.)
Criticism of EA principles
- Pascal’s Mugging - A critique of the application of expected value theory: how do you deal with very low probability events that would be disastrous if they took place? (5 mins.)
- Ethical Systems - Check out other ethical systems not discussed yet in the program. Which ones resonate most with you? (Varies)
- AI alignment, philosophical pluralism, and the relevance of non-Western philosophy - Short talk (18 mins.)
- The Repugnant Conclusion - Total utilitarianism (maximizing overall wellbeing) implies that it’s better to have very many beings with barely positive wellbeing than a smaller number of beings who are all extremely well off. Some people find this counterintuitive, but there’s significant debate on the point. (Video - 6 mins.)
- Utility monster - Another thought experiment suggesting that trying to maximize wellbeing may have counterintuitive implications (5 mins.)
- The bullet-swallowers - Scott Aaronson describes how some theories (like EA) force you to either swallow some tough conclusions or dodge them by contorting the theory. (2 mins.)
Funding and cause prioritization
- The flow of funding in EA movement building - This piece collects and analyzes data on funding for different cause areas within EA. Note that funding is only one of many ways to gauge what the EA movement currently prioritizes; other sources (such as the EA Survey) might show different results. (12 mins.)