What I believe, part 1: Utilitarianism | Sunyshore
By Eevee🔹 @ 2021-01-10T17:58 (+10)
This is a linkpost to https://sunyshore.substack.com/p/what-i-believe-part-1-utilitarianism
I wrote this post for my personal blog, Sunyshore, about why I think total utilitarianism is closest to the correct ethical theory and what it implies for society. Since my blog is written for a general audience, this post explains a lot of basic concepts that most EA Forum users will already be familiar with, and it's written in a more casual tone than I'd use here.
I've pasted the text of my blog post below, but I encourage you to check out the original version with images on Substack, and to subscribe so you'll receive my next post as soon as I publish it.
Happy new year, and welcome back to Sunyshore!
This is the first in a new series of posts about the foundations of my ethical and political worldview. Currently, I support effective altruism, which uses reason and evidence to benefit humans and other sentient beings as much as possible. At the level of public policy, I identify foremost as a social liberal: I support liberal democracy and largely free markets, together with government intervention to reduce inequality and provide public goods.
I expect my beliefs to change over time—they fluctuate from day to day depending on what I learn and experience—and this post is just a snapshot of my beliefs in the present moment. It may not reflect my beliefs a year from now.
I intend to cover a lot of topics in this series, ranging from economic systems to technological progress. In this first post, I will discuss utilitarianism and the foundations of my worldview.
So, let’s get to it!
Moral agents and moral patients
First, let me explain what I mean by “moral agent” and “moral patient,” since I will be using these terms throughout this post and future posts on ethics. These terms are seldom used outside of moral philosophy and are often conflated into the single concept of “moral personhood.”
- Moral patients are beings whose welfare (pleasure minus suffering) is morally relevant. To me, moral patienthood requires both sentience (the ability to have feelings) and qualia (conscious experience).
- Moral agents are beings whose actions are morally relevant. Moral agency requires the ability to reason about one’s actions, so that one can be held morally responsible for them.
Humans are moral patients because they can experience pain and pleasure, and moral agents because they can reason about and take moral responsibility for their actions. Autonomous robots are moral agents because they reason about the effects of their actions on the real world, but they are not moral patients because they lack sentience and conscious experience. By contrast, some non-human animals, such as chickens and cattle, are moral patients because they experience pleasure and pain, but they are not moral agents because humans, who cannot communicate with them, cannot meaningfully hold them responsible for their actions.
The veil of ignorance
In this section, I present an argument for why I believe total utilitarianism—which aims to maximize the total well-being of all moral patients—is closest to the correct ethical theory.
The original position is a well-known thought experiment in ethics in which members of a society are given a chance to decide how that society should work, like a role-playing video game in which players decide on the game mechanics before they start playing. The players deliberate behind a veil of ignorance: they know nothing in advance about who they will be, including their social status, race, ethnicity, gender, or where and when they will be born. Because players negotiate without knowing their specific stations in the resulting society, they must deliberate impartially, as if any of them could end up as the richest or the poorest person; a light- or dark-skinned person; an able-bodied or disabled person; a person born with male, female, or intersex reproductive traits.
The most famous version of the veil of ignorance was developed by the philosopher John Rawls in his book A Theory of Justice (1971). However, Rawls adapted the concept from earlier thinkers, including the philosopher Immanuel Kant and the economist John Harsanyi. Harsanyi argued that people deliberating behind the veil of ignorance would design their society to maximize their expected, or average, utility.
But wait. Does expected utility really mean average utility? Average utility is the average welfare of the moral patients who exist, whereas total utility also depends on how many patients exist. Depending on the society chosen by our players in the original position, different numbers of people will be instantiated. For example, if humanity goes extinct by 2100, then anyone slated to be born after 2100 will not exist; if everyone prefers existing to not existing, those people will prefer a world in which humanity survives past 2100. In general, a player's expected utility is the probability that they will be instantiated multiplied by the average utility of those who do exist, and with a fixed pool of potential people that product is proportional to total utility. So each player will want to maximize the total utility of everyone instantiated.
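A back-of-the-envelope way to see this (a toy formalization of my own, counting non-existence as zero welfare): suppose there is a fixed pool of $N$ potential people behind the veil, and a candidate world instantiates $n$ of them with average welfare $\bar{u}$. Then a random potential person's expected utility is

$$\mathbb{E}[U] = \frac{n}{N}\,\bar{u} + \frac{N-n}{N}\cdot 0 = \frac{n\bar{u}}{N} = \frac{U_{\text{total}}}{N}.$$

Since $N$ is the same for every candidate world, ranking worlds by expected utility gives the same ordering as ranking them by total utility.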
Similarly, we can show that the players will want to maximize the utility of all moral patients, not just human beings. Even though the players are capable of reasoning (and thus moral agency) while in the original position, they could be instantiated as humans or non-human animals, with or without moral agency. All players have a stake in the decision-making process whether or not they end up as moral agents.
Utilitarianism in the real world
Based on my (non-expert) knowledge of the social sciences, especially economics and political science, I think that a society with maximum total utility would have the following characteristics:
- It would avoid unnecessary suffering and violence. Thus, it would provide for everyone’s safety while avoiding excessive or discriminatory punishments, and it would be free from war and armed conflict.
- It would tolerate various ways of living in terms of religion, political belief, culture, sexuality, and so on; and it would be free from prejudice and discrimination based on morally irrelevant features like race and gender.
- It would have a globalized, free-market economy (to promote economic efficiency and growth) with an effective welfare state (to limit inequality). Both markets and government intervention would work together to eliminate poverty and create wealth for all.
- It would protect non-human animals and the natural and built environments, since everyone benefits from clean air, clean water, and a good climate.
- It would protect humanity from existential risks, such as biological and nuclear weapons, so that humanity can survive and flourish for thousands, if not millions, of years.
- It would have mechanisms to make progress and address new challenges. Thus, it would have inclusive, democratic institutions, as well as freedom of speech and assembly, so that people can openly propose and debate ideas for improvement.
In short, such a society would embrace economic, social, and political liberalism. It would be an open society in which everyone can fully participate, free from discrimination and violence. But the real world is full of suffering and injustice, even as it has improved so much in the last 200 years. How can we build a better world?
Countless intellectuals and social movements have dedicated themselves to improving the world. One such movement is liberalism, a diverse political movement that aims to secure civil and political rights and create shared prosperity through the reform of political and economic institutions. Liberalism came of age in the early 19th century, and it has come to dominate modern politics. More recently, the effective altruism movement has applied careful reasoning and evidence to figure out how to help others as effectively as possible. Organizations working in this spirit include GiveWell, Open Philanthropy, and the Gates Foundation.
I plan to write more posts about how we can improve the world, drawing from both the liberal and effective altruist traditions—be sure to subscribe so you’ll receive them. Also, if you like this post, please share it with your friends.
In the meantime, you can learn more about utilitarianism at Utilitarianism.net, a website co-written by William MacAskill, a philosophy professor at Oxford and one of the founders of effective altruism, and Darius Meissner, an Oxford student and fellow member of the EA community.
Take care!
RogerAckroyd @ 2021-01-11T12:07 (+4)
I am attracted to utilitarianism, but I find some of its possible implications off-putting. I also have some objections from first principles.
One objection is that any numbers we use in practice just have to be made up. (This objection might be especially serious if we take animals into account, which I think we should.) So maybe utilitarianism is the "correct" theory, but if I don't have access to the correct utilities, it is not clear whether I should use made-up numbers to do the expected utility calculations. One might compare with theorems saying that individual rational choice is equivalent to maximizing a von Neumann-Morgenstern utility function. Yet very few people, even economists, try to do that in practice, and it is not clear that people would be less irrational if they tried to calculate their expected utility in various circumstances.
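To make the worry concrete, here is a minimal sketch (the welfare weights are numbers I made up purely for illustration) of how an expected-utility comparison can hinge entirely on a guessed parameter:

```python
# Toy comparison of two interventions by expected utility.
# The per-chicken welfare weight is a made-up number with no
# principled source, which is exactly the objection.

def expected_utility(outcomes):
    """Sum of probability-weighted welfare over (probability, welfare) pairs."""
    return sum(p * u for p, u in outcomes)

# Intervention A: helps 1 human with 90% probability.
# Intervention B: helps 10 chickens with 90% probability.
for chicken_weight in (0.02, 0.08, 0.3):
    a = expected_utility([(0.9, 1.0)])
    b = expected_utility([(0.9, 10 * chicken_weight)])
    print(f"weight {chicken_weight}: A={a:.2f}, B={b:.2f} -> choose {'A' if a > b else 'B'}")
```

The verdict flips from A to B as the guessed weight varies within a plausible-sounding range, so the calculation is only as good as the made-up number.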
A second theoretical objection I have is that if we suppose there is any chance that humanity, or sentient life, will survive forever, then the universe will contain infinite amounts of pain and pleasure, all the calculations become divergent, and the theory gives no guidance at all. You might object that this is impossible under current scientific theories, but the conclusion goes through no matter how small the probability is. Surely there is at least a 1/Ackermann(1000) chance that our current understanding of physics is wrong?
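In symbols (my formalization of the point, simplifying so that the infinite-future branch contributes unbounded welfare of either sign):

$$\mathbb{E}[U] = p \cdot (\pm\infty) + (1 - p) \cdot u_{\text{finite}},$$

which is infinite, or undefined if both infinite pain and infinite pleasure are possible, for any $p > 0$. However astronomically small $p$ is, the finite term drops out of the comparison entirely.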