On infinite ethics
By Joe_Carlsmith @ 2022-01-31T07:17 (+94)
(Cross-posted from Hands and Cities)
And for all this, nature is never spent…
Summary:
- Infinite ethics (i.e., ethical theory that considers infinite worlds) is important – both in theory and in practice.
- Infinite ethics puts serious pressure on various otherwise-plausible ethical principles (including some that underlie common arguments for “longtermism”). We know, from impossibility results, that some of these will have to go.
- A willingness to be “fanatical” about infinities doesn’t help very much. The hard problem is figuring out how to value different infinite outcomes – and in particular, lotteries over infinite outcomes.
- Proposals for how to do this tend to be some combination of: silent about tons of choices; in conflict with principles like “if you can help an infinity of people and harm no one, do it”; sensitive to arbitrary and/or intuitively irrelevant things; and otherwise unattractive/horrifying.
- Also, the discourse thus far has focused almost entirely on countable infinities. If we have to deal with larger infinities, they seem likely to break whatever principles we settle on for the countable case.
- I think infinite ethics punctures the dream of a simple, bullet-biting utilitarianism. But ultimately, it’s everyone’s problem.
- My current guess is that the best thing to do from an infinite ethics perspective is to make sure that our civilization reaches a wise and technologically mature future – one of superior theoretical and empirical understanding, and superior ability to put that understanding into practice.
- But reflection on infinite ethics can also inform our sense of how strange such a future’s ethical priorities might be.
Thanks to Leopold Aschenbrenner, Amanda Askell, Paul Christiano, Katja Grace, Cate Hall, Evan Hubinger, Ketan Ramakrishnan, Carl Shulman, and Hayden Wilkinson for discussion. And thanks to Cate Hall for some poetry suggestions.
I. The importance of the infinite
Most of ethics ignores infinities. They’re confusing. They break stuff. Hopefully, they’re irrelevant. And anyway, finite ethics is hard enough.
Infinite ethics is just ethics without these blinders. And ditching the blinders is good. We have to deal with infinities in practice. And they are deeply revealing in theory.
Why do we have to deal with infinities in practice? Because maybe we can do infinite things.
More specifically, we might be able to influence what happens to an infinite number of “value-bearing locations” – for example, people. This could happen in two ways: causal, or acausal.
The causal way requires funkier science. It’s not that infinite universes are funky: to the contrary, the hypothesis that we share the universe with an infinite number of observers is very live, and various people seem to think it’s the leading cosmology on offer (see footnote).[1] But current science suggests that our causal influence is made finite by things like lightspeed and entropy (though see footnote for some subtlety).[2] So causing infinite stuff probably needs new science. Maybe we learn to make hypercomputers, or baby universes with infinite space-times.[3] Maybe we’re in a simulation housed in a more infinite-causal-influence-friendly universe. Maybe something about wormholes? You know, sci-fi stuff.
The acausal way can get away with more mainstream science. But it requires funkier decision theory. Suppose you’re deciding whether to make a $5000 donation that will save a life, or to spend the money on a vacation with your family. And suppose, per various respectable cosmologies, that the universe is filled with an infinite number of people very much like you, faced with choices very much like yours. If you donate, this is strong evidence that they all donate, too. So evidential decision theory treats your donation as saving an infinite number of lives, and as sacrificing an infinite number of family vacations (does one outweigh the other? on what grounds?). Other non-causal decision theories, like FDT, will do the same. The stakes are high.
Perhaps you say: Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough.
And whatever our credences here, we should be clear-eyed about the fact that helping or harming an infinite number of people would be an extremely big deal. Saving a hundred lives, for example, is a deeply significant act. But saving a thousand lives is even more so; a million, even more so; and so on. For any finite number of lives, though, saving an infinite number would save more than that. So saving an infinite number of lives matters at least as much as saving any finite number – and very plausibly, it matters more (see Beckstead and Thomas (2021) for more).
And the point generalizes: for any way of helping/harming some finite set of people, doing that to an infinite number of people matters at least as much, and plausibly more. And if you’re the type of person who thinks that e.g. saving 10x the lives is 10x as important, it will be quite natural and tempting to say that the infinite version matters infinitely more.
Of course, accepting these sorts of stakes can lead to “fanaticism” about infinities, and neglect of merely finite concerns. I’ll touch on this below. For now, I mostly want to note that, just as you can recognize that humanity’s long-term future matters a lot, without becoming indifferent to the present, so too can you recognize that helping or harming an infinite number of people would matter a lot, without becoming indifferent to the merely finite. Perhaps you do not yet have a theory that justifies this practice; perhaps you’ll never find one. But in the meantime, you need not distort the stakes of infinite benefits and harms, and pretend that infinity is actively smaller than e.g. a trillion.
I emphasize these stakes partly because I’m going to be using the word “infinite” a lot, and casually, with reference to both wonderful and horrifying things. My examples will be math-y and cartoonish. Faced with such a discourse, it can be easy to start numbing out, or treating the topic like a joke, or a puzzle, or a wash of weirdness. But ultimately, we’re talking about situations that would involve actual, live human beings – the same human beings whose lives are at stake in genocides, mental hospitals, slums; human beings who fall in love, who feel the wind on their skin, who care for dying parents as they fade. In infinite ethics, the stakes are just: what they always are. Only: unendingly more.
Here I’m reminded of people who realize, after engaging with the terror and sublimity of very large finite numbers (e.g., Graham’s number), that “infinity,” in their heads, was actually quite small, such that e.g. living for eternity sounds good, but living a Graham’s number of years sounds horrifying (see Tim Urban’s “PS” at the bottom of this post). So it’s worth taking a second to remember just how non-small infinity really is. The stakes it implies are hard to fathom. But they’re crucial to remember – especially given that, in practice, they may be the stakes we face.
Even if you insist on ignoring infinities in practice, though, they still matter in theory. In particular: whatever our actual finitude, ethics shouldn’t fall silent in the face of the infinite. Nor does it. Suppose you were God, choosing whether to create an infinite heaven, or an infinite hell. Flip a coin? Definitely not. Ok then: that’s a data point. Let’s find others. Let’s get some principles. It’s a familiar game – and one we often use merely possible worlds to play.
Except: the infinite version is harder. Instructively so. In particular: it breaks tons of stuff developed for the finite version. Indeed, it can feel like staring into a void that swallows all sense-making. It’s painful. But it’s also good. In science, one often hopes to get new data that ruins an established theory. It’s a route to progress: breaking the breakable is often key to fixing it.
Let’s look into the void.
II. On “locations” of value
Forever – is composed of Nows –
A quick note on set-up. The standard game in infinite ethics is to put finite utilities on an infinite set (specifically, a countably infinite set) of value-bearing “locations.” But it can make an important difference what sort of “locations” you have in mind.
Here’s a classic example (adapted from Cain (1995); see also here). Consider two worlds:
Zone of suffering: An infinite line of immortal people, numbered starting at 1, who all start out happy (+1). On day 1, person 1 becomes sad (-1), and stays that way forever. On day 2, person 2 becomes sad, and stays that way forever. And so on.
| | Person 1 | Person 2 | Person 3 | Person 4 | Person 5 | … |
|---|---|---|---|---|---|---|
| Day 1 | -1 | 1 | 1 | 1 | 1 | … |
| Day 2 | -1 | -1 | 1 | 1 | 1 | … |
| Day 3 | -1 | -1 | -1 | 1 | 1 | … |
| … | | | | | | |
Zone of happiness: Same world, but the happiness and sadness are reversed: everyone starts out sad, and on day 1, person 1 becomes happy; day 2, person 2, and so on.
| | Person 1 | Person 2 | Person 3 | Person 4 | Person 5 | … |
|---|---|---|---|---|---|---|
| Day 1 | 1 | -1 | -1 | -1 | -1 | … |
| Day 2 | 1 | 1 | -1 | -1 | -1 | … |
| Day 3 | 1 | 1 | 1 | -1 | -1 | … |
| … | | | | | | |
In zone of suffering, at any given time, the world has finite sadness, and infinite happiness. But any given person is finitely happy, and infinitely sad. In zone of happiness, it’s reversed. Which is better?
My take is that the zone of happiness is better. It’s where I’d rather live, and choosing it fits with principles like “if you can save everyone from infinite suffering and give them infinite happiness instead, do it,” which sound pretty solid. We can talk about analogous principles for “times,” but from a moral perspective, agents seem to me more fundamental.
My broader point, though, is that the choice of “location” matters. I’ll generally focus on “agents.”
III. Problems for totalism
Friend,
the hours will hardly pardon you their loss,
those brilliant hours that wear away the days,
those days that eat away eternity.
OK, let’s start with easy stuff: namely, problems for a simple, total utilitarian principle that directs you to maximize the total welfare in the universe.
First off: “total welfare in the universe” gets weird in infinite worlds. Consider a world with infinite people at +2 welfare, and an infinite number at -1. What’s the total welfare? It depends on the order you add. If you go: +2, -1, -1, +2, -1, -1, then the total oscillates forever between 0 and 2 (if you prefer to hang out near a different number, just add or subtract the relevant amount at the beginning, then start oscillating). If you go: +2, -1, +2, -1, you get ∞. If you go: +2, -1, -1, -1, +2, -1, -1, -1, you get –∞. So which is it? If you’re God, and you can create this world, should you?
Or consider a world where the welfare levels are: 1, -1/2, 1/3, -1/4, 1/5, and so on. Depending on the order you use, these can sum to any welfare level you want (see the Riemann Rearrangement Theorem; and see the Pasadena Game for decision-theory problems this creates). Isn’t that messed up? Not the type of situation the totalist is used to. (Maybe you don’t like infinitely precise welfare levels. Fine, stick with the previous example.)
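To make the order-dependence concrete, here is a minimal numerical sketch (Python; purely illustrative, and the helper names are my own) that adds up the same welfare levels in different orders, including a Riemann-style rearrangement steered toward an arbitrary target:

```python
# Purely illustrative: the "total welfare" of one and the same infinite population
# depends on the order in which the welfare levels get added up.
from itertools import islice, cycle

def partial_sums(terms, n=12):
    """First n partial sums of an infinite sequence of welfare levels."""
    total, out = 0.0, []
    for x in islice(terms, n):
        total += x
        out.append(round(total, 3))
    return out

# A world with infinitely many people at +2 and infinitely many at -1:
print(partial_sums(cycle((2, -1, -1))))       # oscillates: 2, 1, 0, 2, 1, 0, ...
print(partial_sums(cycle((2, -1))))           # climbs without bound: 2, 1, 3, 2, 4, 3, ...
print(partial_sums(cycle((2, -1, -1, -1))))   # sinks without bound: 2, 1, 0, -1, 1, 0, -1, -2, ...

# Riemann-style rearrangement of 1, -1/2, 1/3, -1/4, ...: reorder the very same
# terms, and the partial sums can be steered toward any target you like.
def rearranged(target):
    pos, neg, total = 1, 2, 0.0   # next odd / even denominator, running sum
    while True:
        if total <= target:
            total += 1 / pos
            yield 1 / pos
            pos += 2
        else:
            total -= 1 / neg
            yield -1 / neg
            neg += 2

print(partial_sums(rearranged(0.123), n=5000)[-1])   # hovers near 0.123
```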
Maybe we demand enough structure to fix a definite order (this already involves giving up some cherished principles – more below). But now consider an infinite world where everyone’s at 1. Suppose you can bump everyone up to 2. Shouldn’t you do it? But the “total welfare” is the same: ∞.
So “totals” get funky. But there’s also another problem: namely, that if the total is infinite (whether positive or negative), then finite changes won’t make a difference. So the totalist in an infinite world starts shrugging at genocides. And if they can only ever do finite stuff, they start treating all their possible actions as ethically indifferent. Very bad. As Bostrom puts it:
“This should count as a reductio by everyone’s standards. Infinitarian paralysis is not one of those moderately counterintuitive implications that all known moral theories have, but which are arguably forgivable in light of the theory’s compensating virtues. The problem of infinitarian paralysis must be solved, or else aggregative consequentialism must be rejected.” (p. 45).
Strong words. Worrying.
But actually, even if I put a totalist hat on, I’m not too worried. If “how can finite changes matter in infinite worlds?” were the only problem we faced, I’d be inclined to ditch talk about maximizing total welfare, and to focus instead on maximizing the amount of welfare that you add on net. Thus, in a world of infinite 1s, bumping ten people up to 2 adds 10. Nice. Worth it. Size of drop, not size of bucket.[4]
But “for totalists in infinite worlds, are finite genocides still bad?” really, really isn’t the only problem that infinities create.
IV. Infinite fanatics
In the finite no happiness can ever breathe.
The Infinite alone is the fulfilling happiness.
Another problem I want to note, but then mostly set aside, is fanaticism. Fanaticism, in ethics, means paying extreme costs with certainty, for the sake of tiny probabilities of sufficiently big-deal outcomes.
Thus, to take an infinite case: suppose that you live in a finite world, and everyone is miserable. You are given a one-time opportunity to choose between two buttons. The blue button is guaranteed to transform your world into a giant (but still finite) utopia that will last for trillions of years. The red button has a one-in-a-graham’s-number chance of creating a utopia that will last infinitely long. Which should you press?
Here the fanatic says: red. And naively, if an infinite utopia is infinitely valuable, then expected utility theory agrees: the EV of red is infinite (and positive), and the EV of blue, merely finite. But one might wonder. In particular: red seems like a loser’s game. You can press red over and over for a trillion^trillion years, and you just won’t win. And wasn’t rationality about winning?
This isn’t a purely infinity problem. Verdicts like “red” are surprisingly hard to avoid, even for merely finite outcomes, without saying other very unattractive things (see Beckstead and Thomas (2021) and Wilkinson (2021) for discussion).
Plausibly, though, the infinite version is worse. The finite fanatic, at least, cares about how tiny the probability is, and about the finite costs of rolling the dice. But the infinite fanatic has no need for such details: she pays any finite cost for any probability of an infinite payoff. Suppose that: oops, we overestimated the probability of red paying out by a factor of a graham’s number. Oops: we forgot that red also tortures a zillion kittens with certainty. The infinite fanatic doesn’t even blink. The moment you said “infinity,” she tuned all that stuff out.
Note that varying the “quality” of the infinity (while keeping its sign the same) doesn’t matter either. Suppose that oops: actually, red’s payout is just a single, barely-conscious, slightly-happy lizard, floating for eternity in space. For a sufficiently utilitarian-ish infinite fanatic, it makes no difference. Burn the Utopia. Torture the kittens. I know the probability of creating that lizard is unthinkably negligible. But we have to try.
What’s more, the finite fanatic can reach for excuses that the infinite fanatic cannot. In particular, the finite fanatic can argue that, in her actual situation, she faces no choices with the relevantly problematic combination of payoffs and probabilities. Whether this argument works is another question (I’m skeptical). But the infinite fanatic can’t even voice it. After all, any non-zero credence on an infinite payoff is enough to bite her. And since it is always possible to get evidence that infinite payoffs are available (God could always appear before you with various multi-colored buttons), non-zero credences seem mandatory. Thus, no matter where she is, no matter what she has seen, the infinite fanatic never gives finite things any intrinsic attention. When she kisses her children, or prevents a genocide, she does it for the lizard, or for something at least as large.
(This “non-zero credences on infinities” issue is also a problem for assigning expected sizes to empirical quantities. What’s your expected lifespan? Oops: it’s infinite. How long will humanity survive, in expectation? Oops: eternity. How tall, in expectation, is that tree? Oops: infinity tall. I guess we’ll just ignore this? Yep, I guess we will.)
But infinite fanaticism isn’t our biggest infinity problem either. Notably, for example, it seems structurally similar to finite fanaticism, and one expects a similar diagnosis. But also: it’s a type of bullet a certain sort of person has gotten used to biting (more below). And biting has a familiar logic: as I noted above, infinities really are quite a big-deal thing. Maybe we can live with obsession? There’s a grand tradition, for example, of treating God, heaven, hell, etc as lexically more important than the ephemera of this fallen world. And what is heaven but a gussied-up lizard? (Well, one hopes for distinctions.)
No, the biggest infinity problems are harder. They break our familiar logic. They serve up bullets no one dreamed of biting. They leave the “I’ll just be hardcore about it” train without tracks.[5]
V. The impossibility of what we want
From this – experienced Here –
Remove the Dates – to These –
Let Months dissolve in further Months –
And Years – exhale in Years
In particular: whether you’re obsessed with infinities or not, you need to be able to choose between them. Notably, for example, you might (non-zero credences!) run into a situation where you need to create one infinite baby universe (hypercomputer, etc), vs. another. And as I noted above, we have views about this. Heaven > hell. Infinite utopia > infinite lizard (at least according to me).
And even absent baby-universe stuff, EDT-ish folks (and people with non-trivial credence on EDT-ish decision-theories) with mainstream credences on infinite cosmologies are already choosing between infinite worlds – and even, infinite differences between worlds – all the time. Whenever an EDT-ish person moves their arm, they see (with very substantive probability) an infinite number of arms, all across the universe, moving too. Every donation is an infinite donation. Every papercut is an infinity of pain. Yet: whatever your cosmology and decision theory, isn’t a life-saving donation worth a papercut? Aren’t two life-saving donations better than one?
Ok, then, let’s figure out the principles at work. And let’s start easy, with what’s called an “ordinal” ranking of infinite worlds: that is, a ranking that says which worlds are better than which others, but which doesn’t say how much better.
Suppose we want to endorse the following extremely plausible principle:
Pareto: If two worlds (w1 and w2) contain the same people, and w1 is better for an infinite number of them, and at least as good for all of them, then w1 is better than w2.
Pareto looks super solid. Basically it just says: if you can help an infinite number of people, without hurting anyone, do it. Sign me up.
But now we hit problems. Consider another very attractive principle:
Agent-neutrality: If there is a welfare-preserving bijection from the agents in w1 to the agents in w2, then w1 and w2 are equally good.
By “welfare-preserving bijection,” I mean a mapping that pairs each agent in w1 with a single agent in w2, and each agent in w2 with a single agent in w1, such that both members of each pair have the same welfare level. The intuitive idea here is that we don’t have weird biases that make us care more about some agents than others for no good reason. A world with a hundred Alices, each at 1, has the same value as a world of a hundred Bobs, each at 1. And a world where Alice has 1, and Bob has 2, has the same value as a world where Alice has 2, and Bob has 1. We want the agents in a world to flourish; but we don’t care extra about e.g. Bob flourishing in particular. Once you’ve told me the welfare levels in a given world, I don’t need to check the names.
(Maybe you say: what if Alice and Bob differ in some intuitively relevant respect? Like maybe Bob has been a bad boy and deserves to suffer? Following common practice, I’m ignoring stuff like this. If you like, feel free to add further conditions like “provided that everyone is similar in XYZ respects.”)
The problem is that in infinite worlds, Pareto and Agent-Neutrality contradict each other. Consider the following example (adapted from Van Liedekerke (1995)). In w1, every fourth agent has a good life. In w2, every second agent has a good life. And the same agents exist in both worlds.
| Agents | a1 | a2 | a3 | a4 | a5 | a6 | a7 | … |
|---|---|---|---|---|---|---|---|---|
| w1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | … |
| w2 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | … |
By Pareto, w2 is better than w1 (it’s better for a3, a7, and so on, and just as good for everyone else). But there is also a welfare-preserving bijection from w1 to w2: you just map the 1s in w1 to the 1s in w2, in order, and the same for the 0s. Thus: a1 goes to a1, a2 goes to a2, a3 goes to a4, a4 goes to a6, a5 goes to a3, and so on. So by Agent-Neutrality, w1 and w2 are equally good. Contradiction.
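If it helps to see the two halves of the conflict side by side, here is a small sketch (Python; purely illustrative, checking only a finite prefix of the agents, with the cutoff N as my own stand-in for “infinitely many”):

```python
# Purely illustrative: checking the two halves of the conflict on a finite prefix of
# the agents. N is just a cutoff for checking; the infinite worlds extend the pattern.
N = 100_000

def w1(i):  # every fourth agent (a1, a5, a9, ...) has a good life
    return 1 if i % 4 == 1 else 0

def w2(i):  # every second agent (a1, a3, a5, ...) has a good life
    return 1 if i % 2 == 1 else 0

# Pareto's premise: w2 is at least as good for everyone, and strictly better for
# infinitely many agents (a3, a7, a11, ...).
assert all(w2(i) >= w1(i) for i in range(1, N + 1))
print([i for i in range(1, N + 1) if w2(i) > w1(i)][:3])   # [3, 7, 11]

# Agent-Neutrality's premise: pair the k-th 1 of w1 with the k-th 1 of w2, and the
# k-th 0 of w1 with the k-th 0 of w2.
ones_w1  = [i for i in range(1, N + 1) if w1(i) == 1]
ones_w2  = [i for i in range(1, N + 1) if w2(i) == 1]
zeros_w1 = [i for i in range(1, N + 1) if w1(i) == 0]
zeros_w2 = [i for i in range(1, N + 1) if w2(i) == 0]
bijection = {**dict(zip(ones_w1, ones_w2)), **dict(zip(zeros_w1, zeros_w2))}

# Every matched pair has the same welfare level, so the mapping is welfare-preserving.
assert all(w1(a) == w2(b) for a, b in bijection.items())
print(sorted(bijection.items())[:5])   # [(1, 1), (2, 2), (3, 4), (4, 6), (5, 3)], as in the text
```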
Here’s another example (adapted from Hamkins and Montero (1999)). Consider an infinite world where each agent is assigned to an integer, which determines their well-being, such that each agent i is at i welfare. And now suppose you could give each agent in this world +1 welfare. Should you do it? By Pareto, yes. But wait: have you actually improved anything? By Agent-Neutrality: no. There’s a welfare preserving bijection from each agent i in the first world to agent i-1 in the second:
| Agents | … | a-3 | a-2 | a-1 | a0 | a1 | a2 | a3 | … |
|---|---|---|---|---|---|---|---|---|---|
| w3 | … | -3 | -2 | -1 | 0 | 1 | 2 | 3 | … |
| w4 | … | -2 | -1 | 0 | 1 | 2 | 3 | 4 | … |
Indeed, Agent-neutrality mandates indifference to the addition or subtraction of any uniform level of well-being in w3. You could harm each agent by a million, or help them by a zillion, and Agent-neutrality will shrug: it’s the same distribution, dude.
Clearly, then, either Pareto or Agent-Neutrality has got to go. Which is it?
My impression is that ditching Agent-Neutrality is the more popular option. One argument for this is that Pareto just seems so right. If we’re not in favor of helping an infinite number of agents, or against harming an infinite number, then where on earth has our ethics landed us?
Plus: Agent-Neutrality causes problems for other attractive, not-quite-Pareto principles as well. Consider:
Anti-infinite-sadism: It’s bad to add infinitely many suffering agents to a world.
Seems right. Very right, in fact. But now consider an infinite world where everyone is at -1. And suppose you can add another infinity of people at -1.
| Agents | a1 | a2 | a3 | a4 | a5 | a6 | a7 | … |
|---|---|---|---|---|---|---|---|---|
| w5 | -1 | -1 | -1 | -1 | … | | | |
| w6 | -1 | -1 | -1 | -1 | -1 | -1 | -1 | … |
Agent-neutrality is like: shrug, it’s the same distribution. But I feel like: tell that to the infinity of distinct suffering people you just created, dude. If there is a button on the wall that says “create an extra infinity of suffering people, once per second,” one does not lean casually against it, regardless of whether it’s already been pressed.
On the other hand, when I step back and look at these cases, my agent-neutrality intuitions kick in pretty hard. That is, pairs like w3 and w4, and w5 and w6, really start to look like the same distribution.
Here’s a way of pumping the intuition. Consider a world just like w3/w4, except with an entirely different set of people (call them the “b-people”).
| Agents | … | b-3 | b-2 | b-1 | b0 | b1 | b2 | b3 | … |
|---|---|---|---|---|---|---|---|---|---|
| w7 | … | -3 | -2 | -1 | 0 | 1 | 2 | 3 | … |
Compared to w3, w7 really looks equally good: switching from a-people to b-people doesn’t change the value. But so, too, does w7 look equally good when compared to w4 (it doesn’t matter which b-person we call b0). But by Pareto, it can’t be both.
We can pump the same sort of intuition with w5, w6, and another infinite b-people world consisting of all -1s (call this w8). I feel disinclined to pay to move from w5 to w8: it’s just another infinite line of -1s. But I feel the same about w6 and w8. Yet I am very into paying to prevent the addition of an extra infinity of suffering people to a world. What gives?
What’s more, my understanding is that the default way to hold onto Pareto, in this sort of case, is to say that w7 is “incomparable” to w3 and w4 (e.g., it’s neither better, nor worse, nor equally good), even though w3 and w4 are comparable to each other. There’s a big literature on incomparability in philosophy, which I haven’t really engaged with. One immediate problem, though, has to do with money-pumps.
Suppose that I’m God, about to create w3. Someone offers me w4 instead, for $1, and I’m like: hell yeah, +1 to an infinite number of people. Now someone offers me w7 in exchange for w4. They’re incomparable, so I’m like … um, I think the thing people say here is that I’m “rationally permitted” to either trade or not? Ok, f*** it, let’s trade. Now someone else says: wait, how about w3 in exchange for w7? Another “whatever” choice: so again I shrug, and trade. But now I’m back to where I started, except with $1 less. Not good. Money-pumped.
Fans of incomparability will presumably have a lot to say about this kind of case. For now I’ll simply register a certain kind of “bleh, whatever we end up saying here is going to kind of suck” feeling. (For example: if in order to avoid money-pumping, the incomparabilist forces me to “complete” my preferences in a particular way once I make certain trades, such that I end up treating w7 as equal either to w3 or w4, but not both, I feel like: which one? Either choice seems arbitrary, and I don’t actually think that w7 is better/worse than one of w3 or w4. Why am I acting like I do?)
Overall, this looks like a bad situation to me. We have to start shrugging at infinities of benefit or harm, or we have to start being opinionated/weird about worlds that really look the same. I don’t like it at all.
And note that we can run analogous arguments for basic locations of value other than agents. Suppose, for example, that we replace each of the “agents” in the worlds above with spatio-temporal regions. We can then derive similar contradictions between e.g. “spatio-temporal Pareto” (if you make some spatio-temporal regions better, and none worse, that’s an improvement), and “spatio-temporal-neutrality” (e.g., it doesn’t matter in which spatio-temporal region a given unit of value occurs, as long as there’s a value-preserving bijection between them). And the same goes for person-moments, generations, and so forth.
This contradiction between something-Pareto and something-Neutrality is one relatively simple impossibility result in infinite ethics. The literature, though, contains a variety of others (see e.g. Zame (2007), Lauwers (2010), and Askell (2018)). I haven’t dug in on these much, but at a glance, they seem broadly similar in flavor.
And note that we can get contradictions between something-Pareto and something-else-Pareto as well: for example, Pareto over agents and Pareto over spatio-temporal locations. Thus, consider a single room where Alice will live, then Bob, then Cindy, and so forth, onwards for eternity. In w9, each of them lives for 100 happy years. In w10, each lives for 1000 slightly less happy years, such that each life is better overall. w10 is better for every agent. But w9 is better at every time (this example is adapted from Arntzenius (2014)). So which is better overall? Here, following my verdict about the zone of happiness, I’m inclined to go with w10: agents, I think, are the more fundamental unit of ethical concern. But one might’ve thought that making an infinite number of spatio-temporal locations worse would make the world worse, not better.
Pretty clearly, some stuff we liked from finite land is going to have to go.
VI. Ordinal rankings aren’t enough
Suppose we bite the bullet and ditch Pareto or Agent-Neutrality. We’re still nowhere close to generating an ordinal ranking over infinite worlds. Pareto, after all, is an extremely weak principle: it stops applying as soon as a given world is better for one agent, and worse for another (for example, donations vs. papercuts). And Agent-Neutrality stops applying without a welfare-preserving bijection. So even with a nasty bullet fresh in our teeth, a lot more work is in store.
Worse, though, ordinal rankings aren’t enough. They tell you how to choose between certainties of one outcome vs. another. But real choices afford no such certainty. Rather, we need to choose between probabilities of creating one outcome vs. another. Suppose, for example, that God offers you the following lotteries:
l1: 40% on a line of people at <1, 1, 1, 0, 1, 1, 1, 0 …>
      60% on zone of suffering, plus an infinite lizard (always at 1) on the side.
l2: 80% on <1, -2, 3, -4, 5 … >
      20% on zone of happiness, plus four infinite lizards (always at -6.2) on the side.
Which should you choose? Umm…
The classic thing to want here is some kind of “score” for each world, such that you can multiply this score by the probabilities at stake to get an expected value. But we’ll settle for principles that will just tell us how to choose between lotteries more generally.
Here I’ll look at a few candidates for principles like this. This isn’t an exhaustive survey; but my hope is that it can give a flavor for the challenge.
VII. Something about averages?
Could we say something about averages? Like <2, 2, 2, 2, …> is better than <1, 1, 1, 1, …>, right? So maybe we could base the value of an infinite world on something like the limit of (total welfare of the agents counted so far)/(number of agents counted so far). Thus, the 2s have a limiting average of 2; and the 1s, a limiting average of 1; etc.
This approach suffers from a myriad of problems. Here’s a sample:
- It’s always indifferent to helping finitely many agents, adding finitely many suffering agents to a world, etc, since this won’t change the limit of the average.
- It’s indifferent to many ways of helping infinitely many agents, like moving from <1, 2, 3, 4..> to <2, 3, 4, 5,…> (limiting average: ∞ in both cases).
- It breaks on cases like <1, -3, 5, -7…>, where the average utility keeps flipping back and forth between -1 and 1.
- It breaks on cases with infinitely good/bad lives (e.g. <∞, -∞, -∞, -∞, -∞, -∞ … > vs. <-∞, ∞, ∞, ∞, ∞, ∞…>).
- Naively, it implies average utilitarianism about finite worlds. But most people want to avoid this (average utilitarianism in finite contexts does things like creating suffering people, instead of a larger number of happy people who would together drag the average down more).
- It’s order-dependent. E.g., if I have infinite agents at 2, and infinite agents at -1, I’ll get a different average depending on whether I alternate 2s and -1s (limiting average: 1/2), add a 2 after every three -1s (limiting average: -1/4), and so on. Indeed, I can make the average swing wildly, both above and below zero, depending on the order (a quick numerical check is sketched just after this list).
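Here is that check (Python; purely illustrative, with a finite cutoff standing in for the limit):

```python
# Purely illustrative: the "limiting average" of one and the same population
# (infinitely many agents at +2, infinitely many at -1) depends on the ordering.
from itertools import islice, cycle

def running_average(ordering, n=100_000):
    """Average welfare of the first n agents under a given ordering."""
    return sum(islice(ordering, n)) / n

print(running_average(cycle((2, -1))))            # ~0.5:   alternate 2s and -1s
print(running_average(cycle((-1, -1, -1, 2))))    # ~-0.25: a 2 after every three -1s
```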
One solution to order-dependence is to appeal to the limit of the utility per unit space-time volume, as you expand outward from some (all?) points. I cover principles with this flavor below. For now I’ll just note that many of the other problems I just listed will persist.
VIII. New ways of representing infinite quantities?
Could we look for new ways of representing infinite quantities?
Bostrom (2011) suggests mapping infinite worlds (or more specifically: the sums of the utilities in an infinite sequence of value-bearing things) to “hyperreal numbers.” I won’t try to explain this proposal in full here (and I haven’t tried to understand it fully), but I’ll note one of the major problems: namely, that it’s sensitive to an arbitrary choice of “ultra-filter,” such that:
- <1, -2, 1, 1, -2, 1, 1 …> can be made better, worse, or equal to an empty world;
- an infinite sequence whose sum reaches every finite value infinitely many times (for example, a “random walk” sequence) can be made equivalent to any finite value;
- the world <2, 2, -2, 2, -2 …> is either twice or four times as good as a single dude at 1.
And once you’ve arbitrarily chosen your ultra-filter, Bostrom’s proposal is order-dependent as well. E.g., once you’ve decided that <1, -2, 1, 1, -2, 1, 1 …> is e.g. better than (or worse than, or equal to) an empty world, we can just re-arrange the terms to change your mind.
(Arntzenius also complains that Bostrom’s proposal gets him Dutch-booked. At a glance, though, this looks to me like an instance of the broader set of worries about “Satan’s Apple” type cases (see Arntzenius, Elga and Hawthorne (2004)), which I don’t feel very worried about.)
IX. Something about expanding regions of space-time?
Let’s turn to a more popular approach (e.g., an approach that has multiple adherents): one focused on the utility contained inside expanding bubbles of space-time.
Vallentyne and Kagan (1997) suggest that if we have two worlds with the same locations, and these locations have an “essential natural order,” we look at the differences between the utility contained in a “bounded uniform expansion” from any given location. In particular: if there is some positive number k such that, for any bounded uniform expansion, the utility inside the expansion eventually stays larger by more than k in world A than in world B, then world A is better.
Thus, for example, in a comparison of <1, 1, 1, 1, …> vs. <2, 2, 2, 2, …>, the utility inside any expansion is bigger in the 2 world. And similarly, in <1, 2, 3, 4 …> vs. <2, 3, 4, 5 …>, expansions in the latter will always be greater by at least 1.
“Essential natural order” is a bit tricky to define, but the key upshot, as I understand it, is that things like agents and person-moments don’t have it (agents can be listed by their height, by their passion for Voltaire, etc), but space-timey-stuff plausibly does (there is a well-defined notion of a “bounded region of space-time,” and we can make sense of the idea that in order to get from a to b, you have to “go through” c). Exactly what counts as a “uniform expansion” also gets a bit tricky (see Arntzenius (2014) for discussion), but one gets the broad vibe: e.g., if I’ve got a growing bubble of space-time, it should be growing at the same rate in all directions (some of the trickiness comes from comparing “directions,” I think).
A major problem for Vallentyne and Kagan (1997) is that their principle only provides an ordinal ranking. But Arntzenius suggests a modification that generalizes to choices amongst lotteries: instead of looking at the actual value at each location, look at the expected value. Thus, if you’re choosing between:
l3: 50% on <1, 1, 1, 1…>
      50% on <1, 2, 3, 4…>
l4: 50% on <-1, 0, -1, 0…>
      50% on <1, 4, 9, 16…>
Then you’d use the expected values at the locations to “make these lotteries into worlds.” E.g., l3 is equivalent to <1, 1.5, 2, 2.5 …>, and l4 is equivalent to <0, 2, 4, 8 …>; and the latter is better according to Vallentyne-Kagan, so Arntzenius says to choose it. Granted, this approach doesn’t give worlds cardinal scores to use in EV maximization; but hey, at least we can say something about lotteries.
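Here is a rough sketch of that move on l3 and l4 (Python; purely illustrative: I'm treating the worlds as one-sided lines of locations, and letting longer and longer initial segments stand in for "bounded uniform expansions"):

```python
# Purely illustrative, and one-dimensional: locations are 1, 2, 3, ..., and
# expansions are approximated by longer and longer initial segments of the line.
def l3_expected(loc):   # 50% on all 1s, 50% on 1, 2, 3, 4, ...
    return 0.5 * 1 + 0.5 * loc

def l4_expected(loc):   # 50% on -1, 0, -1, 0, ..., 50% on 1, 4, 9, 16, ...
    return 0.5 * (-1 if loc % 2 == 1 else 0) + 0.5 * loc ** 2

print([l3_expected(loc) for loc in range(1, 5)])  # [1.0, 1.5, 2.0, 2.5]
print([l4_expected(loc) for loc in range(1, 5)])  # [0.0, 2.0, 4.0, 8.0]

# Vallentyne-Kagan-style comparison of the two "expected-value worlds": does the
# utility inside ever-larger expansions eventually stay ahead, by more than some
# fixed margin, in the l4-world?
running_gap, gaps = 0.0, []
for loc in range(1, 10_001):
    running_gap += l4_expected(loc) - l3_expected(loc)
    gaps.append(running_gap)

print(gaps[:5])                        # [-1.0, -0.5, 1.5, 7.0, 16.0]
print(all(g > 1 for g in gaps[5:]))    # True: l4's expansions pull ahead for good, so pick l4
```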
The literature calls this broad approach “expansionism” (see also Wilkinson (2021) for similar themes). I’ll note two major problems with it: that it leads to results that are unattractively sensitive to the spatio-temporal distribution of value, and that it fails to rank tons of stuff.
Consider an infinite line of planets, each of which houses a Utopia, and none of which will ever interact with any of the others. On expansionism, it is extremely good to pull all these planets an inch closer together: so good, indeed, as to justify any finite addition of dystopias to the world (thanks to Amanda Askell, Hayden Wilkinson, and Ketan Ramakrishnan for discussion). After all, pulling on the planets so that there’s an extra Utopia every x inches will be enough for the eventual betterness of the uniform expansions to compensate for any finite number of hellscapes. But this looks pretty wrong to me. No one’s thanking you for pulling those planets closer together. In fact, no one noticed. But a lot of people are pissed about the whole “adding arbitrarily large (finite) numbers of hellscapes” thing: in particular, the people living there.
For closely related reasons, expansionism violates both Pareto over agents and Agent-neutrality. Consider the following example from Askell (2018), p. 83, in which three infinite sets of people (x-people, y-people, and z-people) live on an infinite sequence of islands, which are either “Balmy” (such that three out of four agents are happy) or “Blustery” (such that three out of four agents are sad). Happy agents are represented in black, and sad agents in white.
From Askell (2018), p. 83; reprinted with permission
Here, expansionism likes Balmy more than Blustery – and intuitively, we might agree. But Blustery is better for the y-people, and worse for no one: hence, goodbye Pareto. And there is a welfare-preserving bijection from Balmy to Blustery as well. So goodbye Agent-Neutrality, too. Can’t we at least have one?
The basic issue, here, is that expansionism’s moral focus is on space-time points (regions, whatever), rather than people, person-moments, and so on. In some cases (e.g. Balmy vs. Blustery), this actually does fit with our intuitions: we like it if the universe seems “dense” with value. But abstractly, it’s pretty alien; and when I reflect on questions like “how much do I want to pay to pull these planets closer together?”, the appeal from intuition starts to wane.
My other big issue with expansionism, at present, is that it fails to provide guidance in lots of cases. Some milder problems are sort of exotic and specific. Thus:
- Expansionism gives different verdicts on “zone of suffering/happiness” type cases, depending on whether the expansion in question grows faster than the “zone of x” does (see Askell (2018) p. 81).
- Expansionism fails to rank worlds where some spatio-temporal locations are infinitely far apart (see Bostrom (2011), p. 13). For example: < 2 , 2, 2, … (infinite distance) … 1 , 1, 1> vs. < 1, 1, 1, … (infinite distance) … 1, 2, 1>. Here, the former world is better at an infinite number of locations, and worse at only one, so it seems intuitively better: but the expansion that starts at the single 2 location in the second world is forever greater in the latter world.
- Expansionism has nothing to say about cases like <… 0, 0, 0, 0, 0, 0, 0…> vs. <… -1, -1, -1, 100, 1, 1, 1…>, since if you start your expansion suitably far into the -1 zone, its utility stays negative forever. That said, it’s not clear that our intuitions have much to say about this case, either.
These are all cases in which the worlds being compared have the exact same locations. I expect bigger problems, though, with worlds that aren’t like that. Consider, for example, the choice between creating a spatially-finite world with an immortal dude trudging from hell to heaven, whose days look like <…-2, -1, 0, 1, 2 …>, and a spatially-infinite universe that only lasts a day, with an infinite line of people whose days are <…-2, -1, 0, 1, 2 …>. How shall we match up the locations in these worlds? Depending on how we do it, we’ll get different expansionist verdicts. And we’ll hit even worse arbitrariness if we try to e.g. match up locations for worlds with different numbers of dimensions (e.g., pairing locations in a 2-d world with locations in a 4-d one), let alone worlds whose differences reflect the full range of logically-possible space-times.
Maybe you say: whatever, we’ll just go incomparable there. But note this incomparability infects our lotteries as well. Thus, for example, suppose that we get some space-times, A and B, that just can’t be matched up with each other in any reasonable and/or non-arbitrary way. And now suppose that I’m choosing between lotteries like:
l5: 99% on an A-world of -1s
      1% on a B-world of 2s.
l6: 99% on an A-world of 2s
      1% on a B-world of -1s.
The problem is that, because these worlds can’t be matched up, we can’t turn these lotteries into single worlds that we can compare within our expansionist paradigm. So even though it looks kind of plausible that we want l6 here, we can’t actually run the argument.
Maybe you say: Joe, this won’t happen often in practice (this is the vibe one gets from Arntzenius (2014) and Wilkinson (2021)). But I feel like: yes it will? We should already have non-zero credence on our living in different space-times that can’t be matched up, and it doesn’t matter how small the probability on the B-world is in the case above. What’s more, we should have non-zero credence that later, we’ll be able to create all sorts of crazy infinite baby-universes – including ones where their causal relationship to our universe doesn’t support a privileged mapping between their locations.
There are other possible expansionist-ish approaches to lotteries (see e.g. Wilkinson (2020)). But I expect them – and indeed, any approach that requires counterpart relations between spatio-temporal locations — to run into similar problems.
X. Weight people by simplicity?
Here’s an approach I’ve heard floating around amongst Bay Area folks, but which I can’t find written up anywhere (see here, though, for some similar vibes; and the literature on UDASSA for a closely-related anthropic view that I think some people use, perhaps together with updateless-ish decision theory, to reach similar conclusions). Let’s call it “simplicity weighted utilitarianism” (I’ve also heard “k-weighted,” for “Kolmogorov Complexity”). The basic idea, as I understand it, is to be a total utilitarian, but to weight locations in a world by how easily they can be specified by an arbitrarily-chosen Universal Turing Machine (see my post on the Universal Distribution for more on moves in this vicinity). The hope here is to do for people’s moral weight what UDASSA does for your prior over being a given person in an infinite world: namely, give an infinite set of people weights that sum to 1 (or less).
Thus, for example, suppose that I have an infinite line of rooms, each with numbers written in binary on the door, starting at 0. And let’s say we use simplicity-discounts that go in proportion to 1/(2^(number of bits for the door number + 1)). Room 0 gets a 1/4 weighting, room 1 gets 1/4, room 10 gets 1/8, room 11 gets 1/8, room 100 gets 1/16, and so on. (See here for more on this sort of set-up.) The hope here is that if you fill the rooms with e.g. infinite 1s, you still get a finite total (in this case, 1). So you’ve got a nice cardinal score for infinite worlds, and you’re not obsessing about them.
Except, you are anyway? After all, the utilities can grow as fast as or faster than the discounts shrink. Thus, if the pattern of utilities is just 2^(number of bits for the door number + 1), the discounted total is infinite (1+1+1+1…); and so, too, is it infinite in worlds where everyone has a million times the utility (1M + 1M + 1M…). Yet the second world seems better. Thus, we’ve lost Pareto (over whatever sort of location you like), and we’re back to obsessing about infinite worlds anyway, despite our discounts.
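Here is a toy version of both points (Python; purely illustrative: to keep the arithmetic trivial I weight room n by 2^-(n+1), which sums to 1 across rooms, as my own stand-in for the bit-length weighting above, and the phenomenon is the same):

```python
# Purely illustrative. Room n (n = 0, 1, 2, ...) is weighted by 2^-(n+1), a simple
# weighting that sums to 1 across all rooms; this stands in for the bit-length
# weighting described in the text, rather than implementing it exactly.
def weight(n):
    return 2.0 ** -(n + 1)

def discounted_total(utility_of_room, n_rooms=60):
    """Simplicity-weighted total over the first n_rooms rooms."""
    return sum(weight(n) * utility_of_room(n) for n in range(n_rooms))

print(discounted_total(lambda n: 1))                      # ~1.0: an all-1s world gets a finite score
print(discounted_total(lambda n: 2 ** (n + 1)))           # 60.0, and growing with n_rooms: unbounded
print(discounted_total(lambda n: 10**6 * 2 ** (n + 1)))   # also unbounded, though "a million times better"
```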
Maybe one wants to say: the utility at a given location isn’t allowed to take on any finite value (thanks to Paul Christiano for discussion). Sure, maybe agents can live for any finite length of time. But our UTM should be trying to specify momentary experiences (“observer-moments”) rather than e.g. lives, and experiences can’t take on just any finite level of pleasantness (or whatever you care about experiences having) – or perhaps, to the extent they can, they get correspondingly harder to specify.
Naively, though, this strikes me as a dodge (and one that the rest of the philosophical literature, which talks about worlds like <1, 2, 3…> all the time, doesn’t allow itself). It feels like denying the hypothetical, rather than handling it. And are we really so confident about how much of what can be fit inside an “experience”?
Regardless, though, this view has other problems as well. Notably: like expansionism, this approach will also pay lots to re-arrange people, pull them closer together, etc (for example, moving from a “one person every million rooms” world to a “one person every room” world). But worse than expansionism, it will do this even in finite worlds. Thus, for example, it cares a lot about moving the happy people in rooms 100-103 to rooms 0-3, even if only four people exist.
Indeed, it’s willing to create infinite suffering for the sake of this trade. Thus, a world where the first four rooms are at 1 is worth 1/4 + 1/4 + 1/8 + 1/8 = 3/4. But if we fill the rest of the rooms with an infinite line of -1, we only take a -1/4 hit. Indeed, on this view, just the first room at 1 offsets an infinity of suffering in rooms four and up.
Maybe you say: “Joe, my discounts aren’t going to be so steep.” But it’s not clear to me how to tell which discounts are at stake, for a given UTM. And anyway, regardless of your discounts, the same arguments will hold, but with a different quantitative gloss.
Looks bad to me.
XI. What’s the most bullet-biting hedonistic utilitarian response we can think of?
As a final sample from the space of possible views, let’s consider the view that seems to me most continuous with the spirit of hardcore, bullet-biting hedonistic utilitarianism. (I’m not aware of anyone who endorses the view I’ll lay out, but Bostrom (2011, p. 29)’s “Extended Decision Rule” is in a similar ballpark). This view doesn’t care about people, or space-time points, or densities of utility per unit volume, or Pareto, or whatever. All it cares about is the amount of pleasure vs. pain in the universe. Pursuant to this single-minded focus, it groups worlds into four types:
- Positive infinities. Worlds with infinite pleasure, and finite pain. Value: ∞.
- Negative infinities. Worlds with infinite pain, and finite pleasure. Value: –∞.
- Mixed infinities. Worlds with infinite pleasure and infinite pain. Value: worse than positive infinities, better than negative infinities, incomparable to each other and to finite worlds.
- Finite worlds. Worlds with finite pleasure and finite pain. Value: ~0, but ranked according to total utilitarianism. Worse than positive infinities, better than negative infinities, incomparable to mixed infinities.
This view’s decision procedure is just: maximize the probability of positive infinity minus the probability of negative infinity (call this quantity “the diff”). Maybe it allows finite worlds to serve as tie-breakers, but this doesn’t really come up in practice: in practice, it’s obsessed with maximizing the diff (see Bostrom (2011), p. 30-31). And it doesn’t have anything to say about comparisons between different mixed infinity worlds, or about trade-offs between mixed infinities and finite worlds.
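Here is a minimal sketch of the “maximize the diff” rule (Python; purely illustrative: the world labels and probabilities are made up, and the classification of worlds into the four types is taken as given):

```python
# Purely illustrative: lotteries are lists of (probability, world_type) pairs, where
# world_type is one of "+inf", "-inf", "mixed", "finite", assumed already classified.
def diff(lottery):
    """Probability of a positive infinity minus probability of a negative infinity."""
    return (sum(p for p, w in lottery if w == "+inf")
            - sum(p for p, w in lottery if w == "-inf"))

def choose(*lotteries):
    """Pick the lottery that maximizes the diff, ignoring everything else."""
    return max(lotteries, key=diff)

heaven_plus_speck = [(1.0, "mixed")]        # infinite bliss, plus infinitely many dust specks
lizard_gamble = [(1e-100, "+inf"),          # a sliver of a chance of the eternal lizard...
                 (1.0 - 1e-100, "mixed")]   # ...and otherwise Hell + Lollypop (also "mixed")

# Any sliver of probability on a positive infinity beats a sure "mixed" world:
print(choose(heaven_plus_speck, lizard_gamble) is lizard_gamble)  # True
```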
Alternatively, if we don’t like all this faff about incomparability (my model of a bullet-biting utilitarian doesn’t), we can set the value of all mixed infinity worlds to 0 (i.e., the positive and negative infinities “cancel out”). Then we’d have a nice ranking with positive infinity infinitely far on the top, finite worlds in between (with mixed infinities sitting at zero), and negative infinities infinitely far at the bottom.
Call this the “four types” view. To get a sense of this view’s verdicts, consider the following worlds:
- Heaven: infinite people living the best possible (painless) lives you can imagine, forever.
- Infinite Lizard: A single barely-conscious, slightly-happy lizard floating in space for eternity.
- Heaven+Speck: Infinite people living in bliss for eternity, but each gets a speck in their eye one time.
- Hell+Lollypop: Infinite people being tortured for eternity, but each gets to lick a lollypop one time.
- Infinite Speck: Infinite barely-conscious mice who pop into existence, feel a mildly-irritating dust-speck in their eye, then wink painlessly out of existence.
- Hell: Infinite people being tortured for eternity.
On the four types view:
- Heaven and Infinite Lizard are equally good; Infinite Speck and Hell are equally bad; and Heaven+Speck and Hell+Lollypop are either incomparable or equal (e.g. 0).
- Faced with a choice between Heaven + Speck, or a lottery with a one-in-a-graham’s-number chance of Infinite Lizard, and Hell+Lollypop otherwise, this view chooses the lottery.
- Faced with a choice between creating Heaven + Speck, or creating a finite world with arbitrarily many suffering people, the “mixed infinities and finite worlds are incomparable” version says that either choice is permissible.
- Faced with a choice between Heaven + Speck, or a finite world where one guy has a sandwich one time then dies, the “mixed infinities are zero” version goes for the sandwich (the “mixed infinites are incomparable” version shrugs).
- Given a chance to prevent the addition of infinite eternally suffering people to the last four worlds, or to add an infinity of eternally happy people to any of the first four, both versions shrug. Indeed, the “mixed infinities are 0” version would rather focus on a tiny probability of another bite of sandwich in a finite world; and the “incomparable” version says this priority is at least permissible.
We can see the four types view as continuous with a certain kind of “pleasure/pain-neutrality” principle. That is, if we assume that pleasure/pain come in units you can either “swap around” or render equivalent to each other (e.g., there is some amount of lizard time that outweighs a moment in heaven; some number of dust specks that outweigh a moment in hell, etc – a classic utilitarian thought), then in some sense you can build every positive infinity world (or the equivalent) by re-arranging Infinite Lizard, every negative infinity world by re-arranging Infinite Speck, and every type 3 world by re-arranging both in combination. It’s the same (quality-weighted) amount of pleasure and pain regardless, says this view, and amounts of pleasure and pain (as opposed to “densities,” or placements in different people’s lives, or whatever) were what utilitarianism was supposed to be all about.
There is, I think, a certain logic to it. But also: it’s horrifying. Trading a world where an infinite number of people have infinitely good lives, for a ~guarantee of a world where infinitely many people are eternally tortured, to get a one-in-a-graham’s-number chance of creating a single immortal, barely-conscious lizard? Fuuuuhck that. That’s way worse than paying to pull planets together, or not knowing what to say about worlds with non-matching space-times. It’s worse than the repugnant conclusion; worse than fanaticism; worse than … basically every bullet some philosopher has ever bitten? If this is where “bullet-biting utilitarianism” leads, it has entered a whole new phase of crazy. Just say no, people. Just say no.
But also: such a choice doesn’t really make sense on its own terms. Infinite Lizard is getting treated as lexically better than Heaven + Speck, because it’s possible to map all of Infinite Lizard’s barely-conscious happiness onto something equivalent to all the happiness in Heaven+Speck, with the negative infinity of the dust specks left over. But so, equally, is it possible to map all of Infinite Lizard’s barely-conscious happiness onto everyone’s first nano-seconds in heaven, to map those nano-seconds onto each of their dust specks in a way that would more than outweigh the dust-specks in finite contexts, and to leave everyone with an infinity of fully-conscious happiness left over. That is, the “Infinite Lizard Has All of Heaven’s Happiness” and “No Amount Of Time In Heaven Can Outweigh The Dust Specks” mappings aren’t, actually, privileged here: one can just as easily interpret Heaven + Speck as ridiculously better than Infinite Lizard (indeed, this is my default stance). But the four types view has fixated on these particular mappings anyway, and condemned an infinity of people to eternal torture for their sake.
(Alternatively, on yet a third version of the four-types view, we can try to take the arbitrariness of these mappings more seriously, and say that all mixed worlds are incomparable to everything, including positive and negative infinities. This avoids mandating trades from Heaven + Speck to Hell + Lollypop for a tiny chance of the lizard (such a choice is now merely “permissible”), but it also makes an even larger set of choices rationally permissible: for example, choosing Hell + Lollypop over pure Heaven. And it permits money-pumps that lead you from Heaven, to Hell + Lollypop, and then to Hell.)
XII. Bigger infinities and other exotica
OK, we’ve now touched on five possible approaches to infinite ethics: averages, hyperreals, expansionism, simplicity weightings, and the four types view. There are others in the literature, too (see e.g. Wilkinson (2020) and Easwaran (2021) – though I believe that both of these proposals require that the two worlds have exactly the same locations (maybe Wilkinson’s can be rejiggered to avoid this?) – and Jonsson and Voorneveld (2018), which I haven’t really looked at). I also want to note, though, ways in which the discussion of all of these has been focused on a very narrow range of cases.
In particular: we’ve only ever been talking about the smallest possible infinities – i.e., “countable infinities.” This is the size of the set of the natural numbers (and the rationals, and the odd numbers, and so on), and it makes it possible to do things like list all the locations in some order. But there is an unending hierarchy of larger infinities, too, create-able by taking power-sets over and over forever (see Cantor’s theorem). Indeed, according to this video, some people even want to posit a size of infinity inaccessible via power-setting – an infinity whose role, with respect to taking power-sets, is analogous to the role of countable infinities, with respect to counting (i.e., you never get there). And some go beyond that, too: the video also contains the following diagram (see also here), which starts with the “can’t get there via power-setting” infinity at the bottom (“inaccessible”), and goes from there (centrally, according to the video, by just adding axioms declaring that you can).
(From here.)
I’m not a mathematician (as I expect this post has already made clear in various places), but at a glance, this looks pretty wild. “Almost huge?” “Superhuge?” Also, not sure where this fits with respect to the diagram, but Cantor was apparently into the idea of the “Absolute Infinite,” which I think is supposed to be just straight up bigger than everything period, and which Cantor “linked to the idea of God.”
Now, relative to countably infinite worlds, it’s quite a bit harder to imagine worlds with e.g. one person for every real number. And imagining worlds with a “strongly Ramsey” number of people seems likely to be a total non-starter, even if one knew what “strongly Ramsey” meant, which I don’t. Still, it seems like the infinite fanatic should be freaking out (drooling?). After all, what’s the use obsessing about the smallest possible infinities? What happened to scope-sensitivity? Maybe you can’t imagine bigger-infinity worlds; maybe the stuff on that chart is totally confused – but remember that thing about non-zero credences? The lizards could be so much larger, man. We have to try for an n-huge lizard at least. And really (wasn’t it obvious the whole time?), we should be trying to create God. (A friend comments, something like: “God seems too comprehensible, here. N-huge lizards seem bigger.”)
More importantly, though: whether we’re obsessing about infinities or not, it seems very likely that trying to incorporate merely uncountable infinities (let alone “supercompact” ones, or whatever) into our lotteries is going to break whatever ethical principles we worked so hard to construct for the countably infinite case. In this sense, focusing purely on countable infinities seems like a recipe for the same kind of rude awakening that countable infinities give to finite ethics. Perhaps we should try early to get hip to the pattern.
And we can imagine other exotica breaking our theories as well. Thus, for example, very few theories are equipped to handle worlds with infinite value at a single “location.” And expansionism relies on all the worlds we’re considering having something like a “space-time” (or at least, a “natural ordering” of locations). But do space-timey worlds, or worlds with any natural orderings of “locations,” exhaust the worlds of moral concern? I’m not sure. Admittedly, I have a tough time imagining persons, experience-like things, or other valuable stuff existing without something akin to space-time; but I haven’t spent much time on the project, and I have non-zero credence that if I spent more, I’d come up with something.
XIII. Maybe infinities are just not a thing?
When we wake up brushed by panic in the dark
our pupils grope for the shape of things we know.
But now, perhaps, we feel the rug slipping out from under us too easily. Don’t we have non-zero credences on coming to think any old stupid crazy thing – i.e., that the universe is already a square circle, that you yourself are a strongly Ramsey lizard twisted in a million-dimensional toenail beyond all space and time, that consciousness is actually cheesy-bread, and that before you were born, you killed your own great-grandfather? So how about a lottery with a 50% chance of that, a 20% chance of the absolute infinite getting its favorite ice cream, and a 30% chance that probabilities need not add up to 100%? What percent of your net worth should you pay for such a lottery, vs. a guaranteed avocado sandwich? Must you learn to answer, lest your ethics break, both in theory and in practice?
One feels like: no. Indeed, one senses that a certain type of plot has been lost, and that we should look for less demanding standards for our lottery-choosing – ones that need not accommodate literally every wacked-out, probably-non-sensical possibility we haven’t thought of yet.
With this in mind, though, perhaps one is tempted to give a similar response to countable infinities as well. “Look, dude, just like my ethics doesn’t need to be able to handle ‘the universe is a square circle,’ it doesn’t need to be able to handle infinite worlds, either.”
But this dismissal seems too quick. Infinite worlds seem eminently possible. Indeed, we have very credible scientific theories that say that our actual universe contains a countably infinite number of people, credible decision theories that say that we can have infinite influence on that universe, widely-accepted religions that posit infinite rewards and punishments, and a possibly very intense future ahead of us where baby-universes/wormholes/hyper-computers etc appear much more credible, at least, than “consciousness = cheesy-bread.” What’s more, we have standard ethical theories that break quickly on encounter with readily-imaginable cases that we continue to have strong ethical intuitions about (e.g., Heaven + Speck vs. Hell + Lollypop). For these reasons, it seems to me that we have much more substantive need to deal with countable infinities in our ethics than we do with square-circle universes.
Still, my impression is that a relatively common response to infinite ethics is just: “maybe somehow infinities actually aren’t a thing? For example: they’re confusing, and they lead to weird paradoxes, like building the sun out of a pea (video), and messed up stuff with balls in boxes (video). Also: I don’t like some of these infinite ethics problems you’re talking about” (see here for some more considerations). And indeed, despite their role in e.g. cosmology (let alone the rest of math), some philosophers of math (e.g., “ultrafinitists”) deny the existence of infinities. Naively, this sort of position gets into trouble with claims like “there is a largest natural number” (a friend’s reaction: “what about that number plus one?”), but apparently there is ultra-finitist work trying to address this (something about “indefinitely large numbers”? hmm…).
My own take, though, is that resting the viability of your ethics on something like “infinities aren’t a thing” is a dicey game indeed, especially given that modern cosmology says that our actual concrete universe is very plausibly infinite. And as Bostrom (2011, p. 38) notes, conditioning on the non-thing-ness of infinities (or ignoring infinity-involving possibilities) leads to weird behavior in other contexts – e.g., refusing to fund scientific projects premised on infinity-involving hypotheses, insisting that the universe is actually finite even as more evidence comes in, etc. And more broadly, it just looks like denial. It looks like covering your ears and saying “la-la-la.”
XIV. The death of a utilitarian dream
I bite all the bullets.
— A friend of mine, pre-empting an objection to his utilitarianism.
The broad vibe I’m trying to convey, here, is that infinite ethics is a rough time. Even beyond “torturing any finite number of people for any probability of an infinite lizard,” we’ve got bad impossibility results even just for ordinal rankings; we’ve got a smattering of theories that are variously incomplete, order-dependent, Pareto-violating, and otherwise unattractive/horrifying; and we’ve got an infinite hierarchy of further infinities, waiting in the wings to break whatever theory we happen to settle on. It’s early days (there isn’t that much work on this topic, at least in analytic ethics), but things are looking bleak.
OK, but: why does this matter? I’ll mention a few reasons.
The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go.
Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground.
Indeed, even people who reject this dream can feel its allure. If you’re a deontologist, scrambling to add yet another epicycle to your already-complex and non-exhaustive principles, to handle yet another counter-example (e.g. the fat man lives in a heavy metal crate, such that his body itself won’t stop the trolley, but he’ll die if the crate moves), you might hear, sometimes, a still, small voice saying: “You know, the utilitarians don’t have this kind of problem. They’ve got a nice, simple, coherent theory that takes care of this case and a zillion others in one fell swoop, including all possible lotteries (something my deontologist friends barely ever talk about). And they always get more expected net pleasure in return. They sure have it easy…”[6] In this sense, “maximize expected net pleasure” can hover in the background as a kind of “default.” Maybe you don’t go for it. But it’s there, beckoning, and making a certain kind of sense. You could always fall back on it. Perhaps, indeed, you can feel it relentlessly pulling on you. Perhaps a part of you fears the force of its simplicity and coherence. Perhaps a part of you suspects that ultimately (horribly?), it’s the way to go.
But I think infinite ethics changes this picture. As I mentioned above: in the land of the infinite, the bullet-biting utilitarian train runs out of track. You have to get out and wander blindly. The issue isn’t that you’ve become fanatical about infinities: that’s a bullet, like the others, that you’re willing to bite. The issue is that once you’ve resolved to be 100% obsessed with infinities, you don’t know how to do it. Your old thing (e.g., “just sum up the pleasure vs. pain”) doesn’t make sense in infinite contexts, so your old trick – just biting whatever bullets your old thing says to bite – doesn’t work (or it leads to horrific bullets, like trading Heaven + Speck for Hell + Lollypop, plus a tiny chance of the lizard). And when you start trying to craft a new version of your old thing, you run headlong into Pareto-violations, incompleteness, order-dependence, spatio-temporal sensitivities, appeals to persons as fundamental units of concern, and the rest. In this sense, you start having problems you thought you transcended – problems like the problems the other people had. You start having to rebuild yourself on new and jankier foundations. You start writing whole papers about a few counterexamples, using principles that you know don’t cover all the choices you might need to make, even as you sense the presence of further problems and counterexamples just offscreen. Your world starts looking stranger, “patchier,” more complicated. You start to feel, for the first time, genuinely lost.
To be clear: I’m not saying that infinite ethics is hopeless. To the contrary, I think some theories are better than others (expansionism is probably my current favorite), and that further work on the topic is likely to lead to further clarity about the best overall response. My point is just that this response isn’t going to look like the simple, complete, neutrality-respecting, totalist, hedonistic, EV-maximizing utilitarianism that some hoped, back in the day, would answer every ethical question – and which it is possible to treat as a certain kind of “fallback” or “default.” Maybe the best view will look a lot like such a utilitarianism in finite contexts – or maybe it won’t. But regardless, a certain type of dream will have died. And the fact that it dies eventually should make it less appealing now.
XV. Everyone’s problem
That said, infinite ethics is a problem for everyone, not just utilitarians. Everyone (even the virtue ethicists) needs to know how to choose between Heaven + Speck vs. Hell + Lollypop, given the opportunity. Everyone needs decision procedures that can handle some probability of doing infinite things. Faced with impossibility results, everyone has to give something up. And sometimes that stuff you give up matters in finite contexts, too.
A salient example to me, here, is spatio-temporal neutrality. Utilitarian or no, most philosophers want to deny that a person’s location in space and time has intrinsic ethical significance. Indeed, claims in this vicinity play an important role in standard arguments against discounting the welfare of future people, and in support of “longtermism” more broadly (e.g., “location in time doesn’t matter, there could be a lot of people in the future, so the future matters a ton”). But notably, various prominent views in infinite ethics (notably, expansionist views; but also “simplicity-weightings”) reject spatio-temporal neutrality. On these views, locations in space and time matter a lot – enough, indeed, to make e.g. pulling infinite happy planets an inch closer together worth any finite amount of additional suffering. On its own, this isn’t enough to get conclusions like “people matter more if they’re nearer to me in space and time” (the thing that longtermism most needs to reject) – but it’s an interesting departure from “location in spacetime is nothing to me,” and one that, if accepted, might make us question other neutrality-flavored intuitions as well.
And the logic that leads to non-neutrality about space-time is understandable. In particular: infinite worlds look and behave very differently depending on how you order their “value-bearing locations,” so if your view focuses on a type of location that lacks a natural order (e.g., agents, experiences, etc), it often ends up indeterminate, incomplete, and/or in violation of Pareto for the locations in question. Space-time, by contrast, comes with a natural order, so focusing on it cuts down on arbitrariness, and gives us more structure to work with.
Something somewhat analogous happens, I think, with “persons” vs. “experiences” as units of concern. Some people (especially, in my experience, utilitarian-types) are tempted, in finite contexts, to treat experiences (or “person-moments”) as more fundamental, since persons can give rise to various Parfitian problems. But in infinite contexts, refusing to talk about persons makes it much harder to do things like distinguish between worlds like Heaven + Speck vs. Hell + Lollypop, where our intuition is centrally driven, I think, by thoughts like “In Heaven + Speck, everyone’s life is infinitely good; in Hell + Lollypop, everyone’s life is infinitely bad.” So it becomes tempting to bring persons back into the picture (see Askell (2018), p. 198, for more on this).
We can see the outlines of a broader pattern. Finite ethics (or at least, a certain reductionist kind) often tries to ignore structure. It calls more and more things (e.g., the location of people in space-time, the locations of experiences in lives) irrelevant, so that it can hone in on the true, fundamental unit of ethical concern. But infinite ethics needs structure, or else everything dissolves into re-arrangeable nonsense. So it often starts adding back in what finite ethics threw out. One is left with a sense that perhaps, there is even more structure to be not-ignored. Perhaps, indeed, the game of deriving the value of the whole from the value of some privileged type of part is worse than one might’ve thought (see Chappel (2011) for some considerations, h/t Carl Shulman). Perhaps the whole is primary.
These are a few examples of finite-ethical impulses that infinities put pressure on. I expect there to be many others. Indeed, I think it’s good practice, in finite ethics, to make a habit of checking whether a given proposal breaks immediately upon encounter with the infinite. That doesn’t necessarily mean you need to throw it out. But it’s a clue about its scope and fundamentality.
XVI. Nihilism and responsibility
Vain are the thousand creeds
That move men’s hearts: unutterably vain…
Perhaps one looks at infinite ethics and says: this is an argument for nihilism. In particular: perhaps one was up for some sort of meta-ethical realism, if the objectively true ethics was going to have certain properties that infinite ethics threatens to deny – properties like making a certain sort of intuitively resonant sense. Perhaps, indeed, one had (consciously or unconsciously) tied one’s meta-ethical realism to the viability of a certain specific normative ethical theory – for example, total hedonistic utilitarianism – which seemed sufficiently simple, natural, and coherent that you could (just barely) believe that it was written into the fabric of an otherwise inhuman universe. And perhaps that theory breaks on the rocks of the infinite.
Or perhaps, more generally, infinite ethics reminds us too hard of our cognitive limitations; of the ways in which our everyday morality, for all its pretension to objectivity, emerges from the needs and social dynamics of fleshy creatures on a finite planet; of how few possibilities we are in the habit of actually considering; of how big and strange the world can be. And perhaps this leaves us, if not with nihilism, then with some vague sense of confusion and despair (or perhaps, more concretely, it makes us think we’d have to learn more math to dig into this stuff properly, and we don’t like math).
I don’t think there’s a clean argument from “infinite ethics breaks lots of stuff I like” to “meta-ethical realism is false,” or to some vaguer sense that Cosmos of value hath been reduced to Chaos. But I feel some sympathy for the vibe.
I was already pretty off-board with meta-ethical realism, though (see here and here). And for anti-realists, despairing or giving up in the face of the infinite is less of an option. Anti-realists, after all, are much less prone to nihilism: they were never aiming to approximate, in their action, some ethereal standard that might or might not exist, and which infinities could refute. Rather, anti-realists (or at least, my favored variety) were always choosing how to respond to the world as it is (or might be), and they were turning to ethics centrally as a means of becoming more intentional, clear-eyed, and coherent in their choice-making. That project persists in its urgency, whatever the unboundedness of the world, and of our influence on it. We still need to take responsibility for what we do, and for what it creates. We still harm, or help – only, on larger scales. If we act incoherently, we still step on our own feet, burning what we care about for nothing – only, this time, the losses can be infinite. Perhaps coherence is harder to ensure. But the stakes are higher, too.
The realists might object: for the anti-realist, “we need to take responsibility for how we respond to infinite worlds” is too strong. And fair enough: at the deepest level, the anti-realist doesn’t “need” or “have” to do anything. We can ignore infinities if we want, in the same sense that we can let our muscles go limp, or stay home on election day. What we lose, when we do this, is simply the ability to intentionally steer the world, including the infinite world, in the directions we care about – and we do, I think, care about some infinite things, whatever the challenges this poses. That is: if, in response to the infinite, we simply shrug, or tune out, or wail that all is lost, then we become “passive” about infinite stuff. And to be passive with respect to X is just: to let what happens with X be determined by some set of factors other than our agency. Maybe that’ll work out fine with infinities; but maybe, actually, it won’t. Maybe, if we thought about it more, we’d see that infinities are actually, from our perspective, quite a big deal indeed – a sufficiently big deal that “whatever, this is hard, I’ll ignore it” no longer looks so appealing.
I’m hoping to write more about this distinction between “agency” and “passivity” at some point (see here for some vaguely similar themes). For now I’ll mostly leave it as a gesture. I want to add, though, that given how far away we are (in my opinion) from a satisfying and coherent theory of infinite ethics, I expect that a good amount of the agency we aim at the infinite will remain, for some time, pretty weak-sauce in terms of “steering stuff in consistent directions I’d endorse if I thought about it more.” That is, while I don’t think that we should give up on approaching infinities with intentional agency, I think we should acknowledge that for a while, we’re probably going to suck at it.
XVII. Infinities in practice
If we can think
this far, might not our eyes adjust to the dark?

What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: “This life as you now live it and have lived it, you will have to live once more and innumerable times more; and there will be nothing new in it … even this spider and this moonlight between the trees, and even this moment and I myself. The eternal hourglass of existence is turned upside down again and again, and you with it, speck of dust!”
Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: “You are a god and never have I heard anything more divine.”
Heaven lies about us in our infancy!
I’ll close with a few thoughts on practical implications.
Perhaps we suck at infinite ethics now, both in theory and in practice. Someday, though, we might get better. In particular: if humanity can survive long enough to grow profoundly in wisdom and power, we will be able to understand the ethics here fully – or at least, much more deeply. We’ll also know much more about what sort of infinite things we are able to do, and we’ll be much better able to execute on infinite projects we deem worthwhile (building hyper-computers, creating baby-universes, etc). Or, to the extent we were always doing infinite things (for example, acausally), we’ll be wiser, more skillful, and more empowered on that front, too.
And to be clear: I don’t think that understanding the ethics, here, is going to look like “patching a few counterexamples to expansionism” or “figuring out how to deal with lotteries involving incomparable outcomes.” I’m imagining something closer to: “understanding ~all the math you might ever need, including everything related to all the infinities on the completed version of that crazy chart above; solving all of cosmology, physics, metaphysics, epistemology, and so on, too; probably reconceptualizing everything in fundamentally new and more sophisticated terms — terms that creatures at our current level of cognitive capacity can’t grok; then building up a comprehensive ethics and decision theory (assuming those terms still make sense), informed by this understanding, and encompassing of all the infinities that this understanding makes relevant.” It may well make sense to get started on this project now (or it might not); but we’re not, as it were, a few papers away.
I don’t, though, expect the output of such a completed understanding to be something like: “eh, infinities are tricky, we decided to ignore them,” which as far as I can tell is our current default. To the contrary, I can readily imagine future people being horrified at the casual-ness of our orientation towards the possibility of infinite benefits and harms. “They knew that an infinite number of people is more than any finite number, right? Did they even stop to think about it?” This isn’t to say that future people will be fanatical about infinities (as I noted above, I expect that the right thing to say about fanaticism will emerge even just from considering the finite case). But the argument for taking infinite benefits and harms very seriously isn’t especially complex. It’s the type of thing you can imagine future people being pretty adamant about.
On the other hand, if someone comes to me now and says: “I’m doing X crazy-sounding thing (e.g., quitting my bio-risk job to help break us out of the simulation; converting to Catholicism because it seemed to me slightly more likely than all the other religions; following up on that one drug experience with those infinite spaghetti elves), because of something about infinite ethics,” I’m definitely feeling nervous and bad. As ever with the wackier stuff on this blog (and indeed, even with the less-wacky stuff), my default attitude is: OK (though not risk-free) to incorporate into your worldview in grounded and suitably humble ways; bad to do brittle and stupid stuff for the sake of. I trust a wise and empowered humanity to handle the wacky stuff well (or at least, much better). I trust present-day humans who’ve thought about it for a few hours/weeks/years (including myself) much less. So as a first pass, I think that what it looks like, now, to take infinite ethics seriously is: to help our species make it to a wise and empowered future, and to let our successors take it from there.
That said, I do think that reflection on infinite ethics can (very hazily) inform our backdrop sense of how strange and different a wise future’s priorities might be. In particular: of the options I’ve considered (and setting aside simulation shenanigans), to my mind the most plausible way of doing infinitely good stuff is via exerting optimally wise acausal influence on an infinitely large cosmology. That is, my current attitude towards things like baby-universes and hyper-computers is something like: “hard to totally rule out.” (And I’d say the same thing, in a more skeptical tone, about various religions.) But I’m told that my attitude towards infinitely large cosmologies should be somewhere between: “plausible” and “probably,” and my current attitude towards some sort of acausal decision theory is something like: “best guess view.” So this leaves me, already, with very macroscopic credences on all of my actions exerting infinite amounts of (acausal) influence. It’s hard to really absorb — and I haven’t, partly because I haven’t actually looked into the relevant cosmology. But if I had to guess about where the attention of future infinity-oriented ethical projects would turn, I’d start with this type of thing, rather than with hypercomputers, or Catholicism.
Does this sort of infinite influence, maybe, just add up to normality? Maybe, for example, we use some sort of expansionism to say that you should just make your local environment as good as possible, thereby acausally making an infinite number of other places in the universe better too, thereby improving the whole thing by expansionist lights? If so, then maybe we can just live our finite lives as usual, but in an infinite number of places at once? Our lives would simply carry, on this view, the weight of Nietzsche’s eternal return – only spread out across space-time, rather than in an endless loop. We’d have a chance to confront a version of Nietzsche’s demon in the real world – to find out if we rejoice, or if we gnash our teeth.
I do think we’d confront this demon in some form. But I’m skeptical it would leave our substantive priorities untouched (and anyway, we’d need to settle on a theory of infinite ethics to get this result). In particular, I expect this sort of “acausal influence across the universe” perspective to expand beyond very close copies of you, to include acausal interaction with other inhabitants of the universe (including, perhaps, ones very different from you) whose decisions are nevertheless correlated with yours (see e.g. Oesterheld (2017) for some discussion). And naively, I expect this sort of interaction to get pretty weird.
Even beyond this particular form of weirdness, though, I think visions of future civilizations that put substantive weight on infinity-focused projects are just different in flavor from the ones that emerge from naively extrapolating your favorite finite-ethical views (though even with infinities to the side, I expect such extrapolations to mislead). Thus, for example, total utilitarian types often think that the main game for a wise future is going to be “tiling the accessible universe” with some kind of intrinsically optimal value-structure (e.g., paperclips; oh wait, no…), the marginal value of which stays constant no matter how much you’ve already got. So this sort of view sees e.g. a one-in-a-billion chance of controlling a billion galaxies as equivalent in expected value to a guarantee of one galaxy. But even as infinities cause theoretical problems for total utilitarianism, they also complicate this sort of voracious appetite for resources: relative to “hedonium per unit galaxy,” it is less clear that the success and value of infinity-oriented projects scales linearly with the resources involved (h/t Nick Beckstead for suggesting this consideration) – though obviously, resources are still useful for tons of things (including, e.g., building hypercomputers, acausal bargaining with the aliens – you know, the usual).
All in all, I currently think of infinite ethics as a lesson in humility: humility about how far standard ethical theory extends; humility about what priorities a wise future might bring; humility about just how big the world (both the abstract world, and the concrete world) can be, and how little we might have seen or understood. We need not be pious about such humility. Nor need we preserve or sanctify the ignorance it reflects: to the contrary, we should strive to see further, and more clearly. Still, the puzzles and problems of the infinite can be evidence about brittleness, dogmatism, over-confidence, myopia. If infinities break our ethics, we should pause, and notice our confusion, rather than pushing it under the rug. Confusion, as ever, is a clue.
[1] From Sean Carroll (13:01 here): “Yeah, I’ll just say very quickly, I think that, just so everyone knows, this is an open question in cosmology. … The possibility’s on the table, the universe is infinite, there’s an infinite number of observers of all different kinds, and there’s a possibility on the table that the universe is finite, and there’s not that many observers, we just don’t know right now.”
Bostrom (2011): “Recent cosmological evidence suggests that the world is probably infinite. [continued in footnote] In the standard Big Bang model, assuming the simplest topology (i.e., that space is singly connected), there are three basic possibilities: the universe can be open, flat, or closed. Current data suggests a flat or open universe, although the final verdict is pending. If the universe is either open or flat, then it is spatially infinite at every point in time and the model entails that it contains an infinite number of galaxies, stars, and planets. There exists a common misconception which confuses the universe with the (finite) “observable universe”. But the observable part—the part that could causally affect us— would be just an infinitesimal fraction of the whole. Statements about the “mass of the universe” or the “number of protons in the universe” generally refer to the content of this observable part; see e.g. [1]. Many cosmologists believe that our universe is just one in an infinite ensemble of universes (a multiverse), and this adds to the probability that the world is canonically infinite; for a popular review, see [2].”
Wilkinson (2021): “you might be disappointed to find that the world around you is infinite in the relevant sense. I am sorry to disappoint you, but contemporary physics suggests just that. The widely accepted flat-lambda model predicts that our universe will tend towards a stable state and will then remain in that state for infinite duration (Wald 1983; Carroll 2017). Also widely accepted, the inflationary view posits that our world is spatially infinite, containing infinitely many other ‘bubble’ universes beyond our cosmic horizon (Guth 2007). But that’s not all they predict. Take any small-scale phenomenon which is morally valuable e.g., perhaps a human brain experiencing the thrill of reading philosophy for a given duration. Each of the above physical views predicts that our universe, in its infinite volume, will contain infinitely many such thrills (Garriga and Vilenkin 2001; Linde 2007; de Simone 2010; Carroll 2017).”
[2] I’m ignoring situations where e.g. if I eat a sandwich today, then this changes what happens to an infinite number of Boltzmann brains later, but in a manner I can’t ever predict. That said, this sort of scenario does raise problems: see e.g. Wilkinson (2021) for some discussion.
[3] See also Dyson (1979, pp. 455-456) for more on possibilities for infinite computation.
[4] See MacAskill: “It’s not the size of the bucket that matters, but the size of the drop” (p. 25).
[5] This image is partly inspired by Ajeya Cotra’s discussion of the “crazy train” here.
[6] An example from an unpublished paper by Ketan Ramakrishnan: “If this is correct, some other account of suboptimal supererogatory harming is called for. But I have been unable to figure out how such an account would work. And our exhausting casuistical gymnastics suggest that, whatever the best such account turns out to be, its mechanics are likely to prove extremely intricate. Perhaps a satisfying account will eventually be found, of course. But an alternative diagnosis of our predicament is also available. The foundational elements of ordinary, deontological moral thought – stringent duties against harming and using other people without their consent, wide prerogatives to refrain from harming ourselves in order to aid other people – are highly compelling on first inspection. But they prove, on closer view, to be composed of byzantine causal structures whose moral significance is open to serious doubt. Our present difficulties may thus be symptomatic of wider instabilities in the deontological architecture. Perhaps we should renounce any moral view that is built on such intricate causal structures. Perhaps we should just accept, with consequentialists, that ‘well-being comes first. The weal and woe of human beings comes first.’”
djbinder @ 2022-01-31T10:43 (+47)
I think Section XIII is too dismissive of the view that infinities are not "real", conflating it with ultrafinitism. But the sophisticated version of this view is that infinities should only be treated as "idealized limits" of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists. If you stray from it, and use infinities without specifying the limiting process, it is very easy to produce paradoxes, or at least, indeterminacy in the problem. The sophisticated view, then, is not that infinities don't exist, but that, since they only exist as limiting cases of finite processes, one must always specify the limiting process; in doing so, any paradoxes or indeterminacies will disappear.
As Jaynes summarizes in Chapter 15 of Probability Theory: The Logic of Science:
[P]aradoxes caused by careless dealing with infinite sets or limits can be mass-produced by the following simple procedure:
(1) Start from a mathematically well-defined situation, such as a finite set, a normalized probability distribution, or a convergent integral, where everything is well-behaved and there is no question about what is the correct solution.
(2) Pass to a limit – infinite magnitude, infinite set, zero measure, improper pdf, or some other kind – without specifying how the limit is approached.
(3) Ask a question whose answer depends on how the limit was approached.
weeatquince @ 2022-02-02T00:08 (+14)
Agree with djbinder on this, that "infinities should only be treated as 'idealized limits' of finite processes".
To explain what I mean:
Infinities outside of limiting sequences are not well defined (at least that is how I would describe it). Sure you can do some funky set theory maths on them, but from the point of view of physics they don't work and cannot be used.
(My favorite example (HT Jacob Hilton): a man throws tennis balls into a room once every second, numbered 1, 2, 3, 4, ..., and you throw them out once every 2 seconds – how many balls are in the room after infinite time, and which balls are they? Well, if you throw out the even balls (2, 4, 6, ...) then 1, 3, 5, ... are left, but if you throw out the balls in the order they were thrown in (1, 2, 3, 4, ...) then after infinite time no balls are left. The lesson: "infinite" is not a precise enough term to answer the question.)
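A minimal simulation of the finite stages makes the order-dependence concrete; the two removal rules below are the ones from the example, and the rest of the encoding is just an illustrative sketch of the story:

```python
# Finite stages of the tennis-ball story: every 2 seconds, two balls are
# thrown in and one ball is thrown out. Which balls remain "after infinite
# time" depends entirely on the removal rule, even though the number of
# balls in the room is the same at every stage under both rules.

def survivors(n_stages, removal_rule):
    """Balls left in the room after n_stages two-second stages."""
    room = []
    next_ball = 1
    for _ in range(n_stages):
        room.extend([next_ball, next_ball + 1])  # two balls thrown in
        next_ball += 2
        room.remove(removal_rule(room))          # one ball thrown out
    return room

def remove_even(room):
    return min(b for b in room if b % 2 == 0)    # throw out 2, then 4, then 6, ...

def remove_in_order(room):
    return min(room)                             # throw out 1, then 2, then 3, ...

for n in (10, 100, 1000):
    keep_odds = survivors(n, remove_even)
    in_order = survivors(n, remove_in_order)
    print(f"after {n} stages: 'evens out' keeps the odd balls up to {max(keep_odds)}; "
          f"'in order' keeps balls {min(in_order)}..{max(in_order)}")

# Under "evens out", every odd ball stays forever: the limiting room holds all
# the odd-numbered balls. Under "in order", the lowest surviving ball number
# grows without bound, so every particular ball eventually leaves: the limiting
# room is empty. Same counts at every finite stage, different limits.
```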
As far as I understand it, from a philosophy of science point of view doing physics is something like writing the simplest set of mathematical formulas to describe the universe. These formulas need to work. So you will never find a physicist that uses infinities in the way this post does. If physics is ever able to succeed at mapping the universe it will have to do it without using infinite sets, except where they can be well defined as limits (unless there is drastic change to what physics is).
As such, doing infinite ethics of the type done in this post makes as much sense as doing any other thought experiment that is poorly defined by physics (see the example in my other reply about what if time travel paradoxes are true).
Of course there could still be infinities in limits. E.g. one happy person a day, forever (as Joe flags in his comment). But hopefully they are better defined and may avoid some of the problems of the post above (certainly it breaks the zones of happiness/suffering thought experiment). I am not sure.
MichaelStJules @ 2022-02-01T00:43 (+10)
But the sophisticated version of this view is that infinities should only be treated as "idealized limits" of finite processes. This is, as far as understand, the default view amongst practicing mathematicians and physicists.
I think this is false in general (at least for mathematicians), but true for many specific applications. Mathematicians frequently deal with infinite sets, and they don't usually treat them like limits of finite processes, especially if they're uncountable.
How would you handle the possibility of a spatially unbounded universe, e.g. if our space looks like ℝ³?
Expansionism is an approach that basically treats the universe as a limit of bounded universes to add up ethical value, since you take limits of partial sums expanding out from a point. But it still runs into problems, as discussed in the post.
djbinder @ 2022-02-01T08:58 (+7)
I think you are right about infinite sets (most of the mathematicians I've talked to have had distinctly negative views about set theory, in part due to the infinities, but my guess is that such views are more common amongst those working on physics-adjacent areas of research). I was thinking about infinities in analysis (such as continuous functions, summing infinite series, integration, differentiation, and so on), which bottom out in some sort of limiting process.
On the spatially unbounded universe example, this seems rather analogous to me to the question of how to integrate functions over the same space. There are a number of different sets of functions which are integrable over ℝ³, and even for some functions which are not integrable over ℝ³ there are natural regularization schemes which allow the integral to be defined. In some cases these regularizations may even allow a notion of comparing different "infinities": in cases where the integrals diverge as the regularizer is taken to zero, one integral may strictly dominate the other. When dealing with situations in ethics, perhaps we should always be restricting to these cases? There are a lot of different choices here, and it isn't clear to me what the correct restriction is, but it seems plausible to me that some form of restriction is needed. Note that such restrictions include ultrafinitism, as an extreme case, but in general allow a much richer set of possibilities.
Expansionism is necessarily incomplete: it assumes that the world has a specific causal structure (i.e., one that is locally that of special relativity), which is an empirical observation about our universe rather than a logically necessary fact. I think it is plausible that, given the right causal assumptions, expansionism follows (at least for individual observers making decisions that respect causality).
RyanCarey @ 2022-02-01T12:45 (+11)
So you're saying a utilitarian needs both a utility function, and a measure with which to integrate over any sets of interest (OK)? And also some transformations to regularise infinite sets (giving up the dream of impartiality)? And still there are some that cannot be regularised, so utilitarian ethics can't order them (but isn't that the problem we were trying to solve)?
djbinder @ 2022-02-01T13:37 (+8)
I agree with your first question: the utilitarian needs a measure (they don't need a separate utility function from their measure, but there may be other natural measures to consider, in which case you do need a utility function).
With respect to your second question, I think you can either give up on the infinite cases (because you think they are "metaphysically" impossible, perhaps) or you can demand that a regularization must exist (because without one the problem is "metaphysically" underspecified). I'm not sure what the correct approach is here, and I think it is an interesting question to try and understand this in more detail. In the latter case you have to give up impartiality, but only in a fairly benign way, and our intuitions about impartiality are probably wrong here (analogous situations occur in physics with charge conservation, as I noted in another comment).
With respect to your third question, I think it is likely that problems with no regularization are non-sensical. This is not to say that all problems involving infinities are themselves non-sense, nor to say that correct choice of regularization is obvious.
As an intuition pump maybe we can consider cases that don't involve infinities. Say we are in a (rather contrived) world in which utility is literally a function of space-time, and we integrate to get the total utility. How should I assign utility for a function which has support on a non-measurable set? Should I even think such a thing is possible? After all, the existence of non-measurable sets follows not from ZF alone, but requires also the axiom of choice. As another example, maybe my utility function depends on whether or not the continuum hypothesis is true or false. How should I act in this case?
My own guess is that such questions likely have no meaningful answer, and I think the same is true for questions involving infinities without specified ways to operationalize the infinities. I think it would be odd to give up on the utilitarian dream due to unmeasurable sets, and that the same is true for ill-defined infinities.
Joe_Carlsmith @ 2022-01-31T18:36 (+9)
A few questions about this:
- Does this view imply that it is actually not possible to have a world where e.g. a machine creates one immortal happy person per day, forever, who then form an ever-growing line?
- How does this view interpret cosmological hypotheses on which the universe is infinite? Is the claim that actually, on those hypotheses, the universe is finite after all?
- It seems like lots of the (countable) worlds and cases discussed in the post can simply be reframed as never-ending processes, no? And then similar (identical?) questions will arise? Thus, for example, w5 is equivalent to a machine that creates a1 at -1, then a3 at -1, then a5 at -1, etc. w6 is equivalent to a machine that creates a1 at -1, then a2 at -1, a3 at -1, etc. What would this view say about which of these machines we should create, given the opportunity? How should we compare these to a w8 machine that creates b1 at -1, b2 at -1, b3 at -1, b4 at -1, etc?
Re: the Jaynes quote: I'm not sure I've understood the full picture here, but in general, to me it doesn't feel like the central issues here have to do with dependencies on "how the limit is approached," such that requiring that each scenario pin down an "order" solves the problems. For example, I think that a lot of what seems strange about Neutrality-violations in these cases is that even if we pin down an order for each case, the fact that you can re-arrange one into the other makes it seem like they ought to be ethically equivalent. Maybe we deny that, and maybe we do so for reasons related to what you're talking about - but it seems like the same bullet.
weeatquince @ 2022-02-02T00:31 (+9)
My take (think I am less of an expert than djbinder here)
- This view allows that.
- This view allows that. (Although entirely separately consideration of entropy etc would not allow infinite value.)
- No, I don’t think identical questions arise. Not sure. Skimming the above post, this view seems to solve most of the problematic examples you give. At any point a moral agent will exist in a universe with finite space and finite time that will tend to infinity going forward. So you cannot have infinite starting points, so no zones of suffering, etc. Also I think you don’t get problems with "welfare-preserving bijections" when things are well defined in time, but I struggle to explain why. It seems that, for example, w1 below is less bad than w2:
| Time  | t1 | t2 | t3 | t4 | t5 | t6 | t7 | … |
|-------|----|----|----|----|----|----|----|---|
| Agent | a1 | a2 | a3 | a4 | a5 | a6 | a7 | … |
| w1    | -1 |    | -1 |    | -1 |    | -1 | … |
| w2    | -1 | -1 | -1 | -1 | -1 | -1 | -1 | … |
djbinder @ 2022-01-31T19:41 (+6)
I think what is true is probably something like "neverending processes don't exist, but arbitrarily long ones do", but I'm not confident. My more general claim is that there can be intermediate positions between ultrafinitism ("there is a biggest number") and a laissez-faire "anything goes" attitude, where infinities appear without care or scrutiny. I would furthermore claim (but on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in here.
As to the infinite series examples you give, they are mathematically ill-defined without giving a regularization. There is a large literature in mathematics and physics on the question of regularizing infinite series. Regularization and renormalization are used throughout physics (particularly in QFT), and while poorly written textbooks (particularly older ones) can make this appear like voodoo magic, the correct answers can always be rigorously obtained by making everything finite.
For the situation you are considering, a natural regularization would be to replace your sum with a regularized sum where you discount each time step by some discounting factor γ < 1. Physically speaking, this is what would happen if we thought the universe had some chance of being destroyed at each timestep; that is, it can be arbitrarily long-lived, yet with probability 1 is finite. You can sum the series and then take the limit γ → 1 and thus derive a finite answer.
There may be many other ways to regulate the series, and it often turns out that how you regulate the series doesn't matter. In this way, it might make sense to talk about this infinite universe without reference to a specific limiting process, but rather potentially with only some weaker limiting process specification. This is what happens, for instance, in QFT; the regularizations don't matter, all we care about are the things that are independent of regularization, and so we tend to think of the theories as existing without a need for regularization. However, when doing calculations it is often wise to use a specific (if arbitrary) regularization, because it guarantees that you will get the right answer. Without regularizations it is very easy to make mistakes.
This is all a very long-winded way to say that there are at least two intermediate views one could have about these infinite sequence examples, between the "ultrafinitist" and the "anything goes":
- The real world (or your priors) demands some definitive regularization, which determines the right answer. This would be the case if the real world had some probability of being destroyed, even if it is arbitrarily small.
- Maybe infinite situations like the one you described are allowed, but require some "equivalence class of regularizations" to be specified in order to be completely specified. Otherwise the answer is as indeterminate as if you'd given me the situation without specifying the numbers. I think this view is a little weirder, but also the one that seems to be adopted in practice by physicists.
djbinder @ 2022-01-31T19:49 (+5)
As an aside, while neutrality-violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor γ < 1 so that everything remains finite, it is easy to see that "small rearrangements" (where the amount that a person can move in time is finite) do not change the answer, because the difference goes to zero as γ → 1. But "big rearrangements" can cause differences that grow as γ → 1. Such situations do arise in various physical situations, and are interpreted as changes to boundary conditions, whereas the "small rearrangements" manifestly preserve boundary conditions and manifestly do not cause problems with the limit. (The boundary is most easily seen by mapping the infinite interval sequence onto a compact interval, so that "infinity" is mapped to a finite point. "Small rearrangements" leave infinity unchanged, whereas "large" ones will cause a flow of utility across infinity, which is how the two situations are able to give different answers.)
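A small numeric sketch of this discounting regularization may help; the particular utility streams below are illustrative choices, not anything specified in the post or the thread:

```python
# Regularize an infinite utility stream by weighting the utility at timestep t
# by gamma**t (gamma < 1), then study gamma -> 1. The streams here are
# illustrative: a "small rearrangement" swaps two nearby people, while a "big
# rearrangement" reorders the same multiset of utilities unboundedly far.

def discounted_sum(utility, gamma, horizon=100_000):
    """Approximate the sum over t of gamma**t * utility(t), truncated at `horizon`."""
    return sum(utility(t) * gamma ** t for t in range(horizon))

def base(t):            # an arbitrary utility stream
    return float(t % 5)

def small_swap(t):      # the same stream with the people at t=0 and t=1 swapped
    return {0: base(1), 1: base(0)}.get(t, base(t))

def alternating(t):     # +1, -1, +1, -1, ...
    return 1.0 if t % 2 == 0 else -1.0

def front_loaded(t):    # the same utilities reordered: +1, +1, -1, +1, +1, -1, ...
    return 1.0 if t % 3 != 2 else -1.0

for gamma in (0.9, 0.99, 0.999):
    small = discounted_sum(small_swap, gamma) - discounted_sum(base, gamma)
    big = discounted_sum(front_loaded, gamma) - discounted_sum(alternating, gamma)
    print(f"gamma={gamma}: small-rearrangement difference {small:+.4f}, "
          f"big-rearrangement difference {big:+.1f}")

# The small rearrangement's effect is (base(1) - base(0)) * (1 - gamma), which
# vanishes as gamma -> 1; the big rearrangement's effect grows without bound as
# gamma -> 1, matching the small/large distinction drawn above.
```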
jtm @ 2022-01-31T10:50 (+35)
Just here to say that this bit is simultaneously wonderfully hilarious and extraordinarily astute:
The first is that I think infinite ethics punctures a certain type of utilitarian dream. It’s a dream I associate with the utilitarian friend quoted above (though over time he’s become much more of a nihilist), and with various others. In my head (content warning: caricature), it’s the dream of hitching yourself to some simple ideas – e.g., expected utility theory, totalism in population ethics, maybe hedonism about well-being — and riding them wherever they lead, no matter the costs. Yes, you push fat men and harvest organs; yes, you destroy Utopias for tiny chances of creating zillions of evil, slightly-happy rats (plus some torture farms on the side). But you always “know what you’re getting” – e.g., more expected net pleasure. And because you “know what you’re getting,” you can say things like “I bite all the bullets,” confident that you’ll always get at least this one thing, whatever else must go.
Plus, other people have problems you don’t. They end up talking about vague and metaphysically suspicious things like “people,” whereas you only talk about “valenced experiences” which are definitely metaphysically fine and sharp and joint-carving. They end up writing papers entirely devoted to addressing a single category of counter-example – even while you can almost feel the presence of tons of others, just offscreen. And more generally, their theories are often “janky,” complicated, ad hoc, intransitive, or incomplete. Indeed, various theorems prove that non-you people will have problems like this (or so you’re told; did you actually read the theorems in question?). You, unlike others, have the courage to just do what the theorems say, ‘intuitions’ be damned. In this sense, you are hardcore. You are rigorous. You are on solid ground.
weeatquince @ 2022-02-01T22:16 (+25)
This is a really really well written piece and you go into great depth and explain things well and it is marvellous that someone (in this case you) has done this work. But before you or other people on this forum put too many resources into infinite ethics I think it is important to note the extent to which it is, as you say, "sci-fi stuff".
(I do worry you overstate the extent to which infinities might be a thing. For example you end the section on "maybe infinities are just not a thing" by saying that "modern cosmology says that our actual concrete universe is very plausibly infinite". This feels somewhat misleading.)
As far as I am aware no theories of modern physics would say that anything we do can be infinite in any meaningful ethical sense. Sure the universe might last for infinite time but it is expected to undergo a heat death (or a collapse) such that no more action or suffering or pleasure is possible. At which point it is not ethically relevant. Or sure maybe there might be some theories that suggest there are other universes that we can never affect or influence in any way, but that too is not ethically relevant. *
To give an example. Infinite ethics questions are for sure more credible than questions of: what is ethics if consciousness = cheesy-bread? But I expect infinite ethics questions are on par with questions like: what is ethics in a universe that has time travel paradoxes? (also a thing we cannot 100% rule out). I don’t feel that anyone should be shocked or worried or even surprised to note that utilitarianism has no good answers to the question of: Should I travel back in time to kill Hitler if doing so would cause a chain of events that would stop me from traveling back in time to kill Hitler?
I have been reading your stuff on power seeking AI and it is great. To me that seems a much more valuable focus area for future research time.
*(Also it looks like from the introduction you try to rescue the possibility of infinite ethics by referencing the many worlds interpretation of quantum mechanics. I could be wrong, but I don’t believe physicists think the many worlds are infinite, just very very very large. And anyway the many worlds cosmology also avoids all the problems you discuss, as in that case your actions’ effects are still well-defined; see discussion here).
motteposting @ 2022-04-30T00:54 (+5)
I know that this is a necro but I just wanted to point out that the problem still arises as long as you have any non-trivial credence in your actions having infinite consequences. For infinite consequences always dominate finite ones as long as the former have any probability above 0.
Moreover, you should have such a non-trivial credence. For example, although we have pretty good evidence that the universe is not going to end in a Big Bounce scenario, it’s certainly not totally ruled out (definitely not to the point where you should have credence of 1 that it doesn’t happen). Plenty of cosmologists are still kicking around that and other cyclic cosmologies, which do theoretically allow for literally infinite (morally-relevant!) effects from individual actions, even if they’re in the minority.
weeatquince @ 2022-04-30T20:12 (+11)
I would disagree.
Let me try to explain why by reversing your argument in on itself. Imagine with me for a minute we live in a world where the vast majority of physicists believe in a big bounce and/or infinite time etc.
Ok got that, now consider:
The infinite ethics problems still do not arise as long as you have any non-trivial credence in time being finite. For more recent consequences always dominate later ones as long as the latter have any probability above 0 of not happening.
Moreover, you should have such a non-trivial credence. For example, although we have pretty good evidence that the universe is not going to suddenly end in a false vacuum decay scenario, it’s certainly not totally ruled out (definitely not to the point where you should have credence of 1 that it doesn’t happen). Plenty of cosmologists are still kicking around that and other universe ending cosmologies, which do theoretically allow for literally finite effects from individual actions, even if they’re in the minority.
Basically even if time went on forever as long as we have a >0 credence that it would stop at some point then we would prefer w1 to w2 where:
| Time | t1 | t2 | t3 | t4 | t5 | t6 | t7 | … |
|------|----|----|----|----|----|----|----|---|
| w1   | +1 | +1 | +1 | +1 | +1 | +1 | +1 | … |
| w2   | +1 |    | +1 |    | +1 |    | +1 | … |
So no infinite ethics paradoxes!!
YAY we can stop worrying.
[I should add this is not the most thorough explanation; it mostly amused me to reverse your argument. See also djbinder's comment and my reply to that comment for a slightly better explanation of why (in my view) physics does not allow infinite paradoxes (any more than it allows time travel paradoxes).]
motteposting @ 2022-04-30T22:46 (+5)
That’s a clever response! But I don’t think it works. It does prove that we shouldn’t be indifferent between w1 and w2, but both are infinite in expectation. So if your utility function is unbounded, then you will still prefer any non-zero probability of w1 or w2 to certainty of any finite payoff. (And if it’s bounded then infinite stuff doesn’t matter anyway.)
weeatquince @ 2022-05-01T07:45 (+8)
Going to type and think at the same time – let's see where this goes (sorry if it ends up with a long reply).
Well firstly, as long as you still have a non-zero chance of the universe not being infinite, then I think you will avoid most of the paradoxes mentioned above (zones of happiness and suffering, locating value and rankings of individuals, etc.). But it sounds like you are claiming you still get the "infinite fanatics" problems.
I am not sure how true this is. I find it hard to think through what you are saying without a concrete moral dilemma in my head. I don’t on a daily basis face situations where I get to create universes with different types of physics. Here are some (not very original) stories that might capture what you are suggesting could happen.
1. Let's imagine a Pascal's mugging situation:
- A stranger stops you in the street and says: give me $5 or I will create a universe of infinite sadness.
2. A rats on heroin type situation. Imagine we are in a world where:
- Scientists believe with very high certainty that the universe will eventually undergo heat death and utility will stop.
- You have a device that will tile the entire universe with rats on heroin (or something else that maximises utility) until the heat death of the universe (and people agree that is a good thing). But this would stop scientific research.
- An infinite fanatic might say: don’t use the device, it sounds good but if we keep doing science then there is an extremely small chance we can prove our current scientific view of the universe to be wrong and find a way to create infinite joy which is bigger than an entire universe of joy.
Feel free to suggest a better story if you have one.
These do look like problems for utilitarianism that involve infinities.
But I am not convinced that they are problems to do with infinite ethics. They both seem to still arise if you replace the "infinite" with "Graham’s number" or "10^100" etc.
But I already think that standard total utilitarianism breaks down quite often, especially in situations of uncertainty or hard-to-quantify credences. Utilitarian philosophers don’t even agree on whether preventing extinction risks should be a priority (for, against), even using finite numbers.
Now I might be wrong (I am not a professional philosopher with a degree in making interesting thought experiments), but I guess I would say that all of the problems in the post above EITHER make no more sense than saying "oh look, utilitarianism doesn’t work if you add in time travel paradoxes", or something like that, OR are reducible to problems with large finites or high uncertainties. So considering "infinities" does not itself break utilitarianism (which is already broken).
Vasco Grilo @ 2023-06-25T14:34 (+2)
Hi there,
I know that this is a necro but I just wanted to point out that the problem still arises as long as you have any non-trivial credence in your actions having infinite consequences. For infinite consequences always dominate finite ones as long as the former have any probability above 0.
One's actions leading to infinite factual value does not mean they lead to infinite Shapley value, which is what one arguably should care about. If N agents are in a position to achieve a factual value V, the Shapley value of each agent is V/N. This naively suggests the Shapley value goes to infinity as V goes to infinity. However, I think we should assume the value one can achieve in the world is proportional to the number of agents in it (V = k N). So, in this toy model, the Shapley value will be constant (equal to k), not depending on the number of agents.
In other words, if one can cause infinite value, one's actions can be infinitely important. However, this suggests the existence of infinitely many agents, i.e. null neglectedness. So the infinite importance is cancelled out by the null neglectedness, and therefore I would say the cost-effectiveness does not change.
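To spell out the arithmetic of this toy model (my own restatement, not a quote): if the total achievable value scales with the number of agents, $V = kN$, then each agent’s Shapley value is
$$\frac{V}{N} \;=\; \frac{kN}{N} \;=\; k,$$
which stays at $k$ no matter how large $N$ (and hence $V$) gets, including in the limit $N \to \infty$.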
Moreover, you should have such a non-trivial credence. For example, although we have pretty good evidence that the universe is not going to end in a Big Bounce scenario, it’s certainly not totally ruled out (definitely not to the point where you should have credence of 1 that it doesn’t happen). Plenty of cosmologists are still kicking around that and other cyclic cosmologies, which do theoretically allow for literally infinite (morally-relevant!) effects from individual actions, even if they’re in the minority.
I do not think a Big Bounce scenario would imply infinite effects. It would imply a non-zero chance of arbitrarily large effects, but that is quite different from infinite effects. Values which tend to infinity can be compared, and therefore would not wreak havoc on ethics. In contrast, infinities cause lots of problems.
David Johnston @ 2022-02-01T03:40 (+9)
"And suppose, per various respectable cosmologies, that the universe is filled with an infinite number of people very much like you"
I'm not familiar with these cosmologies: do they also say that the universe is filled with an equally large number of people quite like me except they make the opposite decision whenever considering a donation?
HaukeHillebrandt @ 2022-01-31T23:58 (+5)
This reminded me of Schwitzgebel's new book 'The Weirdness of the World' [you can download a draft of the PDF on his website]:
Quote:
"1. What I Will Argue in This Book.
Consider three huge questions: What is the fundamental structure of the cosmos? How does human consciousness fit into it? What should we value? What I will argue in this book – with emphasis on the first two questions, but also sometimes drawing implications for the third – is (1.) the answers are currently beyond our capacity to know, and (2.) we do nonetheless know at least this: Whatever the truth is, it’s weird. Careful reflection will reveal all of the viable theories on these grand topics to be both bizarre and dubious. In Chapter 3 (“Universal Bizarreness and Universal Dubiety”), I will call this the Universal Bizarreness thesis and the Universal Dubiety thesis. Something that seems almost too crazy to believe must be true, but we can’t resolve which of the various crazy-seeming options is ultimately correct. If you’ve ever wondered why every wide-ranging, foundations-minded philosopher in the history of Earth has held bizarre metaphysical or cosmological views (each philosopher holding, seemingly, a different set of bizarre views), Chapter 3 offers an explanation. I will argue that given our weak epistemic position, our best big-picture cosmology and our best theories of consciousness are tentative, modish, and strange. Strange: As I will argue, every approach to cosmology and consciousness has bizarre implications that run strikingly contrary to mainstream “common sense”. Tentative: As I will also argue, epistemic caution is warranted, partly because theories on these topics run so strikingly contrary to common sense and also partly because they test the limits of scientific inquiry. Indeed, dubious assumptions about the fundamental structure of mind and world frame or undergird our understanding of the nature and value of scientific inquiry, as I discuss in Chapters 4 (“1% Skepticism”), 5 (“Kant Meets Cyberpunk”), and 7 (“Experimental Evidence for the Existence of an External World”)
Modish: On a philosopher’s time scale – where a few decades ago is “recent” and a few decades hence is “soon” – we live in a time of change, with cosmological theories and theories of consciousness rising and receding based mainly on broad promise and what captures researchers’ imaginations. We ought not trust that the current range of mainstream academic theories will closely resemble the range in a hundred years, much less the actual truth. Even the common garden snail defies us (Chapter 9, “Is There Something It’s Like to Be a Garden Snail?”). Does it have experiences? If so, how much and of what kind? In general, how sparse or abundant is consciousness in the universe? Is consciousness – feelings and experiences of at least the simplest, least reflective kind – cheap and common, maybe even ubiquitous? Or is consciousness rare and expensive, requiring very specific conditions in the most sophisticated organisms? Our best scientific and philosophical theories conflict sharply on these questions, spanning a huge range of possible answers, with no foreseeable resolution. The question of consciousness in near-future computers or robots similarly defies resolution, but with arguably more troubling consequences: If constructions of ours might someday possess humanlike emotions and experiences, that creates moral quandaries and puzzle cases for which our ethical intuitions and theories are unprepared. In a century, the best ethical theories of 2022 might seem as quaint and inadequate as medieval physics applied to relativistic rocketships (Chapter 10, “The Moral Status of Future Artificial Intelligence: Doubts and a Dilemma”)."
Vasco Grilo @ 2024-02-07T15:39 (+4)
Hi Joe,
I think all the evidence for infinity is coming from having some weight on infinity in our prior. Empirical evidence can take us from a very large universe to an arbitrarily large universe (given an arbitrarily large amount of evidence), but never to an infinite universe, right? An arbitrarily large universe would still be infinitely smaller than an infinite universe, so I would say the former provides no empirical evidence for the latter. If this is so, I am confused about why discussions of infinite ethics often mention there is empirical evidence pointing to the existence of infinity. From a footnote of your post (emphasis mine):
Bostrom (2011): “Recent cosmological evidence suggests that the world is probably infinite. [continued in footnote] In the standard Big Bang model, assuming the simplest topology (i.e., that space is singly connected), there are three basic possibilities: the universe can be open, flat, or closed. Current data suggests a flat or open universe, although the final verdict is pending. If the universe is either open or flat, then it is spatially infinite at every point in time and the model entails that it contains an infinite number of galaxies, stars, and planets. There exists a common misconception which confuses the universe with the (finite) “observable universe”. But the observable part—the part that could causally affect us— would be just an infinitesimal fraction of the whole. Statements about the “mass of the universe” or the “number of protons in the universe” generally refer to the content of this observable part; see e.g. [1]. Many cosmologists believe that our universe is just one in an infinite ensemble of universes (a multiverse), and this adds to the probability that the world is canonically infinite; for a popular review, see [2].”
In contrast, in maths, there is the axiom of infinity, which I assume points to the fact that infinity has to be assumed from the outset rather than deduced.
finm @ 2024-02-07T16:47 (+11)
[Copied from an email exchange with Vasco, slightly embellished]
I think the probability of a flat universe is ~0 because the distribution describing our knowledge about the curvature of the universe is continuous, whereas a flat universe corresponds to a discrete curvature of 0.
Sure, if you put infinitesimal weight on a flat universe in your prior (true if your distribution is continuous over a measure of spatial curvature and you think it's infinite only if spatial curvature = 0), then no observation of (local) curvature is going to be enough. On your framing, I think the question is just why the distribution needs to be continuous? Consider: "the falloff of light intensity / gravity etc. is very close to being proportional to $1/r^2$, but presumably the exponent isn't exactly 2, since our distribution over $n$ for a $1/r^n$ law is continuous".
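A toy numerical illustration of this point (my own sketch, not from the email exchange; the Gaussian forms and the specific numbers are assumptions made purely for the example):

```python
import numpy as np
from scipy import stats

# Mixture prior over spatial curvature k: a point mass at exactly k = 0
# ("flat", hence infinite on the usual assumptions) plus a continuous
# Gaussian component for nonzero curvature.
def posterior_flat(prior_mass_on_flat, obs, obs_sigma, curve_sigma=1.0):
    # Likelihood of the measurement if the universe is exactly flat (k = 0)
    like_flat = stats.norm.pdf(obs, loc=0.0, scale=obs_sigma)
    # Marginal likelihood under the continuous component:
    # k ~ N(0, curve_sigma), obs | k ~ N(k, obs_sigma)
    like_curved = stats.norm.pdf(obs, loc=0.0,
                                 scale=np.hypot(curve_sigma, obs_sigma))
    num = prior_mass_on_flat * like_flat
    return num / (num + (1 - prior_mass_on_flat) * like_curved)

# A near-zero curvature measurement sharply upweights a nonzero point mass...
print(posterior_flat(0.5, obs=0.001, obs_sigma=0.01))  # ~0.99
# ...but a purely continuous prior (zero mass on "exactly flat") never moves.
print(posterior_flat(0.0, obs=0.001, obs_sigma=0.01))  # 0.0
```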
all the evidence for infinity is coming from having some weight on infinity in our prior.
'All' in the sense that you need nonzero non-infinitesimal weight on infinity in your prior, but not in the sense that your prior is the only thing influencing your credence in infinity. Presumably observations of local flatness do actually upweight hypotheses about the universe being infinite, or at least keep them open if you are open to the possibility in the first place. And I could imagine other things counting as more indirect evidence, such as how well or poorly our best physical theories fit with infinity.
[Added] I think this speaks to something interesting about a picture of theoretical science suggested by a subjective Bayesian attitude to belief-forming in general, on which we start with some prior distribution(s) over some big (continuous?) hypothesis space(s), and observations tell us how to update our priors. But you might think that's a weird way to figure out which theories to believe, because e.g. (i) the hypothesis space is indefinitely large such that you should have infinitesimal or very small credence in any given theory; (ii) the hypothesis space is unknown in some important way, in which case you can't assign credences at all, or (iii) theorists value various kinds of simplicity or elegance which are hard to cash out in Bayesian terms in a non-arbitrary way. I don't know where I come down on this but this is a case where I'm unusually sympathetic to such critiques (which I associate with Popper/Deutsch[1]).
[Continuing email] I do agree that "the universe is infinite in extent" (made precise) is different from "for any size, we can't rule out the universe being at least that big", and that the first claim is of a different kind. For instance, your distribution over the size of the universe could have an infinite mean while implying certainty that the universe has some finite size (e.g. a distribution like $\Pr(\text{size} = 2^n) = 2^{-n}$ for $n = 1, 2, 3, \dots$, whose mean $\sum_n 2^n \cdot 2^{-n}$ diverges even though it assigns probability 1 to the size being finite).
That does put us in a weird spot though, where all the action seems to be in your choice of prior.
I don't know how relevant it is that the axiom of infinity is independent of the other axioms of ZFC, unless you think that all true mathematical claims are made true by actual physical things in the world (JS Mill believed something like this, I think). Then you might think you have independent reason to believe (i) the axioms, and that if so (ii) you'd be forced to believe in an actual physical infinity. But that has the same suspect "synthetic a priori" character as ontological arguments for God's existence, and is moot in any case because (ii) is false!
For what it's worth, as a complete outsider I am a little surprised by how little serious discussion there is in e.g. astrophysics / philosophy of physics etc. around whether the universe is infinite in some way. It seems like such a big deal; indeed an infinitely big deal!
[1] Though I don't think these views would have much constructive to say about how much credence to put on the universe being infinite, since they'd probably reject the suggestion that you can or should be trying to figure out what credence to put on it. Paging @ben_chugg since I think he could say if I'm misrepresenting the view.
Vasco Grilo @ 2023-06-24T10:41 (+2)
Hi Joe,
Modern cosmology says the local curvature of the universe is very small, so there is roughly a 50 % chance each of the curvature being positive or negative. This naively suggests a 50 % chance of the universe being infinite, but only if one extrapolates local properties to everywhere. To my mind, that is like seeing a road extending all the way to the horizon, and then claiming the road is infinite. Ok, we can see the universe is homogeneous over much larger distances than the road, but infinite is infinitely larger than those distances, so we have zero evidence about the universe being infinite.
Moreover, as you note:
- Modern cosmology says the part of the universe we can causally influence is finite.
- Even if the universe is flat (apparently what most cosmologists think, even though the data arguably points towards a 50 % chance each of the curvature being positive or negative, and a very low chance of it being null), it can still be finite if the universe is a multiply connected space (see shape of the universe). Most cosmologists assume a simply connected space, which together with flatness implies the universe is infinite, but I do not think there is evidence supporting one type of connectivity over the other. "The issue of simple versus multiple connectivity has not yet been decided based on astronomical observation".
In addition, from the point of view of the cost-effectiveness of our actions, what matters is not only the scale of the consequences of our actions, but also their neglectedness. Scale is directly proportional to the number of lives, but neglectedness is inversely proportional to the number of lives. Consequently, the size of the universe (how many lives it will have) alone does not affect the cost-effectiveness of our actions. Longtermism can work if we have reasons to think people alive today have an unusual influence over the whole universe, i.e. if the hinge of history hypothesis is true. However, if the universe one could causally affect were infinite, there would be infinitely many people in the hinge of history, which means our time would no longer be hingy. In other words, the effect of having a larger scale in an infinite universe would be cancelled out by that of having a lower neglectedness.
In my view, "the dream" of causal expected total hedonistic utilitarianism is very much internally consistent, and alive. In principle! In practice, one has to rely on heuristics like decreasing the number of nuclear weapons.
Vasco Grilo @ 2022-12-25T20:00 (+2)
Hi Joe,
I would be curious to know your thoughts on this post (feel free to comment there).
MichaelStJules @ 2022-01-31T23:24 (+2)
- Expansionism has nothing to say about cases like <… 0, 0, 0, 0, 0, 0, 0…> vs. <… -1, -1, -1, 100, 1, 1, 1…>, since if you start your expansion suitably far into the -1 zone, its utility stays negative forever. That said, it’s not clear that our intuitions have much to say about this case, either.
Doesn't that just mean the result depends on where you start, not that it says nothing? If you start your expansion suitably far into the -1 zone and it's negative forever, then that means it's worse than the world of 0s, whose expansion remains non-negative forever.
Or are you looking for agreement across all starting points?
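A quick sketch of that start-point dependence (mine, not MichaelStJules'; the symmetric-interval expansion is assumed purely for illustration):

```python
# World A is all 0s (total always 0). World B is ...,-1,-1,-1,100,+1,+1,+1,...
# with the +100 at position 0. Expand an interval symmetrically around a
# chosen starting point and track world B's running total.
def value_B(pos):
    return 100 if pos == 0 else (1 if pos > 0 else -1)

def expansion_total(start, radius):
    return sum(value_B(p) for p in range(start - radius, start + radius + 1))

# Once the interval covers position 0, each further step adds one -1 on the
# left and one +1 on the right, so the total settles at 100 - 2*|start|.
print(expansion_total(start=-10, radius=50))   # 80: stays positive forever
print(expansion_total(start=-80, radius=200))  # -60: stays negative forever
```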
MichaelStJules @ 2022-01-31T23:19 (+2)
- Expansionism fails to rank worlds where some spatio-temporal locations are infinitely far apart (see Bostrom (2011), p. 13). For example: < 2 , 2, 2, … (infinite distance) … 1 , 1, 1> vs. < 1, 1, 1, … (infinite distance) … 1, 2, 1>. Here, the former world is better at an infinite number of locations, and worse at only one, so it seems intuitively better: but the expansion that starts at the single 2 location in the second world is forever greater in the latter world.
For these cases, you could start expanding from one point in each cluster of locations that are finitely close to each other. If there are finitely many, then you can alternate between them or do one step into each cluster for each step of the sum (or whatever you're aggregating).
If there's a countable infinity of clusters, then you can use a bijection $\mathbb{N} \to \mathbb{N} \times \mathbb{N}$ to interleave (cluster, step) pairs, but that is not very nice, and will give far less weight to the clusters you start on later.
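For concreteness, here is one way such an interleaving could go (my own sketch; the Cantor-style diagonal order is just one possible choice of bijection):

```python
import itertools

def diagonal_pairs():
    """Yield (cluster, step) pairs covering N x N, one diagonal at a time."""
    d = 0
    while True:
        for cluster in range(d + 1):
            yield cluster, d - cluster
        d += 1

# Cluster 0 is visited on every diagonal, while cluster k is first reached
# only on diagonal k, so clusters enumerated later get systematically less
# early weight (the "not very nice" feature mentioned above).
print(list(itertools.islice(diagonal_pairs(), 10)))
# [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0), (0, 3), (1, 2), (2, 1), (3, 0)]
```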
Lukas_Gloor @ 2022-01-31T19:34 (+2)
I really liked this post!
The literature calls this broad approach “expansionism” (see also Wilkinson (2021) for similar themes). I’ll note two major problems with it: that it leads to results that are unattractively sensitive to the spatio-temporal distribution of value, and that it fails to rank tons of stuff.
On the first problem, what about the following line of reasoning. (I know there's probably a good reply to this, but I haven't gotten far enough to see it myself.)
On a large enough scale, the universe either becomes homogeneous, or it's possible to have infinite utility in a finite region of spacetime. The idea of infinite (dis)value in a finite region of spacetime seems crazy to me. (Does this already commit me to the view "Maybe infinities are just not a thing?") Therefore, let's stipulate that the former is the case (the universe is homogeneous on a sufficiently large scale).
For these expansionist approaches with expanding spheres and value densities, wouldn't the goal be to make the measurement spheres large enough such that whatever patterns there are start to repeat themselves (because of large-scale homogeneity)?
You write:
Consider an infinite line of planets, each of which houses a Utopia, and none of which will ever interact with any of the others. On expansionism, it is extremely good to pull all these planets an inch closer together: so good, indeed, as to justify any finite addition of dystopias to the world (thanks to Amanda Askell, Hayden Wilkinson, and Ketan Ramakrishnan for discussion). After all, pulling on the planets so that there’s an extra Utopia every x inches will be enough for the eventual betterness of the uniform expansions to compensate for any finite number of hellscapes.
However, if you imagine that the universe is homogeneous at a large enough scale, then by pulling planets closer to each other in one location, you thereby increase their distance in other locations. In total, you make some regions more dense and other regions less dense. By pulling some planets closer together, you mess up the universe's homogeneity at the scale of your measurement sphere. That arguably defeats the purpose of the measurement, and it would be more "fair" to measure again with a larger sphere, large enough that the artificial difference you created by moving planets no longer matters. (To truly affect the density of the largest-but-still-finite regions, you'd have to move around an infinite number of value locations.)
I'm not sure I've described this well, but there's something here that makes me wonder whether "value density" is perhaps not a completely arbitrary construct.
MichaelStJules @ 2022-02-01T00:19 (+2)
We could become fanatical about affecting the rate of cosmic expansion, but I think space colonization (+ acausal influence) would probably be more important (good or bad, depending on how it goes and relative weights given to goods and bads).
lexande @ 2022-04-18T22:51 (+1)
I really enjoyed this post, but have a few issues that make me less concerned about the problem than the conclusion would suggest:
- Your dismissal in section X of the "weight by simplicity" approach seems weak/wrong to me. You treat it as a point against such an approach that one would pay to "rearrange" people from more complex to simpler worlds, but that seems fine actually, since in that frame it's moving people from less likely/common worlds to more likely/common ones.
- I lean towards conceptions of what makes a morally relevant agent (or experience) under which there are only countably many of them. It seems like two people with the exact same full life-experience history are the same person, and the same seems plausible for two people whose full life-experience histories can't be distinguished by any finite process, in which case each person can be specified by finitely much information and so there are at most countably many of them. I think if you're willing to put 100% credence on some pretty plausible physics you can maybe even get down to finitely many possible morally relevant, morally distinct people, since entropy and the speed of light may bound how large a person can be.
- My actual current preferred ethics is essentially "what would I prefer if I were going to be assigned at random to one of the morally-relevant lives ever eventually lived" (biting the resulting "sadistic conclusion"-flavoured bullets). For infinite populations this requires that I have some measure on the population, and if I have to choose the measure arbitrarily then I'm subject to most of the criticisms in this post. However, I believe the infinite cosmology hypotheses referenced generally come along with fundamental measures? Indeed a measure over all the people one might be seems like it might be necessary for a hypothesis that purports to describe the universe in which we in fact find ourselves. If I have to dismiss hypotheticals that don't provide me with a measure on the population as ill-formed, and assign zero credence to universes without a fundamental measure, that's a point against my approach, but I think not a fatal one.
Arepo @ 2022-02-03T08:44 (+1)
I have a couple of criticisms that feel simultaneously naive and unaddressed by these infinitarian arguments:
since it is always possible to get evidence that infinite payoffs are available (God could always appear before you with various multi-colored buttons), non-zero-credences seem mandatory.
This kind of argument seems far too handwavey. What does this scenario mean concretely? A white beardy guy who can walk on water and tell a great story about that time he wrestled Jacob to a standstill teleports in front of you with some ominously labelled buttons? I cannot see any comprehensible version of this leading me to the belief that any particular action of mine (or his) could generate infinite value. I.e. if extraordinary claims require extraordinary evidence, then infinitely extraordinary claims should require infinite evidence.
Joe, I don’t like funky science or funky decision theory. And fair enough. But like a good Bayesian, you’ve got non-zero credence on them both (otherwise, you rule out ever getting evidence for them), and especially on the funky science one. And as I’ll discuss below, non-zero credence is enough.
'Decision theory' doesn't feel like a concept that parses as a parameter in Bayes' theorem. That is, Bayes' theorem seems like a statement about physical properties, and how likely they are to obtain. A decision theory is an algorithm that takes (the output of) Bayesian reasoning as a parameter. Obviously this leaves us with the question of which decision theory we follow and why, but to me this is best conceived not as a choice (and certainly not as a thing you can update on given data about physical properties) but as a process of clarifying what decision algorithm you're already running and bugfixing its execution. Conceived this way, it doesn't make sense to describe it as something you can have credences in.
We could perhaps develop some vaguely analogous-to-credences concept, since there are obviously still difficulties in determining such an algorithm, but I don't think we should assume that a concept that feels vaguely analogous will still behave exactly like an input in a theorem from another conceptual domain.
(very speculative)
It doesn't feel obviously inconsistent to think (there's a chance) we live in a universe with infinite utilons and concern ourselves with finite value. We might coherently talk about total value in some contexts, but if I consider a utilitarian algorithm to be something like 'maximise the expected value caused by my action', it doesn't seem to matter whether, beyond my light cone, infinite utility is being had.
This gets messier if I assume a nonzero probability of us (e.g.) reversing entropy, and so of my action having arbitrarily many future consequences, but I can imagine this being solvable with a model of epistemic uncertainty in which my estimates of the value difference between actions asymptotically approach 0 as we look further into the future (i.e. with a more formal modelling of cluelessness).
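One way to make that convergence concrete (my own gloss, assuming, for illustration, an at-least-geometric decay of the estimates): if $\Delta_t$ is the estimated value difference between two actions at future time $t$ and $|\Delta_t| \le c\,r^{t}$ for some $0 < r < 1$, then
$$\Big|\sum_{t=0}^{\infty} \Delta_t\Big| \;\le\; \sum_{t=0}^{\infty} c\,r^{t} \;=\; \frac{c}{1-r},$$
so the total difference between actions stays finite even over an unbounded future.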
I think this approach makes more sense if, per 2), you don't think of a moral/decision theory as being something true or false, but as an understanding of an algorithm whose execution we bugfix.