Arguments for utilitarianism are impossibility arguments under unbounded prospects

By MichaelStJules @ 2023-10-07T21:09 (+39)

This is a crosspost, probably from LessWrong. Try viewing it there.


MichaelStJules @ 2023-10-08T01:06 (+14)

I'd be happy to get constructive criticism, given downvotes I was getting soon after posting. I'll leave some comment replies here for people to agreevote/disagreevote with in case they want to stay anonymous. I also welcome feedback as comments here or private messages.

I've removed my own upvotes from these comments so this thread doesn't start at the top of the comment section. EDIT: Also, keep my comments in this thread at 0 karma if you want to avoid affecting my karma for making so many comments.

Arepo @ 2023-10-09T21:46 (+8)

I haven't downvoted it, and I'm sorry you're getting that response to a thoughtful and in-depth piece of work, but I can offer a couple of criticisms I had that have stopped me upvoting it so far, because I don't feel like I understand it, mixed in with a couple of criticisms where I feel like I did:

  • Too much work done by citations. Perhaps it's not possible to extract key arguments, but most philosophy papers IME have their core point in just a couple of paragraphs, which you could quote, summarise or refer to more precisely than a link to the whole paper. Most people on this forum just won't have the bandwidth to go digging through all the links.
  • The arguments for infinite prospective utility didn't hold up for me. A spatially infinite universe doesn't give us infinite expectation from our action - even if the universe never ends, our light cone will always be finite. Re Oesterheld's paper, acausal influence seems an extremely controversial notion in which I personally see no reason to believe. Certainly if it's a choice between rejecting that or scrabbling for some alternative to an intuitive approach that in the real world has always yielded reasonable solutions, I'm happy to count that as a point against Oesterheld.
  • Relatedly, some parts I felt like you didn't explain well enough for me to understand your case, eg:
    • I don't see the argument in this post for this: 'So, based on the two theorems, if we assume Stochastic Dominance and Impartiality,[18] then we can’t have Anteriority (unless it’s not worse to add more people to hell) or Separability.' It seemed like you just attempted to define these things and then asserted this - maybe I missed something in the definition?
    • 'You are facing a prospect A with infinite expected utility, but finite utility no matter what actually happens. Maybe A is your own future and you value your years of life linearly, and could live arbitrarily but finitely long, and so long under some possibilities that your life expectancy and corresponding expected utility is infinite.' I don't see how this makes sense. If all possible outcomes have me living a finite amount of time and generating finite utility per life-year, I don't see why expectation would be infinite.
  • Too much emphasis on what you find 'plausible'. IMO philosophy arguments should just taboo that word.
MichaelStJules @ 2023-10-10T06:50 (+4)

Thanks for the feedback and criticism!

Too much work done by citations.

Hmm, I didn't expect or intend for people to dig through the links, but it looks like I misjudged what things people would find cruxy for the rest of the arguments but not defended well enough, e.g. your concerns with infinite expected utility.

EDIT: I've rewritten the arguments for possibly unbounded impacts.

 

The arguments for infinite prospective utility didn't hold up for me. A spatially infinite universe doesn't give us infinite expectation from our action - even if the universe never ends, our light cone will always be finite.

But can you produce a finite upper bound on our lightcone that you're 100% confident nothing can pass? (It doesn't have to be tight.) If not, then you could consider a St Petersburg-like prospect which, for each n = 1, 2, 3, …, has probability 1/2^n of size (or impact) 2^n, in whatever units you're using. That's finite under every possible outcome, but it has an infinite expected value.
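To spell out the arithmetic (the particular payoff schedule above is just one illustrative way to fill in the construction): every outcome 2^n is finite, but the expectation is

\mathbb{E}[X] = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty.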

Re Oesterheld's paper, acausal influence seems an extremely controversial notion in which I personally see no reason to believe.

Section II from Carlsmith, 2021 is one of the best arguments for acausal influence I'm aware of, in case you're interested in something more convincing. (FWIW, I also thought acausal influence was crazy for a long time, and I didn't find Newcomb's problem to be a compelling reason to reject causal decision theory.)

EDIT: I've now cut the acausal stuff and just focus on unbounded duration.

 

It seemed like you just attempted to define these things and then asserted this - maybe I missed something in the definition?

This follows from the theorems I cited, but I didn't include proofs of the theorems here. The proofs are technical and tricky,[1] and I didn't want to make my post much longer or spend so much more time on it. Explaining each proof in an intuitive way could probably be a post on its own.

 

I don't see how this makes sense. If all possible outcomes have me living a finite amount of time and generating finite utility per life-year, I don't see why expectation would be infinite.

How long you live could be distributed like a St Petersburg gamble, e.g. for each n = 1, 2, 3, …, with probability 1/2^n, you could live 2^n years. The expected value of that is infinite, even though you'd definitely only live a finite amount of time.
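A minimal simulation sketch of that kind of gamble (the 1/2^n and 2^n schedule follows the illustrative numbers above): every sampled lifespan is finite, but because the expectation is infinite, the running sample mean never settles down.

```python
import random

def st_petersburg_lifespan():
    """Sample a lifespan of 2^n years with probability 1/2^n (n = 1, 2, 3, ...)."""
    n = 1
    while random.random() < 0.5:  # each further "heads" doubles the lifespan
        n += 1
    return 2 ** n

# Every sample is a finite number of years, but the sample mean keeps
# getting dragged upward by rare, enormous outcomes (infinite expectation).
samples = [st_petersburg_lifespan() for _ in range(10**6)]
print(max(samples), sum(samples) / len(samples))
```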

 

Too much emphasis on what you find 'plausible'. IMO philosophy arguments should just taboo that word.

Ya, I got similar feedback on an earlier draft for making it harder to read, and tried to cut some uses of the word, but still left a bunch. I'll see if I can cut some more.

  1. ^

    They work by producing some weird set of prospects. They then show that the prospects can't be ordered in a way that satisfies the axioms, by applying the axioms one by one until one of them is violated or a contradiction is reached.

Arepo @ 2023-10-10T09:14 (+2)

But can you produce a finite upper bound on our lightcone that you're 100% confident nothing can pass? (It doesn't have to be tight.)

I think Vasco already made this point elsewhere, but I don't see why you need certainty about any specific line to have finite expectation. If for the counterfactual payoff x, you think (perhaps after a certain point) xP(x) approaches 0 as x tends to infinity, it seems like you get finite expectation without ever having absolute confidence in any boundary (this applies to life expectancy, too).

Section II from Carlsmith, 2021 is one of the best arguments for acausal influence I'm aware of, in case you're interested in something more convincing. (FWIW, I also thought acausal influence was crazy for a long time, and I didn't find Newcomb's problem to be a compelling reason to reject causal decision theory.)

Thanks! I had a look, and it still doesn't persuade me, for much the same reasons Newcomb's problem didn't. In roughly ascending order of importance:

  1. Maybe this is just a technicality, but the claim 'you are exposed to exactly identical inputs' seems impossible to realise with perfect precision. The simulator itself must differ in the two cases. So in the same way that the outputs of two instances of a software program, even run on the same computer in the same environment, can theoretically differ for various reasons (at a high enough zoom level they will differ), the two simulations can't be guaranteed to be identical (Carlsmith even admits this with 'absent some kind of computer malfunction', but just glosses over it). On the one hand, this might be too fine a distinction to matter in practice; on the other, if I'm supposed to believe a wildly counterintuitive proposition instead of a commonsense one that seems to work fine in the real world, based on a supposed logical necessity that turns out not to be logically necessary, I'm going to be very sceptical of the proposition even if I can't find a stronger reason to reject it.
  2. The thought experiment gives no reason why the AI system should actually believe it's in the scenario described, and that seems like a crucial element in its decision process. If in the real world, someone put me in a room with a chalkboard and told me this is what was happening, no matter what evidence they showed, I would have some element of doubt, both of their ability (cf point 1) but more importantly their motivations. If I discovered that the world was so bizarre as in this scenario, it would be at best a coinflip for me that I should take them at face value. 
  3. It seems contradictory to frame decision theory as applying to 'a deterministic AI system' whose clones 'will make the same choice, as a matter of logical necessity'. There's a whole free will debate lurking underneath any decision theoretic discussion involving recognisable agents that I don't particularly want to get into - but if you're taking away all agency from the 'agent', it's hard to see what it means to advocate it adopting a particular decision theory. At that point the AI might as well be a rock, and I don't feel like anyone is concerned about which decision theory rocks 'should' adopt. 

This follows from the theorems I cited, but I didn't include proofs of the theorems here. The proofs are technical and tricky,[1] and I didn't want to make my post much longer or spend so much more time on it. Explaining each proof in an intuitive way could probably be a post on its own.

I would be less interested to see a reconstruction of a proof of the theorems and more interested to see them stated formally and a proof of the claim that it follows from them. 

MichaelStJules @ 2023-10-10T10:44 (+2)

On Carlsmith's example, we can just make it a logical necessity by assuming more. And, as you acknowledge, some distinctions can be too fine to matter in practice. Maybe you're only 5% sure your copy exists at all and the conditions are right for you to get $1 million from your copy sending it.

5%*$1 million = $50,000 > $1,000, so you still make more in expectation from sending a million dollars. You break even in expected money if your decision to send $1 million increases your copy's probability of sending $1 million by 1/1,000.

I do find it confusing to think about decision-making under determinism, but I think 3 proves too much. I don't think quantum indeterminacy or randomness saves free will or agency if it weren't already saved, and we don't seem to have any other options, assuming physicalism and our current understanding of physics.

MichaelStJules @ 2023-10-10T10:01 (+2)

I think Vasco already made this point elsewhere, but I don't see why you need certainty about any specific line to have finite expectation. If for the counterfactual payoff x, you think (perhaps after a certain point) xP(x) approaches 0 as x tends to infinity, it seems like you get finite expectation without ever having absolute confidence in any boundary (this applies to life expectancy, too).

Ya, I agree you don't need certainty about the bound, but now you need certainty about the distribution not being heavy-tailed at all. Suppose your best guess is that it looks like some distribution F, with finite expected value. Now, I suggest that it might actually be G, which is heavy-tailed (has infinite expected value). If you assign any nonzero probability to that being right, e.g. switch to (1 − p)F + pG for some p > 0, then your new distribution is heavy-tailed, too. In general, if you think there's some chance you'd come to believe it's heavy-tailed, then you should believe now that it's heavy-tailed, because a probabilistic mixture with a heavy-tailed distribution is heavy-tailed. Or, if you think there's some chance you'd come to believe there's some chance it's heavy-tailed, then you should believe now that it's heavy-tailed.
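In symbols, with the F, G and p above (and taking G's expectation to diverge to +∞): if X is distributed according to the mixture (1 − p)F + pG, then

\mathbb{E}[X] = (1 - p)\,\mathbb{E}_F[X] + p\,\mathbb{E}_G[X] = (1 - p)\,\mathbb{E}_F[X] + p \cdot \infty = \infty

for any p > 0, so any nonzero credence in the heavy-tailed hypothesis makes the overall distribution heavy-tailed.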

(Vasco's claim was stronger: the difference is exactly 0 past some point.)

I would be less interested to see a reconstruction of a proof of the theorems and more interested to see them stated formally and a proof of the claim that it follows from them. 

Hmm, I might be misunderstanding.

I already have formal statements of the theorems in the post:

  1. Stochastic Dominance, Anteriority and Impartiality are jointly inconsistent.
  2. Stochastic Dominance, Separability and Impartiality are jointly inconsistent.

All of those terms are defined in the section Anti-utilitarian theorems. I guess I defined Impartiality a bit informally and might have hidden some background assumptions (preorder, so reflexivity + transitivity, and the set of prospects is every probability distribution over outcomes in the set of outcomes), but the rest were formally defined.

Then, from 1, assuming Stochastic Dominance and Impartiality, Anteriority must be false. From 2, assuming Stochastic Dominance and Impartiality, Separability must be false. Therefore assuming Stochastic Dominance and Impartiality, Anteriority and Separability must both be false.

MichaelStJules @ 2023-10-08T01:10 (+6)

The post is too long.

MichaelStJules @ 2023-10-08T01:08 (+2)

The title is bad, e.g. too provocative, clickbaity, overstates the claims or singles out utilitarianism too much (there are serious problems with other views).

EDIT: I've changed the title to "Arguments for utilitarianism are impossibility arguments under unbounded prospects". Previously, it was "Utilitarianism is irrational or self-undermining". Kind of long now, but descriptive and less provocative.

Vasco Grilo @ 2023-10-09T10:21 (+6)

Thanks for the post, Michael! I strongly endorse expectational total hedonistic utilitarianism, but strongly upvoted it for thoughtfully challenging the status quo, and because one should be nice to other value systems.

However, total welfare, and differences in total welfare between prospects, may be unbounded, because the number of moral patients and their welfares may be unbounded. There are no 100% sure finite upper bounds on how many of them we could affect.

I agree total welfare may be unbounded for the reasons you mention, but I would say differences in total welfare between prospects have to be bounded. I think we have perfect evidential symmetry (simple cluelessness) between prospects beyond a sufficiently large (positive or negative) welfare, in which case the difference between their welfare probability density functions is exactly 0. So I believe the tails of the differences in total welfare between prospects are bounded, and so are the expected differences in total welfare between prospects. One does not know the exact points the tails of the differences in total welfare between prospects reach 0, but that only implies decisions will fall short of perfect, not that what one should do is undefined, right?

This post is concerned with the implications of prospects with infinitely many possible outcomes and unbounded but finite value, not actual infinities, infinite populations or infinite ethics generally.

I am glad you focussed on real outcomes.

One might claim that we can uniformly bound the number of possible outcomes by a finite number across all prospects. But consider the maximum number across all prospects, and a maximally valuable (or maximally disvaluable) but finite value outcome. We should be able to consider another outcome not among the set. Add a bit more consciousness in a few places, or another universe in the multiverse, or extend the time that can support consciousness a little. So, the space of possibilities is infinite, and it’s reasonable to consider prospects with infinitely many possible outcomes.

I would reply to this as follows. If "the maximum number across all prospects" is well chosen, one will have perfect evidential symmetry between any prospects for higher welfare levels than the maximum. Consequently, there will be no difference between the prospects for outcomes beyond the maximum, and the expected difference between prospects will be maintained when we "add a bit more consciousness".

MichaelStJules @ 2023-10-09T15:39 (+4)

Thanks for engaging!

On symmetry between options in the tails, if you think there's no upper bound with certainty on how long our descendants could last, then reducing extinction risk could have unbounded effects. Maybe other x-risks, too. I do think heavy tails like this are very unlikely, but it's hard to justifiably rule them out with certainty.

Or, you could have a heavy tail on the number of non-solipsist simulations, or the number of universes in our multiverse (if spatially very large, or the number of quantum branches, or the number of pocket universes, or if the universe will start over many times, like a Big Bounce, etc.), and acausal influence over what happens in them.

Derek Shiller @ 2023-10-09T01:03 (+6)

The money pump argument is interesting, but it feels strange to take away a decision-theoretic conclusion from it because the issue seems centrally epistemic. You know that the genie will give you evidence that will lead you to come to believe B has a higher expected value than A. Despite knowing this, you're not willing to change your mind about A and B without that evidence. This is a failure of van Fraassen's principle of reflection, and it's weird even setting any choices you need to make aside. That failure of reflection is what is driving the money pump. Giving up on unbounded utilities or expected value maximization won't save you from the reflection failure, so it seems like the wrong solution. Either there is a purely epistemic solution that will save you or your practical irrationality is merely the proper response to an inescapable epistemic irrationality.

MichaelStJules @ 2023-10-09T17:14 (+2)

There's also a reflection argument in Wilkinson, 2022, in his Indology Objection. Russell, 2023 generalizes the argument with a theorem:

Theorem 5. Stochastic Dominance, Negative Reflection, Background Independence, and Positive and Negative Compensation together imply Fanaticism.

and Russell, 2023 defines Negative Reflection based on Wilkinson, 2022's more informal argument as follows:

Negative Reflection. For prospects X and Y and a question Q, if X is not better than Y conditional on any possible answer to Q, then X is not better than Y unconditionally.

Background Independence is a weaker version of Separability. I think someone who denies Separability doesn't have much reason to satisfy Background Independence, because I expect intuitive arguments for Background Independence (like the Egyptology objection) to generalize to arguments for Separability.

But still, either way, Russell, 2023 proves the following:

Theorem 6. Stochastic Dominance and Negative Reflection together imply that Fanaticism is false.

This rules out expected utility maximization with unbounded utility functions.

MichaelStJules @ 2023-10-09T02:42 (+2)

Satisfying the Countable Sure-Thing Principle (CSTP, which sounds a lot like the principle of reflection) and updating your credences about outcomes properly as a Bayesian and looking ahead as necessary should save you here. Expected utility maximization with a bounded utility function satisfies the CSTP so it should be safe. See Russell and Isaacs, 2021 for the definition of the CSTP and a theorem, but it should be quick to check that expected utility maximization with a bounded utility function satisfies the CSTP.
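Roughly, and as my paraphrase of the kind of check involved rather than the exact statement in Russell and Isaacs, 2021: suppose u is bounded, {E_i} is a countable partition of events with P(E_i) > 0, and X is at least as good as Y conditional on each E_i, i.e. \mathbb{E}[u(X) \mid E_i] \ge \mathbb{E}[u(Y) \mid E_i] for every i. Then

\mathbb{E}[u(X)] = \sum_i P(E_i)\,\mathbb{E}[u(X) \mid E_i] \ge \sum_i P(E_i)\,\mathbb{E}[u(Y) \mid E_i] = \mathbb{E}[u(Y)],

where boundedness guarantees that all of these expectations exist and the sums converge absolutely. With unbounded u, the conditional expectations can fail to exist or be infinite, and this kind of term-by-term comparison can break down, as in the St Petersburg cases above.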

You can also preserve any preorder over outcomes from an unbounded real-valued utility function with a bounded utility function (e.g. apply arctan) and avoid these problems. So to me it does seem to be a problem with the attitudes towards risk involved with unbounded utility functions, and it seems appropriate to consider implications for decision theory.
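A minimal sketch of the arctan point (the numbers are my own illustration): any strictly increasing bounded transform keeps the ordering over outcomes while bounding the utilities.

```python
import math

# An unbounded "utility" over some outcomes, here just their numeric values.
outcomes = [0, 1, 100, 10**6, 10**12]
unbounded_u = outcomes

# arctan is strictly increasing, so it preserves the ordering of outcomes,
# but its range is bounded within (-pi/2, pi/2).
bounded_u = [math.atan(u) for u in unbounded_u]

assert bounded_u == sorted(bounded_u)                  # same order over outcomes
assert all(abs(v) < math.pi / 2 for v in bounded_u)    # bounded utilities
print(bounded_u)
```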

Maybe it is also an epistemic issue, too, though. Like it means having somehow (dynamically?) inconsistent or epistemically irrational joint beliefs.

Are there other violations of the principle of reflection that aren't avoidable? I'm not familiar with it.

Derek Shiller @ 2023-10-10T17:14 (+3)

Are there other violations of the principle of reflection that aren't avoidable? I'm not familiar with it

The case reminded me of one you get without countable additivity. Suppose you have two integers drawn with a fair chancy process that is as likely to result in any integer. What’s the probability the second is greater than the first? 50/50. Now what if you find out the first is 2? Or 2 trillion? Or any finite number? You should then think the second is greater.

MichaelStJules @ 2023-10-10T18:22 (+3)

Ya, that is similar, but I think the implications are very different.

The uniform measure over the integers can't be normalized to a probability distribution with total measure 1. So it isn’t a real (or proper) probability distribution. Your options are, assuming you want to address the problem:

  1. It's not a valid set of credences to hold.
  2. The order on the integers (outcomes) is the problem and we have to give it up (at least for this distribution).

2 gives up a lot more than 1, and there’s no total order we can replace it with that will avoid the problem. Giving up the order also means giving up arithmetical statements about the outcomes of the distribution, because the order is definable from addition or the successor function.

If you give up the total order entirely (not just for the distribution or distributions in general), then you can't even form the standard set of natural numbers, because the total order is definable from addition or the successor function. So, you're forced to give up 1 (and the Axiom of Infinity from ZF) along with it, anyway. You also lose lots of proofs in measure theory.

OTOH, the distribution of outcomes in a St Petersburg prospect isn't improper. The probabilities sum to 1. It's the combination with your preferences and attitudes to risk that generate the problem. Still, you can respond nearly the same two ways:

  1. It's not a valid set of credences (over outcomes) to hold.
  2. Your preferences over prospects are the problem and we have to give them up.

However, 2 seems to give up less than 1 here, because:

  1. There's little independent argument for 1.
  2. You can hold such credences over outcomes without logical contradiction. You can still have non-trivial complete preferences and avoid the problem, e.g. with a bounded utility function.
  3. Your preferences aren't necessary to make sense of things in the way the total order on the integers is.
Derek Shiller @ 2023-10-10T22:23 (+3)

The other unlisted option (here) is that we just accept that infinities are weird and can generate counter-intuitive results, and that we shouldn't take too much from them, because it is easier to blame them than all of the other things wrapped up with them. I think the ordering on integers is weird, but it's not a metaphysical problem. The weird fact is that every integer is unusually small. But that's just a fact, not a problem to solve.

Infinities generate paradoxes. There are plenty of examples. In decision theory, there is also stuff like Satan's apple and the expanding sphere of suffering / pleasure. Blaming them all on the weirdness of infinities just seems tidier than coming up with separate ad hoc resolutions.

MichaelStJules @ 2023-10-13T06:38 (+3)

I think there's something to this. I argue in Sacrifice or weaken utilitarian principles that it's better to satisfy the principles you find intuitive more than less (i.e. satisfy weaker versions, which could include the finitary or deterministic case versions, or approximate versions). So, it's kind of a matter of degree. Still, I think we should have some nuance about infinities rather than treat them all the same and paint their consequences as all easily dismissable. (I gather that this is compatible with your responses so far.)

In general, I take actual infinities (infinities in outcomes or infinitely many decisions or options) as more problematic for basically everyone (although perhaps with additional problems for those with impartial aggregative views) and so their problems easier to dismiss and blame on infinities. Problems from probability distributions with infinitely many outcomes seem to apply much more narrowly and so harder to dismiss or blame on infinities.

 

 

(The rest of this comment goes through examples.)

And I don't think the resolutions are in general ad hoc. Arguments for the Sure-Thing Principle are arguments for bounded utility (well, something more general), and we can characterize the ways that avoid the problem as such (given other EUT axioms, e.g. Russell and Isaacs, 2021). Dutch book arguments for probabilism are arguments that your credences should satisfy certain properties not satisfied by improper distributions. And improper distributions are poorly behaved in other ways that make them implausible for use as credences. For example, how do you define expectations, medians and other quantiles over them (or even the expected value of a nonzero constant function or a two-valued step function over improper distributions) in a way that makes sense? Improper distributions just do very little of what credences are supposed to do.

There are also representation theorems in infinite ethics, specifically giving discounting and limit functions under some conditions in Asheim, 2010 (discussed in West, 2015), and average utilitarianism under others in Pivato (2021, and further discussed in 2022 and 2023).

Satan's apple would be a problem for basically everyone, and it results from an actual infinity, i.e. infinitely many actual decisions made. (I think how you should handle it in practice is to precommit to taking at most a specific finite number of pieces of the apple, or use a probability distribution, possibly one with infinite expected value but finite with certainty.)

Similarly, when you have infinitely many options to choose from, there may not be any undominated option. As long as you respect statewise dominance and have two outcomes A and B, with one strictly worse than the other, there's no undominated option among the set of prospects pA + (1-p)B for p strictly between 0 and 1 (e.g. p = 1/n or 1 - 1/n for each n). These are cases where the argument for dismissal is strong, because "solving" these problems would mean giving up the most basic requirements of our theories. (And this fits well with scalar utilitarianism.)
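To sketch why (in my notation): suppose B is strictly better than A and write X_p = pA + (1 - p)B. For any 0 < p' < p, X_{p'} shifts probability from the worse outcome A to the better outcome B, so X_{p'} dominates X_p. Since p = 0 is excluded from the set, every option X_p is dominated by some other option in the set, so none is undominated.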

My inclination for the expanding sphere of suffering/pleasure is that there are principled solutions:

  1. If you can argue for the separateness of persons, then you should sum over each person's life before summing across lives. Or, if people's utility values are in fact utility functions (just preferences about how things go), then there may be nothing to aggregate within the person. There's no temporal aggregation over each person in Harsanyi's theorem.
  2. If we have to pick an order to sum in or take a value density over, there are more or less natural ones, e.g. using a sequence of nested compact convex sets whose union is the whole space. If we can't pick just one, we can pick several or all of them, either allowing incompleteness with a multi-utility representation (Shapley and Baucells, 1998; Dubra, Maccheroni, and Ok, 2004; McCarthy et al., 2017; McCarthy et al., 2021), or having normative uncertainty between them.
MichaelStJules @ 2023-10-09T03:27 (+2)

I think Parfit's Hitchhiker poses a similar problem for everyone, though.

You're outside town and hitchhiking with your debit card but no cash, and a driver offers to drive you if you pay him when you get to town. The driver can also tell whether you'll pay (he's good at reading people), and will refuse to drive if he predicts that you won't. Assuming you'd rather keep your money than pay conditional on getting into town, it would be irrational to pay then (you wouldn't be following your own preferences). So, you won't pay, the driver predicts this and refuses to drive you, and you lose.

So, the thing to do here is to somehow commit to paying and actually pay, despite it violating your later preference to not pay when you get into town.

And we might respond the same way for A vs B-$100 in the money pump in the post: just commit to sticking with A (at least for high enough value outcomes) and actually do it, even though you know you'll regret it when you find out the value of A.

So maybe (some) money pump arguments prove too much? Still, it seems better to avoid having these dilemmas when you can, and unbounded utility functions face more of them.

On the other hand, you can change your preferences so that you actually prefer to pay when you get into town. Doing the same for the post's money pump would mean actually not preferring B-$100 over some finite outcome of A. If you do this in response to all possible money pumps, you'll end up with a bounded utility function (possibly lexicographic, possibly multi-utility representation). Or, this could be extremely situation-specific preferences. You don't have to prefer to pay drivers all the time, just in Parfit's Hitchhiker situations. In general, you can just have preferences specific to every decision situation to avoid money pumps. This violates the Independence of Irrelevant Alternatives, at least in spirit.

See also https://www.lesswrong.com/tag/parfits-hitchhiker

CarlShulman @ 2023-10-08T03:35 (+5)

Now, there’s an honest and accurate genie — or God or whoever’s simulating our world or an AI with extremely advanced predictive capabilities — that offers to tell you exactly how  will turn out.[9] Talking to them and finding out won’t affect  or its utility, they’ll just tell you what you’ll get.


This seems impossible, for the possibilities that account for ~all the expected utility (without which it's finite)? You can't fit enough bits in a human brain or lifetime (or all accessible galaxies, or whatever). Your brain would have to be expanded infinitely (any finite size wouldn't be enough). And if we're giving you an actually infinite brain, the part about how infinite expectations of finite outcomes are more conservative arguments than actual infinities goes away.
 


I do want to point out that the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn't be controversial, and I'd guess our universe is infinite with probability >80%).

MichaelStJules @ 2023-10-08T07:45 (+2)

And if we're giving you an actually infinite brain, the part about how infinite expectations of finite outcomes are more conservative arguments than actual infinities goes away.

 

My post also covers two impossibility theorems that don't depend on anyone having arbitrary precision or unbounded or infinite representations of anything:[1]

  1. Stochastic Dominance, Anteriority and Impartiality are jointly inconsistent.
  2. Stochastic Dominance, Separability and Compensation (Impartiality) are jointly inconsistent.

The proofs are also of course finite, and the prospects used have finite representations, even though they represent infinitely many possible outcomes and unbounded populations.

  1. ^

    The actual outcome would be an unbounded (across outcomes) representation of itself, but that doesn't undermine the argument.

CarlShulman @ 2023-10-08T14:36 (+2)

I personally think unbounded utility functions don't work, I'm not claiming otherwise here, the comment above is about the thought experiment.

MichaelStJules @ 2023-10-08T03:48 (+1)

It wouldn't have to definitely be infinite, but I'd guess it would have to be expandable to arbitrarily large finite sizes, with the size depending on the outcome to represent, which I think is also very unrealistic. I discuss this briefly in my Responses section. Maybe not impossible if we're dealing with arbitrarily long lives, because we could keep expanding over time, although there could be other practical physical limits on this that would make this impossible, maybe requiring so much density that it would collapse into a black hole?

MichaelStJules @ 2023-10-08T05:06 (+2)

One way to illustrate this point is with Turing machines,[1] with finite but arbitrarily expandable tape to write on for memory. There are (finite) Turing machines that can handle arbitrarily large finite inputs, e.g. doing arithmetic with or comparing two arbitrarily large but finite integers. They only use a finite amount of tape at a time, so we can just feed more and more tape for larger and larger numbers. So, never actually infinite, but arbitrarily large and arbitrarily expandable. A similar argument might apply for more standard computer architectures, with expandable memory, but I'm not that familiar with how standard computers work.

You might respond that we need an actually infinite amount of possible tape (memory space) to be able to do this, like there has to be an infinite amount of matter available to us to turn into memory space. That isn't true. The universe and the amount of available matter for tape could be arbitrarily large but finite, we could (in principle) need less tape than what could be available in all possible outcomes, and the amount of tape we'd need would scale with the "size" or value of the outcome. For example, if we want to represent how long you'll live in years, or an upper bound for it in case you might live arbitrarily long, the amount of tape you'd need would scale with how long you'd live. It could scale much more slowly, e.g. we could represent the log of the number of years, or the log of the log, or the log of the log of the log, etc. Still, the length of the tape would have to be unbounded across outcomes.
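A small illustration of the scaling point (my own example, not from the post): the tape needed to write an upper bound on your lifespan in binary grows without bound across outcomes, but only logarithmically in the number of years, and representing the log instead shrinks it further.

```python
import math

def bits_needed(n: int) -> int:
    """Tape cells (bits) needed to write the positive integer n in binary."""
    return max(1, n.bit_length())

for years in [10, 10**3, 10**9, 10**100]:
    # Tape for the number of years itself, vs. tape for (roughly) its log.
    log_years = max(1, math.ceil(math.log2(years)))
    print(years, bits_needed(years), bits_needed(log_years))
```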

So, I'd have concerns about:

  1. there not being enough practically accessible matter available (even if we only ever need a finite amount), and
  2.  the tape being too spread out spatially to work (like past the edge of the observable universe), or
  3. the tape not being packable densely enough without collapsing into a black hole.

So the scenario seems unrealistic and physically impossible. But if it's impossible, it's for reasons that don't have to do with infinite value or infinitely large things (although black holes might involve infinities).

  1. ^

    For anyone unaware, it's a type of computer that you can actually build and run, but not the architecture we actually use.

CarlShulman @ 2023-10-08T14:46 (+5)
  1. there not being enough practically accessible matter available (even if we only ever need a finite amount), and

This is what I was thinking about. If I need a supply of matter set aside in advance to be able to record/receive an answer, no finite supply suffices. Only an infinite brain/tape, or infinite pile of tape making resources, would suffice. 

If the resources are created on demand ex nihilo, and in such a way that the expansion processes can't just be 'left on', you could try to jury-rig around it.

MichaelStJules @ 2023-10-08T18:41 (+1)

If the resources are created on demand ex nihilo, and in such a way that the expansion processes can't just be 'left on', you could try to jury-rig around it.

The resources wouldn't necessarily need to be created on demand ex nihilo either (although that would suffice), but either way, we're forced into extremely remote possibilities — denying our current best understanding of physics — and perhaps less likely than infinite accessible resources (or other relevant infinities). That should be enough to say it's less conservative than actual infinities and make your point for this particular money pump, but it again doesn't necessarily depend on actual infinities. However, some people actually assign 0 probability to infinity (I think they're wrong to do so), and some of them may be willing to grant this possibility instead. For them, it would actually be more conservative.

The resources could just already exist by assumption in large enough quantities by outcome in the prospect (at least with nonzero probability for arbitrarily large finite quantities). For example, the prospect could be partially about how much information we can represent to ourselves (or recognize). We could be uncertain about how much matter would be accessible and how much we could do with it. So, we can have uncertainty about this and may not be able to put an absolute hard upper bound on it with certainty, even if we could with near-certainty, given our understanding of physics and the universe, and our confidence in them. And this could still be the case conditional on no infinities. So, we could consider prospects with extremely low probability heavy tails for how much we could represent to ourselves, which would have the important features of St Petersburg prospects for the money pump argument. It’s also something we'd care about naturally, because larger possible representations would tend to coincide with much more possible value.

St Petersburg prospects already depend on extremely remote possibilities to be compelling, so if you object to extremely low probabilities or instead assign 0 probability to them (deny the hypothetical), then you can already object at this point without actual infinities. That being said, someone could hold that finding out the value of a St Petersburg prospect to unbounded values is with certainty impossible (without an actual infinity, and so reject Cromwell's rule), but that St Petersburg prospects are still possible despite this.

If you don't deny with certainty the possibility of finding out unbounded values without actual infinities, then, we can allow "Find out " to fail sometimes, but work in enough exotic possibilities with heavy tails that  conditional on it working (but not its specific value) still has infinite expected utility. Then we can replace  in the money pump with a prospect  defined as follows in my next comment, and you still get a working money pump argument.

MichaelStJules @ 2023-10-08T18:43 (+1)

Let  be identically distributed to but statistically independent from  (not any specific value of ).  and  can each have infinite expected utility, by assumption, using an extended definition of "you" in which you get to expand arbitrarily, in extremely remote possibilities.  is also strictly stochastically dominated by , so .

Now, consider the following prospect:

With probability , it's . With the other probability , it's. We can abuse notation to write this in short-hand as

Then, letting , we can compare  to

 strictly stochastically dominates , so . Then the rest of the money pump argument follows, replacing    with , and assuming "Find out " only works sometimes, but enough of the time that  still has infinite expected utility.[1] You don't know ahead of time when "Find out " will work, but when it does, you'll switch to , which would then be , and when "Find out "  doesn't work, it makes no difference. So, your options become:

  1. you (sometimes) pay $50 ahead of time and switch to  to avoid switching to the dominated , which is a sure loss relative to sticking through with  when you do it and irrational.
  2. you stick through with  (or the conditionally stochastically equivalent prospect ) sometimes when "Find out " works, despite  beating the outcome of  you find out, which is irrational.
  3. you always switch to  when "Find out " works, which is a dominated strategy ahead of time, and so irrational.
  1. ^

    Or otherwise beats each of its actual possible outcomes.

MichaelStJules @ 2023-10-08T04:41 (+2)

On the other hand, if we can't rule out arbitrarily large finite brains with certainty, then the requirements of rationality (whatever they are) should still apply when we condition on it being possible.

Maybe we should discount some very low probabilities (or probability differences) to 0 (and I'm very sympathetic to this), but that would also be vulnerable to money pump arguments and undermine expected utility theory, because it also violates the standard finitary versions of the Independence axiom and Sure-Thing Principle.

MichaelStJules @ 2023-10-08T04:13 (+1)

I would guess that arbitrarily large but finite (extended) brains are much less realistic than infinite universes, though. I'd put a probability <1% on arbitrarily large brains being possible, but probability >80% on the universe being infinite. So, maybe actual infinities can make do with more conservative assumptions than the particular money pump argument in my post (but not necessarily unboundedness in general).

MichaelStJules @ 2023-10-08T03:58 (+1)

From my Responses section:

The hypothetical situations where irrational decisions would be forced could be unrealistic or very improbable, and so seemingly irrational behaviour in them doesn’t matter, or matters less. The money pump I considered doesn’t seem very realistic, and it’s hard to imagine very realistic versions. Finding out the actual value (or a finite upper bound on it) of a prospect with infinite expected utility conditional on finite actual utility would realistically require an unbounded amount of time and space to even represent. Furthermore, for utility functions that scale relatively continuously with events over space and time, with unbounded time, many of the events contributing utility will have happened, and events that have already happened can’t be traded away. That being said, I expect this last issue to be addressable in principle by just subtracting from B - $100 the value in A already accumulated in the time it took to estimate the actual value of A, assuming this can be done without all of A’s value having already been accumulated.

(Maybe I'm understating how unrealistic this is.)

MichaelStJules @ 2023-10-07T21:31 (+4)

For more on problems for impartiality, including involving infinite populations, and a comparison of some different utilitarian-ish views and how well-behaved they are (especially 45:40 for a table summary), see

https://globalprioritiesinstitute.org/parfit-memorial-lecture-jeffrey-sanford-russell-problems-for-impartiality/

Handout here.

Calvin_Baker @ 2023-10-16T16:47 (+2)

Hi Michael, thanks for the post! I was really happy to see something like this on the EA Forum. In my view, EAs* significantly overestimate the plausibility of total welfarist consequentialism**, in part due to a lack of familiarity with the recent literature in moral philosophy. So I think posts like this are important and helpful.

* I mean this as a generic term (natural language plurals (usually) aren't universally quantified).

** This isn't to suggest that I think there's some other moral theory that is very plausible. They're all implausible, as far as I can tell; which is partly why I lean towards anti-realism in meta-ethics.