The Last Stop on the Crazy Train

By Bentham's Bulldog @ 2025-11-05T14:21 (+16)

Crosspost of my blog

David Friedman once described some class of people as “economists only during their day job.” His basic point was that some people think like economists professionally, but totally forget everything they learned from economics when analyzing political issues. Similarly, some people are utilitarians only during their day job. They declare their support for increasing global utility, and then haphazardly mutter some convenient excuse about why that implies they should do whatever they were otherwise planning on doing.

Vasco Grilo is not one of those people.

Vasco began by saying that people should give money to the Shrimp Welfare Project. But then, when he began counting the welfare of the roughly billion soil nematodes per person, he concluded that normal GiveWell-style charities beat the Shrimp Welfare Project, because they lower soil nematode populations (though he thinks GiveWell charities are less effective at saving lives than the High-Impact Philanthropy Fund, or HIPF). And he’s not even that confident that soil nematodes have bad lives!

Now, Vasco has started to suggest that the case for saving human lives is even more lopsided if you count microorganisms, even if you assume there’s only a tiny chance they’re conscious (and give them a tiny welfare range conditional on their being conscious). This is because there are a great many microorganisms.

On the one hand, I’m kind of sympathetic to this. Part of the reason I give a sizeable portion of my monthly charitable donations to humans is because I think doing so lowers wild animal populations. But I think there’s a deeper underlying problem with this approach.

Vasco estimated that per dollar, HIPF prevents about 5 billion years of soil nematode life (and way more than that many years of bacteria life). But you know what’s a lot more than 5 billion? 10^50.

That’s the number of atoms on Earth. Now, I don’t think atoms are conscious. But Philip Goff does, and he’s pretty smart. The odds that he’s right aren’t, like, one in a googol. If you guess that there’s a 1/10^10 chance atoms are conscious, and think their welfare range is 1/10^10 of ours conditional on their being conscious, then the atoms on Earth have, in expectation, a welfare range equivalent to that of 10^30 people. Now, you might be tempted to ignore low probabilities, but I don’t think that’s very plausible, for reasons I’ve given at length here (and in light of the chaotic effects of our actions, even if you are risk averse, you probably should behave mostly as if you weren’t).
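For concreteness, here is the back-of-envelope arithmetic as a minimal Python sketch; the probability and welfare-range figures are just the illustrative guesses from the paragraph above, not estimates anyone defends.

```python
# Back-of-envelope expected moral weight of Earth's atoms, using the
# illustrative guesses from the paragraph above (not serious estimates).
atoms_on_earth = 1e50           # rough count of atoms on Earth
p_conscious = 1e-10             # guessed probability that atoms are conscious at all
relative_welfare_range = 1e-10  # guessed welfare range vs. a human, conditional on consciousness

expected_human_equivalents = atoms_on_earth * p_conscious * relative_welfare_range
print(f"{expected_human_equivalents:.0e}")  # ~1e+30 human welfare-range equivalents
```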

Now, maybe you can ignore this one because we have no information about what it’s like to be an atom. The philosophical views that say atoms are conscious also imply that their consciousness is transferred over to higher-level brains—so probably our best bet for improving atom welfare is to improve the welfare of biological organisms that we know to be conscious. But still, if atoms might be conscious, then probably we should all be thinking hard about whether we can improve their welfare.

10^50 is a lot. But you know what’s more? 2^86 billion! That’s how many conscious sub-people you might have in your brain.

Suppose it turns out that every combination of neurons which would, by itself, be enough to sustain consciousness has its own associated mind. To give an example, let’s name a random one of my neurons Fred. If Fred disappeared, all the neurons minus Fred would still form a conscious mind. So on this view, the neurons minus Fred are conscious even while Fred is present. Every combination of neurons that would be enough on its own to sustain a mind has its own mind.

And there’s something kind of plausible about this. It’s a bit weird that the existence of Fred affects whether some combination of neurons other than Fred gives rise to a unique mind. That makes consciousness weirdly extrinsic. Whether some neurons form a mind would depend on the presence of other neurons.

Now, I’m not saying this is that likely (though it does have surprisingly good arguments in its favor). But if this theory is true, it implies that brains which have about 86 billion neurons would contain around 2^86 billion conscious subsystems. It implies that a single African elephant with about 257 billion neurons has orders of magnitude more moral worth than all humans on Earth, on account of its staggeringly large number of conscious subsystems.
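For concreteness, here’s the counting in a minimal Python sketch, worked on a log2 scale since the raw numbers overflow any ordinary number type; it ignores the complication that many subsets would be too small to sustain a mind on their own.

```python
import math

# Compare conscious subsystems counted as 2^n (one mind per subset of n neurons),
# working in log2 since 2^(86 billion) overflows any numeric type.
human_neurons = 86e9
elephant_neurons = 257e9
human_population = 8e9

log2_subsystems_all_humans = human_neurons + math.log2(human_population)  # ~8.6e10 + 33
log2_subsystems_one_elephant = elephant_neurons                           # ~2.57e11

# The single elephant is ahead by ~1.7e11 doublings, i.e. ~5e10 orders of magnitude.
print(log2_subsystems_one_elephant - log2_subsystems_all_humans)
```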

And you know what’s more than 2^86 billion (or even 2^257 billion)? Infinity. That’s how many years religious people tend to think we’ll spend in hell. So if you give non-zero credence to some religion being correct, maybe you should spend all your time evangelizing.

One could keep going. The crazy train has many destinations. There are countless ways that our actions might affect unfathomably large numbers of others. If it just had one stop, you could simply do what was astronomically important on that theory. But if it has many, often pointing in completely opposite directions, then it’s hard to get off the crazy train at any particular stop.

Fortunately, I think there is a nice solution: you should just be a Longtermist.

Longtermists are those who think we should be doing a lot more to make the far future go well. Mostly, this involves reducing existential risks, because if the species goes extinct, then we won’t be able to bring about lots of future value. It also involves trying to steer institutions and values to make the future better. A future in which people have better, more humane, and more sentientist values is one that’s a lot likelier to contain astronomical amounts of value.

The future lasts billions of years, and it could sustain staggeringly large numbers of future people. For anything you could possibly imagine being worth promoting, we’ll be in a much better position to promote it in the far future. If atoms matter, we’ll be in a better position to promote atomic welfare in the far future than we are today. If the number of sub-minds grows exponentially with the number of neurons, future people with godlike technology will be in an ideal position to make super happy superminds with staggeringly large numbers of neurons.

If God gives eternal life to some people, then people probably have on average infinitely good total existences, and so increasing the number of future people, by being a Longtermist, is infinitely valuable. In fact, because the future could contain, according to Bostrom’s estimate, 10^52 happy people, it might be that each dollar given to Longtermist organizations enables on the order of 10^30 extra lives—and those extra lives have on average infinitely good total existences.
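For illustration, here is how a figure on the order of 10^30 extra lives per dollar could fall out of Bostrom’s 10^52 estimate; the per-dollar risk reduction in the sketch is a purely hypothetical placeholder I’m assuming, not a claim about any actual organization.

```python
# Illustrative arithmetic only: the risk-reduction-per-dollar figure is a
# hypothetical placeholder, not a claim about any actual organization.
potential_future_lives = 1e52      # Bostrom's estimate cited above
risk_reduction_per_dollar = 1e-22  # hypothetical drop in extinction probability per dollar

expected_extra_lives = potential_future_lives * risk_reduction_per_dollar
print(f"{expected_extra_lives:.0e}")  # ~1e+30 expected extra lives per dollar
```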

This holds most of all for all the crazy conclusions we haven’t thought of. Whatever is most important—whatever true conclusions have astronomical stakes—will be easier to affect in the far future. No matter what has value, only a tiny slice of it exists today. As Will MacAskill says in What We Owe The Future:

But now imagine that you live all future lives, too. Your life, we hope, would be just beginning. Even if humanity lasts only as long as the typical mammalian species (one million years), and even if the world population falls to a tenth of its current size, 99.5 percent of your life would still be ahead of you. On the scale of a typical human life, you in the present would be just five months old. And if humanity survived longer than a typical mammalian species—for the hundreds of millions of years remaining until the earth is no longer habitable, or the tens of trillions remaining until the last stars burn out—your four trillion years of life would be like the first blinking seconds out of the womb.

If you think about the future of humanity as like the life of a person, then Longtermism starts to look really obvious. Of course the stuff after the first few seconds would matter more than the first few seconds. Of course the first five months matter less than what comes later.
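Here, roughly, is how the quote’s numbers fall out; the past-population and average-lifespan figures below are ballpark assumptions of mine, not taken from the book.

```python
# Rough reconstruction of the arithmetic behind the MacAskill quote above.
# Past-population and lifespan figures are ballpark assumptions, not from the book.
past_humans = 100e9            # ~100 billion people have ever lived (ballpark)
avg_lifespan_years = 40        # ballpark average -> ~4 trillion past life-years
past_life_years = past_humans * avg_lifespan_years

future_population = 8e9 / 10   # "falls to a tenth of its current size"
species_lifespan_years = 1e6   # "as long as the typical mammalian species"
future_life_years = future_population * species_lifespan_years  # ~800 trillion

past_share = past_life_years / (past_life_years + future_life_years)
print(f"{1 - past_share:.1%} of the combined life still ahead")   # ~99.5%
print(f"{past_share * 80 * 12:.1f} months into an 80-year life")  # ~5 months
```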

The future could have unimaginably awesome technology of a sort that we in the present can barely grok. If we have good values and godlike technology, then we’ll be in a better state to address whatever it is that ultimately matters most. To those who care about conclusions with astronomical stakes, then, steering the world towards such a future should be the top priority.


titotal @ 2025-11-06T10:29 (+7)

Longtermism doesn't get you out of the crazy train at all. In a lot of crazy train frameworks, the existence of people is net negative, so a large future for humanity is the worst thing that could happen. 

This is one of the reasons I'm concerned about people taking these sorts of speculative expected-value calculations too seriously: I don't want someone trying to end humanity because they futzed up on a math problem. 

Jim Buhler @ 2025-11-07T08:39 (+1)

In a lot of crazy train frameworks, the existence of people is net negative, so a large future for humanity is the worst thing that could happen.

Curious to know why you think these frameworks are crazier than the frameworks that say it's net positive.

Or are you saying it's too crazy in both cases and that we should reduce extinction risks (or at least not increase them) for non-longtermist reasons?

Jim Buhler @ 2025-11-06T08:57 (+6)

I don't see how longtermism solves this. It doesn't cancel the argument according to which you should believe, e.g., what matters most is conscious sub-people you might have in your brain. It just adds "in the long-term" to it.

What makes you believe reducing x-risks (or whatever longtermist project) does more good than harm, considering all sub-people in the long-term? (or atoms, or beneficiaries of acausal trade, or whatever.)

My preferred solution to the crazy-town problem fwiw: modeling our uncertain beliefs with imprecise probabilities. I find this well-motivated anyway, but this happens to break at least the craziest Pascalian wagers, assuming plausible imprecise credences (see DiGiovanni 2024).

Anthony DiGiovanni @ 2025-11-06T15:52 (+2)

this happens to break at least the craziest Pascalian wagers, assuming plausible imprecise credences (see DiGiovanni 2024).

FWIW, since writing that post, I've come to think it's still pretty dang intuitively strange if taking the Pascalian wager is permissible on consequentialist grounds, even if not obligatory. Which is what maximality implies. I think you need something like bracketing in particular to avoid that conclusion, if you don't go with (IMO really ad hoc) bounded value functions or small-probability discounting.

(This section of the bracketing post is apropos.)

tobycrisford 🔸 @ 2025-11-05T21:39 (+6)

I don't think longtermism is a nice solution to this problem. If you're open to letting astronomically large but unlikely scenarios dominate your expected value calculations, then I don't think this rounds out nicely to simply "reduce existential risk". The more accurate summary would be: reduce existential risk according to a worldview in which astronomical value is possible, which is likely to lead to very different recommendations than if you were to attempt to reduce existential risk unconditionally.

 https://forum.effectivealtruism.org/posts/RCmgGp2nmoWFcRwdn/should-strong-longtermists-really-want-to-minimize 
  

Ian Turner @ 2025-11-06T01:21 (+5)

I think this sounds nice but seems to presuppose that we know what to do to make the long term go well. The situation with AI should inform us that it's actually quite possible to go in with good intentions and, instead of making things better, actually make them worse.

Vasco Grilo🔸 @ 2025-11-05T16:50 (+5)

Thanks for the post, Matthew!

Vasco began by saying that people should give money to the Shrimp Welfare Project. But then, when he began counting the welfare of the roughly billion soil nematodes per person, he concluded that normal GiveWell-style charities beat the Shrimp Welfare Project, because they lower soil nematode populations (though he thinks GiveWell charities are less effective at saving lives than the High-Impact Philanthropy Fund, or HIPF). And he’s not even that confident that soil nematodes have bad lives!

Meanwhile, I have become very uncertain about whether increasing agricultural land, such as by saving human lives, increases or decreases the number of soil nematodes. I recommend decreasing the uncertainty about effects on soil animals and microorganisms by making donations to Rethink Priorities (RP) restricted to projects on soil animals and microorganisms.

Vasco estimated that per dollar, HIPF prevents about 5 billion years of soil nematode life (and way more than that many years of bacteria life). But you know what’s a lot more than 5 billion? 10^50.

The future may have 10^50 QALYs of value, but I doubt one could increase future welfare by 10^50 QALY/$. Are you aware of any reasonably empirical quantitative model estimating the increase in welfare per $ accounting for longterm effects?

10^50 is a lot. But you know what’s more? 2^86 billion! That’s how many conscious sub-people you might have in your brain.

If welfare per human-year were proportional to 2^"number of neurons", a person with one more neuron than another would have 2 times as much welfare per human-year.