Better Futures Discussion Thread: With Fin Moorhouse
By Toby Tremlett🔹, finm @ 2026-04-20T08:59 (+40)
This week, we are highlighting Forethought's Better Futures series. To make the future go better, we can either work to avoid near-term catastrophes like human extinction or improve the futures where we survive. This series from Forethought explores that second option.
Fin Moorhouse (@finm), who authored two chapters in the series (Convergence and Compromise, and No Easy Eutopia) along with @William_MacAskill, has agreed to answer a few of your questions.
You can read (and comment on) the full series on the Forum. In order, the chapters are:
- Introducing Better Futures
- No Easy Eutopia
- Convergence and Compromise
- Persistent Path-Dependence
- How to Make the Future Better
- Supplement: The Basic Case for Better Futures
Leave your questions and comments below. Note that Fin isn't committing to answer every question, and if you see someone else's question you can answer, feel free to do so.
Wei Dai @ 2026-04-25T20:05 (+21)
For example, in What We Owe The Future, Will said he thought that the expected value of the future, given survival, was less than 1% of what it might be. After being exposed to some of the arguments in this essay, he revised his views closer to 10%; after analysing them in more depth, that percentage dropped a little bit, to 5%-10%.
[...]
However, it's unlikely to me that companies will in fact produce morally uncertain AIs that are motivated by doing good de dicto. They probably won't have thought about this issue, and won't be motivated by trying to improve scenarios in which humanity is disempowered.
Given this combination of views, I'm surprised that Will doesn't support what @Holly Elmore ⏸️ 🔸 calls "Pause NOW" and instead wants to see a pause later (after we have human-level AI). I'm curious whether your own views are similar or how they differ from Will's. (My own estimate of the "expected value of the future, given survival" is similarly pessimistic, but I'm reluctant to put it into numbers because I'm very unsure how to quantify it.)
Aside from what Holly said in the linked comment, which I agree with, another argument more relevant to the current discussion is that many opportunities for making the future better seem to exist during the AI transition, including its early parts. By not pausing ASAP (and given that we currently have few resources for such interventions), we're permanently giving up these opportunities. Conversely, by pausing NOW, we buy more time to think and strategize about how to better intervene on these opportunities, or otherwise lay the groundwork for them.
For example, during the pause, we could:
- Try to solve metaphilosophy, or otherwise think about how to improve AI philosophical competence or moral epistemology.
- Try to get AI companies to "think about this issue" (of morally uncertain AIs that are motivated by doing good de dicto).
- Research ways to make such AIs safer from our (human) perspective so that there's less of a tradeoff between safety and Better Futures.
- Spread the idea of Better Futures generally so that when AI development resumes, there will be more people aware of and working on these issues.
Such interventions could mean the difference between the first human-level AIs being competent and critical moral/philosophical advisors, or independent moral (and safe) agents, versus their uncritically doing what humans seem to want and/or giving bad/incompetent/sycophantic "advice" (when humans think to ask for it); this seemingly could make a big difference to how well the future goes.
What do you think about this argument, and overall about pause now vs later?
Tom_Davidson @ 2026-04-30T16:30 (+4)
Thanks for this.
In each of the examples you give, I'm thinking that the pause would be significantly more beneficial (plausibly by 10x) if we pause when AI is already capable enough that it can significantly help us solve the issue. In general, they seem like the kinds of issues where AI could massively accelerate progress.
So if I'm choosing between an international pause now vs an international pause in 2 years, I choose the latter. (I assume we're talking about international pauses here rather than just the U.S., but let me know if you also support a unilateral pause now!)
I do take Holly's point that it might be damaging to quibble about exactly when we pause if that reduces the chance of a pause happening at all. And today we are very far from a pause actually happening, and one may well be needed in two years' time, so I definitely support efforts to get us closer to a pause!
I'm hesitant about saying "pause now" because I actually think a different policy might be much more effective. But I think a world where we were about to do an international pause would be better than the actual world.
(I want to think more about this topic and all of this is very tentative.)
Wei Dai @ 2026-05-02T18:38 (+6)
In each of the examples you give, I'm thinking that the pause would be significantly more beneficial (plausibly by 10x) if we pause when AI is already capable enough that it can significantly help us solve the issue.
Why assume that there can only be one pause? Pausing now could make a later pause both more likely and more useful, by building the infrastructure and precedent for pausing, and by making subsequent AIs more aligned and differentially more productive in areas that we care about. If we end the first pause only after we've solved the problem of building aligned AIs that are philosophically and strategically competent, that would seemingly make subsequent pauses much easier.
I wonder if you're thinking that we won't be able to pause long enough to make significant progress on these problems? I can see that if we only have the "willpower" for a single short pause, then it becomes unclear when to best use it.
In general, they seem like the kinds of issues where AI could massively accelerate progress.
I have been warning for several years that AI could be differentially bad at philosophy and long-horizon strategy (due in part to AI training requiring massive amounts of training data and/or fast and cheap feedback loops, which are lacking for these fields, and in part to lack of understanding of e.g. metaphilosophy). So if we don't pause now (and use the time to fix this issue) then by the time we do pause, we'll likely have AIs that can accelerate other fields (such as math/coding/science/tech and manipulating humans) much more than the fields that are crucial for Better Futures.
Worse, we may end up with AIs that decelerate (in an absolute sense) hard-to-verify fields like philosophy and long-horizon strategy, because these AIs are better at coming up with plausible-sounding ideas and arguments, at convincing humans of their truth, or at persuading humans that their own bad ideas are actually good (which is already being reported as "AI psychosis" and "sycophancy"), than at making real progress in these fields.
Tom_Davidson @ 2026-05-15T13:12 (+2)
Sorry for the slow reply!
Thanks, this is a helpful perspective.
I've normally thought from a frame of "we've got limited chips to spend on pausing, so when is it best to spend them?". I think this frame is reasonable if you're worried about irresponsible developers catching up, or about tradeoffs with the current generation's desire to survive.
But it is true that a pause today might make a pause in the future more likely.
On the other hand, it could also make it less likely if people perceive that nothing concretely useful comes out of it, which is my worry with pausing today. I think ~nothing useful would have come from pausing shortly after GPT-4 was released.
If we end the first pause only after we've solved the problem of building aligned AIs that are philosophically and strategically competent
Do you think this is possible with today's AI capabilities? I'd have thought AI can't match human philosophy and strategy yet, but we are definitely getting closer.
Also, how do you think about whether to slow down vs pause, holding fixed the total delay relative to 'full speed ahead'? I'd have thought slowing down is better for iterating on alignment as problems arise and for building philosophically competent AIs.
So if we don't pause now (and use the time to fix this issue) then by the time we do pause, we'll likely have AIs that can accelerate other fields (such as math/coding/science/tech and manipulating humans) much more than the fields that are crucial for Better Futures.
Interesting. I normally expect AI to accelerate philosophy and strategy less than math/coding but more than science/tech. Science/tech are limited by experimental bottlenecks, whereas for philosophy the only input is cognitive labour. But you're right, if AI can't do philosophy/strategy properly, it won't speed it up at all! So far, AI systems have been pretty good at these skills though?
James Brobin @ 2026-04-20T17:17 (+10)
Hi Fin,
I have a lot of questions, so I figured I would just share all of them and you can respond to the ones you want to.
- I think Forethought is a super cool institution. What advice would you have for someone who wanted to work there as a researcher? Do you think it's important to have a strong understanding of how LLMs work?
- I made this post where I categorized flourishing cause areas based on "How To Make The Future Better." I thought I'd share. I'm curious if this categorization generally aligns with how you think about the problem.
- Locking-in one’s values
- Ensuring the future is aligned with the correct values
- Working towards viatopia
- Promoting futures with more moral reflection
- Improving the ability for people with different views to get their desired futures
- Ensuring future people are able to create a good future
- Keeping humanity’s options open
- Improving global stability
- Improving future humans' decision-making
- Empowering responsible actors
- Speeding up progress
- I made this post which is an overview of longtermism's ideas, writings, individuals, institutions, and history. I thought I'd share since you made the longtermism website.
- The Better Futures series assumes that the future will be net-positive by default. To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us, and that, as a result, we should expect the future to be neutral in expectation. Do you agree with this logic or do you think the future will be net-positive by default? Additionally, why?
- Currently, there are a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that we should expect the post-AGI future could go in a very broad range of ways and that we should prepare for the many different ways it could go. At the same time, I get the sense that Forethought has a very specific vision about how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering: how do you decide which ideas you think are likely, and do you have any measures in place to ensure you're receiving criticism of your ideas so you don't create an epistemic bubble?
- I understand that you have done some work related to space governance. A criticism I have of working in this field is that (1) it seems to have been very intractable due to the lack of space treaties, (2) if any great power has a decisive advantage, global treaties won't matter, (3) even if you are able to get a law or treaty passed, corporate or state interests could easily override these laws later on, and (4) there's probably a low chance of even getting into a position where you could influence this stuff. As such, I'm wondering, if you think it's valuable for additional people to work in the field, why do you think this?
- It seems like longtermism is an unhelpful idea since it requires people to believe that our actions could persist for millions of years. I personally am pretty skeptical of this, although I do think it is possible. It also seems like the idea has been somewhat harmful to EA as a movement since people can always point out that some of the founders of the movement are focused on helping people millions of years from now, which sounds pretty crazy. I'm wondering if you agree with this assessment.
- In "How To Make The Future Better," MacAskill argues that we should make AIs encourage humans to be good people and use them as a source of moral reflection. This seems like it could be deeply problematic in case moral sense theory is true, but AIs lack a moral sense. Do you agree with this?
finm @ 2026-04-24T18:53 (+9)
Thanks James!
What advice would you have for someone who wanted to work there as a researcher?
Some things I appreciate in my colleagues: having some discernment for which questions or ideas are most important, rather than just conceptually interesting but not urgent; being able to contribute to group conversations by driving at cruxes, being willing to ask naive questions and avoiding the impulse to sound clever for the sake of it, and being able to spot and entertain "big if true" hypotheses; and being able to clearly communicate ideas where you often don't have an especially deep literature to draw on.
Do you think it's important to have a strong understanding of how LLMs work?
I think it's important to understand the fundamentals of how AI works, including some of the theory. I don't think it's important to have deep technical knowledge of LLMs, unless you can see why those details could end up being relevant for macrostrategy.
On your second question, many of those points seem good to me. I'll single out "Locking-in one’s values" since I've been thinking about it recently. It seems to me that some people roughly think that great futures are futures which resemble our own (or which carry on our values) in many particular ways. In particular, maybe great futures are futures which are recognisably human in their values. Inhuman futures, like futures where AI successors call the shots, might just seem empty of what we today care about; even if they involve a lot of moral reflection and nothing morally offensive from a human perspective. We could call this a "humanity forever" view.
On the other hand, some people roughly think that great futures are necessarily futures which are radically different from humanity today, including in the values which guide it, and perhaps the kind of actors living there. See Dan Faggella on the "Worthy Successor" idea (and here), which I see as one version of this view.
Both these views care about preventing obvious catastrophes from AGI, but it seems to me like they might end up disagreeing quite profoundly on what should come next. It's possible that there is opportunity for trade and compromise between the two views, but in any case this strikes me as a potentially important difference in approach to post-AGI futures.
To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us, and that, as a result, we should expect the future to be neutral in expectation.
Firstly, you're right that the series doesn't discuss negative futures, but I should say that's not because Will or I think they are worth ignoring, or very unlikely in absolute terms. We didn't discuss them more just so we could make a more focused argument about how to think about making good futures even better.
I think your point (quoted) touches on the difference I mentioned above between "humanity forever" views and views which are more open to change in values. I think it's coherent to take a view such as:
- You want to value whatever is ultimately valuable. You're unsure what that is, but you trust the processes which guide the future to converge on it;
- You want to value whatever you would value under some idealised process of reflection, and you think the processes which guide the future will emulate idealised reflection on your own values closely enough;
- You value roughly what you currently value. But you're scope-insensitive: in order to think we've reached a great future, you just need your neck of the woods to be how you want it, and the rest of the future to avoid things you think are morally repugnant. You expect almost the entire future not to be guided by what you value; but you're confident you can secure the things you need in order to be satisfied the future is great, and you're confident the rest of the future will avoid the morally repugnant (perhaps through trade);
- Similar to above, but what you personally value is cheap by the lights of other value systems which guide the future, and vice-versa. So you are confident you can secure a great future by your lights through trade.
Better Futures argues that these views may be less tenable than they first appear, but I think they're not totally doomed.
Additionally, I would point out a potential "missing mood" in the framing we adopt of cardinally quantifying the value of the future as a fraction of the value of the best feasible future. This suggests futures which are only, say, 10^-5 of the value of the best feasible future are barren, hollow, 'neutral'. But this would be a mistake: potentially our own world, even with all the harm and pain removed, is achieving a tiny fraction of what a great future could achieve. So we might imagine (as Better Futures points out) a "common-sense eutopia" which is radically better than the world today, but still only a fraction as good as things could get. That could be true, but it doesn't undermine the value of such a future, which would (by stipulation) still be wildly better than the world today! All the joy and freedom and discovery and so on, in this near-zero world, would be entirely real and could dwarf all the good we have achieved and enjoyed so far.
Currently, there are a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that we should expect the post-AGI future could go in a very broad range of ways and that we should prepare for the many different ways it could go.
Maybe I'm misreading but I don't think it follows from uncertainty about how things go that many different things will actually happen. For example, if you're uncertain who wins a political election, you don't infer that everyone wins and shares power.
At the same time, I get the sense that Forethought has a very specific vision about how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering: how do you decide which ideas you think are likely, and do you have any measures in place to ensure you're receiving criticism of your ideas so you don't create an epistemic bubble?
I'm in a few minds about this, so I'll just list some reactions:
- You say the Forethought vision is "very specific", and then you list some claims (e.g. "small sets of actors could use AGI in malicious ways") which seem… surprisingly anodyne? Or in particular it doesn't strike me as egregious or unusual to put a decent amount of credence in those claims being true. I think that's all you need to take them seriously and work on them. Indeed I don't myself feel extremely confident in any one of them.
- I think there is a way to do criticism performatively, where you invite people you already know will disagree, for reasons you are already familiar with. I don't think that is totally useless, because performing these dialogues in public can be useful for other people to decide what they think.
- On the other hand, I think the best kind of outside criticism for the sake of throwing out bad ideas often isn't very flashy, and can look like outside experts telling you "this isn't really how [my domain of expertise] works, so [ABC] seems confused but [XYZ] seems plausible".
- From my perspective there is quite a lot of internal disagreement, including between broad worldviews, although that's relative.
- Speaking personally, I worry a bit that there are components of the implicit shared Forethought worldview which are tricky to pin down from the outside, and thus more likely to influence research decisions in an unscrutinised way. I do think this is a generic problem, and the most useful position from which to notice and communicate these implicit beliefs straddles being enough of an insider to have context and enough of an outsider to see alternatives.
- On the other hand I think you do at some point just need to pick some assumptions and some worldview and work within it to make any progress at all. In my experience simply pointing out that those assumptions could be wrong is often less valuable than proposing more fleshed-out alternative assumptions and worldviews, which themselves can be criticised and so on…
I'm wondering, if you think it's valuable for additional people to work in the field, why do you think this?
We are, at Forethought, running a research programme on space right now, which I guess reflects a view that it does seem worth investigating more. I don't think the central case for space runs through the hope for binding international treaties, because I agree that we shouldn't expect them to hold. I think there are a few other reasons to want to investigate space. One is that the space economy could be somewhat relevant to the course of AGI development, for example if orbital data centres are a big deal, or because of the role of sensing satellites in peace and security.
Another is that most of the physical stuff is in space. At some point it seems likely to me, if the human project continues at all, that most of the important stuff will also eventually be in space. AGI + automated manufacturing + rapid R&D progress suggests that expanding into space could happen in the time span of decades rather than centuries or millennia; and that seems generically worth planning for. And it seems like there are some policy levers which don't route through international treaties.
To be clear I don't currently think that space governance should be the next big cause in EA or anything like that.
It seems like longtermism is an unhelpful idea since it requires people to believe that our actions could persist for millions of years.
This feels like a slightly odd sentence construction, because you seem to be saying that longtermism is unhelpful because it requires people to believe one of its central claims. I agree it's contentious, and I'm certainly not confident that the effects of our actions could persist for millions of years, but it seems plausible enough that the anticipated long-term effects of our actions should meaningfully weigh on what we prioritise, at least where you can tell a story about how your decisions could have some systematic long-run effects.
It also seems like the idea has been somewhat harmful to EA as a movement since people can always point out that some of the founders of the movement are focused on helping people millions of years from now, which sounds pretty crazy.
I do think that is plausible. Although, to state the obvious, there is a difference between which ideas have good or bad PR effects when you say them out loud, and which ideas are actually true or important. So questions about communicating longtermist ideas are, naturally, different from the question of whether longtermist ideas are worth taking seriously as ideas.
And then, I also want to say: the full-on version of longtermism — that the very long-run effects of our actions are overwhelmingly important for what we prioritise — just doesn't feel especially necessary for working on most or even all of the topics that Forethought is focused on. There is a far more common-sense and mundane reason to focus on them, which is that they could matter enormously within our own lifetimes! Another way of putting that is that when trying to prioritise between possible focuses within Forethought, my personal view is that longtermism is rarely a crux. Maybe my colleagues disagree with that; obviously I'm not speaking on their behalf.
In "How To Make The Future Better," MacAskill argues that we should make AIs encourage humans to be good people and use them as a source of moral reflection. This seems like it could be deeply problematic in case moral sense theory is true, but AIs lack a moral sense. Do you agree with this?
I'm not sure I'm entirely following your points but I don't see a strong reason why AIs or non-human entities could not in principle engage in genuine moral reasoning in the same way that humans do. Maybe instead the AIs will do something which superficially resembles real moral reasoning, but which is closer to just telling humans what they want to hear.
I do think that is not a crazy thing to worry about because it is much easier to train some skill where an uncontroversial and abundant source of ground truth data exists. Moral reasoning is not one of those domains because people often don't agree on what good moral reasoning looks like. So I think there is much work to be done on that front although I'm not sure that answers your question.
Thanks again for your questions!
Rafael Ruiz @ 2026-04-28T18:04 (+9)
Hi Fin, sorry I'm a bit late with my question, I was rereading parts of the Better Futures series. First of all, I have to say it's one of my favorite article series I've ever read, and I'll be citing it in my own work going forward. The easygoing-versus-fussy distinction in particular is something I'm finding really interesting to dig into. :) Would love to discuss it in more detail at some point.
I wanted to push on the metaphor of sailing to an island, which appears at the start of No Easy Eutopia, but my question is going to take some preamble (sorry!).
I find myself preferring a slightly different picture. Rather than thinking of eutopia as an island we're navigating to, I tend to think of society as the ship itself, drifting through a sea of value over time (a topography of better and worse regions we're already moving through). Societal change feels to me more like a search through uncharted moral territories than an expedition to a specific destination. On that picture, the priority seems more likely to be "how do we improve the ship, so that society reliably moves toward better regions of the sea?"
A couple of clarifications. First, I grant fussiness: I agree most plausible axiologies locate near-best futures in a very narrow region (I lean towards total hedonistic utilitarianism, myself). Second, I'm not a quietist; in my own work I'm defending what I call moral niche construction, a fairly interventionist view on which we should actively reshape institutions, technologies, and even our own moral psychology (through things like AI moral decisionmakers or bioenhancement) to push society toward better regions. So the disagreement isn't really about ambition, either.
Where I want to press is the following. In the ship-improvement picture, I can grant openly that we probably will never reach eutopia. We end up in a high-value region of the sea (a local optimum), much better than where we are now, plausibly very good in absolute terms, but not the narrow island.
That sounds like a concession, but on rereading Convergence and Compromise, it looks to me like the target-pursuit picture probably doesn't reach the island either: you mention how WAM-convergence is unlikely, partial convergence plus trade faces serious obstacles, value-destroying threats can eat most of the value... So the comparison isn't "guaranteed eutopia versus probably-not-eutopia", since you yourself seem pretty pessimistic. It's two orientations that both probably miss the island, where one delivers reliable improvements to our current region of the sea along the way, and the other keeps optimizing toward a target it probably won't hit. And, well, if you miss the moon, you don't really land among the stars... you drift in empty space and die, haha.
(There are similar points in Jerry Gaus's The Tyranny of the Ideal, and in recent debates between ideal theory and non-ideal theory in moral and political philosophy.)
So, finally, my question is: given that target-pursuit probably doesn't reach eutopia either, on the series' own analysis, why is the practical orientation toward the narrow target rather than toward improving our current region of the sea (e.g. pursuing very high, plausibly easy-to-reach, and resilient local optima)? What's the case for target-pursuit as a practical orientation, once we factor in that we will probably fail? Is it a case akin to fanaticism, where, if we land on the island, the payoff would be huge?
(Apologies in advance if this is addressed somewhere in the series, my memory context window isn't large enough to hold the whole essay series at once!)
finm @ 2026-05-06T16:38 (+6)
Thanks for the kind words, Rafael!
I can see how the island analogy is confusing:
- It suggests that the task of society is to reach some very particular kind of state(s), and otherwise it (presumably) flounders.
- One way that's inaccurate is that you can ask how well things are going on the ship, before it reaches the island, or after it misses the island.
- It's also unclear that the landscape of possible futures looks like a relatively discrete region of "success" and its complement, a region of failure.
- Finally, the value of the expedition, whether or not it reaches the island, might depend on the course which the ship took. In jargon, the value of society at a time might be stateful with respect to its history, or the value over time might not be time-separable. In words, it's about the journey…
So if that's what you had in mind, I agree.
And then as I read you, the worry is something like: "Better Futures argues that great futures are likely to be difficult to reach. So even if great futures are somehow many times better than mediocre futures, shouldn't we plan to make mediocre futures slightly better (or less bad), rather than throw a Hail Mary at the best futures?"
I wonder if there are two versions of this worry. The first might be: "the strategy which maximises the chance of passing some (or any) 'great' threshold of value is meaningfully worse, in expected value terms, than other strategies". In particular, it could be that the max(p(great future)) strategy involves doing common-sensically bad stuff which has a slim chance of paying off and otherwise does harm. For example, strategies which involve massively centralising power and then hoping that whoever holds all the power makes the right decisions.
I think some version of this worry is decently plausible. But I don't think anyone thinks that you absolutely should take whatever course maximises the chance of a great future. Rather, I think the rough idea in Better Futures is that, as a heuristic, it seems generally worthwhile to choose actions in light of whether they make great futures more likely. This is similar to Bostrom's "maxipok" principle, which Will and Guive Assadi argue against here. But both principles are derived from asking "what heuristic seems like it does a good job at approximating trying to take the max-EV option in a more granular way", rather than from suggesting a direct alternative. So the question is whether it's a good approximation of max-EV, rather than a good alternative to it.
The second version of the worry is something like: "on the Better Futures worldview, the max-EV strategy may well involve a Hail Mary with a slim chance of a great future, and otherwise a very likely outcome which is mediocre or bad. And this seems fanatical, or otherwise wrong."
Here again I would be pretty sympathetic, and it might be useful to distinguish which actions are EV-maximising from which actions are right. If you are a consequentialist (with caveats and asterisks), you think the right action is the EV-maximising one. But we don't try to argue that the right action is always the EV-maximising one and vice versa (and IIRC there is a footnote trying to make this clear-ish). As with other cases of fanaticism, you could think that the plan which results in the most expected value is not the right plan!
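To make that tension concrete, here is a minimal toy sketch with entirely made-up numbers (they are not from the series), showing how a "Hail Mary" strategy can come out ahead on expected value even though it almost never delivers a good outcome:

```python
# Toy illustration of the fanaticism worry; all probabilities and values are made up.
# Values are expressed as fractions of the value of the best feasible future.

def expected_value(outcomes):
    """Sum of probability-weighted values for a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

hail_mary = [(0.01, 1.0), (0.99, 0.001)]   # 1% chance of a near-best future, else near-zero
incremental = [(1.0, 0.005)]               # reliably secures a modest improvement

print(expected_value(hail_mary))    # 0.01099 -> higher expected value
print(expected_value(incremental))  # 0.005   -> far more likely to deliver its value
```

On these hypothetical numbers the Hail Mary roughly doubles the expected value of the incremental strategy while leaving a 99% chance of a near-zero future, which is exactly the kind of result the fanaticism objection points at.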
Practically speaking, my hope is that aiming for truly great futures, and just trying to improve incrementally on 'default' futures, recommend quite similar and compatible courses. For example, it seems like power concentration looks pretty bad for both, in practice.
Rafael Ruiz @ 2026-05-15T13:56 (+1)
That's really great! You posed my question better than I did, and answered it in more detail than I was expecting. Thanks a lot. :)
RE: "my hope is that aiming for truly great futures, and just trying to improve incrementally on 'default' futures, recommend quite similar and compatible courses."
For what it's worth, it might be worth your time to check out Gerald Gaus's "The Tyranny of the Ideal", particularly his framework around page 82, with what he calls "The Choice": that sometimes we might face scenarios in which the local optimum is in the opposite direction from the global optimum. I discovered it recently and I thought it was pretty good for thinking about these kinds of issues.
JKM @ 2026-04-26T12:36 (+2)
Forethought's view that improving the future conditional on survival is more important than ensuring survival goes against the view, dominant in EA for many years, that we need to reduce extinction risk. Two questions on this:
- How far away from the optimal allocation of (longtermist) resources do you think the community currently is?
  - For example, should we be radically reducing investment in things like addressing biorisk or nuclear risk? Do we need to be rethinking the allocation of resources within AI risk?
- Do you think there is anything that is being prioritized in the community that is actually harmful?
  - For example, could certain AI alignment approaches be bad for future digital sentience?
Jari @ 2026-04-22T14:39 (+1)
I really liked the series :)