Which questions can’t we punt?

By Lizka, Owen Cotton-Barratt, rosehadshar @ 2026-02-23T19:17 (+72)

We think AI strategy researchers should prioritize questions related to earlier parts of the AI transition, even when that means postponing work on some questions that ultimately seem more important.

In brief, our case for taking this “just-in-time” perspective is:

Earlier-period-focused strategy is sometimes treated as resolved (as if the only remaining work is implementation and later-stage questions). We think this is wrong. We sketch out a tentative list of high-priority questions, organized into the following clusters:

  1. Understanding the early period — What is the likely trajectory of AI? What will early transformative impacts be? How will this affect people and institutions?
  2. Preparing for early challenges — Are there meaningful acute risks (misalignment/coups/bio) early on? How much do we need to be concerned about power concentration, and how do we need to adapt checks and balances?
  3. Looking ahead (what would set up later periods well?) — What are the likely trajectories later in the AI transition? What determines how well that goes? Which earlier choices might be hard to reverse past that point?
  4. Exploring the early period’s levers — What might we want to automate earlier? Our collective capacity to make sense of the world seems useful; how can we preserve and enhance it?
  5. Clarifying foundational/ontological questions — How should we characterize AI systems, present and future? How should we think about risks?

We also briefly outline some AI strategy questions that, on this view, should be postponed.[2]

An illustration of this perspective:

A diagram titled "Which questions should we be trying to answer?", with "what period in time the question is about" on the X axis and "type of question; high-level to specific" on the Y axis. As we look further out, we should be punting more of the more specific questions, and only the higher-level ones should remain. The clusters above are loosely mapped onto this diagram.

A “just-in-time TAI strategy” perspective

The core of our perspective is, roughly: questions that can be answered later (when there will be far more and better research capacity to throw at them) should generally be punted, so that scarce human research attention goes to the questions that must be answered early.

We think many people are on board with something like this perspective (which isn’t to say it’s uncontroversial!), but that the implications haven’t always been properly drawn out. This is our attempt to do so.

To put it another way, we are asking “what do human minds really need to understand, over the next few years?”; this involves ruthlessly setting aside every question that can be set aside. We think that this is the appropriate stance for a good chunk of our energy in orienting to the future. It contrasts with other implicit orientations we think people sometimes adopt:

(We think each of these has its place, and we’re not claiming that everyone should stop using them. But we do think that more AI strategy right now should take the perspective above.[5])

An aside: it will remain true (even deep into an intelligence explosion) that waiting longer will yield more and better cognitive labor. There will therefore be reason to continue the just-in-time strategy: things which are puntable generally should be punted. However, even when your general attitude is to punt on a question, you should often make some minimal investment — doing a cheap amount of analysis to get a first-pass answer, partly as a hedge against model error in the judgements about what can be punted.

The early part of the AI transition as our responsibility & focus

From the perspective outlined above, the crucial questions to ask concern the immediate, mostly human future — the early part of the transition to advanced AI, rather than late in the transition or the AI era itself. This is the period that we know more about and have unique influence over, and the period before large quantities of AI research capacity change the possibility space for research endeavors.

For the sake of concreteness, we can think of this period ending a month or two after whenever our strategic research capacity becomes 100x greater than today.[6] An eyeballed guess might be that this will be in 5 years’ time (maybe something like “between 1.5 and 15 years from now”).

Some notes on this definition:

What does this mean for which questions we should prioritize?

With the early part of the AI transition as our special responsibility, we need to understand what it would even mean to navigate it well, and how to do that. So we want to have enough of a picture of what comes after to let us make informed choices about what position we want to be aiming for as we exit the early period.[7] And we want to understand the important dynamics and potential levers in the early period, as well as any challenges that we will need to face soon.[8]

We’ll now list more specific questions that seem important to us, with the warning that as we get more concrete we become somewhat less confident in our takes. These are grouped into five clusters, loosely related to the just-in-time framework suggested above.

a) Understanding the early period

We think many questions about getting a better picture of the early period deserve more attention:[9]

As a general rule we’ll want to focus on questions that meaningfully change the strategic landscape (in relevant scenarios) and that haven’t already received a huge amount of attention.

b) Preparing for early challenges

As we develop our picture of the early period, we can start to ask which challenges might be urgent and how we should prepare:

c) Looking ahead (to see what would help later periods)

If we want to exit the early period in a good position, we need to know what "well-positioned" means. To do that, we want to get a sense for what might happen a bit further into the future (and how that might depend on what happened during our “early” period).[10] 

We think it makes sense to focus on a middle ground here, and not think much about trajectories that take us past a medium-term “foresight horizon”. In principle, thinking about very late stages could be helpful if it allows us to back-chain to see where we want to be as we come out of the early period. But in practice we’re skeptical about this approach: it involves reasoning about a future that’s so different from our present, and back-chaining across multiple eras where the option space is so vast, that we think it’s pretty difficult to come to trustworthy conclusions.

So looking ahead could involve asking:

d) Exploring our levers

We can also work in the other direction: rather than considering where our trajectory might lead and what we want to aim for, and then working backwards, we might start from the early period itself, ask “what levers do we have?”, and then ask whether those levers are worth pulling.

e) Clarifying foundational & ontological questions

Foundational questions are less directly action-relevant. But they feed into how we think about strategy, and attending to them seems like a high priority to us.[13] There are two reasons for this:

  1. They could help us to more clearly address the questions above
    1. E.g. better concepts for thinking about how AI intersects with checks and balances might make it easier for us to come to sensible conclusions there
    2. This seems especially true as AI progress breaks long-standing assumptions / strains our concepts
  2. We might be able to automate strategic research within a fixed ontology before AI can do a good job of devising new foundations or ontologies.
    1. If so, then work we do on helping people to think with clearer concepts might be relevant for a longer period — helping us to better leverage large amounts of (jagged) cognitive labour for strategy work

We cannot give a complete list of foundational questions that might be helpful to address. A few stubs which seem appealing to us:

Which AI strategy questions does this tell us to drop?

Many research areas in AI strategy focus primarily on later-stage issues. This shouldn’t seem very surprising; these are often precisely the highest-stakes and most neglected-seeming areas. However, this does also mean that a larger fraction of the questions in these areas will be hit by the “shouldn’t we punt that?” consideration than would be the case for work on e.g. near-term implementation questions.[14]

Ultimately we think that for later-stage issues — like alignment of superintelligent AI, space governance, AI welfare, and so on — we should start by assuming that a question should be postponed and then rule it in if we have an active reason to believe it’s “timely” (rather than the other way around).

Note:

So what does deprioritizing based on this reasoning actually look like? 

Below we take some areas in AI strategy and briefly consider which questions might be in/out (these notes depend to varying degrees on various background views that we don’t justify here):[17]

A final note on asking “Which questions matter?”

These ideas grew out of a couple of discussions about “which questions really truly matter?”

Whether or not you like our answers, we think that the question is a useful one to ask, and would recommend trying to answer it yourself. Some of the prompts we used, in case they are useful for inspiration:

Thanks to various people who left comments on an earlier draft of this memo! 

  1. ^

     For the sake of concreteness, we can think of this period ending a month or two after whenever our strategic research capacity becomes 100x greater than today. See more below.

  2. ^

     Perspectives that aren’t directly about “we should focus on timely questions”, like our expectations about which AI developments we might see earlier or later, inform our views here.

  3. ^

     Related discussion here and here (and an intro to the "nearsightedness" concept here).

  4. ^

     Or a variant: “We basically know the menu of possible outcomes, and we just need to work to make the good ones more likely by helping to avoid the bad ones”.

  5. ^

     We also ignore most practical questions here. We’re trying to outline a high-level strategic view; in practice this could feed into many localized decisions people make, but engaging with the details of those decisions, while necessary, is beyond our scope here.

  6. ^

     Why “a month or two after”? Simply because the new strategic capacity needs some time to bear fruit. But for practical purposes we don’t think it’s important to engage with the nuances of the definition.

  7. ^

     There isn’t really a single “exit” moment, but it can still be helpful to think of a particular checkpoint.

  8. ^

     Of course, these are more-or-less two versions of the same goal. Handling the early period well is just a version of setting things up for the later period well. But they invite us to direct attention to different places, and we guess it is worth considering each separately.

  9. ^

     A sketch of our high-level view here: there’s a lot going on, new developments will interact with everything in confusing ways, and there are a bunch of different ways things could go. Our current understanding of this period is shallow & limited; we can do things like forecast one-dimensional questions reasonably well, but that’s not enough to find the intervention points and navigate this whole thing well.

  10. ^

     Note that we shouldn’t (and can’t) get extremely specific here — the specifics matter less when we’re just trying to pick a high-level target than when we’re trying to prepare for near-term challenges; and we’ll be too near-sighted to answer overly specific questions.

  11. ^

     We’re especially interested in these questions to the extent they might have answers that we could aim for; but perhaps thinking beyond that could help us to identify precursor states that are more likely to end up on a good track, even if they’re not there for sure.

  12. ^

     This is related to work on AI constitutions and character training / steering (and also probably this...)

  13. ^

     Note: in some cases you might expect that these questions will take longer to “cash out”. So, in terms of the diagram above, the bottom-left part (foundational questions that are concerned with the immediate future) should plausibly be greyed out.

  14. ^

     Put another way, taking this consideration into account can save us more effort on these high-stakes long-term issues than it would for naturally nearer-term areas of research.

  15. ^

    Note: Early research on later-stage questions could also improve how later-stage research goes; that seems to fit in the framework. (See also "parallelizable vs serial".)

    For some discussion of what speeding up AI uplift of research could look like in practice, see e.g. this post (which considers this question for AI safety). 

  16. ^

     Often these will be:

    - Questions that condition specifically on scenarios in which the issues do arise early. (Note that these scenarios might be pretty unusual — not the modal/median worlds in which the issue is imagined or shows up.)

    - Questions that focus on a way to start bootstrapping towards a solution to the broader problem. (A related post explores the idea of bootstrapping to "viatopia".)

  17. ^

    Some early-period-related questions seem to get swallowed up (or overshadowed) by higher-stakes-seeming and/or more idealized (easier to formalize) questions that concern later periods (but are "untimely"). I think this happens, for instance, with modeling imperfect agents or early AI impacts.

    (This feels pretty similar to problems that sometimes show up in ITN BOTECs.)

  18. ^

     A related recent post from Rose.

  19. ^

     This is a special case of the “which variables” question, but may be worth explicit attention.


cb @ 2026-02-23T21:03 (+8)

Nice post! I basically agree overall. Some rambly thoughts:

Oliver Sourbut @ 2026-02-24T09:48 (+5)

Knowing these authors, my guess on ontology is that they might say that it could be instrumental in things like

  • motivating progress in safer paradigms of AI development
  • understanding 'hybrid' human-AI-org opportunities and threats
  • figuring out what types of 'post early' conditions look favourable for dealing with the next challenges

These all look like activities with bearing on how to tackle 'early' challenges.

Oliver Sourbut @ 2026-02-24T09:53 (+5)

I like this, and it's simultaneously exciting and bewildering to take seriously the prospect of punting difficult things.

It could be worth emphasising more clearly that this is about (futurist) strategy, which is about as cognitive as things get. Other types of preparation and problem-solving have other critical inputs, and may face ~inherent delays. For those, 'punting' can look risky, especially if you expect later phases to move quite fast. This has bearing on strategy: it's worth attempting to foretell the kinds of lead-time-constrained preparation that might be needed to face upcoming challenges.

(A concrete example that stands out to me is bio monitoring and defenses. But in general I'd love to see more and richer work on characterising emerging threats, especially technological. Not necessarily from Forethought! Other kinds of lead-time-constrained activities might involve coalition building and spreading well-informed takes about important topics.)