Crucial questions for longtermists

By MichaelA🔸 @ 2020-07-29T09:39 (+104)

This post was written for Convergence Analysis. It introduces a collection of “crucial questions for longtermists”: important questions about the best strategies for improving the long-term future. This collection is intended to serve as an aide to thought and communication, a kind of research agenda, and a kind of structured reading list.

Introduction

The last decade saw substantial growth in the amount of attention, talent, and funding flowing towards existential risk reduction and longtermism. There are many different strategies, risks, organisations, etc. to which these resources could flow. How can we direct these resources in the best way? Why were these resources directed as they were? Are people able to understand and critique the beliefs underlying various views - including their own - regarding how best to put longtermism into practice?

Relatedly, the last decade also saw substantial growth in the amount of research and thought on issues important to longtermist strategies. But this is scattered across a wide array of articles, blogs, books, podcasts, videos, etc. Additionally, these pieces of research and thought often use different terms for similar things, or don’t clearly highlight how particular beliefs, arguments, and questions fit into various bigger pictures. This can make it harder to get up to speed with, form independent views on, and collaboratively sculpt the vast landscape of longtermist research and strategy.

To help address these issues, this post collects, organises, highlights connections between, and links to sources relevant to a large set of the “crucial questions” for longtermists.[1] These are questions whose answers might be “crucial considerations” - that is, considerations which are “likely to cause a major shift of our view of interventions or areas”.

We collect these questions into topics, and then progressively break “top-level questions” down into the lower-level “sub-questions” that feed into them. For example, the topic “Optimal timing of work and donations” includes the top-level question “How will ‘leverage over the future’ change over time?”, which is broken down into (among other things) “How will the neglectedness of longtermist causes change over time?” We also link to Google docs containing many relevant links and notes.

What kind of questions are we including?

The post A case for strategy research visualised the “research spine of effective altruism” as follows:

This post can be seen as collecting questions relevant to the “strategy” level.

One could imagine a version of this post that “zooms out” to discuss crucial questions on the “values” level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.[2]

One could also imagine a version of this post that “zooms in” on one specific topic we provide only a high-level view of, and that discusses that in more detail than we do. This could be considered to be work on “tactics”, or on “strategy” within some narrower domain. An example of something like that is the post Clarifying some key hypotheses in AI alignment. That sort of work is highly valuable, and we’ll provide many links to such work. But the scope of this post itself will be restricted to the relatively high-level questions, to keep the post manageable and avoid readers (or us) losing sight of the forest for the trees.[3]

Finally, we’re mostly focused on:

These can be seen as questions that reveal a “double crux” that explains the different strategies of different longtermists. We thus exclude questions on which all longtermists agree, whether in practice or by definition.

A high-level overview of the crucial questions for longtermists

Here we provide our current collection and structuring of crucial questions for longtermists. The linked Google docs contain some further information and a wide range of links to relevant sources, and I intend to continue adding new links in those docs for the foreseeable future.

“Big picture” questions (i.e., not about specific technologies, risks, or risk factors)

See here for notes and links related to these topics.

Questions about emerging technologies

See here for notes and links related to these topics.

Questions about specific existential risks (which weren’t covered above)

See here for notes and links related to these topics.

Questions about non-specific risks, existential risk factors, or existential security factors

See here for notes and links related to these topics.

We have also collected here some questions that seem less important, or where it’s not clear that there’s really disagreement on them that fuels differences in strategic views and choices among longtermists. These include questions about “natural” risks (other than “natural” pandemics, which some of the above questions already addressed).

Directions for future work

We’ll soon publish a post discussing in more depth the topic of optimal timing for work and donations (update: posted). We’d also be excited to see future work which:

Such work could be done as standalone outputs, or simply by commenting on this post or the linked Google docs. Please also feel free to get in touch with us if you are looking to do any of the types of work listed above.

Acknowledgements

This post and the associated documents were based in part on ideas and earlier writings by Justin Shovelain and David Kristoffersson, and benefitted from input from them. We received useful comments on a draft of this post from Arden Koehler, Denis Drescher, and Gavin Taylor, and useful comments on the section on optimal timing from Michael Dickens, Phil Trammell, and Alex Holness-Tofts. We’re also grateful to Jesse Liptrap for work on an earlier draft, and to Siebe Rozendal for comments on another earlier draft. This does not imply these people’s endorsement of all aspects of this post.


  1. Most of the questions we cover are actually also relevant to people who are focused on existential risk reduction for reasons unrelated to longtermism (e.g., due to person-affecting arguments, and/or due to assigning sufficiently high credence to near-term technological transformation scenarios). However, for brevity, we will often just refer to “longtermists” or “longtermism”. ↩︎

  2. Of course, some questions about morality are relevant even if longtermism is taken as a starting assumption. This includes questions about how important reducing suffering is relative to increasing happiness, and how much moral status various beings should get. Thus, we will touch on such questions, and link to some relevant sources. But we’ve decided to not include such questions as part of the core focus of this post. ↩︎

  3. For example, we get as fine-grained as “How likely is counterforce vs. countervalue targeting [in a nuclear war]?”, but not as fine-grained as “Which precise cities will be targeted in a nuclear war?” We acknowledge that there’ll be some arbitrariness in our decisions about how fine-grained to be. ↩︎

  4. Some of these questions are more relevant to people who haven’t (yet) accepted longtermism, rather than to longtermists. But all of these questions can be relevant to certain strategic decisions by longtermists. See the linked Google doc for further discussion. ↩︎

  5. See also our Database of existential risk estimates. ↩︎

  6. This category of strategies for influencing the future could include work aimed towards shifting some probability mass from “ok” futures (which don’t involve existential catastrophes) to especially excellent futures, or shifting some probability mass from especially awful existential catastrophes to somewhat “less awful” existential catastrophes. We plan to discuss this category of strategies more in an upcoming post. We intend this category to contrast with strategies aimed towards shifting probability mass from “some existential catastrophe occurs” to “no existential catastrophe occurs” (i.e., most existential risk reduction work). ↩︎

  7. This includes things like how likely “ok” futures are relative to especially excellent futures, and how likely especially awful existential catastrophes are relative to somewhat “less awful” ones. ↩︎

  8. This is about altruism in a general sense (i.e., concern for the wellbeing of others), not just EA specifically. ↩︎

  9. This refers to actions that speed development up in a general sense, or that “merely” change when things happen. This should be distinguished from changing which developments occur, or differentially advancing some developments relative to others. ↩︎

  10. Biorisk includes both natural pandemics and pandemics involving synthetic biology. Thus, this risk does not completely belong in the section on “emerging technologies”. We include it here anyway because anthropogenic biorisk will be our main focus in this section, given that it’s the main focus of the longtermist community and that there are strong arguments that it poses far greater existential risk than natural pandemics do (see e.g. The Precipice). ↩︎


mike_mclaren @ 2020-09-19T21:17 (+26)

Thanks for writing this post! I enjoyed looking over these, many of which I have also been puzzling about.

What’s the minimum viable human population (from the perspective of genetic diversity)?

After seeing this question picked up here I thought I would share some quick thoughts from the perspective of a person with a population biology/evolution background. I think this is a reasonable question to ask, but I suspect it is not as important as the other factors that go into the broader question of what is the minimum population size from which humanity is likely to recover, period. Genetics are just one factor and probably not the most important when we consider the probability of recovery after a severe drop in global population.

Suppose that after some catastrophic event the population of humanity has suddenly dropped to a much smaller and more fragmented global population, e.g. 10000 individuals scattered in ~100 groups of 100 each across the globe. While the population size is small, it will be particularly susceptible to going extinct due to random fluctuations in population size. The population size could remain stationary or gradually decline, until eventually a random event causes extinction. Or it could start increasing, until eventually it is large enough to be robust to extinction from a random event.
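To make the "random fluctuations" point concrete, here is a rough, illustrative simulation (not from the original comment; the offspring model, cap, and numbers are invented for the sketch). Each individual independently leaves 0, 1, or 2 offspring with equal probability, so the expected growth rate is exactly 1; demographic stochasticity alone, with no genetics at all, still drives small founding populations extinct far more often than large ones:

```python
import random

def extinction_probability(initial_size, generations=100, trials=200, seed=1):
    """Estimate how often a population goes extinct from demographic
    stochasticity alone. Offspring model: each individual leaves 0, 1,
    or 2 descendants with equal probability (expected growth rate 1),
    with a crude carrying-capacity cap. No genetic effects modelled."""
    rng = random.Random(seed)
    cap = 10 * initial_size  # arbitrary cap standing in for resource limits
    extinctions = 0
    for _ in range(trials):
        n = initial_size
        for _ in range(generations):
            # Sum of independent offspring counts, capped at carrying capacity
            n = min(sum(rng.choice((0, 1, 2)) for _ in range(n)), cap)
            if n == 0:
                extinctions += 1
                break
    return extinctions / trials
```

Comparing, say, `extinction_probability(5)` against `extinction_probability(50)` shows the former going extinct within the simulated window several times more often, which is the qualitative point: below some size, chance fluctuations alone make extinction likely regardless of genetic diversity.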

The idea of a minimum viable population size (MVP) from a purely genetic perspective is that, since small populations are predicted to have lower average genetic fitness due to an increase in the expression of recessive deleterious mutations ("inbreeding depression"), an increased fixation of deleterious mutations in the population, or a lack of genetic variation that would allow adaptation to environment, there is in theory a population size small enough that the population would decline and go extinct due to low genetic fitness.

But in reality, the population seems more likely to go extinct because of poor environmental conditions, random environmental fluctuations, loss of cultural knowledge (which, like genetic variation, goes down in small populations), or lack of physical goods and technology, none of which have much to do with genetic variation.

Another way in which the concept of a MVP is too simplistic is that it is defined with respect to a genetic "equilibrium" - it assumes that conditions have been stable enough that there is a constant level of genetic variation in the population. However, after a sudden population decline, we would be far from equilibrium - we would still have lots of genetic variation from the time the population was large. This variation would start to decay, but as different local populations become fixed for different variants, much of this variation would be maintained at the global level and could be converted back into local variation by small amounts of migration. Such considerations are not usually included in MVP considerations. (Some collaborators and I have written about this last point as it relates to conserving endangered species here)

Perhaps we should keep the term "minimum viable population size" but use a broader definition based on likelihood to survive, period. I see that Wikipedia uses a broad definition that includes extinction due to demographic and environmental stochasticity, but often MVP is used as in the OP to refer just to extinction due to genetic reasons, so it is important to clarify terms.

MichaelA @ 2020-09-20T06:39 (+3)

Very interesting, thanks! Strong upvoted.

in reality, the population seems more likely to go extinct because of poor environmental conditions, random environmental fluctuations, loss of cultural knowledge (which, like genetic variation, goes down in small populations), or lack of physical goods and technology, none of which have much to do with genetic variation.

This matches what I had tentatively believed before seeing your comment - i.e., I had suspected that genetic diversity wasn't among the very most important considerations when modelling odds of recovery from collapse. So I've now updated to more confidence in that view. 

I raised MVP (from a genetic perspective) just as one of many considerations, and primarily because I'd seen it mentioned in The Precipice. (Well, Ord doesn't make it 100% clear that he's just talking about MVP from a genetic perspective, but the surrounding text suggests he is. Hanson also devotes two paragraphs to the topic, again alongside other considerations.)

Perhaps we should keep the term "minimum viable population size" but use a broader definition based on likelihood to survive, period. I see that Wikipedia uses a broad definition that includes extinction due to demographic and environmental stochasticity, but often MVP is used as in the OP to refer just to extinction due to genetic reasons, so it is important to clarify terms.

I'd agree that clarifying what one means is important. This is why I explicitly noted that here I was using MVP in a sense focused only on genetic diversity. To touch on the other "aspects" of MVP, I also have "What population size is required for economic specialisation, technological development, etc.?" 

It seems fine to me for people to also use MVP in a sense referring to all-things-considered ability to survive, or in a sense focused only on e.g. economic specialisation, as long as they make it clear that that's what they're doing. Indeed, I do the latter myself here: I write there that a seemingly important parameter for modelling odds of recovery is "Minimum viable population for sufficient specialisation to maintain industrialised societies, scientific progress, etc."

Another way in which the concept of a MVP is too simplistic...

I wasn't aware of these points; thanks for sharing them :)

mike_mclaren @ 2020-09-27T21:12 (+1)

Thanks for your response and the link to your newer post and the Ord and Hanson refs. I'll just add a thought I had while reading

This is why I explicitly noted that here I was using MVP in a sense focused only on genetic diversity. To touch on the other "aspects" of MVP, I also have "What population size is required for economic specialisation, technological development, etc.?"

It seems fine to me for people to also use MVP in a sense referring to all-things-considered ability to survive, or in a sense focused only on e.g. economic specialisation...

This all makes sense, but it sounds to me to be at risk of leaving out the population/conservation biology perspective (beyond genetic considerations). A large part of what motivated me to write my original post is that I do think it is indeed valuable to use frameworks from population and conservation biology to study human extinction risk - but it is important to include all factors identified in those fields as being important; namely, environmental and demographic stochasticity, as well as habitat fragmentation and degradation, which could pose much greater risks than inbreeding and genetic drift.

MichaelA @ 2020-09-28T07:30 (+3)

Yeah, that sounds right. Those factors were left out just because I didn't think of including them (because I don't know very much about these frameworks from population and conservation biology), rather than because I explicitly decided to include them, and I'd guess you're right that attending to those factors and using those frameworks would be useful. So thanks for highlighting this :)

There are probably also various other "crucial questions" people could highlight, as well as questions that would fit under these questions and get more into the fine-grained details, and I'd encourage people to comment here, comment in the google doc, or create their own documents to highlight those things. (I say this partly because this post has a very broad scope, so a vast array of fields will have relevant knowledge, and I of course have very limited knowledge of most of those fields.)

Davidmanheim @ 2020-07-29T13:27 (+10)

This is really fantastic, and seems like there is a project that could be done as a larger collaboration, building off of this post.

It would be a significant amount of additional work, but it seems very valuable to list resources relevant to each question - especially as some seem important, but have been partly addressed. (For example, re: estimates of natural pandemic risks, see my paper, and then Andrew Snyder-Beattie's paper.)

Given that, would you be interested in having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document?

MichaelA @ 2020-07-29T13:48 (+4)

Thanks for the comment! I definitely agree that listing relevant resources would be useful, as would allowing people to collaborate on that, and in fact we've already done so! The links to relevant resources can be found in the Google docs linked to in each place where it says "See here for notes and links related to these topics."

I actually already had a link to your paper in the Google doc section on naturally arising pandemics. Though I didn't have the Snyder-Beattie paper there, so thanks for mentioning that - I've now added it.

I'd definitely encourage people to comment on those Google docs to suggest additional resources, questions, points about implications, etc.

I hadn't really thought of making this overview article itself an editable Google doc, but it seems possible that'd be useful, so here's the link to what was the draft of this post. People can feel free to continue to make comments there (or here), and I may make some changes to this post in response.

Did you have something different/more than that in mind when you said “having this put into a Google Doc and inviting people to collaborate on a more comprehensive overall long-termist research agenda document”?

Also, as more general points:

  • I definitely imagine there could be useful further collaborations building off this project (beyond just suggesting more resources and questions). And I’d guess that I and/or Convergence would be happy to work/talk with people on that (though I’m not speaking for Convergence when I say that).
  • I think making collaboratively editable Google docs of things is often a great move (this was part of the motivation for my central directory of open research questions and my database of existential risk estimates)

Paal_SK @ 2020-08-01T13:54 (+5)

This is a really useful overview of crucial questions that have a ton of applications for conscientious longtermists!

The plan for future work seems even more interesting though. Some measures have beneficial effects for a broad range of cause-areas, and others less so. It would be very interesting to see how a set of interventions do in a cost-benefit analysis where interconnections are taken into account.

It would also be super-interesting to see the combined quantitative assessments of a thoughtful group of longtermists' answers to some of these questions. A series of surveys and some work in sheets could go a long way towards giving us a better picture of where our aims should be.

Looking forward to seeing more work on this area!

Max_Daniel @ 2020-08-02T19:45 (+6)

It would also be super-interesting to see the combined quantitative assessments of a thoughtful group of longtermists' answers to some of these questions. A series of surveys and some work in sheets could go a long way towards giving us a better picture of where our aims should be.

I used to think similarly, but now am more skeptical about quantitative information on longtermists' beliefs.

[ETA: On a second reading, maybe the tone of this comment is too negative. I still think there is value in some surveys, specifically if they focus on a small number of carefully selected questions for a carefully selected audience. Whereas before my view had been closer to "there are many low-hanging fruits in the space of possible surveys, and doing even quickly executed versions of most surveys will have a lot of value."]

I've run internal surveys on similar questions at both FRI (now Center on Longterm Risk) and the Future of Humanity Institute. I've found it very hard to draw any object-level conclusions from the results, and certainly wouldn't feel comfortable for the results to directly influence personal or organizational goals. I feel like my main takeaways were:

  • It's very hard to figure out what exactly to ask about. E.g. how to operationalize different types of AI risk?
  • Even once you've settled on some operationalization, people will interpret it differently. It's very hard to avoid this.
  • There usually is a very large amount of disagreement between people.
  • Based on my own experience of filling in such surveys and anecdotal feedback, I'm not sure how much to trust the answers if at all. I think many people simply don't have stable views on the quantitative values one wants to ask about, and essentially 'make up' an answer that may be mostly determined by psychological substitution.

(These are also sufficient reasons for why I've never published the results of such surveys, though sometimes there were also other reasons.)

On reflection, maybe this isn't that surprising: e.g. how to delineate different types of AI risk is an active topic of research, and people write long texts about it; some people have disagreed for years, and don't fully understand each others' views even though they've tried for dozens of hours. It would be fairly surprising if the ask to fill in a survey would make the fundamental uncertainty and confusion suggested by this background go away.

MichaelA @ 2020-08-03T00:07 (+7)

Thanks for sharing your thoughts. I feel uncertain about how valuable it'd be to collect quantitative info about people's beliefs on questions like these, and your comment has provided a useful input/perspective on that matter.

Some thoughts/questions in response:

  1. Do you think it's not even net positive to collect such info (e.g., because people end up anchoring on the results or perceiving the respondents as simplistic thinkers)? Or do you just think it's unclear that it's net positive enough to justify the time required (from the survey organiser and from the respondents)?
  2. Do you think such info doesn't even reduce our uncertainty and confusion at all? Or just that it only reduces it by a small amount?
    • Relatedly, I have an impression that people sometimes deny the value of quantitative estimates/forecasts in general based on seeming to view us as simply either "uncertain" or "certain" on a given matter (e.g., "we'll still have no idea at all"). In contrast, I think we always have some but not complete uncertainty, and that we can often/always move closer to certainty by small increments.
    • That said, one can share that view of mine and yet think these estimates/forecasts (or any other particular thing) don't help us move closer to certainty at all.
  3. It seems to me that those takeaways are not things everyone is (viscerally) aware of, and that they're things it's valuable for people to be (viscerally) aware of. So it seems to me plausible that these seemingly disappointing takeaways actually indicate some value to these efforts. Does that sound right to you?
    • E.g., I wouldn't be surprised if a large portion of people who don't work at places like FHI wouldn't realise that it's hard to know how to even operationalise different types of AI risk, and would expect that people at FHI all agree pretty closely on some of these questions.
    • And I wouldn't be super surprised if even some people who do work at places like FHI thought operationalisations would be relatively easy, agreement would be pretty high, etc. Though I don't really know.
    • That said, there may be other, cheaper ways to spread those takeaways. E.g., perhaps, simply having a meeting where those points are discussed explicitly but qualitatively, and then releasing a statement on the matter.
  4. Would you apply similar thinking to the question of how valuable existential risk estimates in particular are? I'd imagine so? Does this mean you see the database of existential risk estimates as of low or negative value?
    • I ask this question genuinely rather than defensively. I'm decently confident the database is net positive, but very uncertain about how positive, and open to the idea that it's net negative.
Max_Daniel @ 2020-08-03T08:46 (+13)

Do you think it's not even net positive to collect such info (e.g., because people end up anchoring on the results or perceiving the respondents as simplistic thinkers)? Or do you just think it's unclear that it's net positive enough to justify the time required (from the survey organiser and from the respondents)?

Personally, I think it's net positive but not worth the time investment in most cases. But based on feedback some other people think it's net negative, at least when not executed exceptionally well - mostly due to anchoring, projecting a sense of false confidence, risk of numbers being quoted out of context etc.

Do you think such info doesn't even reduce our uncertainty and confusion at all? Or just that it only reduces it by a small amount?

I think an idealized survey would reduce uncertainty a bit. But in practice I think it's too hard to tell the signal apart from the noise, and so that it basically doesn't reduce object-level uncertainty at all. I'm more positive about the results providing some high-level takeaways (e.g. "people disagree a lot") or identifying specific disagreements (e.g. "these two people disagree a lot on that specific question").

It seems to me that those takeaways are not things everyone is (viscerally) aware of, and that they're things it's valuable for people to be (viscerally) aware of. So it seems to me plausible that these seemingly disappointing takeaways actually indicate some value to these efforts. Does that sound right to you?

Yes, that sounds right to me. I think it's a bit tricky to get the message right though. I think I'd want to roughly convey a (more nuanced version of) "we still need people who can think through questions themselves and form their own views, not just people who seek guidance from some consensus which on many questions may not exist". (Buck's post on deference and inside-view models is somewhat related.) But it's tricky to avoid pessimistic/non-constructive impressions like "people have no idea what they're talking about, so we should stop giving any weight to them" or "we don't know anything and so can't do anything about improving the longterm future".

I also do feel a bit torn about the implications myself. After all, the survey issues mostly indicate a failure of a specific way of making beliefs explicit, not necessarily a practical defect in those beliefs themselves. (Weird analogy: if you survey carpenters on weird questions about tables, maybe they also won't give very useful replies, but they might still be great at building tables.) And especially if we're pessimistic about the tractability of reducing confusion, then maybe advice along the lines of (e.g.) "try to do useful AI safety work even if you can't give super clear justifications for what you're doing and don't fully understand the views of many of your peers" is among the best generic advice we can give, despite some remaining unease from people who are temperamentally maths/analytic philosopher types such as myself.

Would you apply similar thinking to the question of how valuable existential risk estimates in particular are? I'd imagine so? Does this mean you see the database of existential risk estimates as of low or negative value?

I think a database is valuable precisely because it shows a range of estimates, including the fact that different estimates sometimes diverge a lot.

Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly. But this is probably not among the top criteria I'd use to pick research questions, and usually I'd expect most of the value to come from other sources (e.g. identifying potential interventions/solutions, field building or other indirect effects, ...). The reason mostly is that I'm skeptical marginal research will change "consensus estimates" by enough that the change in the quantitative probability by itself will have practical consequences. E.g. I think it mostly doesn't matter for practical purposes if you think the risk of extinction from AI this century is, say, 8% or 10% (making up numbers, not my beliefs). If I thought there was a research project that would cause most people to revise that estimate to, say, 0.1% I do think this would be super valuable. But I don't think there is such a research project. (There are already both people whose credences are 0.1% and 10%, respectively, but the issue is they don't fully understand each other, disagree about how to interpret the evidence etc. - and additional research wouldn't significantly change this.)

Again, I do think there are various valuable research projects that would inform our views on how likely extinction from AI is, among other things. But I'd expect most of the value to come from things other than moving that specific credence.

In any case, all of these things are very different from asking someone who hasn't done such research to fill in a survey. I think surveying more people on what their x-risk credences are will have ~zero or even negative epistemic value for the purpose of improving our x-risk estimates. Instead, we'd need to identify specific research questions, have people spend a long time doing the required research, and then ask those specific people. (So e.g. I think Ord's estimates have positive epistemic value, and they also would if he stated them in a survey - the point is that this is because he has spent a lot of time deriving these specific estimates. But if you survey people, even longtermist researchers, most of them won't have done such research, and even if they have lots of thoughts on relevant questions, if you ask them to give a number they haven't previously derived with great care they'll essentially 'make it up'.)

MichaelA @ 2020-08-04T02:22 (+5)

Thanks, that's all really interesting.

I think I largely agree, except that I think I'm on the fence about the last paragraph.

Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly. 

I agree with what you say in this paragraph. But it seems somewhat separate from the question of how valuable it is to elicit and collate current views?

I think my views are roughly as follows: 

"Most relevant experts are fairly confident that certain existential risks (e.g., from AI) are substantially more likely than others (e.g., from asteroids or gamma ray bursts). The vast majority of people - and a substantial portion of EAs, longtermists, policymakers, etc. - probably aren't aware that experts think that, and might guess that the difference in risk levels is less substantial, or be unable to guess which risks are most likely. (This seems analogous to the situation with large differences in charity cost-effectiveness.) Therefore, eliciting and collecting experts' views can provide a useful input into other people's prioritisation decisions.

That said, on the margin, it'll be very hard to shift the relevant experts' credences on x-risk levels by more than, for example, a factor of two. And there are often already larger differences in other factors in our decisions - e.g., tractability of or personal fit for interventions. In addition, we don't know how much weight to put on experts' specific credences anyway. So there's not that much value in trying to further inform the relevant experts' credences on x-risk levels. (Though the same work that would do that might be very valuable for other reasons, like helping those experts build more detailed models of how risks would occur and what the levers for intervention are.)"
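The arithmetic behind this point can be sketched with a toy calculation. (This is purely illustrative: the numbers are made up, not anyone's actual estimates, and `priority_score` is a hypothetical stand-in for an importance-tractability-fit style comparison, not an established model.)

```python
# Toy "importance x tractability x fit" proxy for comparing interventions.
# Illustrative only: made-up numbers, not anyone's actual estimates.

def priority_score(risk, tractability, fit):
    """Hypothetical expected-impact proxy: the product of the three factors."""
    return risk * tractability * fit

# Doubling the risk estimate (the hard-won "factor of two" shift)
# doubles the score...
low_risk = priority_score(risk=0.05, tractability=0.3, fit=0.5)
high_risk = priority_score(risk=0.10, tractability=0.3, fit=0.5)
ratio_from_risk = high_risk / low_risk  # ~2x

# ...but a plausible 10x difference in tractability or personal fit
# swamps that factor-of-two shift in the risk estimate.
tractable = priority_score(risk=0.05, tractability=3.0, fit=0.5)
ratio_from_tractability = tractable / low_risk  # ~10x
```

On these (made-up) numbers, further refining the risk estimate moves the comparison far less than the differences in other decision factors already do, which is the sense in which the marginal value of sharpening the credence itself is limited.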

Does that roughly match your views?

If I thought there was a research project that would cause most people to revise that estimate to, say, 0.1%, I do think this would be super valuable.

Just to check, I assume you mean that there'd be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn't cause such a revision otherwise? 

One alternative thing you might mean: "I think the best estimate is 0.1%, and I think a research project that would cause most people to realise that would be super valuable." But I'm guessing that's not what you mean?

Max_Daniel @ 2020-08-04T12:00 (+4)
Does that roughly match your views?

Yes, that sounds roughly right. I hadn't thought about the value for communicating with broader audiences.

Just to check, I assume you mean that there'd be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn't cause such a revision otherwise?

Yes, that's what I meant.

(I think my own estimate is somewhere between 0.1% and 10% FWIW, but also feels quite unstable and like I don't trust that number much.)

Milan_Griffes @ 2020-09-29T22:16 (+4)

I propose two additions to this list:

Without a solid theory of consciousness, our views about what matters will keep being based on moral intuitions and it will be hard to make progress on disputes.

MichaelA @ 2020-09-30T06:24 (+4)

[Unstructured, quickly written collection of reactions]

I agree that those two things would be valuable, largely for the reason you mention. Improving our neuroimaging capabilities could also be useful for some interventions to reduce long-term risks from malevolence.

Though there could also be some downsides to each of those things; e.g., better neuroimaging could perhaps be used for purposes that make totalitarianism or dystopias more likely/worse in expectation. (See "Which technological changes could increase or decrease existential risks from totalitarianism and dystopia? By how much? What other effects would those political changes have on the long-term future?")

---

I think the main reason I didn't already include a question directly about consciousness is what's captured here:

This post can be seen as collecting questions relevant to the “strategy” level.

One could imagine a version of this post that “zooms out” to discuss crucial questions on the “values” level, or questions about cause prioritisation as a whole. This might involve more emphasis on questions about, for example, population ethics, the moral status of nonhuman animals, and the effectiveness of currently available global health interventions. But here we instead (a) mostly set questions about morality aside, and (b) take longtermism as a starting assumption.

Though I acknowledge that this division is somewhat arbitrary, and also that consciousness is at least arguably/largely/somewhat an empirical rather than "values"/"moral" matter. (One reason I'm implicitly putting it partly in the "moral" bucket is that we might be most interested in something like "consciousness of a morally relevant sort", such that our moral views influence which features we're interested in investigating.)

---

After reading your comment, I skimmed again through the list of questions to see what of the things I already had were closest, and where those points might "fit". Here are the questions I saw that seemed related (though they don't directly address our understanding of consciousness):

What is the possible quality of the human-influenced future?

  • How does the “difficulty” or “cost” of creating pleasure vs. pain compare?

Can and will we expand into space? In what ways, and to what extent? What are the implications? 

  • Will we populate colonies with (some) nonhuman animals, e.g. through terraforming? [it's the implications of terraforming that make this relevant]

Can and will we create sentient digital beings? To what extent? What are the implications?

  • Would their experiences matter morally?
  • Will some be created accidentally?

[...]

  • How close to the appropriate size should we expect influential agents’ moral circles to be “by default”?

Milan_Griffes @ 2020-10-08T18:55 (+2)

Some related material in this blog post: How understanding valence could help make future AIs safer

Milan_Griffes @ 2020-09-30T18:41 (+2)

Thanks for this!

fwiw I would definitely bucket consciousness research and neuroimaging under "strategy", though agree that the bucketing is somewhat arbitrary.

david_reinstein @ 2021-09-01T15:01 (+1)

Can and will we create sentient digital beings? To what extent? What are the implications?

  • Would their experiences matter morally?
  • Will some be created accidentally?

I might add to this "if we could create sentient (conscious) digital beings"

I think this relates to the comment from @MichaelA above.

My shortform thoughts on this are HERE