Max_Daniel's Quick takes

By Max_Daniel @ 2019-12-13T11:17 (+21)

Max_Daniel @ 2019-12-17T11:56 (+71)

[Some of my high-level views on AI risk.]

[I wrote this for an application a couple of weeks ago, but thought I might as well dump it here in case someone was interested in my views. / It might sometimes be useful to be able to link to this.]

[In this post I generally state what I think before updating on other people’s views – i.e., what’s sometimes known as ‘impressions’ as opposed to ‘beliefs.’]

Summary

Why I'm interested in TAI as a lever to improve the long-run future

I expect my perspective to be typical of someone who has become interested in TAI through their engagement with the effective altruism (EA) community. In particular,

Less standard but not highly unusual (within EA) high-level views I hold more tentatively:

My empirical views on TAI

I think the strongest reasons to expect TAI this century are relatively outside-view-based (I talk about this century just because I expect that later developments are harder to predictably influence, not because I think a century is a particularly meaningful time horizon or because I think TAI would be less important later):

I think there are several reasons to be skeptical, but that the above succeeds in establishing a somewhat robust case for TAI this century not being wildly implausible.

My impression is that I’m less confident than the typical longtermist EA in various claims around TAI, such as:

My guess is this is due to different priors, and due to frequently having found extant specific arguments for TAI-related claims (including by staff at FHI and Open Phil) less convincing than I would have predicted. I still think that work on TAI is among the few best shots for current longtermists.

Stefan_Schubert @ 2019-12-17T13:30 (+36)

Awesome post, Max, many thanks for this. I think it would be good if these difficult questions were discussed more on the forum by leading researchers like yourself.

I think you should post this as a normal post; it's far too good and important to be hidden away on the shortform.

Jonas Vollmer @ 2020-06-14T16:05 (+11)

I second Stefan's suggestion to share this as a normal post – I realize I should have read your shortform much sooner.

MaxRa @ 2020-10-14T14:49 (+5)

Thanks for putting your thoughts together. I only stumbled on this by accident, and I think it would make a great post, too.

I was really surprised by you giving ~20% for TAI this century, and am still curious about your reasoning, because it seems to diverge strongly from that of your peers. Why do you find inside-view based arguments less convincing? I've updated pretty strongly on the deep (reinforcement) learning successes of the last years, and on our growing computational- and algorithmic-level understanding of the human mind. I've found AI Impacts' collection of inside- and outside-view arguments against current AI leading to AGI fairly unconvincing, e.g. the listed "lacking capacities" seem to me (as someone following CogSci, ML and AI Safety related blogs) to get a lot of productive research attention.

Pablo_Stafforini @ 2019-12-17T14:32 (+2)

[deleted because the question I asked turned out to be answered in the comment, upon careful reading]

Max_Daniel @ 2019-12-13T11:17 (+45)

What's the right narrative about global poverty and progress? Link dump of a recent debate.

The two opposing views are:

(a) "New optimism:" [1] This is broadly the view that, over the last couple of hundred years, the world has been getting significantly better, and that's great. [2] In particular, extreme poverty has declined dramatically, and most other welfare-relevant indicators have improved a lot. Often, these effects are largely attributed to economic growth.

(b) Hickel's critique: Anthropologist Jason Hickel has criticized new optimism on two grounds:

Link dump (not necessarily comprehensive)

If you only read two things, I'd recommend (1) Hasell's and Roser's article explaining where the data on historic poverty comes from and (2) the take by economic historian Branko Milanovic.

By Hickel (i.e. against "new optimism"):

By "new optimists":

Commentary by others:

My view

[1] It's not clear to me if "new optimism" is actually new. I'm using Hickel's label just because it's short and it's being used in this debate anyway, not to endorse Hickel's views or make any other claim.

[2] There is an obvious problem with new optimism, which is that it's anthropocentric. In fact, on many plausible views, the total axiological value of the world at any time in the recent past may be dominated by the aggregate wellbeing of nonhuman animals; even more counterintuitively, it may well be dominated by things like the change in the total population size of invertebrates. But this debate is about human wellbeing, so I'll ignore this problem.

Jonas Vollmer @ 2020-06-14T16:15 (+13)

In addition to the examples you mention, the world has become much more unequal over the past centuries, and I wonder how that impacts welfare. Relatedly, I wonder to what degree there is more loneliness and less purpose and belonging than in previous times, and how that impacts welfare (and whether it relates to the Easterlin paradox). EAs don't seem to discuss these aspects of welfare often. (Somewhat related books: Angus Deaton's The Great Escape and Junger's Tribe.)

Denise_Melchin @ 2020-06-14T18:00 (+21)

(I have not read through Max's link dump yet, which seems very interesting; I also feel some skepticism of the 'new optimism' worldview.)

One major disappointment in Pinker's book as well as in related writings for me has been that they do little to acknowledge that how much progress you think the world has seen depends a lot on your values. To name some examples, not everyone views the legalization of gay marriage and easier access to abortion as progress, and not everyone thinks that having plentiful access to consumer goods is a good thing.

I would be very interested in an analysis of 'progress' in light of the different moral foundations discussed by Haidt. I have the impression that Pinker exclusively focuses on the 'care/harm' foundation, while completely ignoring others like Sanctity/purity or Authority/respect and this might be where some part of the disconnect between the 'New optimists' and opponents is coming from.

Jonas Vollmer @ 2020-06-15T08:23 (+10)

Your point reminds me of the "history is written by the winners" adage – presumably, most civilizations would look back and think of their history as one of progress because they view their current values most favorably.

Perhaps this is one of the paths that would eventually contribute to a "desired dystopia" outcome, as outlined in Ord's book: we fail to realize that our social structure is flawed and leads to suffering in a systematic manner that's difficult to change.

(Also related: https://www.gwern.net/The-Narrowing-Circle )

willbradshaw @ 2020-07-14T13:06 (+6)

I have relatively little exposure to Hickel, save for reading his Guardian piece and a small part of the dialogue that followed from that, but I don't get the impression he's coming from a position of putting more weight on Sanctity/purity or Authority/respect; in general I'd guess that few people in left-wing social-science academia are big on those sorts of moral foundations, except indirectly via moral/cultural relativism.

Taking Haidt's moral foundations theory as read for the moment, I'd guess that the Fairness foundation is doing a lot of the work in this disagreement. In general, leftists and liberals seem to differ a lot in what they consider culpable harm, and Fairness/exploitation seems like a big part of that.

Aidan O'Gara @ 2020-07-14T11:00 (+3)

Very interesting writeup, I wasn't aware of Hickel's critique but it seems reasonable.

Do you think it matters who's right? I suppose it's important to know whether poverty is increasing or decreasing if you want to evaluate the consequences of historical policies or events, and even for general interest. But does it have any specific bearing on what we should do going forwards?

willbradshaw @ 2020-07-14T13:12 (+6)

Do you think it matters who's right?

I think it matters quite a lot when it comes to assessing where to go from here: in particular, how cautious and conservative to be, and how favourable towards untested radical change.

If things have gotten way better and are likely to continue to get way better in the foreseeable future, then we should probably broadly stick with what we're doing – some tinkering around the edges to fix obvious abuses, but no root-and-branch restructuring unless something goes obviously and profoundly wrong.

Whereas if things are failing to get better, or are actively getting worse, then it might be worth taking big risks in order to get out of the hole.

I've often had conversations with people to my left where they seem way too willing to smash stuff in the process of getting to deep systemic change, which is potentially sensible if you think we're in a very bad place and getting worse but madness if you think we're in an extremely unusually good place and getting better.

Max_Daniel @ 2020-07-14T12:46 (+4)

Thanks, this is a good question. I don't think it has specific bearing on future actions, but does have some broader relevance. For example, longtermists have sometimes discussed the total value of the long-term future: in this context, we may be interested in whether things have been getting better or worse in order to extrapolate this trend forward.

(Though this is not why I wrote this post. - That was more because I happened to find it interesting personally.)

Of course, this trend extrapolation would only be one among many considerations. In addition, ideally we'd want a trend on the world's total value, not a trend on just poverty. So e.g. the anthropocentrism would be a problem here.

lucy.ea8 @ 2019-12-13T22:43 (+1)

I agree that the world has gotten much better than it was.

There are two important reasons for this, the other improvements that we see mostly follow from them.

  1. Energy consumption (is wealth): Energy consumption per person has increased over the last 500 years, and that increased consumption translates into welfare.
  2. Education (knowledge): The amount of knowledge that we as humanity possess has increased dramatically, and that knowledge is widely accessible. 75% of kids finish 9th grade, 12.5% finish 6th grade, and 4.65% finish less than 6th grade; unfortunately, around 7-8% of kids have never gone to school. Increases in education translate into increases in health and wealth (actually energy consumption), more so in countries with market economies than in non-market economies.

The various -isms (capitalism, socialism, communism, neoliberalism, colonialism, fascism) have very little to do with human development, and in fact have been very negative for human development. (I am skipping theory about how the -isms are supposed to work, and jumping to the actual effects).

Max_Daniel @ 2020-06-26T13:41 (+41)

[See this research proposal for context. I'd appreciate pointers to other material.]

[WIP, not comprehensive] Collection of existing material on 'impact being heavy-tailed'

Conceptual foundations

Impact in general / cause-agnostic

Less than 1% of our donors account for 50% of our recorded donations. This amounts to dozens of people, while the next 40% of donations (from both pledge donors and non-pledge donors) is distributed among hundreds. This suggests that most of our impact comes from a small-to-medium-size group of large donors (rather than from a very small group of very large donors, or from a large group of small donors).[6]

EA community building

Global health

Misc

Linch @ 2020-06-26T23:37 (+14)

I did some initial research/thinking on this before the pandemic came and distracted me completely. Here's a very broad outline that might be helpful.

Max_Daniel @ 2020-06-27T08:07 (+2)

Great, thank you!

I saw that you asked Howie for input - are there other people you think it would be good to talk to on this topic?

Linch @ 2020-06-29T08:23 (+4)

You're probably aware of this, but Anders Sandberg has done some thinking about this. Also presumably David Roodman based on his public writings (though I have not contacted him myself).

More broadly, I'm guessing that anybody who either you've referenced above, or who I've linked in my doc, would be helpful, though of course many of them are very busy.

Max_Daniel @ 2020-07-07T17:28 (+10)

[Mathematical definitions of heavy-tailedness. Currently mostly notes to myself - I might turn these into a more accessible post in the future. None of this is original, and might indeed be routine for a maths undergraduate specializing in statistics.]

There are different definitions of when a probability distribution is said to have a heavy tail, and several closely related terms. They are not extensionally equivalent. I.e. there are distributions that are heavy-tailed according to some, but not all common definitions; this is for example true for the log-normal distribution.

Here I'll collect all definitions I encounter, and what I know about how they relate to each other.

I don't think the differences matter for most EA purposes, where the weakest definition that includes e.g. log-normals seems safe to use (except maybe #0 below, which might be too weak). I'm mainly collecting the definitions because I'm curious and because I think they can be an avoidable source of confusion for someone trying to understand discussions involving heavy-tailedness. (The differences might matter for more technical purposes, e.g. when deciding which statistical method to use to analyze certain data.)

There is also a less interesting way in which definitions can differ: a distribution can have a heavy right tail, a heavy left tail, or both. Some definitions thus come in three variants. I'm for now going to ignore this, stating only one variant per definition.

List of definitions

X will always denote a random variable.

0. X is leptokurtic (or super-Gaussian) iff its kurtosis is strictly larger than 3 (which is the kurtosis of e.g. all normal distributions), i.e. µ_4/σ^4 > 3, where µ_4 = E[(X - E[X])^4] is the fourth central moment and σ is the standard deviation.

1. X has a heavy right tail iff the moment-generating function of X is infinite at all t > 0.

2. X is heavy-tailed iff it has an infinite nth moment for some n.

3. X is heavy-tailed iff it has infinite variance (i.e. infinite 2nd central moment).

4. X has a long right tail iff for all real numbers t the conditional probability P[X > x + t | X > x] converges to 1 as x goes to infinity.

4b. X has a heavy right tail iff there is a real number x_0 such that the conditional mean exceedance (CME) E[X - x | X > x] is a strictly increasing function of x for x > x_0. (This is a definition by Bryson, 1974, who may have coined the term 'heavy-tailed' and shows that distributions with constant CME are precisely the exponential distributions.)

5. X is subexponential (or fulfills the catastrophe principle) iff for all n > 0 and i.i.d. random variables X_1, ..., X_n with the same distribution as X the quotient of probabilities P[X_1 + ... + X_n > x] / P[max(X_1, ..., X_n) > x] converges to 1 as x goes to infinity. (See the small simulation sketch after this list of definitions.)

6. X has a regularly varying right tail with tail index 0 < α ≤ 2 iff there is a slowly varying function L: (0,+∞) → (0,+∞) such that for all x > 0 we have P[X > x] = x^(-α) * L(x). (L is slowly varying iff, for all a > 0, the quotient L(ax)/L(x) converges to 1 as x goes to infinity.)
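
To make definition 5 a bit more concrete, here is a minimal simulation sketch (my addition, not part of the original notes; the tail index, the number of summands, the trial count, and the quantile thresholds are arbitrary illustrative choices):

```python
# A minimal simulation sketch illustrating definition 5, the "catastrophe
# principle": for a heavy-tailed distribution (here Pareto with tail index 1.5)
# a very large sum of n i.i.d. terms is almost always driven by a single very
# large term, so the ratio P[X_1 + ... + X_n > x] / P[max(X_1, ..., X_n) > x]
# approaches 1 far out in the tail; for a light-tailed distribution
# (here exponential) the ratio keeps growing instead.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 3, 1_000_000  # n i.i.d. copies per trial

def tail_ratio(samples, q):
    """Empirical P[sum > x] / P[max > x], where x is the q-quantile of the max."""
    sums, maxes = samples.sum(axis=1), samples.max(axis=1)
    x = np.quantile(maxes, q)
    return np.mean(sums > x) / np.mean(maxes > x)

pareto = 1 + rng.pareto(1.5, size=(trials, n))  # classical Pareto, tail index 1.5
expo = rng.exponential(1.0, size=(trials, n))   # exponential with rate 1

for q in [0.9, 0.99, 0.999]:
    print(f"q={q}: Pareto ratio = {tail_ratio(pareto, q):.2f}, "
          f"exponential ratio = {tail_ratio(expo, q):.2f}")
```

With these settings the Pareto ratios should drift towards 1 as the threshold moves further into the tail, while the exponential ratios keep growing, matching the intuition that for heavy-tailed distributions a large aggregate is typically driven by one extreme term.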

Relationships between definitions

(Note that even for those I state without caveats I haven't convinced myself of a proof in detail.)

I'll use #0 to refer to the clause on the right hand side of the "iff" statement in definition 0, and so on.

(For some of these one might have to use the suitable versions of heavy right tail / left tail etc. - e.g. perhaps #1 needs to be replaced with "heavy right and left tail" or "heavy right or left tail" etc.)

  • I suspect that #0 is the weakest condition, i.e. that all other definitions imply that X is super-Gaussian.
  • I suspect that #6 is the strongest condition, i.e. implies all others.
  • I think that: #3 => #2 => #1 and #5 => #4 => #1 (where '=>' denotes implications).

Why I think that:

  • #0 weakest: Heuristically, many other definitions state or imply that some higher moments don't exist, or are at least "close" to such a condition (e.g. #1). By contrast, #0 merely requires that a certain moment is larger than for the normal distribution. Also, the exponential distribution is super-Gaussian but not usually considered to be heavy-tailed - in fact, "heavy-tailed" is sometimes loosely explained to mean "having heavier tails than an exponential distribution".
  • #6 strongest: The condition basically says that the distribution behaves like a Pareto distribution (or "power law") as we look further down the tail. And for Pareto distributions with α ≤ 2 it's well known and easy to see that the variance doesn't exist, i.e. #3 holds. Similarly, I've seen power laws being cited as examples of distributions fulfilling the catastrophe principle, i.e. #5.
  • #3 => #2 is obvious.
  • #2 => #1: A statement very close to the contrapositive is well known: if the moment-generating function exists in an open neighborhood around some value, then the nth moments about that value are given by the nth derivative of the moment-generating function at that value. (I'm not sure if there can be weird cases where the moment-generating function exists in some points but no open interval.) A sketch of a more direct argument for the right-tail version follows after this list.
  • #5 => #4 and #4 => #1 are stated on Wikipedia.
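
As flagged in the #2 => #1 bullet above, here is a minimal sketch (my own write-up, not part of the original notes) of the right-tail version of the contrapositive: if E[e^{tX}] is finite for some t > 0, then every moment of the positive part X^+ is finite.

```latex
% Right-tail contrapositive of #2 => #1: a finite MGF at some t > 0
% forces all moments of X^+ to be finite.
\begin{align*}
  &\text{For } x \ge 0 \text{ and } t > 0:\qquad
    \frac{(tx)^n}{n!} \le e^{tx}
    \quad\Longrightarrow\quad
    x^n \le \frac{n!}{t^n}\, e^{tx}. \\
  &\text{Since } e^{tX^+} \le 1 + e^{tX}, \text{ this gives}\qquad
    \mathbb{E}\big[(X^+)^n\big]
    \le \frac{n!}{t^n}\,\mathbb{E}\big[e^{tX^+}\big]
    \le \frac{n!}{t^n}\Big(1 + \mathbb{E}\big[e^{tX}\big]\Big) < \infty.
\end{align*}
```
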
Inda @ 2020-06-26T22:47 (+2)

This is a good link-list. It seems undiscoverable here, though. I think it's worth thinking about how to make such lists more discoverable. Making it a top-level post seems an obvious improvement.

Max_Daniel @ 2020-06-27T08:05 (+4)

Thanks for the suggestion. I plan to make this list more discoverable once I feel like it's reasonably complete, e.g. by turning it into its own top-level post or appending it to a top-level post writeup of my research on this topic.

Max_Daniel @ 2021-03-29T11:29 (+28)

[Longtermist EA vs. human progress/progress studies.

I'm posting a quick summary of my current understanding, which I needed to write anyway for an email conversation. I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I seem correct about what the key points of agreement and disagreement are.]

[ETA: See this reply from Tony from the 'progress studies' community.]
 

Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement. 

EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g. here's a blog post by someone somewhat influential in EA:

https://lukemuehlhauser.com/industrial-revolution/
 

Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. Probably this will eventually require a number of technological and institutional innovations since our current academic and economic systems are what's led us into the current slowdown.

If we were making a list of the most globally consequential developments from the past, EAs would in addition to the Industrial Revolution point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example for how progress might be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks. Regarding the future, they're most worried about existential risk -- the prospect of permanently forfeiting our potential of a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' -- accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. So concretely they are into things like "AI safety" or "biosecurity" -- e.g. making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.

The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.

Overall, EA and the progress studies perspective agree on a lot -- they're probably closer than either would be to any other popular 'worldview'. But overall EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because it might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks imagined by EAs either aren't real or can't be reduced much except by encouraging innovation in general.

Aaron Gertler @ 2021-03-31T21:15 (+6)

I think this could be a good non-Shortform post. I can think of some tags I'd like to apply to it, and it's the best short answer I've seen to a question I've heard from multiple people in EA spaces.

Max_Daniel @ 2021-05-31T21:40 (+2)

Thanks for this prompt. Now posted here.

Max_Daniel @ 2020-02-19T10:31 (+21)

[On https://www.technologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ ]

Max_Daniel @ 2020-02-14T18:50 (+20)

[Is longtermism bottlenecked by "great people"?]

Someone very influential in EA recently claimed in conversation with me that there are many tasks X such that (i) we currently don't have anyone in the EA community who can do X, (ii) the bottleneck for this isn't credentials or experience or knowledge but person-internal talent, and (iii) it would be very valuable (specifically from a longtermist point of view) if we could do X. And that therefore what we most need in EA are more "great people".

I find this extremely dubious. (In fact, it seems so crazy to me that it seems more likely than not that I significantly misunderstood the person who I think made these claims.) The first claim is of course vacuously true if, for X, we choose some ~impossible task such as "experience a utility-monster amount of pleasure" or "come up with a blueprint for how to build safe AGI that is convincing to benign actors able to execute it". But of course more great people don't help with solving impossible tasks.

Given the size and talent distribution of the EA community my guess is that for most apparent X, the issue either is that (a) X is ~impossible, or (b) there are people in EA who could do X, but the relevant actors cannot identify them, or (c) acquiring the ability to do X is costly (e.g. perhaps you need time to acquire domain-specific expertise), even for maximally talented "great people", and the relevant actors either are unable to help pay that cost (e.g. by training people themselves, or giving them the resources to allow them to get training elsewhere) or make a mistake by not doing so.

My best guess for the genesis of the "we need more great people" perspective: Suppose I talk a lot to people at an organization that thinks there's a decent chance we'll develop transformative AI soon but it will go badly, and that as a consequence tries to grow as fast as possible to pursue various ambitious activities which they think reduce that risk. If these activities are scalable projects with short feedback loops on some intermediate metrics (e.g. running some super-large-scale machine learning experiments), then I expect I would hear a lot of claims like "we really need someone who can do X". I think it's just a general property of a certain kind of fast-growing organization that's doing practical things in the world that everything constantly seems like it's on fire. But I would also expect that, if I poked a bit at these claims, it would usually turn out that X is something like "contribute to this software project at the pace and quality level of our best engineers, w/o requiring any management time" or "convince some investors to give us much more money, but w/o anyone spending any time transferring relevant knowledge". If you see that things break because X isn't done, even though something like X seems doable in principle (perhaps you see others do it), it's tempting to think that what you need is more "great people" who can do X. After all, people generally are the sort of stuff that does things, and maybe you've actually seen some people do X. But it still doesn't follow that in your situation "great people" are the bottleneck ...

Curious if anyone has examples of tasks X for which the original claims seem in fact true. That's probably the easiest way to convince me that I'm wrong.

Buck @ 2020-02-21T05:36 (+21)

I'm not quite sure how high your bar is for "experience", but many of the tasks that I'm most enthusiastic about in EA are ones which could plausibly be done by someone in their early 20s who eg just graduated university. Various tasks of this type:

  • Work at MIRI on various programming tasks which require being really smart and good at math and programming and able to work with type theory and Haskell. Eg we recently hired Seraphina Nix to do this right out of college. There are other people who are recent college graduates who we offered this job to who didn't accept. These people are unusually good programmers for their age, but they're not unique. I'm more enthusiastic about hiring older and more experienced people, but that's not a hard requirement. We could probably hire several more of these people before we became bottlenecked on management capacity.
  • Generalist AI safety research that Evan Hubinger does--he led the writing of "Risks from Learned Optimization" during a summer internship at MIRI; before that internship he hadn't had much contact with the AI safety community in person (though he'd read stuff online).
    • Richard Ngo is another young AI safety researcher doing lots of great self-directed stuff; I don't think he consumed an enormous amount of outside resources while becoming good at thinking about this stuff.
  • I think that there are inexperienced people who could do really helpful work with me on EA movement building; to be good at this you need to have read a lot about EA and be friendly and know how to talk to lots of people.

My guess is that EA does not have a lot of unidentified people who are as good at these things as the people I've identified.

I think that the "EA doesn't have enough great people" problem feels more important to me than the "EA has trouble using the people we have" problem.

Max_Daniel @ 2020-02-21T12:35 (+4)

Thanks, very interesting!

I agree the examples you gave could be done by a recent graduate. (Though my guess is the community building stuff would benefit from some kinds of additional experience that has trained relevant project management and people skills.)

I suspect our impressions differ in two ways:

1. My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)

2. Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you've identified. E.g., I can think of ~5 people off the top of my head of whom I think they might be great at one of the things you listed, and if I had your view on their value I'd probably think they should stop doing what they're doing now and switch to trying one of these things. And I suspect if I thought hard about it, I could come up with 5-10 more people - and then there is the large number of people neither of us has any information about.

Two other thoughts I had in response:

  • It might be quite relevant if "great people" refers only to talent or also to beliefs and values/preferences. E.g. my guess is that there are several people who could be great at functional programming who either don't want to work for MIRI, or don't believe that this would be valuable. (This includes e.g. myself.) If to count as "great person" you need to have the right beliefs and preferences, I think your claim that "EA needs more great people" becomes stronger. But I think the practical implications would differ from the "greatness is only about talent" version, which is the one I had in mind in the OP.
  • One way to make the question more precise: At the margin, is it more valuable (a) to try to add high-potential people to the pool of EAs or (b) change the environment (e.g. coordination, incentives, ...) to increase the expected value of activities by people in the current pool. With this operationalization, I might actually agree that the highest-value activities of type (a) are better than the ones of type (b), at least if the goal is finding programmers for MIRI and maybe for community building. (I'd still think that this would be because, while there are sufficiently talented people in EA, they don't want to do this, and it's hard to change beliefs/preferences and easier to get new smart people excited about EA. - Not because the community literally doesn't have anyone with a sufficient level of innate talent. Of course, this probably wasn't the claim the person I originally talked to was making.)
Buck @ 2020-02-23T01:00 (+20)
My guess is I consider the activities you mentioned less valuable than you do. Probably the difference is largest for programming at MIRI and smallest for Hubinger-style AI safety research. (This would probably be a bigger discussion.)

I don't think that peculiarities of what kinds of EA work we're most enthusiastic about lead to much of the disagreement. When I imagine myself taking on various different people's views about what work would be most helpful, most of the time I end up thinking that valuable contributions could be made to that work by sufficiently talented undergrads.

Independent of this, my guess would be that EA does have a decent number of unidentified people who would be about as good as people you've identified. E.g., I can think of ~5 people off the top of my head of whom I think they might be great at one of the things you listed, and if I had your view on their value I'd probably think they should stop doing what they're doing now and switch to trying one of these things. And I suspect if I thought hard about it, I could come up with 5-10 more people - and then there is the large number of people neither of us has any information about.

I am pretty skeptical of this. Eg I suspect that people like Evan (sorry Evan if you're reading this for using you as a running example) are extremely unlikely to remain unidentified, because one of the things that they do is think about things in their own time and put the results online. Could you name a profile of such a person, and which of the types of work I named you think they'd maybe be as good at as the people I named?

It might be quite relevant if "great people" refers only to talent or also to beliefs and values/preferences

I am not intending to include beliefs and preferences in my definition of "great person", except for preferences/beliefs like being not very altruistic, which I do count.

E.g. my guess is that there are several people who could be great at functional programming who either don't want to work for MIRI, or don't believe that this would be valuable. (This includes e.g. myself.)

I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it? (To be clear I have no idea how good you'd be at programming for MIRI because I barely know you, and so I'm just talking about priors rather than specific guesses about you.)

---

For what it's worth, I think that you're not credulous enough of the possibility that the person you talked to actually disagreed with you--I think you might be doing that thing whose name I forget where you steelman someone into saying the thing you think instead of the thing they think.

Max_Daniel @ 2020-02-26T18:00 (+7)
I don't think that peculiarities of what kinds of EA work we're most enthusiastic about lead to much of the disagreement. When I imagine myself taking on various different people's views about what work would be most helpful, most of the time I end up thinking that valuable contributions could be made to that work by sufficiently talented undergrads.

I agree we have important disagreements other than what kinds of EA work we're most enthusiastic about. While not of major relevance for the original issue, I'd still note that I'm surprised by what you say about various other people's views on EA, and I suspect it might not be true for me: while I agree there are some highly-valuable tasks that could be done by recent undergrads, I'd guess that if I made a list of the most valuable possible contributions then a majority of the entries would require someone to have a lot of AI-weighted generic influence/power (e.g. the kind of influence over AI a senior government member responsible for tech policy has, or a senior manager in a lab that could plausibly develop AGI), and that because of the way relevant existing institutions are structured this would usually require a significant amount of seniority. (It's possible for some smart undergrads to embark on a path culminating in such a position, but my guess is this is not the kind of thing you had in mind.)

I am pretty skeptical of this. Eg I suspect that people like Evan (sorry Evan if you're reading this for using you as a running example) are extremely unlikely to remain unidentified, because one of the things that they do is think about things in their own time and put the results online. [...]
I am not intending to include beliefs and preferences in my definition of "great person", except for preferences/beliefs like being not very altruistic, which I do count.

I don't think these two claims are plausibly consistent, at least if "people like Evan" is also meant to exclude beliefs and preferences: For instance, if someone with Evan-level abilities doesn't believe that thinking in their own time and putting results online is a worthwhile thing to do, then the identification mechanism you appeal to will fail. More broadly, someone's actions will generally depend on all kinds of beliefs and preferences (e.g. on what they are able to do, on what people around them expect, on other incentives, ...) that are much more dependent on the environment than relatively "innate" traits like fluid intelligence. The boundary between beliefs/preferences and abilities is fuzzy, but as I suggested at the end of my previous comment, I think for the purpose of this discussion it's most useful to distinguish changes in value we can achieve (a) by changing the "environment" of existing people vs. (b) by adding more people to the pool.

Could you name a profile of such a person, and which of the types of work I named you think they'd maybe be as good at as the people I named?

What do you mean by "profile"? Saying what properties they have, but without identifying them? Or naming names or at least usernames? If the latter, I'd want to ask the people if they're OK with me naming them publicly. But in principle happy to do either of these things, as I agree it's a good way to check if my claim is plausible.

I think my definition of great might be a higher bar than yours, based on the proportion of people who I think meet it?

Maybe. When I said "they might be great", I meant something roughly like: if it was my main goal to find people great at task X, I'd want to invest at least 1-10 hours per person finding out more about how good they'd be at X (this might mean talking to them, giving them some sort of trial tasks etc.) I'd guess that for between 5 and 50% of these people I'd eventually end up concluding they should work full-time doing X or similar.

Also note that originally I meant to exclude practice/experience from the relevant notion of "greatness" (i.e. it just includes talent/potential). So for some of these people my view might be something like "if they did 2 years of deliberate practice, they then would have a 5% to 50% chance of meeting the bar for X". But I now think that probably the "marginal value from changing the environment vs. marginal value from adding more people" operationalization is more useful, which would require "greatness" to include practice/experience to be consistent with it.

If we disagree about the bar, I suspect that me having bad models about some of the examples you gave explains more of the disagreement than me generally dismissing high bars. "Functional programming" just doesn't sound like the kind of task to me with high returns to super-high ability levels, and similar for community building; but it's plausible that there are bundles of tasks involving these things where it matters a lot if you have someone whose ability is 6 instead of 5 standard deviations above the mean (not always well-defined, but you get the idea). E.g. if your "task" is "make a painting that will be held in similar regard to the Mona Lisa" or "prove P != NP" or "be as prolific as Ramanujan at finding weird infinite series for pi", then, sure, I agree we need an extremely high bar.

For what it's worth, I think that you're not credulous enough of the possibility that the person you talked to actually disagreed with you--I think you might doing that thing whose name I forget where you steelman someone into saying the thing you think instead of the thing they think.

Thanks for pointing this out. FWIW, I think there likely is both substantial disagreement between me and that person and that I misunderstood their view in some ways.

richard_ngo @ 2020-06-15T16:55 (+4)

Task X for which the claim seems most true for me is "coming up with novel and important ideas". This seems to be very heavy-tailed, and not very teachable.

I would also expect that, if I poked a bit at these claims, it would usually turn out that X is something like "contribute to this software project at the pace and quality level of our best engineers, w/o requiring any management time" or "convince some investors to give us much more money, but w/o anyone spending any time transferring relevant knowledge".

Neither of these feel like central examples of the type of thing EA needs most. Most of the variance of the impact of the software project will be in how good the idea is; same for most of the variance of the impact of getting funding.

Robin Hanson is someone who's good at generating novel and important ideas. Idk how he got that way, but I suspect it'd be very hard to design a curriculum to recreate that. Do you disagree?

Max_Daniel @ 2020-06-16T09:35 (+2)
Task X for which the claim seems most true for me is "coming up with novel and important ideas". This seems to be very heavy-tailed, and not very teachable.

I agree that the impact from new ideas will be heavy tailed - i.e. a large share of the total value from new ideas will be from the few best ideas, and few people. I'd also guess that this kind of creativity is not that teachable. (Though not super certain about both.)

I feel less sure that 'new ideas' is among the things most needed in EA, when discounted by the difficulty of generating them. (I do think there probably are a number of undiscovered and highly important ideas out there, partly based on EA's track record and partly based on a sense that there are a lot of things we don't know or understand about how to make the long-term future go well.) If I had to guess where to optimally invest flexible resources at the margin, I feel highly uncertain whether it would be in "find people who're good at generating new ideas" versus things like "advance known research directions" or "accumulate AI-weighted influence/power".

richard_ngo @ 2020-06-16T17:11 (+4)

People tend to underestimate the importance of ideas, because it's hard to imagine what impact they will have without doing the work of coming up with them.

I'm also uncertain how impactful it is to find people who're good at generating ideas, because the best ones will probably become prominent regardless. But regardless of that, it seems to me like you've now agreed with the three points that the influential EA made. Those weren't comparative claims about where to invest marginal resources, but rather the absolute claim that it'd be very beneficial to have more talented people.

Then the additional claim I'd make is: some types of influence are very valuable and can only be gained by people who are sufficiently good at generating ideas. It'd be amazing to have another Stuart Russell, or someone in Steven Pinker's position but more onboard with EA. But they both got there by making pioneering contributions in their respective fields. So when you talk about "accumulating AI-weighted influence", e.g. by persuading leading AI researchers to be EAs, that therefore involves gaining more talented members of EA.

Jonas Vollmer @ 2020-06-14T15:51 (+2)

I stumbled a bit with the framing here: I think it's often the case that you need a lot of person-internal talent (including a good attitude, altruistic commitment, etc.) to learn X.

I'd personally be excited to spend more time on mentorship of EA community members but it feels kind of hard to find potential mentees who aren't already in touch with many other mentors (either because I'm bad at finding them or because we need more "great people" or because I'm not great at mentoring people to learn X).

Max_Daniel @ 2020-06-15T12:09 (+4)

I agree that, basically by definition, higher talent means higher returns on learning. My claim was not that talent is unimportant, but roughly that the answer to "Why don't we have anyone in the community who can do X?" more often is "Because no-one has spent enough effort practicing X." than it is "Because there is no EA who is sufficiently talented that they could do X well given an optimal environment, training etc.".

(More generally, I agree that the OP could do a better job at framing the debate, setting out the key considerations and alternative views etc. I hope to write an improved version in the next few months.)

Max_Daniel @ 2020-08-13T09:54 (+16)

[EA's focus on marginal individual action over structure is a poor fit for dealing with info hazards.]

I tend to think that EAs sometimes are too focused on optimizing the marginal utility of individual actions as opposed to improving larger-scale structures. For example, I think it'd be good if there was as much content and cultural awareness on how to build good organizations as there is on how to improve individual cognition. - Think about how often you've heard of "self improvement" or "rationality" as opposed to things like "organizational development".

(Yes, this is similar to the good old 'systemic change' objection aimed at "what EAs tend to do in practice" rather than "what is implied by EAs' normative views".)

It occurred to me that one instance where this might bite in particular is info hazards.

I often see individual researchers agonizing about whether they can publish something they have written, which of several framings to use, and even which ideas are safe to mention in public. I do think that this can sometimes be really important, and that there are areas with a predictably high concentration of such cases, e.g. bio.

However, in many cases I feel like these concerns are far-fetched and poorly targeted.

On the other hand, in such cases often there are important info hazards in the areas researchers are working in. For example, I think it's at least plausible that there is true information on, say, the prospects and paths to transformative AI, that would be bad to bring to the attention of, say, senior US or Chinese government officials.

It's not the presence of these hazards but the connection with typical individual researcher actions that I find dubious. To address these concerns, rather than forward-chaining from individual actions one considers taking for other reasons, I suspect it'd be more fruitful to backward-chain from the location of large adverse effects (e.g. the US government starting an AGI project, if you think that's bad). I suspect this would lead to a focus on structure for the analysis, and a focus on policy for solutions. Concretely, questions like:

Max_Daniel @ 2021-06-07T23:29 (+11)

[PAI vs. GPAI]

So there is now (well, since June 2020) both a Partnership on AI and a Global Partnership on AI.

Unfortunately, GPAI's and PAI's FAQ pages conspicuously omit "how are you different from (G)PAI?".

Can anyone help?

At first glance it seems that:

I also note that it's slightly ironic that GPAI differs from PAI by having added the adjective "global". It's based on an OECD recommendation, but the OECD is very much not a "global" organization - it's a club of rich market democracies. (Though GPAI membership differs a lot from the OECD - fewer than half of OECD members have joined GPAI, and some notable non-OECD members such as India have.)

RyanCarey @ 2021-06-08T10:28 (+4)

I think PAI exists primarily for companies to contribute to beneficial AI and harvest PR benefits from doing so. Whereas GPAI is a diplomatic apparatus, for Trudeau and Macron to influence the conversation surrounding AI.

tonymmorley @ 2021-03-30T04:52 (+9)

(Max Daniel) Effective Altruism:

“Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. Probably this will eventually require a number of technological and institutional innovations since our current academic and economic systems are what's led us into the current slowdown.”

(Tony Morley) Human Progress – Progress Studies

The Industrial Revolution was a pivotal and critical point for the launch of human progress and prosperity; however, it came with costs and problems which needed to be solved in turn. The Industrial Revolution was the kick-off point where civilization mastered the ability to utilize the chemical energy banked in geological formations to invest in building a prosperous civilization. I do not think we are at risk of losing the progress of civilization built on the Industrial Revolution, but rather that the progress which the revolution kicked off needs to continue to mature. The human progress proponent seeks as a priority to continue to advance the dramatic story of improving global living standards, while making rational and fact-based choices with investment and risk. When it comes to “what is progress?” as defined elegantly by Pinker in the following text, we have not yet reached the top of the s-curve for the majority of the global population. There is still room for progress in progress.

Pinker: What is Progress? 

“What is progress? You might think that the question is so subjective and culturally relative as to be forever unanswerable. In fact, it’s one of the easier questions to answer. Most people agree that life is better than death. Health is better than sickness. Sustenance is better than hunger. Abundance is better than poverty. Peace is better than war. Safety is better than danger. Freedom is better than tyranny. Equal rights are better than bigotry and discrimination. Literacy is better than illiteracy. Knowledge is better than ignorance. Intelligence is better than dull-wittedness. Happiness is better than misery. Opportunities to enjoy family, friends, culture, and nature are better than drudgery and monotony. All these things can be measured. If they have increased over time, that is progress.” - Enlightenment Now, The Case for Reason, Science, Humanism, and Progress – Steven Pinker c2018

(Max Daniel) Effective Altruism:

“So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks.”

(Tony Morley) Human Progress – Progress Studies

Agreed, many of the technologies and systems which have driven human progress forward have also created novel risks and consequences that in turn must be mitigated or eliminated. The utilization of coal is a perfect example of this. Coal represented the metaphorical equivalent of an energy seed investment which humans have used since the Industrial Revolution through to today, to get civilization up and running. However, the ubiquitous use of coal for heating, smelting, and later electricity generation has introduced or exacerbated issues including climate change, environmental surface disturbance, and water and air pollution, issues of human progress that in turn require further solutions.

Similarly, our mastery of nuclear theory, and subsequently nuclear energy, provided great manifest opportunities to push civilization forward, while at the same time opening a Pandora's box of risk with regard to the use of nuclear energy for mass destruction and/or loss of life. Our command of nuclear theory is widely disseminated, and as such there is little chance of fully reducing the risk to zero. There are, however, effective means of control for reducing the likelihood and consequence of nuclear risk, and I would argue that utilizing those controls and mitigations is an example of human progress in action.

With regard to the article by Patrick Collison and Tyler Cowen, “We Need a New Science of Progress”, I do think there is value here. While the primary thesis for the history of progress and its principal drivers is fairly well established, it is poorly communicated, and not universally available or understood. What is needed is to take the human progress thesis as outlined in books like Progress and Open by Johan Norberg, Factfulness by Hans Rosling, The Rational Optimist by Matt Ridley, Enlightenment Now by Steven Pinker, and The Birth of Plenty: How the Prosperity of the Modern World Was Created by William J. Bernstein, amongst others, and use the insights contained therein to motivate our civilization to build a better future based on sound, rational and effective modes of operation. We can debate at length whether indefinite economic growth and ongoing human progress is possible, desirable, or ethical, once we have lifted nearly all of the world's people out of extreme poverty – and not before then. “First comes a full stomach, then comes ethics.” – Bertolt Brecht, The Threepenny Opera c1928

(Max Daniel) Effective Altruism:

“(They might also think of factory farming as an example for how progress might be great for some but horrible for others, at least on some moral views.)”

(Tony Morley) Human Progress – Progress Studies

Agreed, factory farming has been both a blessing and a curse, particularly with respect to our use of animals for food. Access to more food, better food, and more meat and protein, is a driving force for progress at the developing country level, but causes enormous issues in the highly developed West. There’s certainly room for progress in agriculture and livestock management and wellbeing. Synthetic meat anyone? See, “Why Meat is the Best Worst Thing in the World” by Kurzgesagt c2018

(Max Daniel) Effective Altruism:

“So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks.”

(Tony Morley) Human Progress – Progress Studies

I fully agree. As we attempt to solve problems and improve our personal and collective standards of living as a species, we generate other, largely unforeseen problems and consequences that require yet further solutions, mitigations or substitutions in turn. This has been an active trend in humanity for more than a hundred thousand years - a classic example of the "Hatchet, Ratchet, Pivot" theory advanced by Ruth DeFries.

“Our species long lived on the edge of starvation. Now we produce enough food for all 7 billion of us to eat nearly 3,000 calories every day. This is such an astonishing thing in the history of life as to verge on the miraculous. The Big Ratchet is the story of how it happened, of the ratchets, the technologies and innovations, big and small, that propelled our species from hunters and gatherers on the savannahs of Africa to shoppers in the aisles of the supermarket. The Big Ratchet itself came in the twentieth century, when a range of technologies, from fossil fuels to scientific plant breeding to nitrogen fertilizers, combined to nearly quadruple our population in a century, and to grow our food supply even faster. To some, these technologies are a sign of our greatness; to others, of our hubris. MacArthur fellow and Columbia University professor Ruth DeFries argues that the debate is the wrong one to have. Limits do exist, but every limit that has confronted us, we have surpassed. That cycle of crisis and growth is the story of our history; indeed, it is the essence of The Big Ratchet. Understanding it will reveal not just how we reached this point in our history, but how we might survive it.” - The Big Ratchet, How Humanity Thrives in the Face of Natural Crisis – Ruth DeFries

(Max Daniel) Effective Altruism:

“Regarding the future, they're most worried about existential risk -- the prospect of permanently forfeiting our potential of a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences.”

(Tony Morley) Human Progress – Progress Studies

I agree; however, civilization cannot mitigate existential risk without the courage, confidence, and drive to do so. From my perspective, the human progress movement seeks to make the following broad case:

  1. All things considered, the world/living standards used to be much worse.
  2. All things considered, the world/living standards have improved dramatically for nearly everyone.
  3. All things considered, the world/living standards still need to improve dramatically / there are still problems to be solved.
  4. We should keep making the world a better place, improve living standards etc. (preferably actively)

Now, there is at times much debate between the Effective Altruism community and the Human Progress community about 3) and 4); however, I think we universally agree on the majority of 1) – 4).

(Max Daniel) Effective Altruism:

“Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change.”

(Tony Morley) Human Progress – Progress Studies

I agree: civilization is not doing enough to consider low-likelihood, high-consequence risks such as “misaligned artificial intelligence”, “engineered pandemics” and other ‘LLHC’ risks, e.g., catastrophic climate change, nuclear war, asteroid impact, massive volcanic eruption, multi-state conventional warfare, genetically advanced selective human development (see Homo Deus: A History of Tomorrow by Yuval Noah Harari), etc. These are risks which need ongoing risk assessment and appropriate mitigation. On a side note, I have formal tertiary qualifications in risk assessment and mitigation, as this has been the field of my regular (non-human-progress) vocation for the last decade.

(Max Daniel) Effective Altruism:

“They therefore aim for 'differential progress' -- accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. So concretely they are into things like "AI safety" or "biosecurity" -- e.g. making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.”

(Tony Morley) Human Progress – Progress Studies

I’m afraid commentary on this section falls outside my scope of expertise. 

(Max Daniel) Effective Altruism:

“Overall, EA and the progress studies perspective agree on a lot -- they're probably closer than either would be to any other popular 'worldview'.”

(Tony Morley) Human Progress – Progress Studies

#Agreed 

(Max Daniel) Effective Altruism:

"But overall EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going."

(Tony Morley) Human Progress – Progress Studies:

The human progress and progress studies movement is not a blindly optimistic look at a mission completed or progress concluded; rather, it holds that the world has become a much better place while remaining in need of enormous further progress: “Bad and Better”.

“The solution is not to balance out all the negative news with more positive news. That would just risk creating a self-deceiving, comforting, misleading bias in the other direction. It would be as helpful as balancing too much sugar with too much salt. It would make things more exciting, but maybe even less healthy. A solution that works for me is to persuade myself to keep two thoughts in my head at the same time. It seems that when we hear someone say things are getting better, we think they are also saying “don’t worry, relax” or even “look away.” But when I say things are getting better, I am not saying those things at all. I am certainly not advocating looking away from the terrible problems in the world. I am saying that things can be both bad and better. Think of the world as a premature baby in an incubator. The baby’s health status is extremely bad and her breathing, heart rate, and other important signs are tracked constantly so that changes for better or worse can quickly be seen. After a week, she is getting a lot better. On all the main measures, she is improving, but she still has to stay in the incubator because her health is still critical. Does it make sense to say that the infant’s situation is improving? Yes. Absolutely. Does it make sense to say it is bad? Yes, absolutely. Does saying “things are improving” imply that everything is fine, and we should all relax and not worry? No, not at all. Is it helpful to have to choose between bad and improving? Definitely not. It’s both. It’s both bad and better. Better, and bad, at the same time. That is how we must think about the current state of the world.” - Factfulness, Ten Reasons We're Wrong About The World - And Why Things Are Better Than You Think – Hans Rosling, Ola Rosling, Anna Rosling Rönnlund

Max_Daniel @ 2021-03-30T09:46 (+3)

Thanks Tony, I appreciate you engaging here.

It sounds to me like we're largely on the same page, and that my original post may have somewhat overstated the differences between at least some 'human progress' proponents and the longtermist EA perspective. 

On the other hand, looking at just the priorities revealed by what these communities focus on in practice, it does seem like there must be some disagreements.

FWIW, I would guess that one of the main ways in which what you say would lead to pushback from EAs is that at times it sounds somewhat anthropocentric - i.e. considering the well-being of humans, but not non-human animals. 

  • Many if not most EAs believe that nonhuman animals have the capacity to suffer in a morally relevant way, and so consider factory farming to be a moral catastrophe not just because of its adverse effects on the human population or the climate, but for the sake of farmed animals having bad lives under cruel conditions (whether these animals are raised and slaughtered in the US or in China - less intensive animal farming is much more widespread outside of the US, but even globally the vast majority of farmed animals live in factory farms because these have so much larger animal populations).
  • On the other hand, I do think there may be a fair amount of convergence between EA and 'human progress proponents' on how to address this problem. In particular, I think it's fair to say that EAs tend to be less ideologically committed to particular strategies such as vegan outreach and instead, as they try to do in any cause, adopt an open-minded approach in order to identify whatever works best. E.g. they're at least open to and have funded welfare reforms, tend to be interested in clean meat and other animal product alternatives - e.g. the Good Food Institute is one of the very few top charities recommended by the EA-aligned Animal Charity Evaluators.
  • EAs have also pushed the envelope on which cause areas may merit consideration if one cares about the suffering of non-human animals. For instance, they're aware that many more marine than land animals are directly killed for human consumption, and have helped launch new organizations in this area such as the Fish Welfare Initiative. In terms of even more "out there" topics, EAs have considered the well-being of wild animals (taking them seriously as individuals we care about rather than at the species level for the sake of biodiversity), including whether insects and other invertebrates may have the capacity to suffer (which is relevant both because most animals alive are invertebrates and for evaluating insect farming as another reaction to the issues of factory farming).

To be clear, I think we may well agree on most of this. And it's not directly relevant to the future-related issues we've been discussing (though see e.g. here and here). I'm partly mentioning this because the ideal communication strategy for engaging EAs in this particular area probably looks a bit different since EA has such an unusually large fraction of people who are unusually sympathetic to, and open about, farmed and wild animal welfare being globally important considerations.

Max_Daniel @ 2020-01-08T14:26 (+7)

[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]

1. In scenarios where OpenAI, DeepMind etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) 'structure' and (b) 'content'. By (a) I roughly mean what the relevant decision-making mechanisms look like irrespective of the specific goals and resources of the actors the mechanism consists of; e.g., whether some key AI lab is a nonprofit or a publicly traded company; who would decide by what rules/voting scheme how windfall profits would be redistributed; etc. By (b) I mean something like how much the CEO of a key firm, or their advisors, care about the long-term future. -- I can see why relying mostly on (b) is attractive, e.g. it's arguably more tractable; however, some EA thinking (mostly from the Bay Area / the rationalist community to be honest) strikes me as focusing on (b) for reasons that seem ahistorical or otherwise dubious to me. So I don't feel convinced that what I perceive to be a very stark focus on (b) is warranted. I think that figuring out whether there are viable strategies that rely more on (a) is better done from within institutions that have no ties with key TAI actors, and also might be best done by people that don't quite match the profile of the typical new EA that got excited about Superintelligence or HPMOR. Overall, I think that making more academic research in broadly "policy relevant" fields happen would be a decent strategy if one ultimately wanted to increase the amount of thinking on type-(a) theories of impact.

2. What's the theory of impact if TAI happens in more than 20 years? More than 50 years? I think it's not obvious whether it's worth spending any current resources on influencing such scenarios (I think they are more likely but we have much less leverage). However, if we wanted to do this, then I think it's worth bearing in mind that academia is one of few institutions (in a broad sense) that has a strong track record of enabling cumulative intellectual progress over long time scales. I roughly think that, in a modal scenario, no-one in 50 years is going to remember anything that was discussed on the EA Forum or LessWrong, or within the OpenAI policy team, today (except people currently involved); but if AI/TAI was still (or again) a hot topic then, I think it's likely that academic scholars will read academic papers by Dafoe, his students, the students of his students etc. Similarly, based on track records I think that the norms and structure of academia are much better equipped than EA to enable intellectual progress that is more incremental and distributed (as opposed to progress that happens by way of 'at least one crisp insight per step'; e.g. the Astronomical Waste argument would count as one crisp insight); so if we needed such progress, it might make sense to seed broadly useful academic research now. 

[...]

My view is closer to "~all that matters will be in the specifics, and most of the intuitions and methods for dealing with the specifics are either sort of hard-wired or more generic/have different origins than having thought about race models specifically". A crux here might be that I expect most of the tasks involved in dealing with the policy issues that would come up if we got TAI within the next 10-20 years to be sufficiently similar to garden-variety tasks involved in familiar policy areas that as a first pass: (i) if theoretical academic research were useful, we'd see more stories of the kind "CEO X / politician Y's success was due to idea Z developed through theoretical academic research", and (ii) prior policy/applied strategy experience is the background most useful for TAI policy, with usefulness increasing with the overlap in content and relevant actors; e.g.: working with the OpenAI policy team on pre-TAI issues > working within Facebook on a strategy for how to prevent the government from splitting up the firm in case a left-wing Democrat wins > business strategy for a tobacco company in the US > business strategy for a company outside of the US that faces little government regulation > academic game theory modeling. That's probably too pessimistic about the academic path, and of course it'll depend a lot on the specifics (you could start in academia to then get into Facebook etc.), but you get the idea.

[...]

Overall, the only somewhat open question for me is whether ideally we'd have (A) ~only people working quite directly with key actors or (B) a mix of people working with key actors and more independent ones e.g. in academia. It seems quite clear to me that the optimal allocation will contain a significant share of people working with key actors [...]

If there is a disagreement, I'd guess it's located in the following two points: 

(1a) How big are countervailing downsides from working directly with, or at institutions having close ties with, key actors? Here I'm mostly concerned about incentives distorting the content of research and strategic advice. I think the question is broadly similar to: If you're concerned about the impacts of British rule on India in the 1800s, is it best to work within the colonial administration? If you want to figure out how to govern externalities from burning fossil fuels, is it best to work in the fossil fuel industry? I think the cliché left-wing answer to these questions is too confident in "no" and is overlooking important upsides, but I'm concerned that some standard EA answers in the AI case are too confident in "yes" and are overlooking risks. Note that I'm most concerned about kind of "benign" or "epistemic" failure modes: I think it's reasonably easy to tell people with broadly good intentions apart from sadists or even personal-wealth maximizers (at least in principle -- whether this will get implemented is another question); I think it's much harder to spot cases like key people incorrectly believing that it's best if they keep as much control for themselves/their company as possible because after all they are the ones with both good intentions and an epistemic advantage (note that all of this really applies to a colonial administration with little modification, though here in cases such as the "Congo Free State" even the track record of "telling personal-wealth maximizers apart from people with humanitarian intentions" maybe isn't great -- also NB I'm not saying that this argument would necessarily be unsound; i.e. I think that in some situations these people would be correct).

(1b) To what extent do we need (a) novel insights as opposed to (b) an application of known insights or common-sense principles? E.g., I've heard claims that the sale of telecommunication licenses by governments is an example where post-1950 research-level economics work in auction theory has had considerable real-world impact, and AFAICT this kind of auction theory strikes me as reasonably abstract and in little need of having worked with either governments or telecommunication firms. Supposing this is true (I haven't really looked into this), how many opportunities of this kind are there in AI governance? I think the case for (A) is much stronger if we need little to no (a), as I think the upsides from trust networks etc. are mostly (though not exclusively) useful for (b). FWIW, my private view actually is that we probably need very little of (a), but I also feel like I have a poor grasp of this, and I think it will ultimately come down to what high-level heuristics to use in such a situation.


aarongertler @ 2020-01-16T23:27 (+5)

I found this really fascinating to read. Is there any chance that you might turn it into a "coherent text" at some point?

I especially liked the question on possible downsides of working with key actors; orgs in a position to do this are often accused of collaborating in the perpetuation of bad systems (or something like that), but rarely with much evidence to back up those claims. I think your take on the issue would be enlightening.

Max_Daniel @ 2020-01-17T12:00 (+2)

Thanks for sharing your reaction! There is some chance that I'll write up these and maybe other thoughts on AI strategy/governance over the coming months, but it depends a lot on my other commitments. My current guess is that it's maybe only 15% likely that I'll think this is the best use of my time within the next 6 months.

Max_Daniel @ 2020-03-23T09:29 (+6)

[Epistemic status: speculation based on priors about international organizations. I know next to nothing about the WHO specifically.]

[On the WHO declaring COVID-19 a pandemic only (?) on March 12th. Prompted by this Facebook discussion on epistemic modesty on COVID-19.]

- [ETA: this point is likely wrong, cf. Khorton's comment below. However, I believe the conclusion that the timing of WHO declarations by itself doesn't provide a significant argument against epistemic modesty still stands, as I explain in a follow-up comment below.] The WHO declaring a pandemic has a bunch of major legal and institutional consequences. E.g. my guess is that among other things it affects the amounts of resources the WHO and other actors can utilize, the kind of work the WHO and others are allowed to do, and the kind of recommendations the WHO can make.

- The optimal time for the WHO to declare a pandemic is primarily determined by these legal and institutional consequences. Whether COVID-19 is or will in fact be a pandemic in the everyday or epidemiological sense is an important input into the decision, but not a decisive one.

- Without familiarity with the WHO and the legal and institutional system it is a part of, it is very difficult to accurately assess the consequences of the WHO declaring a pandemic. Therefore, it is very hard to evaluate the timing of the WHO's declaration without such familiarity. And being even maximally well-informed about COVID-19 itself isn't even remotely sufficient for an accurate evaluation.

- The bottom line is that the WHO officially declaring that COVID-19 is a pandemic is a totally different thing from any individual persuasively arguing that COVID-19 is or will be a pandemic. In a language that accurately reflected differences in meaning, my saying that COVID-19 is a pandemic and the WHO declaring that COVID-19 is a pandemic would use different words. It is simply not the primary purpose of this WHO speech act to be an early, accurate, reliable, or whatever indicator of whether "COVID-19 is a pandemic", to predict its impact, or any other similar thing. It isn't primarily epistemic in any sense.

- If just based on information about COVID-19 itself someone confidently thinks that the WHO ought to have declared a pandemic earlier, they are making a mistake akin to the mistake reflected by answering "yes" to the question "could you pass me the salt?" without doing anything.

So did the WHO make a mistake by not declaring COVID-19 to be a pandemic earlier, and if so how consequential was it? Well, I think the timing was probably suboptimal just because my prior is that most complex institutions aren't optimized for getting the timing of such things exactly right. But I have no idea how consequential a potential mistake was. In fact, I'm about 50-50 on whether the optimal time would have been slightly earlier or slightly later. (Though substantially earlier seems significantly more likely optimal than substantially later.)

Khorton @ 2020-03-23T14:02 (+16)

"The WHO declaring a pandemic has a bunch of major legal and institutional consequences. E.g. my guess is that among other things it affects the amounts of resources the WHO and other actors can utilize, the kind of work the WHO and others are allowed to do, and the kind of recommendations the WHO can make."

Are you sure about this? I've read that there aren't major implications to it being officially declared a pandemic.

This article suggests there aren't major changes based on 'pandemic' status https://www.bbc.co.uk/news/world-51839944

Max_Daniel @ 2020-03-25T16:05 (+22)

[Epistemic status: info from the WHO website and Wikipedia, but I overall invested only ~10 min, so might be missing something.]

Under the 2005 International Health Regulations (IHR), states have a legal duty to respond promptly to a PHEIC (public health emergency of international concern).
[Note by me: The International Health Regulations include multiple instances of "public health emergency of international concern". By contrast, they include only one instance of "pandemic", and this is in the term "pandemic influenza" in a formal statement by China rather than the main text of the regulation.]
  • The WHO declared a PHEIC due to COVID-19 on January 30th.
  • The OP was prompted by a claim that the timing of the WHO using the term "pandemic" provides an argument against epistemic modesty. (Though I appreciate this was less clear in the OP than it could have been, and maybe it was a bad idea to copy my Facebook comment here anyway.) From the Facebook comment I was responding to:
For example, to me, the WHO taking until ~March 12 to call this a pandemic*, when the informed amateurs I listen to were all pretty convinced that this will be pretty bad since at least early March, is at least some evidence that trusting informed amateurs has some value over entirely trusting people usually perceived as experts.
  • Since the WHO declaring a PHEIC seems much more consequential than them using the term "pandemic", the timing of the PHEIC declaration seems more relevant for assessing the merits of the WHO response, and thus for any argument regarding epistemic modesty.
  • Since the PHEIC declaration happened significantly earlier, any argument based on the premise that it happened too late is significantly weaker. And whatever the apparent initial force of this weaker argument, my undermining response from the OP still applies.
  • So overall, while the OP's premise appealing to major legal/institutional consequences of the WHO using the term "pandemic" seems false, I'm now even more convinced of the key claim I wanted to argue for: that the WHO response does not provide an argument against epistemic modesty in general, nor for the epistemic superiority of "informed amateurs" over experts on COVID-19.
Lukas_Gloor @ 2020-03-25T21:30 (+5)

About declaring it a "pandemic," I've seen the WHO reason as follows (me paraphrasing):

«Once we call it a pandemic, some countries might throw up their hands and say "we're screwed," so we should better wait before calling it that, and instead emphasize that countries need to try harder at containment for as long as there's still a small chance that it might work.»

So overall, while the OP's premise appealing to major legal/institutional consequences of the WHO using the term "pandemic" seems false, I'm now even more convinced of the key claim I wanted to argue for: that the WHO response does not provide an argument against epistemic modesty in general, nor for the epistemic superiority of "informed amateurs" over experts on COVID-19.

Yeah, I think that's a good point.

I'm not sure I can have updates in favor or against modest epistemology because it seems to me that my true rejection is mostly "my brain can't do that." But if I could have further updates against modest epistemology, the main Covid-19-related example for me would be how long it took some countries to realize that flattening the curve instead of squishing it is going to lead to a lot more deaths and tragedy than people seem to have initially thought. I realize that it's hard to distinguish between what's actual government opinion versus what's bad journalism, but I'm pretty confident there was a time when informed amateurs could see that experts were operating under some probably false or at least dubious assumptions. (I'm happy to elaborate if anyone's interested.)

MichaelStJules @ 2020-03-25T23:31 (+4)
For example, to me, the WHO taking until ~March 12 to call this a pandemic*, when the informed amateurs I listen to were all pretty convinced that this will be pretty bad since at least early March, is at least some evidence that trusting informed amateurs has some value over entirely trusting people usually perceived as experts.

Also, predicting that something will be pretty bad or will be a pandemic is not the same as saying it is now a pandemic. When did it become a pandemic according to the WHO's definition?

Expanding a quote I found on the wiki page in the transcript here from 2009:

Dr Fukuda: An easy way to think about pandemic – and actually a way I have some times described in the past – is to say: a pandemic is a global outbreak. Then you might ask yourself: “What is a global outbreak”? Global outbreak means that we see both spread of the agent – and in this case we see this new A(H1N1) virus to most parts of the world – and then we see disease activities in addition to the spread of the virus. Right now, it would be fair to say that we have an evolving situation in which a new influenza virus is clearly spreading, but it has not reached all parts of the world and it has not established community activity in all parts of the world. It is quite possible that it will continue to spread and it will establish itself in many other countries and multiple regions, at which time it will be fair to call it a pandemic at that point. But right now, we are really in the early part of the evolution of the spread of this virus and we will see where it goes.

But see also WHO says it no longer uses 'pandemic' category, but virus still emergency from February 24, 2020.

Max_Daniel @ 2020-03-23T16:31 (+5)

Thank you for pointing this out! It sounds like my guess was probably just wrong.

My guess was based on a crude prior on international organizations, not anything I know about the WHO specifically. I clarified the epistemic status in the OP.

Max_Daniel @ 2020-06-26T12:46 (+5)

[Context: This is a research proposal I wrote two years ago for an application. I'm posing it here because I might want to link to it. I plan to spend a few weeks looking into a subquestion: how heavy-tailed is EA talent, and what does this imply for EA community building?]

Research proposal: Assess claims that "impact is heavy-tailed"

Why is this valuable?

EAs frequently have to decide how many resources to invest into estimating the utility of their available options; e.g.:

One major input to such questions is how heavy-tailed the distribution of altruistic impact is: The better the best options are relative to a random option, the more valuable it is to identify the best options.

Claims like “impact is heavy-tailed” are widely accepted in the EA community—with major strategic consequences (e.g. [1], “Talent is high variance”)—but have sometimes been questioned [2, 3, 4, 5].

These claims are often made in an imprecise way, which makes it hard to estimate the extent of their practical implications (should you spend a month or a year doing research before deciding?), and hard to check if one actually disagrees about them. E.g., is the claim that we can now see that Einstein did much more for progress in physics than 90% of the world population at his time, or that in 1900 our subjective expected value for the progress Einstein would make would have been much higher than the value for a random physics graduate student, or something in between?

Suggested approach

1. Collect several claims of this type that have been made.
2. Review statistical measures of heavy-tailedness.
3. Limit the project’s scope appropriately. E.g., focus just on the claim that “talent is heavy-tailed” and its implications for community building.
4. Refine claims into precise candidate versions, i.e. something like “looking backwards, the empirical distribution of the number of published papers by researcher looks like it was sampled from a distribution that doesn’t have finite variance” rather than “researcher talent is heavy-tailed”.
5. Assess the veracity of those claims, based on published arguments about them and general properties of heavy-tailed distributions (e.g. [6]). Perhaps gather additional data (a rough illustration of what this could look like is sketched just below this list).
6. Write up the results in an accessible way that highlights the true, precise claims and their practical implications.
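
To make step 5 a bit more concrete, here is a minimal, purely illustrative sketch of how one might compare a heavy-tailed (Pareto) and a thinner-tailed (lognormal) model of some outcome metric, and estimate a tail index. The data are simulated and all names are placeholders; a real analysis would require relevant data (e.g. on researcher output) and care about the choice of threshold and candidate distributions.

```python
# Minimal sketch: compare a heavy-tailed (Pareto) vs. thinner-tailed (lognormal)
# fit to an outcome metric such as "papers per researcher". Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.pareto(a=1.5, size=5000) + 1.0  # stand-in for real outcome data

# Maximum-likelihood fits of both candidate distributions (location fixed at 0).
pareto_params = stats.pareto.fit(data, floc=0)
lognorm_params = stats.lognorm.fit(data, floc=0)

# Compare goodness of fit via total log-likelihood (both models have two free
# parameters here, so no complexity penalty is needed for a rough comparison).
ll_pareto = np.sum(stats.pareto.logpdf(data, *pareto_params))
ll_lognorm = np.sum(stats.lognorm.logpdf(data, *lognorm_params))
print(f"log-likelihood, Pareto:    {ll_pareto:.1f}")
print(f"log-likelihood, lognormal: {ll_lognorm:.1f}")

# Hill estimator of the tail index from the top k observations; estimates
# below 2 suggest infinite variance, i.e. a very heavy tail.
k = 500
xs = np.sort(data)
hill_alpha = 1.0 / np.mean(np.log(xs[-k:] / xs[-(k + 1)]))
print(f"Hill tail-index estimate: {hill_alpha:.2f}")
```

This kind of output would then feed into the precise claims of step 4, e.g. "the tail-index estimate is below 2, consistent with a distribution without finite variance", rather than the vaguer "talent is heavy-tailed".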

Concerns

Tobias_Baumann @ 2020-06-30T16:31 (+3)

This could be relevant. It's not about the exact same question (it looks at the distribution of future suffering, not of impact) but some parts might be transferable.

Max_Daniel @ 2020-07-20T14:22 (+4)

[A failure mode of culturally high mental health awareness.]

In my experience, there is a high level of mental health awareness in the EA community. That is, people openly talk about mental health challenges such as depression, and many will know about how to help people facing such challenges (e.g. by helping them to get professional treatment). At least more so than in other communities I've known.

I think this is mostly great, and on net much preferable over low mental health awareness.

However, I recently realized one potential failure mode: There is a risk of overestimating another individual's mental health awareness. For example, suppose I talk to an EA who appears to struggle with depression; I might then think "surely they know that depression is treatable, and most likely they're already doing CBT", concluding there isn't much I can do to help. I might even think "it would be silly for me to mention CBT because it's common knowledge that depression can often be treated that way, and stating facts that are common knowledge is at best superfluous and at worst insulting (because I'd imply the other person might lack some kind of basic knowledge)".

Crucially, this would be a mistake even if I was correct that the person I'm talking to was, by virtue of exposure to the EA community, more likely than usual to have heard of CBT. This is because of a large asymmetry in value: It can be extremely valuable for both one's personal well-being and one's expected impact on the world to e.g. start treatment for depression; the cost of saying something obvious or even slightly annoying pales by comparison.

This suggests a few lessons:

Max_Daniel @ 2020-07-20T13:47 (+4)

[A rebuttal of one argument for hard AI takeoff due to recursive self-improvement which I'm not sure anyone was ever making.

Wrote this as a comment in a Google doc, but thought I might want to link to it sometimes.]

I'm worried that naive appeals to self-improvement are sometimes due to a fallacious inference from current AIs / computer programs to advanced AIs. I.e. the implicit argument is something like:

1. It's much easier to modify code than to do brain surgery.
2. Therefore, advanced AI (self-)improvement will be much easier than human (self-)improvement.

But my worry is that the actual reason why "modifying code" seems more feasible to us is the large complexity difference between current computer programs and human cognition. Indeed, at a physicalist level, it's not at all clear that moving around the required matter on a hard disk or SSD or in RAM or whatever is easier than moving around the required matter in a brain. The difference instead is that we have developed intermediate levels of abstraction (from assembly to high-level programming languages) that massively facilitate the editing process -- they bridge the gap between the hardware level and our native cognition in just the right way. But especially to someone with functionalist or otherwise substrate-neutral inclinations it may seem likely that the key feature that enabled us to construct the intermediate-level abstractions was precisely the small complexity of the "target cognition" compared to our native cognition.

RyanCarey @ 2020-07-20T14:08 (+7)

To evaluate its editability, we can compare AI code to code, and to the human brain, along various dimensions: storage size, understandability, copyability, etc. (i.e. let's decompose "complexity" into "storage size" and "understandability" to ensure conceptual clarity)

For size, AI code seems more similar to humans. AI models are already pretty big, so may be around human-sized by the time a hypothetical AI is created.

For understandability, I would expect it to be more like code than like a human brain. After all, it's created with a known design and objective that was built intentionally. Even if the learned model has a complex architecture, we should be able to understand its relatively simpler training procedure and incentives.

And then, an AI code will, like ordinary code - and unlike the human brain - be copyable, and have a digital storage medium, which are both potentially critical factors for editing.

Size (i.e. storage complexity) doesn't seem like a very significant factor here.

I'd guess the editability of AI code would resemble the editability of code more so than that of a human brain. But even if you don't agree, I think this points at a better way to analyse the question.

Max_Daniel @ 2020-07-20T14:34 (+4)

Agree that looking at different dimensions is more fruitful.

I also agree that size isn't important in itself, but it might correlate with understandability.

I may overall agree with AI code understandability being closer to code than the human brain. But I think you're maybe a bit quick here: yes, we'll have a known design and intentional objective on some level. But this level may be quite far removed from "live" cognition. E.g. we may know a lot about developmental psychology or the effects of genes and education, but not a lot about how to modify an adult human brain in order to make specific changes. The situation could be similar from an AI system's perspective when trying to improve itself.

Copyability does seem like a key difference that's unlikely to change as AI systems become more advanced. However, I'm not sure whether it points to rapid takeoffs as opposed to orthogonal properties. (Though it does if we're interested in how quickly the total capacity of all AI systems grows, and assume hardware overhang plus sufficiently additive capabilities between systems.) To the extent it does, the mechanism seems to be relevantly different from recursive self-improvement - more like "sudden population explosion".

Max_Daniel @ 2020-07-20T14:36 (+4)

Well, I guess copyability would help with recursive self-improvement as follows: it allows one to run many experiments in parallel that can be used to test the effects of marginal changes.

JP Addison @ 2020-07-20T14:32 (+2)

I would expect advanced AI systems to still be improvable in a way that humans are not. You might lose all ability to see inside the AI's thinking process, but you could still make hyperparameter tweaks. You can also make hyperparameter tweaks to humans, but unless you think AIs will take 20 years to train, it still seems easier than comparable human improvement.

Max_Daniel @ 2020-07-20T16:48 (+2)

Fair point. It seems that the central property of AI systems this argument rests on is their speed, or the time until you get feedback. I agree it seems likely that AI training time (and then the ability to evaluate performance on withheld test data or similar) in wall-clock speed will be shorter than feedback loops for humans (e.g. education reforms, genetic engineering, ...).

However, some ways in which this could fail to enable rapid self-improvement:

  • The speed advantage could be offset by other differences, e.g. even less interpretable "thinking processes".
  • Performance at certain tasks may be bottlenecked by feedback from slow real-world interactions. (If sim2real transfer doesn't work well for some tasks.)
Max_Daniel @ 2020-07-20T13:21 (+2)

**Would some moral/virtue intuitions prefer small but non-zero x-risk?**

[Me trying to lean into a kind of philosophical reasoning I don't find plausible. Not important, except perhaps as cautionary tale for what kind of things could happen if you wanted to base the case for reducing x-risk on purely non-consequentialist reasons.]

(Inspired by a conversation with Toby Newberry about something else.)

The basic observation: we sometimes think that a person achieving some success is particularly praiseworthy, remarkable, virtuous, or similar if they could have failed. (Or if they needed to expend a lot of effort, suffered through hardship, etc.)

Could this mean that we removed one source of value if we reduced x-risk to zero? Achieving our full potential would then no longer constitute a contingent achievement - it would be predetermined, with failure no longer on the table.

We can make the thought more precise in a toy model: Suppose that at some time t_0 x-risk is permanently reduced to zero. The worry is that acts happening after t_0 (or perhaps acts of agents born after t_0), even if they produce value, are less valuable in one respect: In their role of being a part of us eventually achieving our full potential, they can no longer fail. More broadly, humanity's great generation-spanning project (whatever that is) can no longer fail. Those humans living after t_0 therefore have a less valuable part in that project. They merely help push along a wagon that was set firmly in its tracks by their ancestors. Their actions may have various valuable consequences, but they no longer increase the probability of humanity's grand project succeeding.

(Similarly, if we're inclined to regard humanity as a whole, or generations born after t_0, as moral agents in their own right we might worry that zero x-risk detracts from the value of their actions.)

Some intuition pumps:

We may think that there is something objectionable, even dystopian, about these situations. At the very least, we may think that the apparent successes of the child prodigy or the family business leaders count for less because, in one respect, they could not have failed.

If we give a lot of weight to such worries we may not want to eliminate x-risk. Instead, perhaps, we'd conclude that it's best to carefully manage x-risk: at any point, it should not be so high that we run an irresponsible risk of squandering all future potential - but it also should not be so low that our children are robbed of more value than we protect.

--

Some reasons why I think this is either implausible or irrelevant:

Larks @ 2020-08-06T03:48 (+4)

Interesting post. I think I have a couple of thoughts; please forgive their unedited nature.

One issue is whether more than one person can get credit for the same event. If this is the case, then both the climber girl and the parents can get credit for her surviving the climb (after all, both their actions were sufficient). Similarly, both we and the future people can get credit for saving the world.

If not, then only one person can get the credit for every instance of world saving. Either we can harvest them now, or we can leave them for other people to get. But the latter strategy involves the risk that they will remain unharvested, leading to a reduction in the total quantity of creditworthiness mankind accrues. So from the point of view of an impartial maximiser of humanity's creditworthiness, we should seize as many as we can, leaving as little as possible for the future.

Secondly, as a new parent I see the appeal of the invisible robots of deliverance! I am keen to let the sproglet explore and stake out her own achievements, but I don't think she loses much when I keep her from dying. She can get plenty of moral achievement from ascending to new heights, even if I have sealed off the depths.

Finally, there is of course the numerical consideration that even if facing a 1% risk of extinction carried some inherent moral glory, it would also reduce the value of all subsequent things by 1% (in expectation). Unless you think the benefit from our children, rather than us, overcoming that risk is large compared to the total value of the future of humanity, it seems like we should probably deny them it.
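
To spell out that arithmetic as a toy inequality (with V standing for the expected value of humanity's future and G for the hypothesized inherent value of overcoming the risk ourselves; both symbols are just for illustration):

$$0.99\,V + G \;>\; V \quad\Longleftrightarrow\quad G \;>\; 0.01\,V$$

So keeping the 1% risk around only looks attractive if the inherent value of contingent achievement is worth more than 1% of everything else at stake in the future.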

Max_Daniel @ 2020-08-06T07:38 (+2)

Thanks, this all makes sense to me. Just one quick comment:

So from the point of view of an impartial maximiser of humanity's creditworthiness, we should seize as many as we can, leaving as little as possible for the future.

If I understand you correctly, your argument for this conclusion assumed that the total number of world-saving instances is fixed independently of anyone's actions. But I think in practice this is wrong, i.e. the number of world-saving opportunities is endogenous to people's actions including in particular whether they reap current world-saving opportunities.

Oversimplified example: perhaps currently there is one world-saving instance per year from Petrov-style incidents, i.e. countries not launching a nuclear strike in response to a false alarm of a nuclear attack. But if there was a breakthrough in nuclear disarmament that reduced nuclear stockpiles to zero this would also eliminate these future world-saving opportunities.

[Oversimplified b/c in fact a nuclear exchange isn't clearly an x-risk.]

Larks @ 2020-08-06T18:38 (+4)

Hey, yes - I would count that nuclear disarmament breakthrough as being equal to the sum of those annual world-saving instances. So you're right that the number of events isn't fixed, but their measure (as in the % of the future of humanity saved) is bounded.

matthew.vandermerwe @ 2020-08-07T08:48 (+3)

Nice post. I’m reminded of this Bertrand Russell passage:

“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903

I take Russell as arguing that the inevitability (as he saw it) of extinction undermines the possibility of enduring achievement, and that we must therefore either ground life’s meaning in something else, or accept nihilism.

At a stretch, maybe you could run your argument together with Russell's — if we ground life’s meaning in achievement, then avoiding nihilism requires that humanity neither go extinct nor achieve total existential security.

Lukas_Gloor @ 2020-08-06T13:36 (+2)

Related: Relationships in a post-singularity future can also be set up to work well, so that the setup overdetermines any efforts by the individuals in them.

To me, that takes away the whole point. I don't think this would feel less problematic if somehow future people decided to add some noise to the setup, such that relationships occasionally fail.

The reason I find any degree of "setup" problematic is that this seems like emphasizing the self-oriented benefits one gets out of relationships, and de-emphasizing the from-you-independent identity of the other person. It's romantic to think that there's a soulmate out there who would be just as happy to find you as you are about finding them. It's not that romantic to think about creating your soulmate with the power of future technology (or society doing this for you).

This is the "person-affecting intuition for thinking about soulmates." If the other person exists already, I'd be excited to meet them, and would be motivated to put in a lot of effort to make things work, as opposed to just giving up on myself in the face of difficulties. By contrast, if the person doesn't exist yet or won't exist in a way independent of my actions, I feel like there's less of a point/appeal to it.