Comments for shorter Cold Takes pieces
By Holden Karnofsky @ 2021-11-03T12:48 (+48)
I'd like to start giving people the option of commenting on shorter Cold Takes pieces (which I don't cross-post here or provide audio for). I'm going to use this post for that: I will generally leave a comment for each piece, and people can leave their comments as replies to that.
Holden Karnofsky @ 2021-12-28T22:59 (+14)
Comments for What counts as death? will go here.
Molly @ 2021-12-31T02:44 (+6)
Vipassana meditation aims to give meditators experiential knowledge (rather than theoretical/intellectual understanding) of this conception of self. I think that's what a lot of people get out of psychedelics as well.
I thought this paper was really interesting: https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.12590
The abstract:
"It is an old philosophical idea that if the future self is literally different from the current self, one should be less concerned with the death of the future self (Parfit, 1984). This paper examines the relation between attitudes about death and the self among Hindus, Westerners, and three Buddhist populations (Lay Tibetan, Lay Bhutanese, and monastic Tibetans). Compared with other groups, monastic Tibetans gave particularly strong denials of the continuity of self, across several measures. We predicted that the denial of self would be associated with a lower fear of death and greater generosity toward others. To our surprise, we found the opposite. Monastic Tibetan Buddhists showed significantly greater fear of death than any other group. The monastics were also less generous than any other group about the prospect of giving up a slightly longer life in order to extend the life of another."
One interesting note: "None of the participants we studied were long-term meditators (Tsongkhapa, 1991), and one important question for future research will be whether highly experienced practitioners of meditation would in fact show reduced fear of self-annihilation." I don't know if they ever did that future research.
algekalipso @ 2022-01-03T02:34 (+4)
Hi Holden!
I am happy to see you think deeply about questions of personal identity. I've been thinking about the same for many years (e.g. see "Ontological Qualia: The Future of Personal Identity"), and I think that addressing such questions is critical for any consistent theory of consciousness and ethics.
I broadly agree with your view, but here are some things that stand out as worth pointing out:
First, I prefer Daniel Kolak's factorization of "views of personal identity". Namely, Closed Individualism (common sense - we are each a "timeline of experience"), Empty Individualism (we are all only individual moments of experience, perhaps most similar to Parfit's reductionist view as well as yours), and Open Individualism (we are all the same subject of experience).
I think that if Open Individualism is true a lot of ethics could be drastically simplified: caring about all sentient beings is not only kind, but in fact rational. While I think that Empty Individualism is a really strong candidate, I don't discard Open Individualism. If you do assume that you are the same subject of experience over time (which I know you discard, but many don't), I think it follows that Open Individualism is the only way to reconcile that with the fact that each moment of experience generated by your brain is different. In other words, if there is no identity carrier we can point to that connects every moment of experience generated by e.g. my brain, then we might infer that the very source of identity is the fact of consciousness per se. Just something to think about.
The other key thing I'd highlight is that you don't seem to pay much attention to the mystery of why each snapshot of your brain is unified. Parfit also seems to have some sort of neglect around this puzzle, for I don't see it addressed anywhere in his writings despite its central importance to the problem of personal identity.
Synchrony is not a good criterion: there is no universal frame of reference. Plus, even if we could use synchrony as an approximate "unifier" of physical states, we then further have the problem that we would need a natural ground-truth boundary to arise that would make your brain generate a moment of experience that is numerically distinct from those generated by other brains at the same time.
I do think that there is in fact a way to solve this. To do so, rather than thinking in terms of "binding" (i.e. why do these two atoms contribute to the same experience but not these two atoms?), we should think in terms of "boundaries" (i.e. what makes this region of reality have a natural boundary that separates it from the rest?). In particular, my solution uses topological segmentation, and IMO solves all of the classic problems. It results in a strong case for Empty Individualism, since topological boundaries in the fields of physics would be objective, causally significant, and frame-invariant (all highly desirable properties for the mechanism of individuation so that e.g. natural selection would have a way of recruiting moments of experience for computational purposes). Additionally, the topological pockets that define individual moments of experience would be spatiotemporal in nature. We don't need to worry about infinitesimal partitions and a lack of objective frames of reference for simultaneity because the topological pockets have definite spatial and temporal depth. There would, in fact, be a definite and objective answer to "how many experiences are there in this volume of spacetime?" and similar questions.
If interested, I recommend watching my video about my solution to the binding problem here: Solving the Phenomenal Binding Problem: Topological Segmentation as the Correct Explanation Space. Even just reading the video description goes a long way :-) Let me know your thoughts if you get to it.
All the best!
Jeff Klingner @ 2022-02-02T20:26 (+3)
What would be the "Continuous Replacement" take on cryonics?
For this question, assume that cryonics works (revival succeeds) and is costless. From a personal identity standpoint, is cryonics any different from a nap? Would you be interested in cryonics only to the extent that your projects and relationships were still around? i.e. interested only if your loved ones were also preserved? Less interested the more time will pass before revival? Would very long-term projects like "learn how the world works" or "protect humanity" or "see how this all turns out" provide enough justification?
And out of curiosity: Are you signed up for cryonics or interested in signing up?
RYC @ 2021-12-29T01:53 (+2)
Cf. section 6.3 of Parfit's Ethics:
[O]ne way of characterizing Parfit's reductionism would be as a kind of illusionism or anti-realism about personal identity: you could say that we don't really persist through time at all--we can just talk as though we do, for convenience.
Here's a crucial question: is it rational to anticipate experiences that will be felt by some "future self" to whom you are strongly R-related? Or does anticipation implicitly presuppose a non-reductionist view of identity? Parfit (1984, 312) does not commit himself either way, suggesting that it "seems defensible both to claim and to deny that Relation R gives us reason for special concern." Of course, your "future selves" (or R-related continuants) are as closely-related to you as can be, so if we have reason to be partial towards anyone, we presumably have reason to be partial towards them. But it would still seem a significant loss if we could no longer think of our future selves as ourselves: if they became mere relatives, however close.
I don't think such a bleak view is forced on us, however. The distinction between philosophical reduction and elimination is notoriously thorny, and analogous questions arise all over the philosophical map. Consciousness, normativity, and free will are three examples for which it is comparably contentious whether reduction amounts to elimination. ...
I find it tempting to give different answers in different cases. Consciousness and normativity strike me as sui generis phenomena, missing from any account that countenances only things constituted by atoms. For free will and personal identity, by contrast, I'm inclined to think that the "non-reductive" views don't even make sense (the idea of ultimate sourcehood, or originally choosing the very basis on which you will make all choices--including that first one!--is literally incoherent). Reductive accounts of these latter phenomena can fill their theoretical roles satisfactorily, in my view.
Other readers may carve up the cases differently. However you do it, my suggestion would be that reductionists can more easily resist eliminativist pressures if they think there is no coherent possibility there to be eliminated. If ultimate sourcehood makes no sense, it would seem unreasonable to treat it as a requirement for anything else, including moral desert. [Footnote: To avoid amounting to a merely verbal dispute, I take it that reductionists and eliminativists must disagree about whether some putative reduction base suffices to fill an important theoretical role associated with the original concept.] So we might comfortably accept a compatibilist account as sufficing to make one responsible in the strongest sense, as there simply is nothing more that could be required. Perhaps a similar thing could be said of personal identity. If we think that "Further Fact" views are not merely theoretically extravagant, but outright impossible, it might be easier to regard relation R as sufficient to justify anticipation. What more could be required, after all?
This reasoning is not decisive. Eliminativists could insist that anticipation is *essentially* irrational, presupposing something that could not possibly be. Or they could insist that the Further Fact view is not incoherent, but merely contingently false. Even so, their side too seems to lack decisive arguments. As is so often the case in philosophy, it is up to us to judge what strikes us as the most plausible position, all things considered.
The non-eliminative, reductionist view is, at least, much less drastically revisionary. (If our future selves are better regarded as entirely new people, there would seem no basis for distinguishing killing from failing to bring into existence. You would have to reconceive of guns as contraceptive agents. Nobody survives the present moment anyway, on this view, so the only effect of lethally shooting someone would be to prevent a new, qualitatively similar person from getting to exist in the next moment. Not so bad!) Though even if Parfit's reductionism can vindicate ordinary anticipation and self-concern, it certainly calls for some revisions to our normative thought....
Holden Karnofsky @ 2022-01-03T19:35 (+3)
"Failing to bring in existence" seems an odd way of putting it. I would rephrase as "preventing from coming into existence," and I think that makes a big difference.
E.g., choosing not to have a child (or choosing not to help someone else have one) is not a crime, but any action that deliberately caused an unwanted miscarriage would be.
Beyond that, I think there is plenty of room (if one wants) to define the relationship between past and future selves as "something special" - such that it is a special kind of tragedy when someone loses their opportunity to have future selves, even exactly on par with how tragic we normally think of murder as being - without giving up the benefits of the view I outlined.
I think it is tragic for someone's life projects and relationships to be forcibly cut off - even when we imagine this as "cut off via the prevention of their future selves coming into existence to continue these projects and relationships" - in a way that "a life not coming into existence" isn't. (I am pretty lukewarm on the total view; people who are more into that view might just say these are equally tragic.) In addition to how tragic it is, it seems like a quite different situation w/r/t whether blame and punishment are called for.
wstewart @ 2022-05-05T03:10 (+1)
Philosophy without the black box
Continuity of consciousness may be a notion that's more significant than commonly imagined. Psychologist William James presented continuity in memorable form, in his "Principles of Psychology". 132 years later, his stream of thought, felt time-gaps, and unfelt time-gaps all remain active terms in the literature. Yet the greater concept -- subjective continuity -- seems not to be bounded by James' familiar text. The concept seems applicable even at the extremities of life; no accepted line of reasoning renders it inapplicable.
Continuity reasoning can be structured around the natural case; i.e., the natural conditions and transitions found at extremities. No fictive elements are necessary in the reasoning: no teleporters, duplicates, digital copies, or re-creations are required. In fact, sci-fi can cripple reasoning just because there's nothing to understand in the fictions, nothing functional inside the verbal "black box".
For my part, I've made do without such black box fictions; I reasoned without them. Judging from correspondence post-publication, this was the right call.
-
Aristotle said, "All men by nature desire to know." This was in fact the very first sentence of Aristotle's "Metaphysics". What to make of the black box, then?
There's nothing to know about the word, "teleporter", for example. One can imagine things, of course; but these imaginings can't be solidified. A writer can say, "Let's assume the teleportation black box works this way," but he says this without authority. The reader can reply, "No, assume it works this entirely different way," and overwrite the author's analysis, freely. There's no end to that fictive back-and-forth; it goes on and on.
Common facts receive comparatively little analysis.
So, was Aristotle right or wrong? Where the word "metaphysics" pertains, do all men desire to know, or not?
-
For consideration, the old essay: Metaphysics by Default
Chapters 1-4 are historical.
Chapters 5-7 give mathematical, computational, and neurological background, with a first inference.
Chapter 8 gives philosophical background, with a second inference.
Chapter 9 presents James' text and applies inferences toward reasoning for boundless continuity. Some novelties follow.
ws
Conor Murray @ 2022-01-11T02:01 (+1)
FYI Sam Harris has a good talk through of the death argument in #263
calebo @ 2021-12-30T18:40 (+1)
I've heard this view referred to as a time-slice view of personal identity before.
Personal identity is tied to ordinary questions about the identity and persistence of ordinary objects.
So, you should probably have the same set of persistence conditions (time-slice / constant replacement) for cups, computers, organisms, atoms etc.
If that's true, then "personality, relationships, and ongoing projects" are also only things that exist at time-slices. Plausibly, they don't exist at all since each necessarily exists through time. Either way, there's no sense in which they can be shared with future selves.
I think this kind of issue is better solved by the "reductionist" understanding of Parfit's views than the "eliminativist" / "illusionist" version. There's no illusion of selfhood or constant replacement, just degrees of similarity that compose our idea of a self.
Holden Karnofsky @ 2022-01-03T19:36 (+3)
I'm not following why "[I] should probably have the same set of persistence conditions (time-slice / constant replacement) for cups, computers, organisms, atoms etc." I don't have those persistence conditions for myself, in every possible sense - only in one particular important sense I pointed at in the post.
I think there are coherent uses of the words "Holden Karnofsky" and the singular tense; you can think of them as pointing at a "set of selves" that has something important in common and has properties of its own as a set. What I'm rejecting is the idea that there is some "continuous consciousness" such that I should fear death when it's "interrupted," but not when it isn't. By a similar token, I think there are plenty of reasonable senses in which "my computer" is a single thing, and other senses in which my computer one day is different from my computer the next day. And same goes for my projects and relationships. In all of these cases, I could be upset if the future of such a thing is cut off entirely, but not if its physical instantiation is replaced with a functional duplicate.
Martin Trouilloud @ 2021-12-30T17:49 (+1)
If you vaporized me and created a copy of me somewhere else, that would just be totally fine. I would think of it as teleporting. It'd be chill.
...
If that's right, "constant replacement" could join a number of other ideas that feel so radically alien (for many) that they must be "impossible to live with," but actually are just fine to live with. (E.g., atheism; physicalism; weird things about physics. I think many proponents of these views would characterize them as having fairly normal day-to-day implications while handling some otherwise confusing questions and situations better.)
These contradict each other. Let's say, like you imagined in an earlier post, that one day I'll be able to become a digital person by destroying my physical body in a futuristic brain-scanning process. It's pretty obvious that the connected conscious experience I've (I hope!) experienced my whole life, would, at that transition, come to an end. Whether or not it counts as me dying, and whether this new person 'is' me, are to some extent just semantics. But your and Parfit's position seems to define away the basic idea of personal identity just to solve its problems. My lifelong connected conscious awareness would undeniably cease to exist; the awareness that was me will enter the inky nothingness. The fact that my clone is walking and talking is completely orthogonal to this basic reality.
So if I tried to live with this idea "for a full week", except at the end of the week I know I'd be shot and replaced, I'd be freaking out, and I think you would be too. Any satisfactory theory of personal identity has to avoid equating death with age-related change. I should read Reasons and Persons, but none of the paradoxes you link to undermine this 'connected consciousness' idea of personal identity (which differs from what Bernard Williams--and maybe Parfit?--would call psychological continuity). As I understand it, psychological continuity allows for any given awareness to end permanently as long as it's somewhere replaced, but what I'm naively calling 'connected consciousness' doesn't allow this.
Another way of putting it; in your view, the only reason death is undesirable is that it permanently ends your relationships and projects. I also care about this aspect, but for me, and I think most non-religious people, death is primarily undesirable because I don't want to sleep forever!
Holden Karnofsky @ 2022-01-03T19:37 (+2)
Both parts you quoted are saying that the notion of personal identity I'm describing is (or at least can be) "fine to live with." You might disagree with this, but I'm not following where the contradiction is between the two.
So if I tried to live with this idea "for a full week", except at the end of the week I know I'd be shot and replaced, I'd be freaking out, and I think you would be too.
What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here - I don't think anyone wants someone they care about to experience getting shot.)
It's pretty obvious that the connected conscious experience I've (I hope!) experienced my whole life, would, at that transition, come to an end.
I don't find it obvious that there's something meaningful or important about the "connected conscious experience." If I imagine a future person with my personality and memories, it's not clear to me that this person lacks anything that "Holden a moment from now" has.
Another way of putting it; in your view, the only reason death is undesirable is that it permanently ends your relationships and projects. I also care about this aspect, but for me, and I think most non-religious people, death is primarily undesirable because I don't want to sleep forever!
I don't think death is like sleeping forever, I think it's like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.
Martin Trouilloud @ 2022-01-08T03:24 (+1)
They contradict each other in the sense that your full theory, since it includes the particular consequence that vaporization is chill, is I think not something anyone but a small minority would be fine to live with. Quantum mechanics and atheism impose no such demands. Calling this idea fine to live with isn't too strong a claim only if you're just going about your daily life and ignoring the vaporization part; but "fine to live with" has to include every consequence, not just the ones that are indeed fine to live with. I interpreted the second quote as arguing that not just you but the general public could get used to this theory, in the same way they got used to quantum mechanics, because it doesn't really affect their day-to-day. This is why I brought up your brain-scan hypothetical; here, the vaporization-is-chill consequence clearly affects their daily lives by offering a potentially life-or-death scenario.
I don't think death is like sleeping forever, I think it's like simply not existing at all. In a particular, important sense, I think the person I am at this moment will no longer exist after it.
Let's say I die. A week later, a new medical procedure is able to revive me. What is the subjective conscious experience of the physical brain during this week? There is none--exactly like during a dreamless sleep. Of course death isn't actually like sleeping forever; what's relevant is that the conscious experience associated with the dead brain atom-pile matches that of the alive, sleeping brain, and also that of a rock.
What I meant was to try imagining that you disappear every second and are replaced by someone similar, and try imagining that over the course of a full week. (I think getting shot is adding distraction here - I don't think anyone wants someone they care about to experience getting shot.)
It's not the gunshot that matters here. If at the end of this week I knew I'd painlessly, peacefully pass away, only to be reassembled immediately nearby with my family none the wiser, I would be freaking out just as much as in the gunshot scenario. The shorter replacement timescale (a second instead of a week) is the real distraction; it brings in some weird and mostly irrelevant intuitions, even though they're functionally equivalent theories. Here's what I think would happen in the every-second scenario, assuming that I knew your theory was correct: I would quickly realize (albeit over the course of many separate lives and with the thoughts of fundamentally different people) that each successive Martin dies immediately, and that in my one-second wake are thousands of former Martins sleeping dreamlessly. This may eventually become fine to live with only to the extent that the person living it doesn't actually believe it--even if they believe they believe it. If I stayed true to my convictions and remained mentally alright, I'd probably spend most of my time staring at a picture of my family or something. This is why your call to try living with this idea for a week rings hollow to me. It's like a deep-down atheist trying to believe in God for a week; the emotional reaction can't be faked, even if you genuinely believe you believe in God.
I don't find it obvious that there's something meaningful or important about the "connected conscious experience." If I imagine a future person with my personality and memories, it's not clear to me that this person lacks anything that "Holden a moment from now" has.
I agree, this future person lacks nothing--from future person's perspective. From the perspective of about-to-be-vaporized present person, who has the strongest claim to their own identity, future person lacks any meaningful connection to present person beyond the superficial, as present person's brain's conscious experience will soon be permanently nothing, a state that future person's brain doesn't share. Through my normal life, even if all my brain's atoms eventually get replaced, it seems there is this 'connected consciousness' preserving one particular personal identity, rather than a new but otherwise identical one replacing it wholesale like in the teleporter hypothetical.
If I died, was medically revived a week later, and found a newly constructed Martin doing his thing, I would be pretty annoyed, and I think we'd both realize, given full mutual knowledge of our respective origins, that Martin's personal identity belongs to me and not him.
I don't intend these vague outlines to be an actual competing conception of personal identity, I have no idea what the real answer is. My core argument is that any theory that renders death-and-replacement functionally equivalent to normal life is unsatisfactory. You did inspire me to check out Reasons and Persons from the library; I hope I'm proven wrong by some thought experiment, and also that I'm not about to die.
Taymon @ 2021-12-29T06:21 (+1)
This (often framed as being about the hard problem of consciousness) has long been a topic of argument in the rationalsphere. What I've observed is that some people have a strong intuition that they have a particular continuous subjective experience that constitutes what they think of as being "them", and other people don't. I don't think this is because the people in the former group haven't thought about it. As far as I can tell, very little progress has been made by either camp of converting the other to their preferred viewpoint, because the intuitions remain even after the arguments have been made.
Martin Trouilloud @ 2021-12-30T18:06 (+2)
I think this is pretty strong evidence that Holden and Parfit are p-zombies :)
WS Campbell @ 2021-12-29T01:19 (+1)
Let's say H_T is Holden at time T.
Plausible Moral Rule (PMR): People cannot be morally blameworthy for actions that occurred before they existed.
By the PMR, for instance, H_T cannot be blameworthy for a murder committed by Ted Bundy.
Now suppose that H_T-1 committed murder on national television.
According to the view of personhood laid out in this post, plus the PMR, it seems like H_T is not blameworthy for the murder committed by H_T-1.
That seems whacky.
I think that seems whacky for precisely the reason that H_T and H_T-1 are the same person.
(Quick note: H_T seems blameworthy for H_T-1's murder in a way that's fundamentally different than the way we might say Holden's parents are blameworthy, even if H_T-1 is a minor.)
WS Campbell @ 2021-12-29T01:32 (+1)
Me: *pours water on Holden's head*
Holden: WTF??!
Me, 1 second later: It wasn't me!
Holden considers:
- "Yeah it was! I saw you!"; or
- "Fair enough."
Holden Karnofsky @ 2022-01-03T19:39 (+3)
The reason I don't agree that this is an issue is that I don't accept the "plausible moral principle" (I alluded to this briefly in footnote 3 of the piece).
I titled the piece "what counts as death?" because it is focused on personal identity for that purpose. We need not accept "HT is not responsible for HT-1's actions" in order to accept "HT-1 cares about HT analogously to a close relation, with continuity of experience being unimportant here" or " HT-1 and HT do not have the kind of special relationship that powers a lot of fears about teleportation being death, and other paradoxes."
Admittedly, part of the reason I feel OK preserving the normal "responsibility" concept while scrapping the normal "death" concept is that I'm a pragmatist about responsibility: to me, "HT is responsible for HT-1's actions" means something like "Society should treat HT as responsible for HT-1's actions; this will get good results." My position would be a more awkward fit for someone who wanted to think of responsibility as something more fundamental, with a deep moral significance.
WS Campbell @ 2022-01-05T22:35 (+2)
Thanks for your thoughts, Holden! Fun to engage.
re: The Pragmatic View of Blameworthiness/Responsibility
I'm compelled against your "pragmatic" view of moral blame by something like Moore's open-question argument. It seems like we could first decide whether or not someone is blameworthy and then ask a further, separate question about whether they should be punished. For instance, imagine that Jack was involved in a car accident that resulted in Jill's death. Each of the following questions seems independently sensible to me:
(a) Is Jack morally responsible (i.e., blameworthy) for Jill's death?
(b) Assuming yes, is it morally right to punish Jack? (Set aside legal considerations for our purposes.)
If the pragmatic view about blameworthiness is correct, asking this second question (b) is as incoherent, vacuous, or nonsensical as saying, "I know there's water in this glass, but is it H2O that's in there?" But if determining that (a) Jack is blameworthy for Jill's death still leaves open (b) the question of whether or not to punish Jack, then blameworthiness and punishment-worthiness are not identical (cf., the pragmatic view).[1]
re: Focus of the Piece was Death, not Moral Blame
I understood that the purpose of your post was to consider the implications of a certain view about personal identity continuity (PIC) for our conception of death. But I was trying to show that this particular view of PIC was incompatible with a commonsense view about moral blame. If they are in fact incompatible, and if the commonsense view about moral blame is right, then we have reason to reject this view of PIC (and then we don't need to ask what its implications are for our notions of death).
So is that view of moral blame wrong?
It seems prima facie correct to me that Jack cannot be blameworthy for an action that occurred before Jack existed.
But it seems like you reject this idea. I'll think harder about whether or not that view of blameworthiness is correct. For now:
I see how H_T-1 can be (causally, morally) responsible for something that H_T-1 does, but I don't see how H_T can be responsible for something H_T-1 does unless H_T and H_T-1 are the same person. For H_T to be responsible for something H_T-1 does, assuming they're 2 different people, it seems like you'd have to have a concept of responsibility that is fully independent of causality (assuming no backwards-causation). I'm curious what view that would be.
As an aside, your Footnote 3 seems like a reason H_T-1 might have for caring about the interests and wellbeing of H_T, but it doesn't seem like a reason why H_T is in fact responsible for that other dude, H_T-1 (if they're 2 different people).
Thanks for your thoughts!
P.S. I'm new to all of this, so if anything about my comments is counter-normative, I'd be thrilled for some feedback!
[1] We can further think about the separability of these two questions by asking (b) irrespective of (a). For instance, there might be pragmatic reasons to punish a car passenger for drinking alcohol even if there's nothing blameworthy about a passenger drinking alcohol per se.
Holden Karnofsky @ 2022-01-18T20:44 (+2)
In response to the paragraph starting "I see how ..." (which I can't copy-paste easily due to the subscripts):
I think there are good pragmatic arguments for taking actions that effectively hold Ht responsible for the actions of Ht-1. For example, if Ht-1 committed premeditated murder, this gives some argument that Ht is more likely to harm others than the average person, and should be accordingly restricted for their benefit. And it's possible that the general practice of punishing Ht for Ht-1's actions would generally deter crime, while not creating other perverse effects (more effectively than punishing someone else for Ht-1's actions).
In my view, that's enough - I generally don't buy into the idea that there is something fundamental to the idea of "what people deserve" beyond something like "how people should be treated as part of the functioning of a healthy society."
But if I didn't hold this view, I could still just insist on splitting the idea of "the same person" into two different things: it seems coherent to say that Ht-1 and Ht are the same person in one sense and different people in another sense. My main claim is that "myself 1 second from now" and "myself now" are different people in the same sense that "a copy of myself created on another planet" and "myself" are different people; we could simultaneously say that both pairs can be called the "same person" in a different sense, one used for responsibility. (And indeed, it does seem reasonable to me that a copy would be held responsible for actions that the original took before "forking.")
Holden Karnofsky @ 2021-11-09T17:55 (+12)
Comments for Rowing, Steering, Anchoring, Equity, Mutiny will go here for now. I hope to post the whole piece to the Forum separately, but I'm currently having trouble with formatting. I will post a link to it when it's up so that future comments can go there.
DanielFilan @ 2021-11-12T00:27 (+12)
I haven't heard much in the way of specific proposals for how the existing "system" could be fundamentally reformed, other than explicitly socialist and Marxist proposals such as the abolition of private property, which I don't support.
More right-wing flavoured versions that you could run into include flavours of anarcho-capitalism (see e.g. The Machinery of Freedom and The Problem of Political Authority) and Hansonian proposals such as futarchy and private criminal law enforcement.
Ian Turner @ 2023-01-28T21:44 (+1)
How would you classify populist anti-establishment movements like Donald Trump's presidency or Brexit?
To me these are also a kind of mutiny, in that their proponents are motivated more by a sense of grievance that the system is not working for them than a specific idea of what the system should look like instead.
Charles He @ 2021-11-09T21:49 (+1)
I don't think the link in the comment works. Here is a direct link:
https://www.cold-takes.com/rowing-steering-anchoring-equity-mutiny/
Matt Ball @ 2021-11-09T20:04 (+1)
Holden, I'm curious where you would put painist organizations - those who are only trying to reduce / alleviate pain. One Step for Animals and Lewis' Open Phil work are along these lines. Or do you think this is not a big enough area to warrant discussion (which could well be).
Holden Karnofsky @ 2022-01-03T19:43 (+2)
In my head, these seem like "equity," though I'll admit my phrasing in describing "equity" doesn't make this clear. A somewhat broader version of "equity" might be: "focus on improving the day-to-day lives of the people on the ship, rather than anything about where the ship is headed or who's deciding that."
Holden Karnofsky @ 2021-12-06T17:59 (+9)
Comments for today's post on Omicron will go here.
Linch @ 2021-12-07T09:36 (+4)
Re: your comment on Juan Cambeiro being
“one of the best Covid forecasters there is” according to this tweet, which might be based on some quantitative metrics I haven’t easily found or might just not be right
There isn't (ironically) a clean quantifiable metric for comparing forecasters [1] across different platforms and different question sets, but among the EA forecasting world (at least in 2020 when I paid attention to this), Juan has had nontrivial renown for having consistently one of the best coronavirus forecasting records across a broad range of platforms. For example, in addition to his native Metaculus, Juan is currently #3 for coronavirus questions among Good Judgement Open forecasters (and iirc used to be higher; note that he did not predict all questions). He was also #1 in a private GJP 2.0 tournament I was in. He became a certified superforecaster (TM) at the end of 2020. He also generally has explicit reasoning and good takes.
With the caveat that I mostly stopped paying attention to covid in 2021, I think Juan is plausibly one of the best covid forecasters out there, at least among people willing to put their forecasts out there publicly. I think it's plausible that private entities (eg in intelligence agencies, or trading firms) have noticeably better private forecasts on covid than the public ones we've seen, but I'm pretty agnostic about this.
[1] The Rick and Morty joke "science is more art than science" comes to mind.
Holden Karnofsky @ 2022-01-03T19:51 (+2)
This is helpful, thanks!
Target @ 2021-12-07T01:02 (+3)
Re Omicron-specific boosters -- I'd love some ideas about what to do here. Orgs like 1DaySooner are helpful for advocacy but I don't see any path to the kind of speed we need here.
And even Pfizer seems to think that there isn't a reason for urgency right now (WaPo):
“I believe, in principle, we will at a certain time point need a new vaccine against this new variant. The question is how urgent this needs to be available,” CEO Ugur Sahin told a conference hosted by Reuters.
I'm struggling to see a plausible intervention at all here.
(This is Dave Orr, on the board at Packard.)
(Copied from below from before there was this thread.)
Holden Karnofsky @ 2022-01-18T20:45 (+2)
Sorry, just saw this! This did not in fact work out on the hoped-for timeline, and I didn't have a grantee in mind - I think the right way to try to do something here would've been through direct dialogue with policymakers.
HaukeHillebrandt @ 2021-12-15T18:45 (+2)
Thanks for this post - I forwarded it to an Oxford bioethicist and nudged him to write about it, and they just published a thoughtful piece on it in the BMJ: 'Regulating strain-specific vaccines – speed, rigour and challenge trials'.
Holden Karnofsky @ 2022-01-03T19:51 (+2)
Nice!
Andrew Clough @ 2021-12-06T18:59 (+1)
I'm curious about when the FDA's expedited flu vaccine approval came to be. It seems plausible to me that this is something grandfathered in from the early days and that the modern FDA wouldn't be flexible enough to start something like it.
Holden Karnofsky @ 2021-11-03T12:49 (+9)
Comments for Hunter-gatherer happiness will go here.
Sophia @ 2021-11-09T21:06 (+8)
This was possibly my favourite email in the Cold Takes email newsletter so far. I always enjoy understanding someone's thought process before they've become an expert on a topic. Once someone knows enough, I think that their views usually change too slowly to easily see or demonstrate (one new piece of information or consideration, one new data point, naturally can't swing the holistic viewpoint quite so much when a person knows a huge amount).
It (unsurprisingly) reminded me of early Givewell material. Givewell is likely right more of the time now than in 2007. With more careful thought and knowledge built-up over time, comes better calibration. There is something lost though. How do we know that someone would change their mind in response to new evidence if we rarely see them change their minds? There is something wonderful about seeing people shift their views somewhat (or their confidence in their views) in response to transparent thinking in real-time. Anecdotally, this seems to happen a lot more in conversation than in writing (everywhere, not just in the EA community and adjacent spaces). In conversation, it is often much more acceptable to express uncertainty about conclusions while still presenting a framework for how you are thinking around an issue. It seems to happen rarely in public outside this community and adjacent ones.
My priors on the object-level question are very different to Holden's. My worst mental health happens when I feel stagnant/ can't contribute/am not valued. Being in physical pain with purpose has always felt much more bearable than having all the creature comforts of our modern time while feeling like what I spend my time doing is meaningless and doesn't add value to anything I really care about. This is obviously extremely weak evidence; memory is unreliable and I am a single individual. There might already be good evidence either way on whether feeling like your day-to-day life has purpose is a better predictor of subjective wellbeing than income or health for a sample size greater than one.
If hunter-gatherer societies consistently give everyone roles that visibly contribute to the lives of the people they know and love (eg. searching for food for the tribe) my prior is that this would feel more purposeful than modern day life (on average). If purpose is a more important factor in predicting subjective wellbeing than health or material wealth, then I would expect this study (across tribes/maybe even looking at biomarkers of depression instead of a survey) to replicate.
Holden Karnofsky @ 2022-01-03T19:53 (+5)
I think that's a great point about the value of seeing people change their opinions in real time. I do wish there were more models of this.
I'm a bit skeptical on the "purpose" idea, mostly because I think most people have a pretty clear sense that they need to (or will need to) work - and/or provide direct care - in order to support their family. This seems pretty analogous to the hunter-gatherer situation (and I wouldn't assume that the latter feels more tangible or "direct" - my impression is that a lot of hunters can go a while without a clear, direct contribution to the hunt). If I wanted to look into this further, I might investigate hypotheses like "people in the military are especially happy" or "doctors are especially happy" or "people become less happy when they become financially able to stop working and do so" (I would guess these aren't true and would change my mind if they turned out to be).
Sophia @ 2022-01-06T11:15 (+1)
Hmm, interesting!
My guess still is that it matters how tangibly connected the activity is to the outcome. I think it matters a lot that filling out a spreadsheet for an insurance company for one’s actuarial job does not directly feed one’s children, even if the outcome is the same. This is similar to my intuition that jumping into a pond to save a drowning child probably feels more fulfilling than donating a large sum to Givewell recommended charities, even if the outcomes are fairly comparable. Even swimming around looking for drowning children and not finding them on most attempts but succeeding every now and then seems more intuitively fulfilling (but I might just be worse at simulating in my mind the long periods of failure than the brief moments of success).
I also think it matters whether one knows the people they are helping personally. I expect doctors to care less about helping their patients than a hunter-gatherer would care about gathering food for their family (and, to a lesser extent, their tribe). However, I would think it was more likely that your view was right than the one I expressed if doctors and social workers were less happy than other professionals in their income bracket (e.g. if actuaries were happier than doctors or accountants were happier than social workers).
The military is an interesting case and how informative I'd find military personnel happiness depends on who we're talking about. I suspect military leaders are happier than average and would change my mind if they weren't. I suspect lower-ranked soldiers in peacetime would be happier than the average person (I'd guess they would be unhappy during intense periods of training that are intended to simulate combat but I'd also guess that most of the time, they won't be in combat-simulating training). I would be surprised if soldiers during combat or intense training periods that try to simulate combat were happier than the average person because the physical conditions seem so extreme (I'd guess much more extreme than the everyday experience of a hunter-gatherer).
I suspect voluntary retirees would replace work with more meaningful activities (like doing pro bono/volunteer work or spending time helping their family and friends) so I am not sure how much them being happier would change my mind. I would be surprised if voluntary retirees were happier if it was also true that they did not spend much more time helping friends and family or on altruistic endeavours.
Linch @ 2022-01-15T07:14 (+2)
Speaking of "I think that's a great point about the value of seeing people change their opinions in real time," if you don't mind me asking, would you like to mention a sentence or two on why you no longer endorse the above paragraphs?
Sophia @ 2022-07-17T12:01 (+1)
Hi Linch, I'm sorry for taking so long to reply to this! I mainly just noticed I was conflating several intuitions and I needed to think more to tease them out.
(my head's no longer in this and I honestly never settled on a view/teased out the threads but I wanted to say something because I felt it was quite rude of me to have never replied)
lincolnq @ 2021-11-04T17:56 (+4)
Assuming that the high happiness reports from the Hadza are "real" (and not noise, sampling bias, etc), what might it be?
They have dramatically worse health and nutrition. Also worse "creature comforts" like cozy beds, Netflix and mulled wine. But maybe some combination of the following could be overcoming those drawbacks.
In the category of lifestyle/how you spend your time:
- Social structure (small communities, much stronger social connection, more social time)
- Work structure (more cooperation, more "meaning" in work due to knowing you're supporting your family directly / avoiding starvation for yourself and your loved ones)
- Non-social leisure structure (no Reddit, no TV; no street noise; you're always out in nature)
Or internal experience:
- Perhaps you'd have different dreams or fantasies?
- No Instagram, no "keeping up with the Joneses" or social-status stress beyond your immediate community
- Climate change, nuclear war, and x-risk presumably aren't a worry
- Could sexual and romantic relationships be more fulfilling related to the small community?
Other ideas?
damiensnyder @ 2021-11-06T18:36 (+2)
You already expressed skepticism on the survey of Hadza happiness, but Kat Woods offers more on why such a survey might give inaccurate results. From the intro:
I think the biggest takeaway I had from my experience is that I am even more skeptical of survey methodologies than I was before, and I started off pretty intensely skeptical. The reasons for this is that I think that misunderstandings caused by translation, education levels, and just normal human-to-human communication errors are not only common, but the rule.
The four major issues she notes are:
- Not understanding hypotheticals.
- Not understanding in general.
- People giving inconsistent answers.
- Refusing to rate happiness.
This is from surveying in Rwanda and Uganda, which will certainly have many of the same difficulties as surveying the Hadza. (It also surprised me, from the article, that the US and Mexico would have much higher self-reported happiness than, for example, Italy. I wonder if this is a real effect or if happiness surveying is fraught with cultural issues in general.)
bobert93 @ 2021-11-05T13:31 (+1)
Because 'how the question is interpreted' makes subjective happiness survey results hard to compare, one other way to approach the question of 'are hunter-gatherer societies happier' is to look at the people who move between hunter-gatherer and modern societies and study their happiness and outcomes. On the one hand, lots of hunter-gatherer peoples who switched to living in modern societies (e.g. Inuit in Greenland) have fairly bad outcomes (to my limited knowledge; further study obviously needed here); on the other hand, few seem to opt to return to the hunter-gatherer lifestyle, potentially suggesting modern lifestyles are preferable to hunter-gatherer ones.
Is there a significant cohort of people who've gone from living in modern societies and moved to live in hunter-gatherer societies? If yes, they'd be a useful group to survey. If not, is this evidence that modern lifestyles are preferable to hunter-gatherer ones, because no one 'votes with their feet' and moves from modern societies into hunter-gatherer ones?
Holden Karnofsky @ 2022-01-03T19:54 (+2)
I agree that "voting with one's feet" is an interesting angle. Some discussion of this angle is here (search for "Certainly, the part closest to my area of expertise raises questions").
Holden Karnofsky @ 2021-12-07T05:40 (+7)
[Placeholder for Describing Utopia comments]
JuanGarcia @ 2021-12-09T16:15 (+12)
In response to the following parts of your post:
- "the only relevant-seeming academic field I found (Utopian Studies) is rooted in literary criticism rather than social science"
- "most of the people there were literary scholars who had a paper or two on utopia but didn't heavily specialize in it"
- "Rather than excitement about imagining designing utopias, the main vibe was critical examination of why one would do such a thing"
I know a scholar who heavily specializes in the study of Utopia from the social sciences perspective (history) rather than literary criticism: Juan Pro Ruiz, coordinator of the HISTOPIA project (~30 researchers, link in English). In their latest project, they are:
"analyzing the locations and geographical spaces of utopianism - both of unrealized or merely imaginary utopian projects (literature, cinema, art...) and of utopian experiments tested with greater or lesser success (in the form of social movements or intentional communities) - throughout contemporary history (19th to 21st centuries), while making an exceptional foray into the Modern Age in search of precedents and long-term trends. [...] even testing the heuristic possibilities of the human body as a space for the realization of utopias and dystopias in the field of contemporary science fiction or the transhumanist movement."
I recently attended a symposium on Utopian thinking by Juan Pro in Madrid. He seemed extremely knowledgeable in the subject, and quite positive about the usefulness of serious Utopian exploration as a tool for navigating the present towards a better future. From the HISTOPIA web page:
"HISTOPIA seeks to go beyond the philological approach that predominates in Utopian Studies and to historicize the study of contemporary utopias and dystopias, showing how they respond to the contexts in which they arise, since they reflect the problems and frustrations of a society as well as the aspirations for change it contains, and the conditions of possibility that a particular cultural and emotional framework offers for developing them. [...] to recognize a new surge of the utopian impulse in the present times, asserting its need to provide a channel for the “hope principle” and stimulate the emergence of innovative ideas that constitute responses to the problems of the present. In short, the group explores the meaning of utopia (and its alternatives) for contemporary societies, as a mechanism for the construction of possibilities, a true laboratory of thought and action, in which we experiment with the forms of political, economic and social organization of the future."
I found this email online if you want to contact him: juan.pro@uam.es. If you prefer, I could make an introduction.
Taymon @ 2021-12-08T14:17 (+6)
As far as I'm aware, the first person to explicitly address the question "why are literary utopias consistently places you wouldn't actually want to live?" was George Orwell, in "Why Socialists Don't Believe in Fun". I consider this important prior art for anyone looking at this question.
EAsphere readers may also be familiar with the Fun Theory Sequence, which Orwell was an important influence on.
On a related note, I get the impression that utopianism was not as outright intellectually discredited and unfashionable when Orwell wrote as it is today (e.g., the above essay predates Walden Two), even though most of the problems given in this piece were clearly already present and visible at that time. That seems like it does have something to do with the events of the 20th century, and their effects on the intellectual climate.
MaxRa @ 2021-12-08T11:15 (+6)
My favorite utopia is probably Scott Alexander's Archipelago of Civilized Communities, a world in which humans can form communities on any principles they desire on a new uninhabited island, while individuals have the freedom to leave at any time. The central government does very little except keeping the peace, preventing some negative externalities and such. It doesn't sound too dull; I hope for a vast complexity of different communities and histories, the ability to travel between communities, etc.
https://slatestarcodex.com/2014/06/07/archipelago-and-atomic-communitarianism/
Ariel_ZJ @ 2021-12-07T22:10 (+4)
Holden, have you had a look at the Terra Ignota series by Ada Palmer? It's one of the better explorations of a not-quite-Utopia-but-much-better-than-our-world that I've come across, and it certainly contains a large degree of diversity. It also doesn't escape being alien, but perhaps it's not so alien as to lose people completely. My one caveat is that it is comprised of four substantial books, so it's quite the commitment to get through if you're not doing it for your own leisure.
akrolsmir @ 2021-12-07T21:25 (+4)
The best fictional description I've ever read of utopia is in Worth the Candle's epilogues -- in that it made me feel "yeah, I'd enjoy living there". Some broad principles:
- People choose which kind of heaven they participate in
- All physical needs met, no resource constraints
- Everything is consent based; there are p-zombies to act out other urges
Highly worth a read!
Kenny Easwaran @ 2021-12-07T19:40 (+2)
I recently rewatched the movie Her (https://www.imdb.com/title/tt1798709/) which is one of the few examples of unironically utopian fiction I can find. The total extent of conflict and suffering in the movie is typical of a standard romantic comedy - the main character is going through a bad breakup with an ex, and dealing with a new relationship (which happens to be with an artificially intelligent phone operating system). It's got its own amounts of heartache and loss, but it's utopian in that all the bigger problems of the world seem to be gone. The main character lives in Los Angeles, but the city is full of skyscrapers, and it seems to be easy for people to afford a spacious apartment (and it's decorated in warm woods and gets lots of natural light, rather than being the sort of cold glass and steel thing people imagine in a skyscraper city). All the outdoor scenes are in beautiful pedestrian-oriented spaces, full of clean air and happy people of all races and genders, interacting in a friendly way. He can take the subway to the beach and the high speed rail up to Lake Tahoe. He has a fulfilling job helping clients compose thoughtful handwritten letters to their loved ones. He's worried about being judged for dating an operating system, but his best friend down the hall stays up late sharing videos with her new operating system friend, and his work friend suggests they go on a double date to Catalina island - it's only the ex who reacts poorly to his relationship with a computer. Other than the computer relationship, the thing I've heard the most negative reactions to about the movie is that it's a future where men wear high-waisted pants in 1970s colors. It might be worth studying that movie to see how to depict a utopia in a realistic way that people can like.
kokotajlod @ 2021-12-07T19:39 (+2)
I feel like the main blocker is homogeneity. A utopia in which everyone is free to design their own sub-community as they see fit (provided certain rights are respected) should appeal to pretty much everyone, surely... It can even contain conflict, in the form of law-abiding struggles over the only inherently scarce resources (status, attention, etc.). Like sports.
Nathan Sherburn @ 2022-07-24T05:57 (+1)
I have a suspicion that people often dislike talking about utopias for fear of hope.
A personal example: when asking a few friends why they didn't want to live forever, I got responses that seemed to indicate something like:
"I don't want false hope. I've spent years trying to make peace with the fact that I'm going to die. I don't even want to entertain the idea unless you have extraordinarily strong evidence it's possible."
I'd love to research this question.
VishrutArya @ 2021-12-13T00:29 (+1)
Erik Olin Wright, a sociologist and former member of the 'non-bullshit Marxists' (which included analytical philosophers GA Cohen and Jon Elster and economist John Roemer amongst others), has a good book Envisioning Real Utopias (full book on his webpage here) that I think would be a profitable read for those interested in utopias work.
JoelMcGuire @ 2021-12-13T17:45 (+1)
I would be interested in reading a summary of real utopias if one is available.
VishrutArya @ 2021-12-23T19:13 (+1)
Chapter 5 summarizes some of the book's themes.
This Guidelines article by Erik is an even shorter high-level take.
Asbak81 @ 2021-12-07T20:35 (+1)
Maybe this recent book could be of interest: Anna Neima - The Utopians: Six Attempts to Build the Perfect Society.
Holden Karnofsky @ 2022-01-18T04:24 (+5)
Comments for Empowerment and Stakeholder Management will go here.
Larks @ 2022-01-19T03:03 (+5)
Great post, I think this captures something very important about how the increasing size of, and focus on, externalities leads to more stakeholder vetoes.
I think this dynamic affects private companies, governments and more. While "red tape" appears in institutions, I think the underlying cause of the "red tape" often comes from the behavior of private individuals. And I don't think the world becoming more "libertarian" (at least in the narrow sense of seeking to shrink government) would necessarily solve much (at least, I wouldn't expect it to lead to more subway stations!)
I think you're correct that the underlying cause is individuals, but I do think there is something here about the solution. Private businesses have always had to deal with stakeholders, like suppliers and workers, and have historically been able to deal with this relatively well, because costs to these stakeholders could be compensated with fungible dollars. This allows for mutually beneficial agreements, competition and so on. In contrast, many of the stakeholder vetoes that are created by government do not allow such solutions: paying off stakeholders is considered bribery rather than legitimate payment. It's true that making rights alienable is compatible with a relatively high degree of government oversight, but most people would probably regard it as a move in a libertarian direction.
rohinmshah @ 2022-01-19T10:12 (+3)
Another potential reason that empowerment could lead to more onerous stakeholder management is that we're able to take more large-scale, impactful actions, and so it's much more common to have affected stakeholders than it was in the past.
Habryka @ 2022-01-18T06:23 (+2)
Did you take this post down? Or does it not exist yet?
Holden Karnofsky @ 2022-01-18T19:56 (+3)
I generally put this comment up in advance of the post, so that I can link to it from the post. The post is up now!
Habryka @ 2022-01-18T22:36 (+2)
Checks out. Wasn't aware of that!
GMcGowan @ 2022-02-08T15:07 (+1)
Recent post responding to you
Holden Karnofsky @ 2022-01-14T20:53 (+4)
Comments for Jan. 14 Cold Links will go here.
maxfieldwallace @ 2022-01-15T02:49 (+2)
Re: net neutrality, I have no insider knowledge, this is just my personal opinion as an observer.
Little has changed since the NN repeal precisely because there was a relatively strong outcry at the time. It's hard to think of another issue that polls with 60-80% support across both parties.
Practically, "little has changed" in the sense that AFAIK in these 4 years no ISP has switched to a business model based on charging internet companies for access to "fast lanes". IMO this is only because introducing tiered pricing would likely generate significant backlash, and ISPs have good reason to believe that, given the outcry at the time of repeal.
The downsides of the NN repeal include unpredictable tail risks of a kind of lock-in that is very hard to undo.
At the time of repeal, I think there were basically two categories of "sky is falling" rhetoric. (1) rational actors trying to drum up public opposition despite knowing that the worst-case scenario is unlikely, and (2) media actors who jumped on the NN bandwagon, simply because it generated engagement.
It doesn't make sense for (1) to publish "I was wrong" takes, because nothing in these past 4 years falsifies the claim that eroding NN could gradually lead to an ossified internet with (much more) rent-seeking ISPs. (2) probably wouldn't recant anything, since "we were wrong" stories seem like ineffective clickbait.
In short, I think nothing bad has happened yet because people were so fired up about NN in the first place, and because practically a rent-seeking ISP would need more time to capitalize on the repeal.
Holden Karnofsky @ 2022-03-31T23:09 (+3)
I think this is interesting and plausible, but I'm somewhat skeptical in light of the fact that there doesn't seem to have been much (or at least, very effective) outcry over the rollback of net neutrality.
Oleg Eterevsky @ 2022-01-08T06:58 (+4)
I think you are totally missing one aspect of art greatness: standing the test of time. A big part of Beethoven's perceived greatness is the fact that he wrote his pieces more than 200 years ago and we still listen to them. At the time people certainly appreciated Beethoven's music, but he probably wasn't considered the greatest composer of all time. Bach, who is often considered the best composer of all time, wasn't even really famous among his contemporaries. The main reason why Bach and Beethoven are considered great is that their music is still familiar to most listeners more than 200 years later.
By definition, you can't apply this criterion to contemporary musicians, since you don't know which ones will still be remembered even 50 years from now. The best you can do is look back to the previous century. Who were the most influential musicians of the 20th century? I think The Beatles are good candidates, both in terms of how well-remembered they are and in terms of their influence on the genre. And yes, I suppose a few tunes by John Williams also qualify.
From the individual's perspective, I am quite sure that at this very moment there are hundreds of musicians out there who are as talented as Beethoven. This follows simply from the growing population and the growing accessibility of education. Beethoven was the best of perhaps a million Europeans who were sufficiently high-class to become a musician. In the modern world, billion(s) of people have a chance to become famous musicians, so it follows that there are at least hundreds of musicians as naturally talented as Beethoven.
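A minimal sketch of that back-of-envelope scaling, treating the rough figures above (a million candidates in Beethoven's Europe, a couple of billion today) as illustrative assumptions rather than real estimates:

```python
# Back-of-envelope sketch only; all figures are illustrative assumptions.
beethoven_era_pool = 1_000_000             # "perhaps a million Europeans" with access to a musical career
beethoven_rarity = 1 / beethoven_era_pool  # treat Beethoven as roughly the best of that pool

modern_pool = 2_000_000_000                # "billion(s)" of people with comparable access today (assumed)
expected_peers = modern_pool * beethoven_rarity

print(expected_peers)  # 2000.0, consistent with "at least hundreds" of equally talented musicians
```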
You rightly say that the value of new music is in its originality. There's nothing in classical music that modern composers can't imitate. We don't have nine more symphonies like Beethoven's not because modern composers can't write them, but because nobody wants "fake" Beethoven, even if it sounds as good as the original. Here, Nahre Sol writes new, very high-quality music in the styles of multiple great composers of the past, but she does it just for educational purposes.
Ok, suppose there are talented musicians out there, but maybe all the good music has already been written and it's hard to come up with new ideas? I don't think I buy it. First of all, that would imply that the earlier composers picked all the low-hanging fruit compared to later music. This did actually happen in science, where we can be confident that there are no simple theories like Newton's laws that haven't been found by now. It doesn't feel this way at all in music. A lot of modern music is fundamentally as simple as or simpler than most classical music. Conversely, many classical composers wrote in very specific genres, which doesn't really shrink the space of possible new original music to be written after them.
In my experience, modern music is saturated with great new ideas and pieces. Their problem is that they compete against each other for the public's attention.
Holden Karnofsky @ 2021-12-22T08:43 (+4)
Comments on "Omicron bet" will go here.
WilliamKiely @ 2022-01-02T19:28 (+4)
Added to Metaculus:
- Will Holden win his Bet with Zvi about Omicron, conditional on one of them winning?
- Will Holden's Bet with Zvi about Omicron resolve ambiguously? (pending approval as of this comment)
WilliamKiely @ 2021-12-23T18:32 (+3)
Fun exercise: which side of this bet would you want to be on?
I'm definitely on Holden's side of the bet.
In summary, I assign 80% to Holden's outcome, 15% to the ambiguous "push" outcome, and 5% to Zvi's outcome.
This is a low-information forecast, but there seem to be three outcomes to the bet, and Zvi's outcome clearly seems to be the least likely:
(1) For Zvi to win, Covid cases (of all variants, including any future ones) need to average ~18 times higher over the first two months of 2022 than over the following 12 months (math: 18 = (0.75/2) / (0.25/12); a quick sketch of this arithmetic appears after (3) below).
18 is such a high ratio given Covid's track record so far. The ratio of the 7-day-average of US cases from its high (1/11/2021 = ~256,000/day) to its low (6/21/2021 = ~12,000/day) is ~21, barely higher than ~18. Plus, those two weeks were months apart, giving time for cases to drop off.
I don't see a very plausible way to get that kind of ratio for the first two months of 2022 over the 12 months afterwards. E.g. It seems unlikely that cases would drop off sufficiently quickly at the end of February to avoid adding a large number of cases in March (and to a lesser degree, April, etc). (i.e. Even if cases virtually disappeared later in 2022 (such that the second period being 12 months instead of 6 doesn't matter much), it's really hard for cases to drop off so quickly that the number of cases from March 1 onwards don't end up being at least a third of the number of cases from January and February.) The prior on that steep drop-off happening by the end of February is quite low and the fact that it's only a little over a week until January and cases are still on the rise doesn't make it seem more likely that there will be a steep drop-off before March. There's just no way Zvi could know that that is likely going to happen. I don't need to read his post to know that he doesn't know that.
Given this simple consideration that cases would have to drop off exceptionally fast at just the right time for Zvi's outcome to happen, I assign a 5% chance to Zvi's outcome happening.
It was really just strongly disagreeing that Zvi's outcome seemed likely that made me want to write this comment, but I'll go ahead and write estimates for the other two outcomes:
(2) For Holden to win, cases in the 12-month period after February 2022 have to exceed one-third of cases in the first two months of 2022 before a new variant comes along and takes over. I haven't heard of any new variant after Omicron. Such a new variant would have to be much more contagious than Omicron for it to have a chance to take over everywhere in the US in time to stop Omicron's cases from exceeding that one-third threshold in March-May. I hear Omicron is super contagious so that seems unlikely. Therefore, my low-information forecast is that Holden's outcome is 80% likely.
(3) For the bet to be a "push" (i.e. for the bet to resolve ambiguously), a new variant or variants need to take over before Omicron-and-previous-variant cases after March 1 exceed the one-third threshold mentioned above. This seems more likely than Zvi's outcome, but still not that likely. I'll assign 15% to the ambiguous outcome.
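To make the arithmetic in (1) concrete, here is a minimal sketch of the required-ratio calculation and the 2021 peak-to-trough comparison, using the rough figures quoted above (the case counts are approximate 7-day averages, not precise numbers):

```python
# Sketch of the ratio arithmetic behind outcome (1); figures are the rough ones quoted above.
share_janfeb = 0.75   # Zvi wins if >=75% of post-Jan-1-2022 cases occur before March 1, 2022
share_after = 1 - share_janfeb
months_janfeb = 2
months_after = 12     # the following 12 months

monthly_rate_janfeb = share_janfeb / months_janfeb
monthly_rate_after = share_after / months_after
required_ratio = monthly_rate_janfeb / monthly_rate_after
print(required_ratio)        # 18.0: Jan-Feb must average ~18x the later monthly case rate

# For comparison, the 2021 US peak-to-trough ratio of the 7-day average of daily cases:
print(256_000 / 12_000)      # ~21.3, and those extremes were months apart
```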
Pablo @ 2021-12-24T00:24 (+4)
Given this simple consideration that cases would have to drop off exceptionally fast at just the right time for Zvi's outcome to happen, I assign a 5% chance to Zvi's outcome happening.
Your analysis roughly matches my independent impression, but I'm pretty sure this simple consideration didn't escape Zvi's attention. So, it seems that you can't so easily jump from that analysis to the conclusion that Holden will win the bet, unless you didn't think much of Zvi as a reasoner to begin with or had a plausible error theory to explain this particular instance.
WilliamKiely @ 2021-12-24T04:17 (+4)
Yes, you're quite right, thanks. I failed to differentiate between my independent impression and my all-things-considered view when thinking about and writing the above. Thinking about it now, I realize ~5% is basically my independent impression, not my all-things-considered view. My all-things-considered view is more like ~20% Zvi wins--and if you told me yours was 40% then I'd update to ~35%, though I'd guess yours is more like ~25%. I meta-updated upwards based on knowing Zvi's view and the fact that Holden updated upwards on Zvi to 50%. (And even if I didn't know their views, my initial naive all-things-considered forecasts would very rarely be as far from 50% as 5% is, unless there's a clear base rate that is that extreme.)
That said, I haven't read much of what Zvi has written in general, and the one thing I do remember reading of his on Covid (his 12/24/20 Covid post) I strongly disagreed with at the time (and it turns out he was indeed overconfident). I recognize that this probably makes me biased against Zvi's judgment, leading me to want to meta-update on his view less than I probably should (since I hear a lot of people think he has good judgment, and there probably are a lot of other predictions he's made which were good that I'm just not aware of), but at the same time I really don't personally have good evidence of his forecasting track record in the way that I do of e.g. your record, so I'm much less inclined to meta-update a lot on him than I would e.g. on you.
Additionally, I did think of a plausible error theory earlier after writing the 5% forecast (specifically: a plausible story for how Zvi could have accepted such a bet at terrible odds). (I said this out loud to someone at the time rather than type it:) My thought was that Zvi's view in the conceptual disagreement they were betting on seems much more plausible to me than Zvi's position in the bet operationalization. That is, there are many scenarios that would make it look like Zvi was basically right that might technically cash out as a Holden win according to the exact betting terms described here. For example, there might be a huge Omicron wave--the largest Covid wave yet--and cases might drop quickly afterwards and it might be the last wave of the pandemic, and yet despite all of that, perhaps only 50% of the cases after January 1, 2022 happen before the end of February rather than the 75% necessary for Zvi to win.
Zvi thinks there’s a 70% chance of the following: “Omicron will blow through the US by 3/1/2022, leading to herd immunity and something like the ‘end’ of the COVID-19 pandemic.”
Holden proposed a bet and apparently they went back and forth a few emails on the bet operationalization before agreeing. My hypothesis, then, is that Zvi was anchored on his 70% confidence from this statement and didn't take the time to properly re-evaluate his forecast for the specific bet operationalization they ultimately agreed to. I can easily see him spending only a small amount of time thinking about the bet operationalization and agreeing to it without realizing that it's very different from the concept he was originally assigning 70% to, because he wanted to agree to a bet on principle and didn't want to go back and forth a few more times by email.
Of course this is just a story, and perhaps he did consider the bet operationalization carefully. But I think even smart people with good judgment can easily make mistakes like this. If Zvi read my comment and responded, "Your hypothesis is wrong; I thought carefully about the bet operationalization and I'm confident I made a good bet," I'd meta-update on him a lot more--maybe up to 50% like Holden did. But I don't know enough about Zvi to know whether the mere fact that he agreed to a public bet with Holden is strong evidence that he thought about the exact terms carefully and wasn't biased by his strong belief in his original 70% statement.
(Noting to myself publicly that I want to learn to be more concise with my thoughts. This comment was too long.)
WilliamKiely @ 2022-09-23T23:32 (+2)
[Holden won the bet](https://thezvi.substack.com/i/66658630/i-lose-a-bet). In retrospect, I think I was justified in having high confidence and right that Zvi's bet was foreseeably bad. If anything, when I lowered my forecast from 95% to 68% for a couple weeks in April I was meta-updating too much on the community median and assigning too much weight to the possibility of an extremely large "cases to true infections" adjustment.
Note that I disagree with what Zvi wrote yesterday: "In hindsight, the question of ‘what counts as Omicron’ does have a strong bearing on who had the right side of this wager, and also is a key insight into the mistake that I made here."
I disagree that that was his mistake. Even if subvariants counted as different variants, that would only increase the chance that the bet resolves ambiguously. There was (IMO) never a >70% (or even >50%) chance that Zvi would win (conditional on someone winning), even if the language of the bet had considered subvariants to be different variants.
WilliamKiely @ 2021-12-23T18:48 (+2)
akrolsmir @ 2021-12-22T20:17 (+1)
I set up a prediction market for this bet! https://mantic.markets/AustinChen/will-at-least-75-of-the-usa-covid19
Also, this paragraph from Holden really resonated with me:
In an ideal world, we'd be making so many bets like this that our track records would give clear evidence of which of us was a better predictor, overall. But I don't think that's going to happen; it's a lot of work even to nail down a pretty simple, vivid disagreement like this one (and most important disagreements are much harder to reach bets on, and even this one may require a third-party judgment-driven adjustment). I don't think that whichever of us wins this one bet should gain too much credibility relative to the other.
What kind of tools, sites, or economic structures could enable this ideal world? At Mantic we're hoping accessible, user-created prediction markets will do the trick, but would love to hear alternative proposals!
Holden Karnofsky @ 2021-12-23T16:23 (+5)
Very cool, thanks!
I don't have a great answer to your question. I think the easier and more normalized it gets to make bets like this, the more of them there will be - that's about what I've got.
In practice I find some of the hardest parts of making bets like this are (a) noticing when a disagreement is of the right form such that it's likely to be tractable to turn it into a bet; (b) hashing out all the details of how the bet will be resolved and trying to make them closely match the original conceptual disagreement. (b) is usually so much work that it doesn't end up being worth it for the direct rewards (financial and otherwise), so some sort of norm that this is a virtuous thing to do could be important (but also, if there were a way to make (b) easier, that would be amazing).
Holden Karnofsky @ 2022-02-03T03:10 (+3)
Placeholder for comments on Beach Boys post
Jack Gillespie @ 2022-02-03T20:13 (+10)
I read your article, and one element I think you might be missing is the impact that Pet Sounds had on music production.
A Love Supreme is great, but it is pretty simple from a production standpoint. A group of talented musicians playing great music together.
Pet Sounds, on the other hand, is IMO widely regarded as an innovative musical production masterpiece. So leaving the quality of the songs aside, I recommend re-listening (maybe on high-end headphones) to how each of the sounds has been placed and fit together. I think often when people describe the album as being 'symphonic' they are in some way referring to the fact that this is a piece of pop music that feels like it holds a breadth and sophistication similar to an orchestra's, in terms of the raw sound.
I don't know that it will change the overall argument, but I thought you might be interested.
Charles He @ 2022-02-03T22:55 (+2)
I don't really get it.
Pet Sounds seems like cute, radio-perfect hits.
What about Like a Rolling Stone? This has a full wall of sound that has space for all the instruments, and that seems hard to achieve. It was recorded in 1965, a year before Pet Sounds.
There is so much going on:
- Dylan's turn into electric sound and the issues with that
- Its negative, scornful theme and ambiguity of its subject.
- It clocked in at an impractically long six minutes.
These choices should have really hurt commercially, and looked pretty crazy at the time.
Now it's like the canonical rock song of all time.
Charles He @ 2022-02-07T21:58 (+2)
I thought more and now I think Jack Gillespie's comment above is right, and my reply above is wrong.
Jack's comment also answers Holden's question about what Holden is overlooking about Pet Sounds.
I think the idea is that:
- Brian Wilson, by creating Pet Sounds, was a builder. He innovated and created a new "technology" that others could build off of.
- In contrast, Bob Dylan is a "harvester"—his innovations laid fewer foundations for others to build on.
(I am writing the above as a Dylan fan and not liking the pop aesthetic of Pet Sounds.)
I'm not fully sure, but my guess is that we can't see this because many of the new ideas in Pet Sounds have become clichés (overproduced shopping-mall music) or have been absorbed by others.
More datapoints:
The composer Philip Glass, who is pretty cerebral, says this about Pet Sounds:
Philip Glass referred to "its willingness to abandon formula in favor of structural innovation, the introduction of classical elements in the arrangements, [and] production concepts in terms of overall sound which were novel at the time".
Also:
In August, the Beatles performed their last live show of their final tour, at San Francisco’s Candlestick Park. And as 1966 neared its end, the group began work on “Strawberry Fields Forever,” a song written by Lennon that would guide the band’s musical direction in the coming year.
In 1967, when Brian Wilson first heard the song, he pulled over in his car, broke down in tears, and said, "They got there first."
This sort of awareness suggests that Brian Wilson was a lot more than a tinkerer who just had good instincts for melody.
Charles He @ 2022-02-07T22:11 (+2)
It's inexplicable how Holden overlooks Brian Wilson's contributions, especially since he sticks in a giant quote with links showing the influence of Pet Sounds:
Promoted there as "the most progressive pop album ever", Pet Sounds garnered recognition for its ambitious production, sophisticated music, and emotional lyric content. It is considered to be among the most influential albums in music history …
Pet Sounds revolutionized the field of music production and the role of producers within the music industry, introduced novel approaches to orchestration, chord voicings, and structural harmonies, and furthered the cultural legitimization of popular music, a greater public appreciation for albums, the use of recording studios as an instrument, and the development of psychedelic music and progressive/art rock
Holden uses Coltrane's musical content as a contrast:
Pet Sounds came out more than a year after legendary jazz album A Love Supreme! I don't want to get carried away about what my subjective taste says, but … even if A Love Supreme isn’t your cup of tea, I’d guess you’ll think it’s a great deal more complex, cohesive, impressive, and interesting in just about every way (other than the lack of prominent "studio effects") than Pet Sounds. And it's not even clearly less accessible - looks like they sold a similar number of copies?
So I'm probably going to get blackballed from future funding, but I don't understand jazz or Coltrane. My knowledge of jazz comes from La La Land.
But my guess for what is going on is that Coltrane is different in style and has a more cerebral focus on musical content, so it's unfair and prejudicial to use it as a lens to judge Brian Wilson's contributions (in studio production, popular music, psychedelic music, etc.).
Peli Grietzer @ 2022-06-07T23:06 (+3)
The obvious answer to "what frame of mind are you missing here" is that you have to actually like the genre of music that Pet Sounds is working in relation to.
Peli Grietzer @ 2022-06-07T23:16 (+4)
Anyway, it's worth noting that critical admiration for Pet Sounds only emerged about 20 years after it came out, so lots of the discussion of 'at the time' effects on critics doesn't fit that directly.
bbA @ 2022-04-22T18:24 (+3)
Simply put, there isn't any popular, jazz, or avant-garde music that was written and produced like Pet Sounds before Pet Sounds. It's literally an unprecedented work of art.
It's not just about the fact that it had an exorbitant budget, but the fact that it was composed and directed almost singlehandedly by Wilson. It's the fact that Wilson found a way to be as successful and sophisticated to the degree that he was. He created commercial AM radio pop music with very complex forms and structures within an industry whose markets clamored for either simple three-chord rock 'n' roll, bubblegum, throwaway novelty songs, or schmaltz - an accomplishment that no one had thought possible back then.
Every track contains dozens of different musical parts played and sung by a full-sized virtual orchestra (virtual because many of the parts - mostly the vocals - were overdubbed). These tracks were designed to be as intricate as possible without forcing the listener to struggle with the bombardment of information they're receiving. And it worked. Newly married couples around the world still choose God Only Knows as their wedding song - a song that was crafted so incredibly well that nobody notices that it has no key center until they try to learn to play it.
"But it's doubtful that it used the recording studio better than today's music does." I don't know, I guess this comes down to whether you prefer:
A) the kind of music that would be composed by one guy with strange ideas about music, recorded organically with analogue equipment and real singers and musicians, and then released as-is
B) the kind of music that is composed by algorithms, programmed in a DAW, recorded with autotuned singers, and then screened by test audiences to take out all the "weird" parts
"But it's inevitable that pop music would have gone in more complex directions." Another weird point. It's also inevitable that man will develop civilizations on other planets. Is the "inevitability" supposed make it any less impressive to us?
Yet, 55 years later, there's still no album that hits every checkmark that Pet Sounds does. So much for the inevitability. So much for progress.
I've heard albums with one or two tracks that sorta sound like Pet Sounds, but none of those attempts are as complex in structure as songs like Here Today and Don't Talk. The soundalikes use bass harmonicas and harpsichords, but they don't really mess with time signatures, or feature six-part vocal arrangements, or use chord extensions like m7sus4/b5, or modulate keys several times in less than 2 minutes without sounding like free jazz. On the rare occasions that they do, it's only when they're directly quoting/referencing a line from Pet Sounds.
Yes, there's plenty of complex music out there with all those weird chords and key changes, but it's not pop, it's jazz, prog, heavy metal, and so on and so forth. It's very difficult to be complex and stay pop. That is what makes Pet Sounds so amazing.
Miles Davis could not have written Pet Sounds, and Brian Wilson could not have written A Love Supreme. Why even compare the two? Different backgrounds, different genres, different types of musicians, different markets. So silly.
maxfieldwallace @ 2022-02-04T23:56 (+3)
I've felt flummoxed for a while about Pet Sounds. I first tried listening to it in high school (after learning of its acclaim) and couldn't make it through. When I listen to it now, over a decade later, I feel I can clearly hear and appreciate the "symphonic" quality of the songs, and the care and craft that went into the production, instrumentation, and compositions. It's not difficult for me to believe that it was a major leap forward, and I think it's not too difficult to hear how influential it's been. A song I love, "John Allyn Smith Sails" by Okkervil River, is partly an adaptation of "Sloop John B".
Moreover, when I listen to Pet Sounds with 'audiophile brain' the sounds, melodies, and harmonies all sound great. But I just don't enjoy listening to the album. The vocals sound detached and clinical to me. For such an acclaimed and highly-ranked album, I feel it doesn't have many raw emotional hooks.
Compare to others on the top of the lists Holden linked: Marvin Gaye, Nirvana, The Beatles, Joni Mitchell, Dylan. Their songs have some powerful emotional energy that Pet Sounds seems to lack-- and will typically make you feel something, even if it's not your cup of tea. To me, Pet Sounds sounds like the odd one out, so I still feel confused why it's so high on these lists.
Also, I would definitely rank A Love Supreme much closer to the top.
maxfieldwallace @ 2022-02-05T00:10 (+3)
The Velvet Underground & Nico might be a better comparison for Pet Sounds. I have some similar feelings about that album as for Pet Sounds-- of course there are huge differences in the sophistication of the production, compositions, and sound quality-- but I think some similarities in apathetic-sounding vocals (at least to me), influence on later artists, slow songs, light psychedelia. I doubt I'd put either in my top 30, but I do go out of my way to listen to TVU&N sometimes. It's got some of that "raw" quality.
vimspot @ 2022-02-03T21:39 (+3)
I had a similar reaction on my first listen to Pet Sounds. I think the impact it had on production means that you need to have not heard any music after it to fully hear its importance.
It sounds like a solid pop album to my ears. God Only Knows sounds beautiful to me but not other-worldly. But I'm assuming it would have blown my mind (as it did the Beatles') had I not heard the last 50 years of music.
markea @ 2022-02-03T22:41 (+2)
Other than A Love Supreme, what albums have you found really impactful? Could you write 500-1000 words off the top of your head on why one of those albums is a work of genius? Have you ever taken over a conversation at a party (without really intending to) to explain how good some piece of music is and how everyone should go home and listen to it right away?
I am really really not trying to call you a philistine, there is nothing wrong with not having super strong feelings about music. But my guess is that most music critics (professional or armchair) would answer yes to both. If you don't answer yes to these questions, maybe you're not responding to music the way that other people do. (Which, again, is okay.)
Personally, fine art (painting, sculpture) does almost nothing for me. I couldn't offer any authentic opinion at all on whether Jackson Pollock was a more important artist than Georgia O'Keeffe. So I have to assume that the people who do care about that question are perceiving something that I'm not.
For what it's worth I consider Pet Sounds to be sublimely beautiful, but I have no idea how I'd explain what exactly is so beautiful about it in a way that would convince anyone else.
Bill Benzon @ 2022-02-24T18:53 (+1)
To some extent I think a comparison between Pet Sounds and A Love Supreme is apples and kumquats.
But still...I suspect that someone who is capable of listening to and understanding A Love Supreme, whether or not they like it, is also capable of listening to and understanding Pet Sounds, whether or not they like it. But I don't think the converse is necessarily true. That is, having the ability to listen to and understand Pet Sounds does not imply that one can also understand A Love Supreme or, for that matter, a Beethoven piano sonata.
mother box @ 2022-02-06T03:34 (+1)
i wondered whilst reading through this if framing / comparing your take on a “complex” album (like A Love Supreme) is useful if we don’t actually dive into why you think that is more complex and layered than an album like Pet Sounds. would it be useful to contextualise A Love Supreme within Coltrane’s works as well and across the jazz landscape of the time? would a jazz purist consider Coltrane’s album, which is arguably in the popular music arena (as much as a jazz album can be), to be less complex than other contemporaries that someone with a more intense affinity for jazz might pick out (making assumptions here about your love/knowledge of jazz, but also thinking of the people i know that can only reference A Love Supreme or Kind of Blue when talking about great jazz). and do we all suffer from comparing pop rock music to pop jazz music and giving overwhelming weight to jazz just because of its supposed higher end status (which leads me back to the Beethoven writings and the way people perceive older classical music vs new music, etc.). these were all the questions that popped in my head when reading and would love to find some deep dives into these things.
jean @ 2022-02-05T03:24 (+1)
I’d love to hear your take after doing a similar listen-through of Radiohead’s discography up through Kid A. They dominated the top 5 of Pitchfork’s reader poll of the best albums of all time, and that judgement feels much more representative to me of what a contemporary rock fan might choose than Pet Sounds.
Holden Karnofsky @ 2022-03-31T23:09 (+2)
I personally like Radiohead a lot, but I don't feel like my subjective opinions are generally important here; with Pet Sounds I tried to focus on what seemed like an unusually clear-cut case (not that the album has nothing interesting going on, but that it's an odd choice for #1 of all time, especially in light of coming out a year after A Love Supreme).
Holden Karnofsky @ 2022-01-19T23:46 (+3)
Comments for Book non-review: The Dawn of Everything will go here.
James Herbert @ 2022-09-03T12:31 (+6)
I'm not sure you've quite nailed the central claim of the book. Which is fair, they don't make it clear, and I don't think the reviews did a good job of making it clear either.
I think it's more along the lines of:
Modern societies have lost the qualities of flexibility and political creativity that were once more common in human history. We have value lock-in.
This seems plausible to me.
They also make the following claim:
Western civilisation is not conducive to human flourishing. This is made evident by the fact that Western civilisation did not spread of its own accord. Instead, European powers ‘have been obliged to spend the last 500 or so years aiming guns at people’s heads in order to force them to adopt it’.
This is more debatable. But I don't think it's very important (with regards to a discussion on the value of the book).
Why? Because they state this second claim mainly to explain why they care about the truth of the first claim. However, I expect most people on this forum already agree that value lock-in is bad and, therefore, don't need to buy the second claim to find value in the book.
Instead, to determine the value of the book (provided you already buy the first claim AND think lock-in is bad), one ought to investigate claims such as the following (made in the conclusion):
- Our society's lack of flexibility and political creativity has its origins in a confusion between care and domination.
- Societies such as ours, i.e. those that are large and complex, do not require domination to flourish.
JP Addison @ 2022-01-21T07:57 (+2)
I believe Scott Alexander has cited this book’s “ballistically false” claim, and I definitely remember ~believing it and finding it strongly compelling.
Calorion @ 2022-02-17T06:48 (+1)
It is so, so much worse. I investigated the claim in depth, found it was indeed "ballistically false" (that is, the source they cite does not support what the book says; whether what the book says is actually true is a separate question), and then decided to find out if Wengrow had perhaps apologized and issued a retraction. I ran across this Twitter thread: https://twitter.com/davidwengrow/status/1460660171496173577?s=21 in which Wengrow defends his scholarship by variously claiming that people misread the source, that people misunderstood the claims in the source, that the source saying that "many" whites chose to stay with the Indians is evidence for his claim that they "almost invariably" did so, and that his source is unreliable and should not be trusted.
This guy is either very stupid, a very bad liar, or just lazy and thinks that everyone else is a gullible idiot.
Holden Karnofsky @ 2022-01-11T03:09 (+3)
[Placeholder for Why it matters if "ideas are getting harder to find" comments]
SamiPetersen @ 2022-01-13T09:41 (+2)
Reading this post reminded me of someone whose work may be interesting to look into: Rufus Pollock, a former academic economist who founded the Open Knowledge Foundation. His short book (freely available here) makes the case for replacing traditional IP, like patents and copyright, with a novel kind of remuneration. The major benefits he mentions include increasing innovation and creativity in art, science, technology, etc.
Aaron_Scher @ 2022-03-05T02:45 (+1)
For those particularly concerned with counterfactual impact, this is an argument to work on problems or in fields that are just beginning or don't exist yet, in which many of the wins haven't been realized; this is not a novel argument. I think the bigger update is that "ideas get harder to find" indicates that you may not need to have Beethoven's creativity or Newton's math skills in order to make progress on hard problems which are relatively new or have received little attention. In particular, AI Safety seems like a key place where this rings true, in my opinion.
Bill Benzon @ 2022-02-23T02:31 (+1)
I vote for innovation as mining. I've visualized an abstract version of that starting on p. 14 ("Stagnation, Redux: Like diamonds, good ideas are not evenly distributed") of this working paper, What economic growth and statistical semantics tell us about the structure of the world, piggy-backing on Romer's 1992 paper, Two Strategies for Economic Development.
Jeremy @ 2022-01-12T16:47 (+1)
I'll talk exclusively about music (mostly the broader rock/pop realm), because that’s the area that I know the best (being a lifelong obsessive music lover who has played in bands and dabbled in music production and DJing). It seems pretty clear to me that what you describe is already the current state in the world of music - and perhaps that’s partly why we don't see any more Beethovens?
Riffing on past work is, arguably, something every musician does, consciously, or unconsciously. They cover songs, "steal" riffs, sample, and combine ideas from other artists. It's quite rare that you find someone even attempting to do something completely without precedent. (This seems so self-evident that I'm not going to provide examples, but I would be happy to, upon request.)
Population growth/demographics, and also technology (recording and distribution, even before considering AI), have already resulted in exponential growth in the amount of music being produced. You used to have to pay to go into a studio to record an album, and get a record contract to distribute it. As a consumer, you'd have to work harder too. If you read about some music, you'd have to go to a store to find it and buy it, sometimes without ever hearing it. Now people record in their bedrooms, upload to Bandcamp, Soundcloud, Spotify, etc., and consumers can find it online immediately. (Certainly, it seems like the evolution of things like the OpenAI Jukebox will blow this up to absurd proportions.)
Why has this not resulted in more universally acclaimed music? Perhaps partly because of intense competition? These days, many more musicians can earn some income from their music, but it's spread much more thin. Few can earn enough to make a living. This has resulted in a flourishing of many genres and sub-genres that appeal more narrowly, but may have a more consistent audience. When Beethoven was alive, you had his genius, but nowhere near the breadth and variety of music available today. It's not surprising that no one is as universally acclaimed now.
As Oleg Eterevsky mentions above, there is certainly also an element of "stood the test of time" for Beethoven, which is not possible for contemporary works (though you could argue something like this for The Beatles or Led Zeppelin, for example).
Of course, in the idea that there is less "great art" "as judged by critical acclaim", the "as judged by critical acclaim" is doing a lot of work. Another reason we may not have universally acclaimed musicians could be the decline in usefulness of critics in general. You can hear anything instantly online. Why do you even need anyone telling you what to like? You can use your ears! Just like it's harder to make a living off of music these days, it's probably even harder to make a living off music criticism (though there are certainly fewer people trying).
I would argue that we are already experiencing an explosion of innovation in music, it just doesn't look like "great art" "as judged by critical acclaim", primarily because critical acclaim is less relevant (universal acclaim as a concept may just no longer exist), and secondarily, as Oleg mentioned above, because we are not judging it through the test of time. It looks like many interesting and innovative ideas amongst a staggering breadth of new genres and sub-genres. So much so that, even for an enthusiast, it is overwhelming.
As far as mining vs. discovering, I think it's definitely useful to think of the creative process in both ways, but considering music exclusively a mining activity would be completely wrong. Musical ideas are expressed very differently depending on who's expressing them. Another way of putting this: Is it the idea that's important or the expression of the idea (or some combination of both)?
In science, it's clearly heavily weighted towards the idea, but in music I think it has to be pretty equally both. If Beyoncé had come up with the ideas from Beethoven's 5th instead of Beethoven, it would have sounded a lot different (probably not like this, but maybe AI Jukebox will be able to show us some day soon). It's possible for a brilliant idea to be poorly expressed, sound like garbage, and get completely ignored. Conversely, plenty of rehashed ideas are brilliantly executed and widely appreciated.
Veering away from music briefly, I agree that a modern-day Shakespeare might resemble Sorkin. Beyond ideas being harder to find, I think you also have to factor in that there are just many more people of that talent level working today, and we tend to judge things in context. If there are 50 Shakespeares out there making excellent TV shows and movies today, no single one of them is going to seem that special. Beethoven and Shakespeare were head and shoulders above their contemporaries, in part, because they had fewer contemporaries.
Holden Karnofsky @ 2022-01-18T20:45 (+5)
I largely agree with this comment, and I didn't mean to say that different intellectual property norms would create more "Beethoven-like" figures critical-acclaim-wise. I more meant to say it would just be very beneficial to consumers. (And I do think music is in a noticeably better state (w/r/t the ease of finding a lot that one really likes) than film or books, though this could be for a number of reasons.)
Jeremy @ 2022-01-18T22:33 (+1)
One reason film may be in a worse state could be that it takes many more people to make a film - one person's idea/vision almost always has to pass through many more filters. They cost more to make and there is more pressure to make it into something that will be widely successful to recoup those up front investments.
Books I'm not so sure about. It seems harder to write a novel to me, but maybe that's just because music comes more easily to me than writing. It strikes me that it's a much bigger time commitment to read enough of a novel to decide if you actually like it than it is to listen to a song and do the same. Perhaps this leads to self-publishing not being as viable an option. Consumers rely more on filters/gatekeepers, because you could spend a lifetime trying to sift through self-published novels and not find many good ones.
Music may have the advantage of being able to be consumed somewhat passively - while driving, working, etc. - while movies and books are more immersive.
More basically, you can consume astronomically more songs in a lifetime than books or movies.
Holden Karnofsky @ 2022-01-04T20:27 (+3)
Comments for Where's Today's Beethoven? will go here.
Benjamin_Todd @ 2022-01-10T11:57 (+11)
I was pretty struck by how per capita output isn't obviously going down, and it's only when you do the effective population estimates that it does.
Could this suggest a 4th hypothesis, the 'innate genius' theory: about 1 in 10 million people are geniuses, and, at least since around 1400, talent-spotting mechanisms were good enough to find them, so the fraction of the population that was educated or urbanised doesn't make a difference to their chances of doing great work?
I think I've seen people suggest this idea - I'm curious why you didn't include it in the post.
Charles Dillon @ 2022-01-10T13:12 (+5)
This seems implausible to me, unless I'm misunderstanding something.
Are all such geniuses pre-1900 assumed to come from the aristocratic classes? Why?
If no, are there many counterexamples of geniuses in the lower classes being discovered in that time by existing talent spotting mechanisms?
If yes, why would this not be the case any more post-1900, or is the claim that it is still the case?
Benjamin_Todd @ 2022-01-10T13:41 (+3)
It's not exactly a nice conclusion.
You'd need to think something like geniuses tend to come from families with genius potential, and these families also tend to be in the top couple of percent by income.
It would line up with claims made by Gregory Clark in The Son Also Rises.
To be clear, I'm not saying I agree with these claims or think this model is the most plausible one.
Charles Dillon @ 2022-01-10T16:47 (+3)
Understood, thanks. Yeah, this seems like a bit of an implausible just-so story to me.
HowieL @ 2022-01-05T05:58 (+4)
"Some of the people who have written the most detailed pieces about "innovation stagnation" seem to believe something like the "golden age" hypothesis - but they seem to say so only in interviews and casual discussions, not their main works."
Just fyi - You mention Peter Thiel in a footnote here. It's been a while since I read it, but iirc Peter Thiel describes something you might consider a version of the golden age hypothesis in a bit of depth in the "You are not a lottery ticket" chapter of Zero to One.
Milanesa @ 2023-04-19T23:21 (+3)
This post lacks knowledge about western contemporary music (which is roughly what "classical" music is called nowadays). A brief list of innovative composers on par with Beethoven:
Debussy, Ravel, Shostakovich, Schoenberg, Berg, Webern, Stravinsky, Prokofiev, Bartok, Varese, Messiaen, Ligeti, Berio, Boulez, Nono, Stockhausen, Steve Reich, John Cage, Penderecki, Ginastera, Villa-Lobos, Xenakis, Saariaho, John Adams, Elliot Carter, Manoury, Grisey, Murail, Haas, Kurtag, Davidovsky, Sciarrino, Alexander Schubert, Steen-Andersen, Ablinger, Oliveira, Mary, Kokoras... and of course there are a lot more.
A fun way to keep up on new composers is watching the ScoreFollower youtube channel videos. You can also look for composition contests and check out the winners and jury for names. Or... just google for contemporary music, read books about it or about music history, or even look for musicology research (the scientific study of music). Hope this helps.
kokotajlod @ 2022-01-05T18:27 (+3)
[Disclaimer: Sheer idle speculation, not important or rigorous]
I am generally a fan of the innovation-as-mining hypothesis. However, even within the broad tent of that hypothesis, there is room to debate e.g. whether there has been a recent, temporary slowdown in progress due to cultural or genetic factors in addition to the usual ideas-getting-harder-to-find factor. I have two ideas here that I'd be interested to see explored:
1. You say
Finally, this hypothesis implies that a literal duplicate of Beethoven, transplanted to today's society, would be a lot less impressive. My own best guesses at what Beethoven and Shakespeare duplicates would accomplish today might show up in a future short post that will make lots of people mad.
What about a duplicate of John von Neumann? Maybe our modern geniuses like Terry Tao are his equal, but I sometimes wonder if he was a class above even them.
2. One argument you make against the Golden Age hypothesis is that typically the golden age is also the first age, which is a suspicious coincidence. IIRC, I read somewhere that average human brain size has shrunk over the last ten thousand years or so. I dunno if that's true but suppose it is. Given the correlation between brain size and IQ, one might wonder whether selection pressure for intelligence -- or some important component of it -- has also diminished in the last ten thousand years or so. If that were true, a version of the Golden Age hypothesis would be more likely, and also would successfully predict that observed "golden ages" in various fields would happen at the beginning of said fields.
Calion @ 2022-01-09T15:51 (+2)
>the "golden age" hypothesis (people in the past were better at innovation), the "bad taste" hypothesis (Beethoven and others don't deserve their reputations), and the "innovation as mining" hypothesis (ideas naturally get harder to find over time, and we should expect art and science to keep slowing down by default).
I think you're missing what I consider the most likely explanation: There are a lot more people in these fields now, trying to be the best. What's remarkable about these historical figures is not that they were better at what they did than people nowadays, but that they did it first. So I am not sure we'd notice a new Shakespeare. We'd simply lump him in with all of the other really good playwrights we have. Nothing would make him stand out as the best.
So it's possible that our scientists, artists, etc. are better than these historical giants, but we just can't tell.
myst_05 @ 2022-01-05T18:12 (+2)
The movie Yesterday sort of tackled this in an interesting way. Imagine a parallel universe where everything is the same but the Beatles never came together. Would someone releasing their exact music in 2021 still become highly successful and considered a musical icon? In the movie the answer is yes. In real life I imagine the answer would be no - the same exact music would no longer sound innovative and would thus not become particularly successful. This New-Beatles band might reach the level of a Top-100 artist but they'd never see the same level of admiration as the Beatles did and still do.
So I believe we're simply not judging more recent art works by the same standards, resulting in a huge bias towards older works. Beethoven is only noteworthy because his works are a cultural meme at this point - he was a great musician for his time, sure, but right now there are probably tens of thousands of musicians who could make music of the same caliber straight on their laptops. Today's Beethoven publishes his amazing tracks on SoundCloud and toils in obscurity.
Patrick @ 2022-01-09T02:21 (+1)
So I believe we're simply not judging more recent art works by the same standards, resulting in a huge bias towards older works.
Why is it wrong to credit past art for innovations that have since become commonplace? If a musician's innovations became widespread, I would count that as evidence of the musician's skill. Similarly, Euclid was a big deal even though there are millions of people who know more math today than he did.
Beethoven is only noteworthy because his works are a cultural meme at this point - he was a great musician for his time, sure, but right now there's probably tens of thousands of musicians who could make music of the same caliber straight on their laptops. Today's Beethoven publishes his amazing tracks on SoundCloud and toils in obscurity.
This sounds like an extreme overstatement, at least if applied to classical music. Some modern classical music is pretty good, and better than Beethoven's less-acclaimed works. And the best of it is probably on par with Beethoven's greatest hits. But much of it is unmemorable—premiered, then mercifully forgotten. The catalog of the Boston Modern Orchestra Project is representative of modern classical orchestral music, and I think most of it falls far short of Beethoven's best symphonies. The concertgoing public strongly prefers the old stuff, to the consternation of adventurous conductors.
John Loder @ 2022-01-05T12:42 (+2)
Very much enjoyed the post.
The thesis that recent (50 year) declines in innovation productivity are best explained by innovation generally getting structurally harder over time does, I think, somewhat overfit the data.
Sketched argument below:
- Innovation is cumulative. And in particular new tools create new possibilities for innovation as much as the reverse. So no astronomy without the telescope, no modern medicine without organic chemistry, no Beethoven without the invention of the piano, no early mathematics without Hindu-Arabic numerals, etc.
- When the right tool arrives, there is a subsequent explosion of innovation, followed by a slow down.
- There is a degree of randomness in these bursts, and the 80 years around the turn of the 19th/20th century was a particularly strong cluster (from the publication of Maxwell's equations in 1865 to the Trinity nuclear test in 1945). Humanity went from candles and horses to nuclear power, jet engines, eradication of most communicable diseases, electrification, relativity and quantum mechanics, the telephone, early computers, and many others. Art and culture also shifted abruptly and in a very interesting way.
- Note that this was an acceleration from the 19th century - innovation doesn't always get harder.
- If the limiting factor is the right tool, rather than people or money, then huge investment in research will lead to drops in productivity in producing fundamental breakthroughs. And the people we call geniuses are just those that get their hands on the tool first (a bit like Bill Gates being one of a handful of people globally able to play with computers in their teens).
- The post-1970 (?) slowdown in innovation is to some extent a contrast with an exceptional cluster, and in itself only a relative trough.
The big question, it seems to me, is whether AI and ~CRISPR are the sorts of fundamental tools that can spark a new acceleration.
Douglas Knight @ 2022-03-11T18:45 (+1)
It may be hard to compare art from different periods, but it is straightforward to compare science and engineering from different periods, because the same thing was discovered or invented multiple times.
Knowledge is not a ratchet. Sometimes knowledge is lost. But it is not only catastrophes like burning libraries and riots against scholars. There are Leaden Ages where scientific knowledge is lost century after century, such as Alexandria for about five centuries starting 150AD. Any period of progress is a Golden Age compared to that. Do people know that they are in a Leaden Age? I don't think the Alexandrians knew. The first task is not to fool yourself.
If a second age reconstructs the knowledge of the first age faster, it might be because they are better, or it might be because they are supported by the notes of the pioneers. But what if they are slower? This is strong evidence that the first age really was Golden. In particular, the Hellenistic Age, 330-130BC, subsumed virtually all scientific progress for thousands of years, at least to 1600, and maybe to 1700.
anonfornow @ 2022-02-24T04:33 (+1)
One possibility (which may or may not have been mentioned) is that an overflow of information/stimulation, as a result of technology and faster-paced societies, inhibits creativity. Part of the issue may arise from excessive entertainment: Beethoven may have created musical pieces in periods of boredom, which a modern-day Beethoven instead spends scrolling through social media or watching Netflix.
dwarvendatamining @ 2022-01-05T00:42 (+1)
I think another potential explanation relates to the way people think about the history of a given field when asked to reflect on it (e.g. to create a top 100 list). We tend to conceive of fields as progressions unfolding over time, and even if we don't think this is always in a "better" direction, at least we conceive of the field as consisting of time periods characterized by a dominant paradigm or style. Certainly this is the way that "history of X" classes are usually taught.
If this is the case, it seems natural to me that, when asked to reflect on the "most important" individuals or contributions to a field in its history, we will tend to structure that reflection around our conception of these periods, and likely identify an emblematic individual for each period. Indeed, part of our conception of "greatest" might include a feature like "dominated their field for a decade or more," and obviously the frequency of individuals characterized in this way cannot increase with greater population, education, or anything at all! To the extent that our thinking follows this approach, we will tend to see "best of" lists being pretty flat over time, and therefore, appearing to decline when normalized by anything that increases over time.
Jeff Sackmann @ 2022-01-04T22:15 (+1)
Thanks for this, it's a fascinating subject.
At risk of anticipating your follow-ups, I have two suggestions regarding art. I don't think they apply as well to science.
- If a work is considered to be among the greats, the older it is, the more foundational it has become. An enormous amount of great music since Beethoven is, often very deliberately, developing Beethoven's ideas further, or introducing new ideas by tweaking what Beethoven (or Mozart or Bach) did. Thus, what the art is gets tied up with the foundational works. In mining terms, finding a motherlode also seems to mean shutting down (or, at best, reducing focus on) other mines. It's impossible to imagine western classical music without Beethoven, in part because such a significant amount of it is Beethovian. Had some very talented and charismatic musician come along at the right time from the Balkans, maybe that foundational slot would be taken by someone/something else. If this is correct, there's bound to be some historical figures that are considered head-and-shoulders above the rest, and they must be quite old. A contemporary person cannot fill this role, though it's conceivable that a contemporary person would fill this role for people 200 years down the road.
- The effectiveness of Beethoven's and Shakespeare's works relies on performance. While there are attempts at period authenticity, the most popular recording of the 5th symphony or performance of Hamlet is not that, and hasn't been that for a long time. This relates to (1) in a couple of ways:
- There have been centuries of "testing" to optimize the experience of these works, and it is ongoing. (This point is less relevant to, say, novelists, even if people are constantly re-interpreting Dickens.) Ranking Shakespeare #1 is really ranking centuries-of-optimizations-Shakespeare #1, which puts David Mamet at a pretty big disadvantage.
- Foundational works impact performance practice. At risk of oversimplifying, teenage violinists go to conservatory to learn how to play Beethoven. Even in the unlikely event that they never play Beethoven, present-day composers write for musicians trained to play Beethoven, not Harry Partch. Not all music requires virtuosity (or violinists), but huge subfields of the arts involve creators devising works for performers who were trained for old stuff.
hrosspet @ 2022-01-04T22:05 (+1)
Thank you for a thought provoking post! I enjoyed it a lot.
I also find the "innovation as mining" hypothesis intuitive. I'd just add that innovation gets harder for humans, but we don't know whether it holds in general (think AI). Our mental capacity has been roughly constant since ancient Greece, but there is more and more previous work to understand before one can come up with something new. This might not be true for AI, if their capacity scales.
On the other hand, there is a combinatorial explosion of facts that you can combine to come up with an innovation, and I don't know what fraction of the combinations will actually be useful and judged as innovation. So overall, the difficulty might increase, stay roughly the same, or decrease, depending on how the number of useful combinations scales with the number of all combinations.
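As a toy illustration of that scaling question (the numbers and the two "useful combinations" assumptions below are made up purely for illustration): with n facts there are roughly n²/2 pairwise combinations, so what matters is whether the count of useful combinations keeps pace with that growth.

```python
# Toy sketch: pairwise combinations of n facts vs. two made-up assumptions
# about how many of those combinations turn out to be useful.
from math import comb

for n in (100, 1_000, 10_000):
    pairs = comb(n, 2)                  # ~ n^2 / 2 possible combinations
    useful_fixed_share = pairs * 1e-3   # usefulness keeps pace with the explosion
    useful_sublinear = 10 * n ** 0.5    # usefulness grows much more slowly
    print(f"n={n:>6}  pairs={pairs:>11,}  "
          f"fixed-share useful~{useful_fixed_share:>8,.0f}  "
          f"sublinear useful~{useful_sublinear:>6,.0f}")
```

Under the fixed-share assumption, useful combinations explode along with the total, and innovation need not get harder; under the sublinear assumption they fall ever further behind, which is the sense in which the difficulty could go either way.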
I also suspect that subjective rankings of past accomplishments just tend, for whatever reason, to look overly favorably on the past.
One explanation of this would be that innovation needs time to collect its impact. Old innovations are well integrated into the society, so they have already collected most of its impact, while new innovations have most of their impact still in the future, so we don't perceive them as transformative yet.
Kieron George @ 2021-12-08T20:16 (+3)
It feels super suspicious that the smallest possible source of violent death ("Individual homicides") and the largest possible source of violent death ("Mass Atrocities") would have significant contributions to the violent death rate, but the middle is excluded as insignificant.
Are there other examples like this where the smallest & largest sources of something are both significant with the middle excluded as negligible?
Holden Karnofsky @ 2022-01-03T19:55 (+4)
I didn't exclude the middle because I think it's insignificant - more because I didn't have data on it. That said, I would actually guess that it's not as big as the other two categories.
If two groups got in a fight in the US and there were lots of deaths, these would be classified as homicides; for violent deaths to not be classified as homicides, there needs to be some sort of breakdown or abuse of the legal framework. Those events don't seem so common that I think we're obviously missing a ton when we just look at the most deadly ones (though I do wish we had data on all of them).
MattBall @ 2022-02-03T20:22 (+2)
Totally agree. Yes, the production was good, but Sgt. Pepper's took it to the next level, and with brilliant songs as well.
Regarding engineering, you might find this interesting:
https://www.amazon.com/dp/B000OVLIQU/ref=dp-kindle-redirect?_encoding=UTF8&btkr=1
I didn't think I would find it interesting, but it was fascinating (I listened to the audio version).
Jack Gillespie @ 2022-02-03T20:28 (+1)
Great tip, thanks!
Holden Karnofsky @ 2022-02-03T05:57 (+2)
(Placeholder for comments on "To Match the Greats, Don’t Follow In Their Footsteps")
Holden Karnofsky @ 2022-01-25T23:52 (+2)
Comments for Cost disease and civilizational decline will go here.
Holden Karnofsky @ 2022-01-25T18:35 (+2)
Comments on Reader reactions and update on "Where's Today's Beethoven" will go here.
ozymandias @ 2022-07-31T16:58 (+3)
An interesting counterexample to some of your points is the Disney Renaissance, generally considered to be the golden age of Disney animation, which started fifty years after Disney began animating films. AIUI, the conventional wisdom is that there happened to be a confluence of incredible talents: in particular, Howard Ashman and Alan Menken were an incredible songwriting duo. The Renaissance was also when the iconic Disney princess line was invented. Before the Renaissance, Disney happened to have made films about princesses, but it wasn't a distinct category, any more than films about talking animals were considered a distinct category. The anecdote I've heard is that a producer noticed that girls were wearing handmade Sleeping Beauty and Cinderella dresses, and decided to appeal to the obvious market here!
I for one would be very interested in it if you decided to look into why the Disney Renaissance was so good so long after Disney began animating films.
Ben Wōden @ 2022-01-29T17:50 (+1)
You write "Is it in fact the case that the difference between the 1st- and 2nd-best performer should shrink as the number of competitors goes up? This isn't obvious to me either way."
I think that, if you draw n independent samples from a normal distribution, the difference between the highest and second-highest values drawn should shrink as n rises, but that the opposite is true if the distribution is log-normal.
I would expect that "greatness" in terms of critical acclaim in some field is log-normally distributed, so the bigger the field, the greater the extent to which the leader should stand out above the second-best.
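A quick Monte Carlo sketch to check this numerically (the standard normal and log-normal(0, 1) choices and the mean_top_gap helper below are just illustrative assumptions, not anything from the post):

```python
# Monte Carlo sketch: average gap between the best and second-best of n draws,
# for normal vs. log-normal "greatness" distributions.
import numpy as np

rng = np.random.default_rng(0)

def mean_top_gap(sampler, n, trials=1000):
    """Average (best - second best) over `trials` experiments of n draws each."""
    draws = sampler(size=(trials, n))
    top_two = np.sort(draws, axis=1)[:, -2:]   # columns: second best, best
    return float(np.mean(top_two[:, 1] - top_two[:, 0]))

for n in (100, 1_000, 10_000):
    print(f"n={n:>6}  normal gap ~ {mean_top_gap(rng.standard_normal, n):.3f}  "
          f"log-normal gap ~ {mean_top_gap(rng.lognormal, n):.3f}")
```

With draws like these, the normal gap should shrink slowly as n grows, while the log-normal gap should widen, because exponentiating the top normal order statistics stretches the spacing between them.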
Holden Karnofsky @ 2022-01-06T18:49 (+2)
[Placeholder for How artistic ideas could get harder to find comments]
Kai Williams @ 2022-01-14T17:37 (+1)
I generally like the innovation-as-mining hypothesis with regard to the sciences and, to some extent, the arts, but I think that there is one issue with the logical chain.
You said that "[i]f not for this phenomenon [that ideas get harder to find], sequels should generally be better than the original," but I don't think this is necessarily true. I think a more likely reason that sequels aren't generally better than the original is mostly regression to the mean and selection effects, with two main causes:
- Pure quality: Presumably, an author or a screenwriter will only make a sequel if the original enjoyed a sufficient level of success to merit the effort. While some of that quality is likely due to the skill of the author, some of it is likely due to luck. Accordingly, the quality of subsequent sequels is likely worse, even if the author improves over time.
- Idea selection: When writing the original, the author had a lot of leeway in what type of media was being written, what the world building looked like, what plot was going to be used, etc. Given the uncertainty of the enterprise and the need to make a good pitch, the author likely chose ideas for their highest quality. However, when the sequel is written, the choice of ideas is less strict. For one, the pool of ideas is smaller, but further, the standards are lower. If a sequel is written, it is often due to demand from audiences rather than the desire of creators, so ideas are not held to the same standard.
I think a relevant example here is that of albums. There is this idea of a "sophomore slump" in albums, where a band's second album tends to be worse than their first. I don't think this is due to it being hard to make good albums after your first (quality generally seems to improve over the next few albums after that), but due to a shrinking pool of songs to choose from. On an artist's debut album, they can choose pretty much any song they've ever written. On the second, however, the artist is restricted to only the songs they've written after the first album and anything that wasn't good enough to make it onto the first. Even though they don't face the constraint of working within their pre-existing world, the quality decreases. I think it is likely that the same dynamic is at work here as well.
As such, I'm not sure exactly why "ideas get harder to find" should be the default. Another explanation for why the per-capita great successes could be lower than in the past has to do with competition and the importance of being the best. In sports, there is a lot of attention paid to the greats of the past, even though current athletes are at a higher level. The fiftieth best quarterback today in the NFL could very well be better than the best quarterback in 1980, but we remember the best quarterback then and not the fiftieth best now. As such, because the middle-brow gets cluttered (essentially, to be successful in the middlebrow, you need to be successful everywhere), there is much stiffer competition to be the best of the middle-brow today.
As such, you likely see more total success and less success per capita, which is about the pattern we see right now. That being said, I enjoyed the post.
lincolnq @ 2022-01-10T17:26 (+1)
I think this is one of your best posts. I learned a lot, built new models of art, and laughed out loud multiple times.
Taymon @ 2022-01-08T03:07 (+1)
Obligatory link to Scott Alexander's "Ambijectivity" regarding the contentiousness of defining great art.
Holden Karnofsky @ 2022-01-05T20:12 (+2)
Comments for AI alignment research links will go here.
Holden Karnofsky @ 2021-12-23T16:19 (+2)
Comments for Utopia links will go here.
Czynski @ 2021-12-23T23:33 (+5)
For a realistic but largely utopian near-future setting, I recommend Rainbows End by Vernor Vinge. Much of the plot involves a weak and possibly immersion-breaking take on AGI, but in terms of forecasting a near-future world where most problems have become substantially more superficial and mild, the background events and supporting material are very good.
Aaron Gertler @ 2021-12-25T04:22 (+4)
I don't think I've seen anyone reference the Culture series in connection with these posts yet. The series places a utopian post-scarcity and post-death society — the Culture, run by benevolent AIs that do a good job of handling human values — in conflict with societies that are not the Culture.
I've only read The Player of Games myself, and that book spends more time with the non-utopian than the utopian society, but it's still a good book, and one that many people recommend as an entry point into the series.
tessa @ 2021-12-31T21:23 (+4)
I haven't read The Culture series, but/and I really enjoyed this meta piece about it, Why The Culture Wins: An appreciation of Iain M. Banks, which has a really excellent discussion of meaning-seeking within a post-scarcity utopia. An excerpt:
In fact, modern science fiction writers have had so little to say about the evolution of culture and society that it has become a standard trope of the genre to imagine a technologically advanced future that contains archaic social structures. ... Such a postulate can be entertaining, to the extent that it involves a dramatic rejection of Marx’s view, that the development of the forces of production drives the relations of production (“The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist.”). Put in more contemporary terms, Marx’s claim is that there are functional relations between technology and social structure, so that you can’t just combine them any old way. Marx was, in this regard, certainly right, hence the sociological naiveté that lies at the heart of Dune. Feudalism with energy weapons makes no sense – a feudal society could not produce energy weapons, and energy weapons would undermine feudal social relations.
...
One interesting consequence of this process [of globalized cultural evolution] is that the competition between cultures is becoming defunctionalized. The institutions of modern bureaucratic capitalism solve many of the traditional problems of social integration in an almost mechanical way. As a result, when considering the modern “hypercultures” – e.g. American, Japanese, European – there is little to choose from a functional point of view. None are particularly better or worse, from the standpoint of constructing a successful society. And so what is there left to compete on? All that is left are the memetic properties of the culture, which is to say, the pure capacity to reproduce itself.
Taymon @ 2022-01-08T04:10 (+1)
The Fun Theory Sequence (which is on a similar topic) had some things to say about the Culture.
Sharmake @ 2022-07-19T21:16 (+1)
Sorry to come in late, but I do have a link for a near-utopia here:
https://orionsarm.com/ is essentially as utopian as it can be, barring the 3 powers of time travel, reality warping or thermodynamics breaking.
Holden Karnofsky @ 2021-12-10T04:45 (+2)
[Placeholder for Visualizing Utopia comments]
renancunha @ 2022-04-27T21:40 (+2)
It could be argued that advances today are already taking meaning out of our lives (so for some people, utopia is already going too far). One example of this is teenagers, who suffer more from depression but less from other diseases. This post also reminded me of a blog post by philosopher Mike Huemer in which he argues that the best afterlife is reincarnation: https://fakenous.net/?p=2491
3. Reincarnation > Heaven
What might God do instead? It would make more sense for a god to send us back into the world, to live this sort of existence again. Each life involves meaningful struggles in the face of adversity, the exercise of free will, and the opportunity to exhibit virtue. It never gets old, because we forget everything at the end of each lifetime. It can go on for eternity in that way.
richard_ngo @ 2022-01-05T12:39 (+2)
You might be interested in my effort to characterise utopia.
Holden Karnofsky @ 2021-12-02T05:36 (+2)
[Placeholder for Progress Studies comments]
Holden Karnofsky @ 2021-12-02T05:36 (+2)
[Placeholder for GPT-3 comments]
Holden Karnofsky @ 2021-11-30T19:55 (+2)
Comments for Did life get better during the pre-industrial era? (Ehhhh) will go here.
Holden Karnofsky @ 2021-11-16T18:11 (+2)
Comments for Has violence declined, when we include the world wars and other major atrocities? will go here.
Gary Gechlik @ 2023-03-08T18:52 (+1)
I think the key to understanding the changes is to have a spreadsheet based on human developmental cycles. Human attraction to literary fiction is important in the period from ages 4-25; it is how we learn to read and enjoy reading in an intellectually imaginative way. Having said that, over 45 there is a collapse of productivity in reading fiction that does not occur in those reading scientific literature, technological news, and political news. Beginning at age 65, those who continue to read scientific literature and technology news stay significantly current, but those who focus on political news experience a similar intellectual collapse: CNN/FOX-NEWSISM.
It is a process of neuronal development, growth, plateau, and inevitable decline. As intelligent people live longer, they must transition their intellectual diet. It's why normal, productive fifty-year-olds with high incomes, big homes, etc., don't seem to attend rock concerts, even though, thirty years earlier, they had a poster of that rock group on the wall of their dorm room.
Barney Miller @ 2022-03-05T06:16 (+1)
I think Pet Sounds is a bit overrated too, though I like it a lot more than you do. My only issues with your review are twofold:
- Comparing any pop music to Coltrane really isn't fair. Apples & oranges, as others here have pointed out.
- I don't see you talking about harmony, chord progression, melody, rhythm or time signature at all when mentioning The Beach Boys or Coltrane. You don't have to be trained in music theory to enjoy music, especially pop. But I think you do need to know the basics of song structure to critique it.
May I suggest a YouTube channel called "professional musicians react". I'm an amateur guitar player, but almost anyone can learn from this channel. They really explain what goes on in pop music that makes it so memorable, whether it's old stuff like The Beach Boys & Beatles or new stuff with amazing melodies & chord changes in Bruno Mars's latest Silk Sonic record (which is much cheesier than The Beach Boys, yet in some ways more accomplished).
Lastly, seek out Brian Wilson's "Smiley Smile", which is technically a solo album from the 2000s, but really it's his masterpiece that he never got to finish with The Beach Boys (who, as a recording act, were kinda ruined by singer Mike Love, who would change Brian's lyrics & arrangements and make the tunes much cheesier).
Aidan Alexander @ 2022-02-07T10:26 (+1)
Hi there! Is there anywhere you can direct me to that makes the case that constant replacement occurs? In what sense do we stop existing and get replaced by a new person each moment? What is your reason for believing this? This is stated in the post but not justified anywhere. Apologies if I have missed it somewhere. I also tried googling 'constant replacement', 'constant replacement self', 'constant replacement identity' etc. and couldn't find more on this.
Holden Karnofsky @ 2022-03-31T23:10 (+2)
I didn't make a claim that constant replacement occurs "empirically." As far as I can tell, it's not possible to empirically test whether it does or not. I think we are left deciding whether we choose to think of ourselves as being constantly replaced, or not - either choice won't contradict any empirical observations. My post was pointing out that if one does choose to think of things that way, a lot of other paradoxes seem to go away.
ffx @ 2022-01-25T22:39 (+1)
The estimate for the increase in effective population should take into account that, for extraordinarily talented individuals, the chances of becoming acclaimed scientists or artists probably increased less than for the average person. For example, someone like Beethoven was likely to get a musical education and the option to pursue his talents even around 1800. This suggests that the size of the effective population should increase less than linearly in the number of people with access to education.
Similar stories can be told for other factors that drive the effective population growth. (I could not figure out if these considerations are reflected in the estimates.)
mother box @ 2022-01-12T05:25 (+1)
I'm sure this has been brought up in the comments at some point over the past three "Beethoven" writings, but it's an interesting wrinkle in the art-as-"mining" idea that some of the great O.G. artists (Shakespeare, Lucas/Star Wars) themselves mined and borrowed heavily from previous or contemporary work (Herbert's Dune; Chaucer, Kit Marlowe, Greek/Roman mythology, history, and drama). I'm less familiar with classical music, but I can only assume the same for Beethoven.
bhyer @ 2022-01-09T17:52 (+1)
You've done an admirable job of charting the relative "quality" of intellectual/artistic/musical advances, but not the "why," other than "it was cool to be smart," as in ancient Greece. What other factors could be considered? Let's start with nutrition: did the consumption of stimulants like caffeine and, later, tobacco bump up the advances? Did the consumption of large amounts of alcohol, served in pewter, serve to depress IQs enough to perpetuate the Dark Ages? Did recreational drug use boost the creativity of music and art in the mid- to late-20th century?
Then there are political/military factors, such as the burning of the library of Alexandria or (for instance) the clamp the Chinese dynasties held on technology and trade. When the Europeans first visited China and learned about gunpowder, they took the tech and improved on it. When they returned centuries later, they had improved their cannons, while the Chinese were still using the same designs from earlier.
Chris Allen @ 2021-12-30T19:08 (+1)
You are overthinking this, imho. Death is just a word that can be defined in multiple ways by reasonable people, so you can define duplicating yourself as not dying if you want. That doesn't mean other people will agree. Another person could legitimately say that dying is when their original organic body stops working; that is an equally legitimate definition. Deciding to have duplicates built is a decision, not a definitional thing: you can subscribe to the latter definition of death and still want duplicates of yourself, either electronic or organic. I think a lot of the confusion around this subject comes from religious times and duality-style thinking. We are just physical objects at the end of the day, and of course we can create near-duplicates of any physical object and say they are fungible if we want.
clarkherring @ 2021-12-29T01:03 (+1)
This is exactly the way I view my life. I am surprised that someone else also holds this worldview. I also keep a journal, as this is a means of "dead" Clarks talking to the living Clark. This soon-to-be-dead Clark also sends messages via the journal to yet-to-be-born Clarks. (This is a comment on What counts as death?)
jonrobinson2 @ 2021-12-28T23:20 (+1)
On a separate note from your piece on "what counts as death", there is quite the debate in medicine about how to classify when a patient is dead. If you're interested, I highly recommend this:
http://bedside-rounds.org/episode-65-the-last-breath/
J.A. Littlewell @ 2021-12-07T19:12 (+1)
You might be interested in the Real Utopias project, which spawned several edited volumes, mostly written from social scientific perspectives: https://www.ssc.wisc.edu/~wright/RealUtopias.htm