AI as a Psychosocial Singularity
By Kenneth_Diao @ 2026-04-16T14:06
This is a linkpost to https://graspingatwaves.substack.com/p/ai-as-a-psychosocial-singularity
I’ve long felt that AI is a peculiar psychological object, and recently, it clicked for me that it is useful to think of it as a psychosocial singularity. Much the way that black holes bend and break the fabric of spacetime, AI bends and breaks the weave of the mind and the weft of the world. Love it, hate it, or want nothing to do with it; no matter what, it will still find a way to suck you in.
The closer we get to the center of AI development and AI safety, the more distorted our thinking becomes. People have dedicated their lives to building it, poured billions to trillions of dollars into developing and deploying it, and spelled out radically different visions of the world as a consequence of it. They have worshipped it as a god and decried it as an abomination. The question is not whether one’s thinking is distorted, but rather what those distortions are and whether they are more useful than they are harmful.
I’m no expert on AI. A lot of what I’m going off of is vibes. But I have some thoughts and feelings about the kinds of thinking that surround AI, which I’m going to share, and you can tell me how right and how wrong you think I am.
A Tale of Two Eschatologies
One story about AI is that it spells the end of the world. AI capabilities are advancing at incredible speed, and soon, they will be more intelligent and capable than us and able to autonomously complete long-horizon tasks. Whether it be more swiftly through nuclear or biological weapons or less swiftly through sheer indifference, AI will wipe out all of humanity. One thing is clear: our chapter in the history of the world is coming to an end.
Another story about AI is that it spells the end of the world as we know it. AI capabilities are advancing at incredible speed, and soon, they will be more intelligent and capable than us and able to autonomously complete long-horizon tasks. Whether it be more swiftly through recursive self-improvement or less swiftly through gradual refinement and integration, AI will raise humanity to unprecedented heights. One thing is clear: our chapter in the history of the world is coming to an end.
How many times in our history has the psyche had to contend with something that (plausibly) simultaneously held the power to save the world and to destroy it? I think it’s hard for people, myself included, to find the proper dialectic. So we cleave to one end or the other, or else oscillate violently between the two.
But notice how the strength of the contradiction emerges from and reduces to a proportionally strong unity. Both Doomers and Accelerationists branch from a common worldview: that AI is advancing quickly, that it will usher in unprecedented change, and that now is the time to act. Their conflict makes it seem like they disagree immensely when really they are different denominations of the same religion.
The Great Divide
Maybe polarization is nothing new to us; maybe what AI brings to the table is magnitude. We can all agree that AI will be some kind of immense force upon the world. What we are uncertain of is exactly how great that force will be and in what direction it will take us.
I think this highlights a divide between rationalist and empiricist types. I suspect that rationalist types show up relatively more often at the extremes, while empiricist types tend to be more moderate. Why? Well, rationalism emphasizes a priori reasoning, giving primacy to deductions derived from foundational principles and reasonable assumptions while being somewhat suspicious of experience. By contrast, empiricism holds that experience is the source of our knowledge, and that reason must rely upon it and be grounded in it. To me, this maps to rationalists having stronger priors than empiricists, and to empiricists being more moved by evidence than rationalists. There are many priors one can have, but we only live in one world; hence my hypothesis.
I should say that I’m a pretty strong empiricist, and the way I’ve presented this so far probably casts rationalism in an unfavorable light. But rationalism is not necessarily bad, and empiricism is not necessarily better. Many of the people who first raised the alarm about AI and predicted the outcomes we see today were people I would consider more rationalist types, and they were able to see further because of their reliance on careful and principled reasoning in the absence of experiential evidence. An advantage of rationalism over empiricism is that the former is not as restricted by past experiences, which is useful for predicting unprecedented events such as the end of the world.
But the strength is also the weakness, and the weakness also the strength. What troubles me about rationalism is how unbounded and fragile it can be; its very persistence in the face of negative evidence leaves it prone to Pascalian flights of fancy. Empiricism is guided by experience to be more humble about its world-models and suspicious of those who make extreme claims.
This might explain some of the divide between the safety/accelerationist communities and most other people and groups. Both safety and accelerationist people reason that AI will bring unprecedented change. But the change they prophesy is abstract—it is based on lines, numbers, graphs, and benchmarks. It tells of fantastical scenarios that could plausibly be, but which have never happened. For those who ground their world-models in hard evidence or personal experience, in stories and images and memories, it is perhaps unsurprising that the safety and accelerationist narratives are viewed with apathy and suspicion.
What’s the right answer here? Well, I don’t know if there is one, at least not right now. But I think what we can do is build coalitions around the areas where we do agree and foster more dialogue between the different communities that share a concern about AI. What we have to remember is that there are many ways of getting to the same place, and that it is in all of our best interests to work together rather than apart.
AI and Uncertainty
A running theme here is that there’s a lot of uncertainty, and a lot of room for uncertainty, about what the future of AI holds. I think it’s tempting to collapse that uncertainty into binaries of certainty. AI will definitely kill us or definitely not kill us. It will be unprecedentedly transformative or it’s just another run-of-the-mill technology. It is completely alien to us or completely understandable.
I think this all-or-nothing thinking tends to weigh against the safety side. If you even put some probability—say, 10%—on AI creating an astronomically bad future, you should strongly support efforts to significantly increase safety measures. By contrast, if you convince yourself to put 0 probability on such a future, it makes more sense to support going full steam ahead and/or denigrate AI safety efforts as crazy talk.
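To make the asymmetry concrete, with illustrative numbers of my own: let p be your probability of an astronomically bad outcome and L its magnitude, so the expected loss is

\[ \mathbb{E}[\text{loss}] = p \cdot L. \]

A safety measure that costs c and cuts the probability from p to p' is worth taking whenever c < (p - p')L. At p = 0.1, even a small reduction justifies enormous costs, because L is astronomical. At p = 0, the benefit (p - p')L is zero at best, so no safety measure can ever pay for itself. Collapsing the uncertainty to zero doesn’t just shade the conclusion; it flips it.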
But I think safety people can still appeal to those who believe that AI definitely won’t lead to an astronomically bad future, that it definitely won’t be insanely transformative, and that it is definitely completely understandable. Even in this restricted scenario, AI could very plausibly lead to massive and unprecedented power/wealth concentration, the development and deployment of incredibly deadly and destructive weapons, and the entrenchment of totalitarian regimes around the world. The recent upending of the cybersecurity world by the mere existence of Claude Mythos is a great example of a more grounded but still highly dangerous risk. It’s the sort of thing that people can really see happening right now, something more than lines on a graph or numbers in a table.
Timeline Dilation
Black holes cause time dilation, where a year near the event horizon could equal decades elsewhere. AI, meanwhile, causes timeline dilation, where individuals close to its event horizon compress the typical human expectation of many decades of life into the course of a few years. Or months. Or…
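(For anyone who wants the metaphor made precise: the standard result for a clock hovering at radius r outside a black hole with Schwarzschild radius r_s is

\[ \Delta t_{\text{far}} = \frac{\Delta \tau_{\text{near}}}{\sqrt{1 - r_s/r}}, \qquad r_s = \frac{2GM}{c^2}, \]

where the numerator is the time the hovering clock experiences and the left-hand side is the time that passes for a distant observer. As r approaches r_s, the factor diverges, so a year near the horizon really can correspond to decades elsewhere.)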
And you can’t just dismiss this compression out of hand. AI is genuinely advancing incredibly quickly, and we genuinely don’t know what the future holds.
But whether or not it’s true, this timeline dilation has a number of deleterious effects. The obvious one is on the personal level. I’ve heard anecdotes of people pushing themselves to insane lengths before burning out, in some cases because their personal doomsday came and went. I’ve heard that there are people who are burning every bit of the future they can on the altar of the present so they can do that much more in the AI space before the singularity hits. To me, that seems pretty alarming, if (sadly) understandable.
But this extends beyond the personal. People who have very short timelines are inclined to work on short-term projects. If you’re very sure that timelines are very short, perhaps on the order of a year or a couple of years, you’re not going to favor projects that work on fundamental technical and governance issues which take a lot of time to bear fruit. You’re going to favor marginal and shallow fixes that get you immediate returns. And I stress that this makes a lot of sense if you are very confident that timelines are very short. But I think there’s a failure mode in which timelines are actually not that short, but because we keep thinking that they’re short, we keep putting out fires and never get around to solving the deeper problems.
The Gravity of Power
The corrupting influence of power is something that deserves its own article. But I cannot talk about how we think about AI without talking about power.
I know some people in the AI safety space who share some deep similarities with me. We want to make our lives mean something, to leave the world a better place than we found it. We long to be part of something greater than ourselves, and we may even feel lost or depressed if we are deprived of that. We want to find communities and people who share this daring compassion, this long-sighted vision. And when we find that, man, there is nothing that is more fulfilling, more wonderful, more joyful than to be with our people; people who, whether together or apart, are destined to do great things.
I suspect some people know someone like this who has gone over to the dark side. And we may be at a loss for how to explain that. How can someone so good and noble fall to the temptation of power?
Recall instrumental convergence: power is useful for almost any goal. To change the world, to leave our mark, and to accomplish great things all require gaining power. The longing to be part of something bigger pushes us to build coalitions and climb hierarchies, in the process gaining power that a lone individual could never dream of. And who tends to receive signals that they are making a positive mark on the world? Those who are in prestigious positions with a lot of power.
It can feel like our motivations are pure, our judgment unclouded, and our actions broadly beneficent. But we cannot trust those feelings and the rationalizations that accompany them to tell us whether we are truly doing the right thing.
It takes a special kind of person to resist the temptations of power once they possess it, especially in an environment under as much pressure as the AI space. It doesn’t just require the absence of greed, of pride, of callousness, of wrong thinking. It requires a positive courage to stand against the system one is a part of, even at the potential expense of everything one has—not just one’s job or one’s equity, but one’s entire life trajectory, one’s community, one’s opportunity to make one’s mark. Some people have what it takes. But you should be brutally honest with yourself about whether you’re one of them before you agree to deal with the devil. To put it bluntly, as this great article from MIRI does:
“Several promising software engineers have asked me: Should I work at a frontier AI lab?
My answer is always ‘No.’”
Misunderstanding Moloch
This also deserves its own article, but I think it’s relevant enough to briefly discuss here.
Ever since Scott Alexander wrote “Meditations on Moloch,” the term “Moloch” has been a watchword in the EA, AI, and rationalist communities. And rightfully so. For it gives voice to a deep and fundamental truth, which is that collective action problems are some of the most recurrent, pernicious, and formidable problems that we have faced, past and present. They form their own singularities, pulling us down into equilibria from which escape seems impossible.
AI presents an especially terrible dilemma. It is precisely the technological acceleration that Scott Alexander warned about; in his words, “[t]he limit of multipolar traps as technology approaches infinity is ‘very bad.’” And yet… surely we must race ahead, because this is too important to let the Enemy reach the singularity first. There are few topics where the Meditations on Moloch are more relevant.
What makes me sad is that I think that we as a community only got part of the message. We internalized that collective action problems are monumental problems that are not the fault of, and cannot be solved by, any single individual. But here I will quote more of Scott Alexander himself on some of the aspects of Moloch I think we missed:
My answer is: Moloch is exactly what the history books say he is. He is the god of child sacrifice, the fiery furnace into which you can toss your babies in exchange for victory in war. He always and everywhere offers the same deal: throw what you love most into the flames, and I can grant you power.
A Molochian victory is no victory at all. It is a game that no one can truly win, only one that we can, in our wisdom, refuse to play; for the history of this world has shown that a Molochian victory can be a terrible, terrible thing. We therefore have a moral obligation to deny Moloch’s deal. And if there is no possibility of doing so, we must make one.
Indeed, foolish deals with Moloch have likely accelerated our race to the precipice. Reading through (part of) Empire of AI by Karen Hao drove home to me just how contingent our current timeline is. The Scaling Hypothesis, which has driven so much of the progress we’ve seen over the past few years, was pretty much derided by everyone except OpenAI. It stands to reason that had OpenAI not aggressively pursued scaling, we might not have had LLMs for at least several more years. How much that would’ve helped us is an unanswerable question, but at the very least it wouldn’t have hurt. Some people at OpenAI weren’t so safety-minded, and would have pursued scaling anyway. But some of the more safety-minded ones considered the possibility that they would be helping to build a Leviathan. In Chapter 5, Hao lays out how Dario Amodei, now the CEO of Anthropic, reasoned through this consideration:
But there was a problem: If OpenAI continued to scale up language models, it could exacerbate the possible dangers it had warned about with GPT-2. Amodei argued to the rest of the company—and Altman agreed—that this did not mean it should shy away from the task. The conclusion was in fact the opposite: OpenAI should scale its language model as fast as possible, Amodei said, but not immediately release it…
It was only a matter of time before other people would start scaling up language models further. That meant the best way to ensure beneficial AGI was for OpenAI to leap ahead and, with the internal lead time, figure out how to make its scaled model safer.
How’s that been working out for us?
Don’t get me wrong. I think Amodei is a largely thoughtful and well-intentioned guy who wants to do the right thing (unlike Altman), and I think that Anthropic is consistently one of the least worst AI companies. But the world we are in now is the result of even the most well-intentioned sacrifices to Moloch. We must then ask ourselves if there is not a better alternative than this Molochian best.
The other gods sit on their dark thrones and think ‘Ha ha, a god who doesn’t even control any hell-monsters or command his worshippers to become killing machines. What a weakling! This is going to be so easy!’ But somehow Elua is still here. No one knows exactly how. And the gods who oppose Him tend to find Themselves meeting with a surprising number of unfortunate accidents.
So I agree with Robin Hanson: This is the dream time. This is a rare confluence of circumstances where we are unusually safe from multipolar traps, and as such weird things like art and science and philosophy and love can flourish.
If we truly lived in a one-shot Prisoner’s Dilemma, then collective action problems would be completely unsolvable. Yet we have seen many historical examples of coordination emerging to solve collective action problems. By modus tollens, we don’t live in the Prisoner’s Dilemma. And sure, that’s a little facetious. But it gestures at something real, something Scott Alexander also points to in Meditations on Moloch: collective action problems aren’t unsolvable. And we—many of us, at least—have the great privilege of living free from death, violence, and torture, in a world where there exist others who are kind, cooperative, and trustworthy. We have hope, and we have the dream time. We cannot afford to give those up.
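The game-theoretic version of that point can be made concrete with a toy simulation. The sketch below is illustrative, with payoff numbers and strategy names of my own choosing: it pits tit-for-tat against unconditional defection in a repeated Prisoner’s Dilemma. In the one-shot game defection dominates, but once the game repeats, conditional cooperators sustain mutual cooperation and cap how badly they can be exploited.

# A toy iterated Prisoner's Dilemma. Payoffs are (mine, theirs);
# C = cooperate, D = defect.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first; afterwards, copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def play(strat_a, strat_b, rounds=100):
    seen_by_a, seen_by_b = [], []  # each player's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(seen_by_a), strat_b(seen_by_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
print(play(always_defect, always_defect))  # (100, 100): the Molochian equilibrium
print(play(tit_for_tat, always_defect))    # (99, 104): exploitation, but capped

That, roughly, is what institutions, treaties, and reputations do for us: they turn one-shot dilemmas into repeated games where cooperation can be a stable strategy.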
As long as the offer’s open, it will be irresistible. So we need to close the offer. Only another god can kill Moloch. We have one on our side, but he needs our help. We should give it to him.
But we can only solve these collective action problems if we believe we can, and if we believe that there will be others who believe in us and will act in good faith. We must, at some point, be willing to take a leap of faith, to refuse the temptation to bow to Moloch and to help Elua instead.
Because if we can’t do that, then we’ve all already lost.