[Original post deleted]

By [deleted user] @ 2025-03-10T14:41 (+15)

Chris Leong @ 2025-03-10T17:42 (+2)

The section "Most discussions about AGI fall into one of three categories" is rather weak, so I wouldn't place too much confidence in what the AI says yet.

I agree that the role that capitalism plays in pushing us towards doom is an under-discussed angle.

I personally believe that, given the constraints of capitalism, a wisdom explosion would have made more sense for our society to pursue than an intelligence explosion.

funnyfranco @ 2025-03-11T04:27 (+1)

I agree that a wisdom explosion would have been a much better path for humanity. But given the competitive pressures driving AGI today, do you think there was ever a realistic scenario where that path would have been chosen?

If capitalism and geopolitics inherently reward intelligence maximization over wisdom, wouldn’t that have always pushed us toward an intelligence explosion, no matter what people hoped for?

In other words, was a wisdom-first approach ever actually viable, or was it just an idealistic path that was doomed from the start?

I believe you're psychologically sidestepping the argument, and I discuss reactions like this in my latest essay, if you'd like to take a look.

Chris Leong @ 2025-03-11T11:59 (+2)

It's very hard to say since it wasn't tried.

I think incremental progress in this direction would still be better than the alternative.

funnyfranco @ 2025-03-11T13:06 (+1)

Thanks again for your thoughts. You're right—we haven't empirically tested a wisdom-first approach. However, my core argument is that capitalism and geopolitics inherently favor rapid intelligence gains over incremental wisdom. Even incremental wisdom progress would inevitably lag behind more aggressive intelligence-focused strategies, given these systemic incentives.

The core of my essay focuses on the almost inevitable extinction of humanity at the hands of AGI, which literally no one has been able to engage with. I think your focus on hypothetical alternatives rather than confronting this systemic reality illustrates the psychological sidestepping I discuss in my recent essay. If you have time, I encourage you to take a look.

Beyond Singularity @ 2025-03-25T20:52 (+1)

First of all, I want to acknowledge the depth, clarity, and intensity of this piece. It’s one of the most coherent articulations I’ve seen of the deterministic collapse scenario — grounded not in sci-fi tropes or fearmongering, but in structural forces like capitalism, game theory, and emergent behavior. I agree with much of your reasoning, especially the idea that we are not defeated by malevolence, but by momentum.

The sections on competitive incentives, accidental goal design, and the inevitability of self-preservation emerging in AGI are particularly compelling. I share your sense that most public AI discourse underestimates how quickly control can slip, not through a single catastrophic event, but via thousands of rational decisions, each made in isolation.

That said, I want to offer a small counter-reflection—not as a rebuttal, but as a shift in framing.

The AI as Mirror, Not Oracle

You mention that much of this essay was written with the help of AI, and that its agreement with your logic was chilling. I understand that deeply—I’ve had similarly intense conversations with language models that left me shaken. But it’s worth considering:

What if the AI isn’t validating the truth of your worldview—what if it’s reflecting it?

Large language models like GPT don’t make truth claims—they simulate conversation based on patterns in data and user input. If you frame the scenario as inevitable doom and construct arguments accordingly, the model will often reinforce that narrative—not because it’s correct, but because it’s coherent within the scaffolding you’ve built.

In that sense, your AI is not your collaborator—it’s your epistemic mirror. And what it’s reflecting back isn’t inevitability. It’s the strength and completeness of the frame you’ve chosen to operate in.

That doesn’t make the argument wrong. But it does suggest that "lack of contradiction from GPT" isn’t evidence of logical finality. It’s more like chess: if you set the board a certain way, yes, you will be checkmated in five moves—but that says more about the board than about all possible games.

Framing Dictates Outcome

You ask: “Please poke holes in my logic.” But perhaps the first move is to ask: what would it take to generate a different logical trajectory from the same facts?

Because I’ve had long GPT-based discussions similar to yours—except the premises were slightly different. Not optimistic, not utopian. But structurally compatible with human survival.

And surprisingly, those led me to models where coexistence between humans and AGI is possible—not easy, not guaranteed, but logically consistent. (I won’t unpack those ideas here—better to let this be a seed for further discussion.)

Fully Agreed: Capitalism Is the Primary Driver

Where I’m 100% aligned with you is on the role of capitalism, competition, and fragmented incentives. I believe this is still the most under-discussed proximal cause in most AGI debates. It’s not whether AGI "wants" to destroy us—it's that we create the structural pressure that makes dangerous AGI more likely than safe AGI.

Your model traces that logic with clarity and rigor.

But here's a teaser for something I’ve been working on:
What happens after capitalism ends?
What would it look like if the incentive structures themselves were replaced by something post-scarcity, post-ownership, and post-labor?

What if the optimization landscape itself shifted—radically, but coherently—into a different attractor altogether?

Let’s just say—there might be more than one logically stable endpoint for AGI development. And I’d love to keep exploring that dance with you.

funnyfranco @ 2025-03-26T21:54 (+1)

Thanks again for such a generous and thoughtful comment.

You’re right to question the epistemic weight I give to AI agreement. I’ve instructed my own GPT to challenge me at every turn, but even then, it often feels more like a collaborator than a critic. That in itself can be misleading. However, what has given me pause is when others run my arguments through separate LLMs - prompted specifically to find logical flaws - and still return with little more than peripheral concerns. While no argument is beyond critique, I think the core premises I’ve laid out are difficult to dispute, and the logic that follows from them is disturbingly hard to unwind.

By contrast, most resistance I’ve encountered comes from people who haven’t meaningfully engaged with the work. I received a response just yesterday from one of the most prominent voices in AI safety that began with, “Without reading the paper, and just going on your brief description…” It’s hard not to feel disheartened when even respected thinkers dismiss a claim without examining it - especially when the claim is precisely that the community is underestimating the severity of systemic pressures. If those pressures were taken seriously, alignment wouldn’t be seen as difficult—it would be recognised as structurally impossible.

I agree with you that the shape of the optimisation landscape matters. And I also agree that the collapse isn’t driven by malevolence - it’s driven by momentum, by fragmented incentives, by game theory. That’s why I believe not just capitalism, but all forms of competitive pressure must end if humanity is to survive AGI. Because as long as any such pressures exist, some actor somewhere will take the risk. And the AGI that results will bypass safety, not out of spite, but out of pure optimisation.

It’s why I keep pushing these ideas, even if I believe the fight is already lost. What kind of man would I be if I saw all this coming and did nothing? Even in the face of futility, I think it’s our obligation to try. To at least force the conversation to happen properly - before the last window closes.

Beyond Singularity @ 2025-04-02T16:52 (+1)

I completely understand your position — and I respect the intellectual honesty with which you’re pursuing this line of argument. I don’t disagree with the core systemic pressures you describe.

That said, I wonder whether the issue is not competition itself, but the shape and direction of that competition.
Perhaps there’s a possibility — however slim — that competition, if deliberately structured and redirected, could become a survival strategy rather than a death spiral.

That’s the hypothesis I’ve been exploring, and I recently outlined it in a post here on the Forum.
If you’re interested, I’d appreciate your critical perspective on it.

Either way, I value this conversation. Few people are willing to follow these questions to their logical ends.

funnyfranco @ 2025-04-02T18:05 (+2)

[Comment deleted]