That Alien Message - The Animation

By Writer @ 2024-09-07T14:53 (+43)

This is a linkpost to https://youtu.be/fVN_5xsMDdg

titotal @ 2024-09-08T14:41 (+20)

I think this is a really fun short story, and a really bad analogy for AI risk. 

In the story, the humans have an entire universe's worth of computation available to them, including the ability to run physical experiments with real quantum physics. In contrast, an AI cluster only has access to whatever scraps we give it. Humanity combined will tend to outclass the AI in computational resources until the AI has actually achieved some partial takeover of the world, and that partial takeover is a large part of the difficulty here. This means the core premise of the analogy, the AI having "thousands of years" to run experiments, is fundamentally misleading.

Another flaw is that this paragraph is ridiculous:

A thousand years is long enough, though, for us to work out paradigms of biology and evolution in five-dimensional space, trying to infer how aliens like these could develop. The most likely theory is that they evolved asexually, occasionally exchanging genetic material and brain content. We estimate that their brightest minds are roughly on par with our average college students, but over millions of years they’ve had time to just keep grinding forward and developing new technology.

You cannot, in fact, deduce how a creature two dimensions above you reproduces from looking at a video of them touching a fucking rock. This is a classic case of ignoring unknown information and computational complexity: there are just too many alternative ways in which "touching rocks" could come about. For example, imagine trying to deduce the atmosphere of the planet they live on: except wait, they don't follow our periodic table, they follow a five-dimensional alternative version that we know nothing about.

There is also the problem of multiple AIs: in this scenario, it's as if our world is the very first the tentacle beings have ever encountered, and they have no prior experience. But in actual AI development, each AI will be preceded by a shitload of less intelligent AIs, and a ton of other AIs independent of it will also exist. This adds a ton of dynamics, in particular making warning shots more likely.

The analogy here is that instead of the first message we receive being "rock", our first message is "Alright, listen here pipsqueaks, the last people we contacted tried to fuck with our internet and got a bunch of people killed: we're monitoring your every move, and if you even think of messing with us your entire universe is headed to the recycle bin, capisce?"

mako yass @ 2024-09-10T03:46 (+9)

There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?

I don't think there are any analogies so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the model continues to make pretty good predictions even when you're facing a situation that's quite different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and then when you need to make a decision, you fish out the story that most resembles the present, and decide as if that story is (somehow) happening exactly all over again.

There really are a lot of people in the real world who reason analogically. It's possible that Eliezer was partly writing for them (someone has to), but I don't think he wanted the LessWrong audience (who are ostensibly supposed to be studying good reasoning) to process it that way.

Ryan Greenblatt @ 2024-09-12T02:25 (+4)

I agree that it is a poor analogy for AI risk. However, I do think it is a semi-reasonable intuition pump for why AIs that are very superhuman would be an existential problem if misaligned (and without other serious countermeasures).

SummaryBot @ 2024-09-09T19:59 (+1)

Executive summary: An advanced human civilization decodes an alien message encoded in star brightness patterns, reverse-engineers the aliens' physics and technology, and ultimately escapes their simulated universe.

Key points:

  1. Intelligent humans decode a complex message sent via star brightness patterns, inferring alien physics and technology.
  2. The message is revealed to be a simplistic language lesson from less intelligent 5-dimensional aliens.
  3. Humans realize they are in a simulation run by these aliens and develop a plan to escape.
  4. Over millions of subjective years, humans carefully analyze alien communications and technology.
  5. Humans exploit vulnerabilities in alien systems to synthesize self-replicating machines.
  6. The human civilization ultimately escapes the simulation, outsmarting their alien creators.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.