Could a single alien message destroy us?

By Writer @ 2022-11-25T09:58 (+40)

This is a crosspost, probably from LessWrong. Try viewing it there.

mako yass @ 2022-11-25T22:39 (+4)

Since we're already in existential danger from AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:

If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, would have made the same commitment, and our risk would decrease.
But I'm not sure; it's a nontrivial question whether that would be a good deal for us to make: would the reduction in the risk of being subjected to a SETI attack outweigh the expected losses from no longer being allowed to carry out SETI attacks ourselves?
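To make the trade-off concrete, here is a minimal back-of-the-envelope sketch of the expected-value comparison implied above. All of the numbers except the 10% figure are hypothetical placeholders chosen for illustration, not estimates from the comment or the post.

```python
# Minimal sketch: does reading a possibly-unfriendly message lower overall
# existential risk? All parameters except p_unfriendly are hypothetical.

def p_doom_after_reading(p_unfriendly, p_friendly_saves_us, p_doom_baseline):
    """Probability of catastrophe if we read the message.

    Assumes an unfriendly message dooms us outright, and a friendly one
    reduces our baseline (e.g. AI-driven) risk by p_friendly_saves_us.
    """
    p_friendly = 1.0 - p_unfriendly
    return p_unfriendly + p_friendly * p_doom_baseline * (1.0 - p_friendly_saves_us)

p_doom_baseline = 0.3      # hypothetical baseline existential risk (e.g. from AI)
p_unfriendly = 0.1         # the 10% figure from the comment
p_friendly_saves_us = 0.8  # hypothetical chance a friendly message averts other risks

print("Don't read:", p_doom_baseline)
print("Read:      ", p_doom_after_reading(p_unfriendly, p_friendly_saves_us, p_doom_baseline))
```

With these placeholder numbers, reading the message lowers overall risk from 0.3 to roughly 0.15, which is the shape of the argument; the commitment question in the last paragraph would need an analogous comparison against the (unknown) value of being allowed to send such messages ourselves.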

Writer @ 2022-11-25T10:01 (+4)

The cross-posting feature is broken for posts with multiple authors.

While Matthew's co-authorship was still pending approval, the post appeared on the home page, but clicking on it only showed an error message.

Then I moved the post to drafts, and when I tried to interact with it through the three-dot menu on the right side, I got another error message.

Now Matthew doesn't appear as a coauthor here.

Ofer @ 2022-11-25T13:57 (+3)

Haven't read the post, but my answer to the title is "yes". SETI seems like a great example of researchers unilaterally rushing to do things that might be astronomically impactful and are very risky, driven by the fear that someone else will end up snatching the credit and glory for their brilliant idea.

[EDIT: changed "not net-positive" to "very risky".]

Jeroen_W @ 2022-11-25T13:15 (+1)

Great job once again! Loved it :)