New Book: 'Nexus' by Yuval Noah Harari

By timfarkas @ 2024-10-03T13:54 (+14)

TLDR: Harari wrote a book that explains AGI and superintelligence to a broad audience, which is good news for EA communication! He also develops some interesting viewpoints on the interplay of AI and the world's political systems. This is just an info post containing a short review of concepts I found interesting.

Awesome development that I have not seen discussed on here so far: Yuval Noah Harari just dropped his newest book, 'Nexus', which deals essentially with 21st-century information technologies - algorithms and AI - and how they affect societies and humanity as a whole.

I think this book is great news for two reasons:

  1. Introducing important concepts surrounding AI x-risk to a broad audience in a very accessible yet nuanced and reasonably in-depth manner
  2. Developing and presenting interesting, novel concepts and viewpoints surrounding new information technologies, drawing on his perspective as a historian

Currently at 4.4 stars on Amazon

Making AI x-risk concepts accessible to a broader audience

In my eyes, he does a highly elegant job of introducing concepts such as AI, AGI, superintelligence, the alignment problem, and existential risk to a broad audience, on his own terms as a historian and pop-sci writer!

Just one example of his unique approach: he defines AI as 'alien intelligence', which is a smart move. It greatly facilitates his response to a common knee-jerk objection non-technical people often have to the concept of autonomous, agentic AI: namely, that it could never have feelings and consciousness (as it is not human) and is thus unable to have goals or autonomy, remaining just a 'tool'. By using 'aliens' as an intuition pump for smart but cryptic, incomprehensible, inhuman agents, I think he succeeds in making the apparent contradiction of intelligence without human feelings or consciousness much more intuitive.

Another interesting move is his defining the 'alignment problem' very broadly as acting in a way that is locally rational but not aligned with a global goal. His first examples are historical: a military commander encountering enemy insurgents firing at him from within a religious institution may achieve his local objective (take out the enemies) by bombing the building, but may thereby endanger the global goal (defeat the enemy country) by weakening public support for the war effort through public outcry. Only after giving a few 'human' examples of misalignment does he bridge to more intuitively 'outlandish' scenarios such as Bostrom's hypothetical paperclip maximiser (locally 'winning' for the paperclip company but globally destroying all human value in the process) in the context of AI misalignment. I find this approach of widening the concept of misalignment to many relatable human examples elegant, as it again makes the concept of AI misalignment much more intuitive.

In my eyes, Harari thus succeeds in making communication of the risks of AI to a broad audience more down-to-earth, relatable and intuitive, while retaining nuance and depth.

'Sapiens', one of his previous books, sold 25 million copies worldwide and, in my eyes, positively influenced societal democratic discourse. Similarly, Nexus will likely be read by millions of people and give them a more competent and nuanced understanding of the risks of AGI - a big win for AI x-risk communication!

Novel concepts and viewpoints

At the same time, even as someone who is well-versed in tech bubble discourse surrounding AI development, I still feel like I got a good deal of value reading this book!

While a sizeable portion of it deals with AI x-risk, the majority of its content is dedicated to analysing the impact of information technologies on political systems in the past, present and future. I found this interesting, as these points seem somewhat neglected in EA / tech bubble discourse, which mostly focuses on the technical and philosophical implications of AI, AI alignment and AI x-risk, and much less on historical and political perspectives.

Information and political systems

For starters, the first half mostly deals with the concepts of 'information' and 'information networks'. He treats these not through the lens of information theory but from a memetic viewpoint, drawing on his previous books' concept of stories as a kind of social technology essential for early human progress. That is, information is treated as 'that which brings people together and crafts common myths / world models'.

Based on this, he develops an interesting framework of viewing the archetypal ends of the political spectrum - autocracy/totalitarianism and democracy - as fundamentally different in how they structure information flow. Roughly, autocracies and totalitarian regimes aim to centralise all information and suppress dissent, thus prioritising order over error-correction or truth. Democracies keep information flow more decentralised, with dissent (from a free press, independent institutions, or citizens exercising free speech) a routine part of daily functioning, thus prioritising error-correction and truth over order.

With this framework, he analyses how the development of information technologies (language and memes -> writing -> printing press -> telecommunication -> computers, internet and algorithms -> AI) affected the feasibility of these political orders over time. One essential claim of the book is that advances in information technology were a catalysing factor in the emergence of both modern large-scale democracies and totalitarian systems.

For instance, democracy was historically constrained to small city-states for thousands of years, until the advent of mass communication technologies (newspapers and radio) enabled democratic discourse at larger scales, giving rise to modern large-scale democracies like the US.

At the same time, while autocracies were feasible on large scales much earlier, autocrats' control over their populations was always severely limited by the bandwidth and latency of information spread. Due to this limited control, coups, revolutions, and assassinations were commonplace. The modern advent of telecommunication technologies was a turning point for autocracies, too, unlocking totalitarian control over all possible political rivals and 'problematic' parts of the population by enabling large-scale spy and secret-police networks.

Applying this perspective to recent developments in information technologies, he concludes that further advances in AI are likely to disproportionately favour totalitarian systems' control over their populations by enabling unprecedented levels of surveillance, dissident detection, and suppression of dissent.

Furthermore, he argues that the advance of information technology does not by itself favour truthful information; historically, it has rather disproportionately amplified the spread of inflammatory or outrageous content. His examples range from the printing press enabling the early modern witch-hunt crazes in much of Europe by spreading misinformation about a global witch conspiracy, to present-day Facebook algorithms playing a determining role in enabling and fuelling an ethnic cleansing campaign against the Muslim Rohingya minority in Myanmar - with tens of thousands murdered in both cases.

He thus makes the point that, by default, the progress of information technologies is likely to destabilise democratic systems and strengthen authoritarian ones. He makes the case that it is no accident that the decline of democratic discourse in the Western world, characterised by increasing polarisation and a rise of populism, has coincided with the widespread adoption of social media algorithms optimising for engagement rather than truth or democratic value.

In conclusion, even setting aside AI x-risk, he paints a much less optimistic and more nuanced picture of technological progress than is often encountered in techno-optimist 'lines going up' circles: historically, technological revolutions such as the printing press, the industrial revolution, or telecommunication have tremendously improved the lives of millions of people, but have also gone hand in hand with the most terrible failed experiments of history, such as early modern witch hunts, colonialist imperialism, Nazi Germany, or the Stalinist Soviet Union. At the same time, he still allows hope that liberal democratic systems can weather these changes, citing democracies like the US or the UK that withstood the appeal of these modern totalitarian systems and self-corrected historic mistakes such as slavery and imperialism.

Thus, this book left me motivated to stay alert, not take our political systems and freedoms for granted, and fight for the maintenance of democratic and liberal values in the Western world. It also made explicit the unsolved problems posed by engagement-maximising social media algorithms, misinformation bot networks, and the lack of truth-seeking error correction in media, giving me a desire to solve them. dem/acc anyone?


CB🔸 @ 2024-10-03T16:39 (+4)

Thanks for the clear summary! Good to know that Harari decided to write on this topic; his way of presenting things is often really engaging.

Is there a section on the impact of AI on animals? This is a topic of great importance that he probably cares about as well.

timfarkas @ 2024-10-04T00:44 (+4)

Thanks for the kind words! While animal ethics is mentioned a few times to illustrate points about changing ethical views and the like, I can't remember a more extensive treatment of animal rights or their intersection with AI.