Knowledge, Reasoning, and Superintelligence

By Owen Cotton-Barratt @ 2025-03-26T23:28 (+17)

This is a linkpost to https://strangecities.substack.com/p/knowledge-reasoning-and-superintelligence


Chris Leong @ 2025-03-27T03:38 (+6)

In retrospect, it seems that LLMs were initially successful because they allowed engineers to produce certain capabilities in a way that leaned almost maximally on crystallized knowledge and minimally on fluid intelligence.

It appears that LLMs have continued to be successful because we've gradually been able to get them to rely more on fluid intelligence.

SummaryBot @ 2025-03-28T16:47 (+1)

Executive summary: The post argues that understanding the distinction between crystallized and fluid intelligence is key to analyzing the development and future trajectory of AI systems, including the potential dynamics of an intelligence explosion and how superintelligent systems might evolve and be governed.

Key points:

  1. Intelligence has at least two distinct dimensions—crystallized (stored knowledge) and fluid (real-time reasoning)—which apply to both humans and AI systems.
  2. AI systems like AlphaGo and current LLMs use a knowledge production loop, where improved knowledge boosts performance and generates further knowledge, enabling recursive improvement.
  3. Crystallized intelligence is necessary for performance, and likely to remain crucial even in superintelligent systems, as deriving everything from scratch is inefficient.
  4. Future systems may differ significantly in their levels of crystallized vs. fluid intelligence, raising scenarios like a "naive genius" with strong reasoning but little knowledge, or a highly knowledgeable but shallow reasoner.
  5. A second loop—focused on improving fluid intelligence algorithms themselves—may drive the explosive dynamics of an intelligence explosion, but might be slower or require many steps of knowledge accumulation first.
  6. Open questions include how to govern AI knowledge creation and access, whether agentic systems are required for automated research, and how this framework can inform differential progress and safety paradigms.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.