Knowledge, Reasoning, and Superintelligence

By Owen Cotton-Barratt @ 2025-03-26T23:28 (+21)

This is a linkpost to https://strangecities.substack.com/p/knowledge-reasoning-and-superintelligence

Chris Leong @ 2025-03-27T03:38 (+6)

In retrospect, it seems that LLMs were initially successful because they allowed engineers to produce certain capabilities in a way that leaned almost maximally on crystallized knowledge and minimally on fluid intelligence.

It appears that LLMs have continued to be successful because we've gradually been able to get them to rely more on fluid intelligence.

Oliver Sourbut @ 2025-04-23T13:31 (+5)

(cross-posted on LW)

Love this!

As presaged in our verbal discussion, my top conceptual complement would be to emphasise exploration/experimentation as central to the knowledge production loop: the cycle of 'developing good taste to plan better experiments to improve taste (and planning model)' is critical (indispensable?) for 'producing new knowledge which is very helpful by the standards of human civilization' on any kind of meaningful timescale.

This is because just flailing, or even just 'doing stuff', gets you some novelty of observations. But directedly seeking informative circumstances at the boundaries of the known - which includes making novel, unpredictable events happen, getting equipped with richer means to observe and record them, and perhaps preparing to deliberatively extract insight - turns out to mine vastly more insight per resource (time, materials, etc.). Hence science, but also hence individual human and animal playfulness, curiosity, adversarial exercises and drills (self-play-ish), and whatnot.

Said another way, maybe I'd characterise 'the way that fluid intelligence and crystallised intelligence synergise in the knowledge production loop' as 'directed exploration/experimentation'?
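As a toy illustration of the 'insight per resource' point (a minimal sketch of my own, not from the comment above; the threshold-finding setup is invented purely for illustration): consider locating an unknown threshold t in [0, 1] to precision eps. Undirected random probing needs on the order of 1/eps queries, while directed probing at the boundary of the known - binary search, always querying where uncertainty is greatest - needs only about log2(1/eps).

```python
# Toy comparison (illustrative only): locating an unknown threshold t in [0, 1].
# Undirected random probing vs directed probing at the point of greatest
# uncertainty (binary search on the current bracket).
import random

def random_probe(t: float, eps: float, rng: random.Random) -> int:
    """Probe uniformly at random until t is bracketed to within eps."""
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        x = rng.random()
        queries += 1
        if x <= t:
            lo = max(lo, x)  # a sample is only informative if it tightens the bracket
        else:
            hi = min(hi, x)
    return queries

def directed_probe(t: float, eps: float) -> int:
    """Always probe the midpoint of the bracket: the most informative query."""
    lo, hi = 0.0, 1.0
    queries = 0
    while hi - lo > eps:
        x = (lo + hi) / 2
        queries += 1
        if x <= t:
            lo = x
        else:
            hi = x
    return queries

rng = random.Random(0)
t, eps = 0.37, 1e-3
trials = [random_probe(t, eps, rng) for _ in range(100)]
print("random probing, mean queries:", sum(trials) / len(trials))  # O(1/eps): thousands
print("directed probing, queries:", directed_probe(t, eps))        # O(log 1/eps): 10
```

The directed strategy is of course a caricature of experimental taste, but it shows why choosing each experiment to maximally shrink the unknown can beat undirected novelty-seeking by orders of magnitude.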

Having said that, I don't necessarily think these capacities need to reside 'in the same mind', just as contemporary human orgs get more of this done, and more effectively, than individuals can. But the pieces do need to be fitted to each other (a physicist with great physics taste usually can't complement a bio lab very well without first becoming a person with great bio taste).

SummaryBot @ 2025-03-28T16:47 (+1)

Executive summary: The post argues that understanding the distinction between crystallized and fluid intelligence is key to analyzing the development and future trajectory of AI systems, including the potential dynamics of an intelligence explosion and how superintelligent systems might evolve and be governed.

Key points:

  1. Intelligence has at least two distinct dimensions—crystallized (stored knowledge) and fluid (real-time reasoning)—which apply to both humans and AI systems.
  2. AI systems like AlphaGo and current LLMs use a knowledge production loop, where improved knowledge boosts performance and generates further knowledge, enabling recursive improvement (see the toy sketch after this list).
  3. Crystallized intelligence is necessary for performance and is likely to remain crucial even in superintelligent systems, since deriving everything from scratch is inefficient.
  4. Future systems may differ significantly in their levels of crystallized vs fluid intelligence, raising scenarios like a "naive genius" or a highly knowledgeable but shallow reasoner.
  5. A second loop—focused on improving fluid intelligence algorithms themselves—may drive the explosive dynamics of an intelligence explosion, but might be slower or require many steps of knowledge accumulation first.
  6. Open questions include how to govern AI knowledge creation and access, whether agentic systems are required for automated research, and how this framework can inform differential progress and safety paradigms.
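To make point 2 concrete, here is a runnable toy (my own schematic under simple assumptions, not from the post): a tabular Q-learner on a small chain world, where the Q-table plays the role of crystallized knowledge, acting with it is the fluid step, and the experience generated is folded back into the table, closing the loop.

```python
# Runnable toy of the knowledge production loop in point 2 (my own schematic,
# not from the post). The Q-table is the crystallized knowledge; acting with it
# is the fluid step; the experience generated is folded back into the table.
import random

N = 10                                  # chain of states 0..N-1, goal at N-1
Q = [[1.0, 1.0] for _ in range(N)]      # crystallized knowledge, optimistically
                                        # initialized so the greedy policy explores
rng = random.Random(0)

def episode(epsilon: float = 0.1) -> int:
    """Act using current knowledge, crystallizing what each step reveals."""
    s, steps = 0, 0
    while s != N - 1 and steps < 200:
        if rng.random() < epsilon:              # occasional undirected exploration
            a = rng.randrange(2)
        else:                                   # exploit stored knowledge
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == N - 1 else 0.0
        # knowledge production: fold the new observation back into the store
        target = r if s2 == N - 1 else r + 0.9 * max(Q[s2])  # terminal value is 0
        Q[s][a] += 0.5 * (target - Q[s][a])
        s, steps = s2, steps + 1
    return steps

for phase in range(5):
    lengths = [episode() for _ in range(50)]
    print(f"phase {phase}: mean steps to goal = {sum(lengths) / len(lengths):.1f}")
# Mean episode length falls across phases: better knowledge -> better performance
# -> more informative experience -> better knowledge.
```

This is tabular RL rather than frontier AI, but the loop has the shape the summary describes: better stored knowledge yields better performance, which yields more informative experience, which in turn improves the stored knowledge.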

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.