Towards the Operationalization of Philosophy & Wisdom
By Thane Ruthenis @ 2024-10-28T19:45 (+1)
This is a linkpost to https://aiimpacts.org/towards-the-operationalization-of-philosophy-wisdom/
SummaryBot @ 2024-10-28T20:44 (+1)
Executive summary: Philosophy and wisdom can be operationalized through formal frameworks, with philosophy as the derivation of novel ontologies that decompose reality into separately studiable domains, and wisdom as meta-level cognitive heuristics that predict the real-world consequences of using object-level heuristics.
Key points:
- Philosophy involves deriving ontologies that allow decomposing complex domains into simpler subdomains, similar to John Wentworth's natural abstractions framework (see the first sketch after this list). This process is computationally demanding but convergent across agents.
- Wisdom operates as "inversions of inversions": it takes object-level cognitive heuristics as input and predicts their actual consequences, and is often stored as implicit knowledge or common sense rather than explicit reasoning (see the second sketch after this list).
- AGIs would necessarily develop philosophical competence and wisdom, since both are crucial for efficient reasoning, but they may not address philosophical incompetence in their human operators unless specifically designed to do so.
- Operationalizing metaphilosophy (moving it outside philosophy proper) would allow scaling philosophical work through standardization, measurement, and specialized training.
- Automating philosophy is likely AGI-complete, as it requires deriving novel ontologies. Current AI tools like LLMs lack this capability, though research on natural abstractions offers a promising direction.
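The decomposition idea in the first key point can be made concrete with a toy sketch. Everything below (the `decompose_domain` function, the correlation-threshold criterion, and the example data) is an illustrative assumption rather than code or a method from the post: it treats "deriving an ontology" as partitioning a domain's variables into nearly-independent clusters, each of which can then be studied separately.

```python
# Toy sketch (not from the post): group a domain's variables into clusters that
# are strongly correlated internally and weakly correlated across clusters.
import numpy as np

def decompose_domain(samples: np.ndarray, threshold: float = 0.3) -> list[set[int]]:
    """Partition the columns of `samples` into approximately independent clusters."""
    corr = np.abs(np.corrcoef(samples, rowvar=False))
    n = corr.shape[0]
    parent = list(range(n))

    def find(i: int) -> int:
        # Union-find with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] > threshold:  # "these variables belong to the same subdomain"
                parent[find(i)] = find(j)

    clusters: dict[int, set[int]] = {}
    for i in range(n):
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

# Example: two independent two-variable subsystems plus one isolated variable.
rng = np.random.default_rng(0)
a = rng.normal(size=1000); b = a + 0.1 * rng.normal(size=1000)
c = rng.normal(size=1000); d = c + 0.1 * rng.normal(size=1000)
e = rng.normal(size=1000)
print(decompose_domain(np.column_stack([a, b, c, d, e])))
# -> something like [{0, 1}, {2, 3}, {4}]
```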
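The "inversions of inversions" framing of wisdom can likewise be read as a higher-order function: an object-level heuristic maps situations to actions, and a wise meta-heuristic takes that heuristic itself as input and predicts what actually happens if the agent relies on it. The sketch below is my own minimal illustration under that reading; `wise_check`, `ConsequenceForecast`, and the toy trust rule are hypothetical names, not anything proposed in the post.

```python
# Minimal sketch of "wisdom" as a meta-level heuristic over object-level heuristics.
from dataclasses import dataclass
from typing import Callable

Situation = dict  # illustrative stand-in for a description of the world-state
Action = str

@dataclass
class ConsequenceForecast:
    expected_outcome: str
    trust_heuristic: bool  # "common sense" verdict: is the shortcut safe to use here?

ObjectHeuristic = Callable[[Situation], Action]

def wise_check(heuristic: ObjectHeuristic, situation: Situation) -> ConsequenceForecast:
    """Meta-level heuristic: predict the real-world consequences of acting on
    the object-level heuristic's output in this situation."""
    action = heuristic(situation)
    if situation.get("stakes") == "high" and situation.get("familiar", True) is False:
        return ConsequenceForecast(
            expected_outcome=f"'{action}' may misfire outside its usual context",
            trust_heuristic=False,
        )
    return ConsequenceForecast(
        expected_outcome=f"'{action}' likely works as intended",
        trust_heuristic=True,
    )

# Example: an object-level rule of thumb, overridden in a high-stakes, unfamiliar case.
always_delegate: ObjectHeuristic = lambda s: "delegate to the usual expert"
print(wise_check(always_delegate, {"stakes": "high", "familiar": False}))
```

The point of the type signature is simply that the meta-heuristic consumes the object-level heuristic rather than the situation alone, which matches the summary's description of wisdom as predicting the consequences of using a given heuristic.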
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.