Remmelt

See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of the overreaches of the community and its philosophy. I still post here about AI safety.

Posts

Why Stop AI is barricading OpenAI
by Remmelt @ 2024-10-14 | –20 | 0 comments
An AI crash is our best bet for restricting AI
by Remmelt @ 2024-10-11 | +15 | 0 comments
Who looked into extreme nuclear meltdowns?
by Remmelt @ 2024-09-01 | +4 | 0 comments
Anthropic is being sued for copying books to train Claude
by Remmelt @ 2024-08-31 | +3 | 0 comments
Leverage points for a pause
by Remmelt @ 2024-08-28 | +6 | 0 comments
Some reasons to start a project to stop harmful AI
by Remmelt @ 2024-08-22 | +5 | 0 comments
If AI is in a bubble and the bubble bursts, what would you do?
by Remmelt @ 2024-08-19 | +28 | 0 comments
Lessons from the FDA for AI
by Remmelt @ 2024-08-02 | +6 | 0 comments
What is AI Safety’s line of retreat?
by Remmelt @ 2024-07-28 | +4 | 0 comments
Fifteen Lawsuits against OpenAI
by Remmelt @ 2024-03-09 | +54 | 0 comments
Why I think it's net harmful to do technical safety research at AGI labs
by Remmelt @ 2024-02-07 | +41 | 0 comments
This might be the last AI Safety Camp
by Remmelt, Linda Linsefors @ 2024-01-24 | +87 | 0 comments
The convergent dynamic we missed
by Remmelt @ 2023-12-12 | +2 | 0 comments
Funding case: AI Safety Camp
by Remmelt, Linda Linsefors @ 2023-12-12 | +45 | 0 comments
My first conversation with Annie Altman
by Remmelt @ 2023-11-21 | 0 | 0 comments
Why a Mars colony would lead to a first strike situation
by Remmelt @ 2023-10-04 | –18 | 0 comments
We are not alone: many communities want to stop Big Tech from scaling unsafe AI
by Remmelt @ 2023-09-22 | +28 | 0 comments
How teams went about their research at AI Safety Camp edition 8
by Remmelt @ 2023-09-09 | +13 | 0 comments
4 types of AGI selection, and how to constrain them
by Remmelt @ 2023-08-09 | +7 | 0 comments
What did AI Safety’s specific funding of AGI R&D labs lead to?
by Remmelt @ 2023-07-05 | +24 | 0 comments
The Control Problem: Unsolved or Unsolvable?
by Remmelt @ 2023-06-02 | +4 | 0 comments
Anchoring focalism and the Identifiable victim effect: Bias in Evaluating AGI...
by Remmelt @ 2023-01-07 | –2 | 0 comments
Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks
by Remmelt @ 2023-01-05 | +1 | 0 comments
Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks
by Remmelt @ 2023-01-04 | +5 | 0 comments
Status quo bias; System justification
by Remmelt @ 2023-01-03 | +4 | 0 comments
Belief Bias: Bias in Evaluating AGI X-Risks
by Remmelt @ 2023-01-02 | +5 | 0 comments
Challenge to the notion that anything is (maybe) possible with AGI
by Remmelt @ 2023-01-01 | –19 | 0 comments
Curse of knowledge and Naive realism: Bias in Evaluating AGI X-Risks
by Remmelt @ 2022-12-31 | +5 | 0 comments
Reactive devaluation: Bias in Evaluating AGI X-Risks
by Remmelt @ 2022-12-30 | +2 | 0 comments
Bandwagon effect: Bias in Evaluating AGI X-Risks
by Remmelt @ 2022-12-28 | +4 | 0 comments
Presumptive Listening: sticking to familiar concepts and missing the outer...
by Remmelt @ 2022-12-27 | +3 | 0 comments
Mere exposure effect: Bias in Evaluating AGI X-Risks
by Remmelt @ 2022-12-27 | +4 | 0 comments
Institutions Cannot Restrain Dark-Triad AI Exploitation
by Remmelt @ 2022-12-27 | +8 | 0 comments
Introduction: Bias in Evaluating AGI X-Risks
by Remmelt @ 2022-12-27 | +4 | 0 comments
How 'Human-Human' dynamics give way to 'Human-AI' and then 'AI-AI' dynamics
by Remmelt @ 2022-12-27 | +4 | 0 comments
Nine Points of Collective Insanity
by Remmelt @ 2022-12-27 | +1 | 0 comments
List #3: Why not to assume on prior that AGI-alignment workarounds are...
by Remmelt @ 2022-12-24 | +6 | 0 comments
List #2: Why coordinating to align as humans to not develop AGI is a lot easier...
by Remmelt @ 2022-12-24 | +3 | 0 comments
List #1: Why stopping the development of AGI is hard but doable
by Remmelt @ 2022-12-24 | +24 | 0 comments
Why mechanistic interpretability does not and cannot contribute to long-term AGI...
by Remmelt @ 2022-12-19 | +17 | 0 comments
Two tentative concerns about OpenPhil's Macroeconomic Stabilization Policy work
by Remmelt @ 2022-01-03 | +42 | 0 comments
How teams went about their research at AI Safety Camp edition 5
by Remmelt @ 2021-06-28 | +24 | 0 comments
Some blindspots in rationality and effective altruism
by Remmelt @ 2021-03-21 | +53 | 0 comments
A parable of brightspots and blindspots
by Remmelt @ 2021-03-21 | +10 | 0 comments
Are we actually improving decision-making?
by Remmelt @ 2021-02-04 | +22 | 0 comments
Delegated agents in practice: How companies might end up selling AI services...
by Remmelt @ 2020-11-26 | +11 | 0 comments
Consider paying me (or another entrepreneur) to create services for effective...
by Remmelt @ 2020-11-03 | +44 | 0 comments
The Values-to-Actions Decision Chain: a lens for improving coordination
by Remmelt @ 2018-06-30 | +33 | 0 comments
The first AI Safety Camp & onwards
by Remmelt @ 2018-06-07 | +25 | 0 comments
The Values-to-Actions Decision Chain: a rough model
by Remmelt @ 2018-03-02 | +1 | 0 comments
Proposal for the AI Safety Research Camp
by Remmelt @ 2018-02-02 | +16 | 0 comments
Reflections on community building in the Netherlands
by Remmelt @ 2017-11-02 | +10 | 0 comments
Effective Altruism as a Market in Moral Goods – Introduction
by Remmelt @ 2017-08-06 | +2 | 0 comments
Testing an EA network-building strategy in the Netherlands
by Remmelt @ 2017-07-03 | +16 | 0 comments