Remmelt
See my explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I post here about preventing unsafe AI.
Note that I'm no longer part of EA, because of overreaches I saw during my time in the community: core people leading technocratic projects with ruinous downside risks, a philosophy centred on influencing consequences rather than enabling collective choice-making, and a culture bent on proselytising for both while not listening deeply enough to integrate other perspectives.
Posts
Hunger strike in front of Anthropic by one guy concerned about AI risk
by Remmelt @ 2025-09-05 | +16 | 0 comments
Anthropic's leading researchers acted as moderate accelerationists
by Remmelt @ 2025-09-01 | +74 | 0 comments
Our bet on whether the AI market will crash
by Remmelt, Marcus Abramovitch 🔸 @ 2025-05-08 | +54 | 0 comments
Who wants to bet me $25k at 1:7 odds that there won't be an AI market crash in...
by Remmelt @ 2025-04-08 | +7 | 0 comments
OpenAI lost $5 billion in 2024 (and its losses are increasing)
by Remmelt @ 2025-03-31 | 0 | 0 comments
We don't want to post again "This might be the last AI Safety Camp"
by Remmelt @ 2025-01-21 | +42 | 0 comments
What do you mean with ‘alignment is solvable in principle’?
by Remmelt @ 2025-01-17 | +10 | 0 comments
Funding Case: AI Safety Camp 11
by Remmelt, Linda Linsefors, Robert Kralisch @ 2024-12-23 | +42 | 0 comments
Ex-OpenAI researcher says OpenAI mass-violated copyright law
by Remmelt @ 2024-10-24 | +11 | 0 comments
If AI is in a bubble and the bubble bursts, what would you do?
by Remmelt @ 2024-08-19 | +28 | 0 comments
Why I think it's net harmful to do technical safety research at AGI labs
by Remmelt @ 2024-02-07 | +42 | 0 comments
We are not alone: many communities want to stop Big Tech from scaling unsafe AI
by Remmelt @ 2023-09-22 | +28 | 0 comments
How teams went about their research at AI Safety Camp edition 8
by Remmelt @ 2023-09-09 | +13 | 0 comments
What did AI Safety’s specific funding of AGI R&D labs lead to?
by Remmelt @ 2023-07-05 | +24 | 0 comments
Anchoring focalism and the Identifiable victim effect: Bias in Evaluating AGI...
by Remmelt @ 2023-01-07 | –2 | 0 comments
Illusion of truth effect and Ambiguity effect: Bias in Evaluating AGI X-Risks
by Remmelt @ 2023-01-05 | +1 | 0 comments
Normalcy bias and Base rate neglect: Bias in Evaluating AGI X-Risks
by Remmelt @ 2023-01-04 | +5 | 0 comments
Status quo bias; System justification: Bias in Evaluating AGI X-Risks
by Remmelt @ 2023-01-03 | +4 | 0 comments
Challenge to the notion that anything is (maybe) possible with AGI
by Remmelt @ 2023-01-01 | –19 | 0 comments
Curse of knowledge and Naive realism: Bias in Evaluating AGI X-Risks
by Remmelt @ 2022-12-31 | +5 | 0 comments
Presumptive Listening: sticking to familiar concepts and missing the outer...
by Remmelt @ 2022-12-27 | +3 | 0 comments
How 'Human-Human' dynamics give way to 'Human-AI' and then 'AI-AI' dynamics
by Remmelt @ 2022-12-27 | +4 | 0 comments
List #3: Why not to assume on prior that AGI-alignment workarounds are...
by Remmelt @ 2022-12-24 | +6 | 0 comments
List #2: Why coordinating to align as humans to not develop AGI is a lot easier...
by Remmelt @ 2022-12-24 | +3 | 0 comments
List #1: Why stopping the development of AGI is hard but doable
by Remmelt @ 2022-12-24 | +24 | 0 comments
Why mechanistic interpretability does not and cannot contribute to long-term AGI...
by Remmelt @ 2022-12-19 | +17 | 0 comments
Two tentative concerns about OpenPhil's Macroeconomic Stabilization Policy work
by Remmelt @ 2022-01-03 | +42 | 0 comments
How teams went about their research at AI Safety Camp edition 5
by Remmelt @ 2021-06-28 | +24 | 0 comments
Delegated agents in practice: How companies might end up selling AI services...
by Remmelt @ 2020-11-26 | +11 | 0 comments
Consider paying me (or another entrepreneur) to create services for effective...
by Remmelt @ 2020-11-03 | +44 | 0 comments
The Values-to-Actions Decision Chain: a lens for improving coordination
by Remmelt @ 2018-06-30 | +34 | 0 comments
Effective Altruism as a Market in Moral Goods – Introduction
by Remmelt @ 2017-08-06 | +2 | 0 comments
Testing an EA network-building strategy in the Netherlands
by Remmelt @ 2017-07-03 | +16 | 0 comments