So8res
Posts
Apocalypse insurance, and the hardline libertarian take on AI risk
by So8res @ 2023-11-28 | +21 | 0 comments
Ability to solve long-horizon tasks correlates with wanting things in the...
by So8res @ 2023-11-24 | +38 | 0 comments
Thoughts on the AI Safety Summit company policy requests and responses
by So8res @ 2023-10-31 | +42 | 0 comments
AI as a science, and three obstacles to alignment strategies
by So8res @ 2023-10-25 | +41 | 0 comments
If interpretability research goes well, it may get dangerous
by So8res @ 2023-04-03 | +33 | 0 comments
A rough and incomplete review of some of John Wentworth's research
by So8res @ 2023-03-28 | +27 | 0 comments
A stylized dialogue on John Wentworth's claims about markets and optimization
by So8res @ 2023-03-25 | +18 | 0 comments
Truth and Advantage: Response to a draft of "AI safety seems hard to measure"
by So8res @ 2023-03-22 | +11 | 0 comments
Focus on the places where you feel shocked everyone's dropping the ball
by So8res @ 2023-02-02 | +92 | 0 comments
How could we know that an AGI system will have good consequences?
by So8res @ 2022-11-07 | +25 | 0 comments
Superintelligent AI is necessary for an amazing future, but far from sufficient
by So8res @ 2022-10-31 | +35 | 0 comments
Decision theory does not imply that we get to have nice things
by So8res @ 2022-10-18 | +36 | 0 comments
Contra shard theory, in the context of the diamond maximizer problem
by So8res @ 2022-10-13 | +27 | 0 comments
Where I currently disagree with Ryan Greenblatt’s version of the ELK approach
by So8res @ 2022-09-29 | +21 | 0 comments
Brainstorm of things that could force an AI team to burn their lead
by So8res @ 2022-07-25 | +26 | 0 comments
On how various plans miss the hard bits of the alignment challenge
by So8res @ 2022-07-12 | +125 | 0 comments
A central AI alignment problem: capabilities generalization, and the sharp left...
by So8res @ 2022-06-15 | +51 | 0 comments