Elliott Thornley (EJT)
I work on AI alignment. At the moment, I'm using ideas from decision theory to design and train safer artificial agents. I also work in ethics, focusing on the moral importance of future generations.
You can email me at thornley@mit.edu.
Posts
Towards shutdownable agents via stochastic choice
by Elliott Thornley (EJT) @ 2024-07-08 | +26 | 0 comments
My favourite arguments against person-affecting views
by Elliott Thornley (EJT) @ 2024-04-02 | +84 | 0 comments
Critical-Set Views, Biographical Identity, and the Long Term
by Elliott Thornley (EJT) @ 2024-02-28 | +9 | 0 comments
The Shutdown Problem: Incomplete Preferences as a Solution
by Elliott Thornley (EJT) @ 2024-02-23 | +26 | 0 comments
The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists
by Elliott Thornley (EJT) @ 2023-10-23 | +35 | 0 comments
How much should governments pay to prevent catastrophes? Longtermism’s limited...
by Elliott Thornley (EJT), CarlShulman @ 2023-03-19 | +258 | 0 comments
[Creative Writing Contest] [Referral] Pascal's Mugger Strikes Again
by Elliott Thornley (EJT) @ 2021-09-14 | +5 | 0 comments
The Impossibility of a Satisfactory Population Prospect Axiology
by Elliott Thornley (EJT) @ 2021-05-12 | +36 | 0 comments