Jordan Arel

I have been on a mission to do as much good as possible since I was quite young, and at around age 13 I decided to prioritize reducing X-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.

A few years ago I wrote a book draft, which I was calling “Ways to Save The World” or "Paths to Utopia", that imagined broad, innovative strategies for preventing existential risk and improving the long-term future.

I discovered Effective Altruism in January 2022, while preparing to start a Master's of Social Entrepreneurship degree at the University of Southern California. After a deep dive into EA and rationality, I decided to take a closer look at the possibility of AI-caused X-risk and lock-in, and moved to Berkeley to do longtermist research and community building work.

I am now researching "Deep Reflection": processes for determining how to reach our best achievable future, including interventions such as "The Long Reflection," "Coherent Extrapolated Volition," and "Good Reflective Governance."

Posts

Shortlist of Viatopia Interventions
by Jordan Arel @ 2025-10-31 | +10 | 0 comments
Viatopia and Buy-In
by Jordan Arel @ 2025-10-31 | +7 | 0 comments
Why Viatopia is Important
by Jordan Arel @ 2025-10-31 | +5 | 0 comments
Introduction to Building Cooperative Viatopia: The Case for Longtermist...
by Jordan Arel @ 2025-10-31 | +6 | 0 comments
(outdated version) Shortlist of Longtermist Interventions
by Jordan Arel @ 2025-10-21 | +4 | 0 comments
(outdated version) Viatopia and Buy-In
by Jordan Arel @ 2025-10-21 | +6 | 0 comments
(outdated version) Why Viatopia is Important
by Jordan Arel @ 2025-10-21 | +4 | 0 comments
(outdated version) Introduction to Building Cooperative Viatopia: The Case for...
by Jordan Arel @ 2025-10-21 | +6 | 0 comments
In defense of the goodness of ideas
by Jordan Arel @ 2025-10-18 | +6 | 0 comments
“Momentism”: Ethics for Boltzmann Brains
by Jordan Arel @ 2025-08-05 | +8 | 0 comments
Pragmatic decision theory, causal one-boxing, and how to literally save the...
by Jordan Arel @ 2025-07-28 | +4 | 0 comments
Bill Gates, Charles Koch, et al. Are Giving $1 Billion To Boost Economic...
by Jordan Arel @ 2025-07-19 | +11 | 0 comments
Is Optimal Reflection Competitive with Extinction Risk Reduction? - Requesting...
by Jordan Arel @ 2025-06-29 | +18 | 0 comments
Is there any funding available for (non x-risk) work on improving trajectories...
by Jordan Arel @ 2025-05-29 | +6 | 0 comments
To what extent is AI safety work trying to get AI to reliably and safely do what...
by Jordan Arel @ 2025-05-23 | +12 | 0 comments
A crux against artificial sentience work for the long-term future
by Jordan Arel @ 2025-05-18 | +11 | 0 comments
Announcing “sEAd The Future”: Effective Sperm and Egg Bank
by Jordan Arel @ 2025-04-01 | +4 | 0 comments
Does “Momentism” via Eternal Inflation dominate Longtermism in expectation?
by Jordan Arel @ 2024-08-17 | +20 | 0 comments
Designing Artificial Wisdom: Decision Forecasting AI & Futarchy
by Jordan Arel @ 2024-07-14 | +5 | 0 comments
Designing Artificial Wisdom: GitWise and AlphaWise
by Jordan Arel @ 2024-07-13 | +6 | 0 comments
Designing Artificial Wisdom: The Wise Workflow Research Organization
by Jordan Arel @ 2024-07-12 | +14 | 0 comments
On Artificial Wisdom
by Jordan Arel @ 2024-07-11 | +23 | 0 comments
10 Cruxes of Artificial Sentience
by Jordan Arel @ 2024-07-01 | +31 | 0 comments
What is the easiest/funnest way to build up a comprehensive understanding of AI...
by Jordan Arel @ 2024-04-30 | +14 | 0 comments
What is the most convincing article, video, etc. making the case that AI is an...
by Jordan Arel @ 2023-07-11 | +4 | 0 comments
What AI Take-Over Movies or Books Will Scare Me Into Taking AI Seriously?
by Jordan Arel @ 2023-01-10 | +11 | 0 comments
How Many Lives Does X-Risk Work Save From Nonexistence On Average?
by Jordan Arel @ 2022-12-08 | +34 | 0 comments
AI Safety in a Vulnerable World: Requesting Feedback on Preliminary Thoughts
by Jordan Arel @ 2022-12-06 | +5 | 0 comments
Maybe Utilitarianism Is More Usefully A Theory For Deciding Between Other...
by Jordan Arel @ 2022-11-17 | +6 | 0 comments
Will AI Worldview Prize Funding Be Replaced?
by Jordan Arel @ 2022-11-13 | +26 | 0 comments
Jordan Arel's Quick takes
by Jordan Arel @ 2022-11-09 | +2 | 0 comments
What Criteria Determines Who Gets Into EAG & EAGx?
by Jordan Arel @ 2022-09-26 | +10 | 0 comments
Fine-Grained Karma Voting
by Jordan Arel @ 2022-09-26 | +5 | 0 comments
Why Wasting EA Money is Bad
by Jordan Arel @ 2022-09-22 | +47 | 0 comments
How To Actually Succeed
by Jordan Arel @ 2022-09-12 | +11 | 0 comments
How have nuclear winter models evolved?
by Jordan Arel @ 2022-09-11 | +14 | 0 comments
Is there a “What We Owe The Future” fellowship study guide?
by Jordan Arel @ 2022-09-01 | +8 | 0 comments
Is there any research or forecasts of how likely AI Alignment is going to be a...
by Jordan Arel @ 2022-08-14 | +8 | 0 comments
How I Came To Longtermism On My Own & An Outsider Perspective On EA Longtermism
by Jordan Arel @ 2022-08-07 | +35 | 0 comments
How long does it take to understand AI X-Risk from scratch so that I have a...
by Jordan Arel @ 2022-07-27 | +29 | 0 comments
Is there an EA Discord Group?
by Jordan Arel @ 2022-07-14 | +6 | 0 comments
Is Our Universe A Newcomb’s Paradox Simulation?
by Jordan Arel @ 2022-05-15 | +16 | 0 comments
Help Me Choose A High Impact Career!!!
by Jordan Arel @ 2022-05-06 | +18 | 0 comments
Who are the leading advocates, and what are the top publications on broad...
by Jordan Arel @ 2022-04-28 | +4 | 0 comments
Which Post Idea Is Most Effective?
by Jordan Arel @ 2022-04-25 | +26 | 0 comments