mariushobbhahn

I recently founded Apollo Research: https://www.apolloresearch.ai/ 

I was previously doing a Ph.D. in ML at the International Max Planck Research School in Tübingen, working part-time with Epoch and doing independent AI safety research.

For more see https://www.mariushobbhahn.com/aboutme/

I subscribe to Crocker's Rules.

Posts

There should be more AI safety orgs
by mariushobbhahn @ 2023-09-21 | +117 | 0 comments
Apollo Research is hiring evals and interpretability engineers & scientists
by mariushobbhahn @ 2023-08-04 | +19 | 0 comments
Announcing Apollo Research
by mariushobbhahn @ 2023-05-30 | +156 | 0 comments
The next decades might be wild
by mariushobbhahn @ 2022-12-15 | +130 | 0 comments
Announcing AI safety Mentors and Mentees
by mariushobbhahn @ 2022-11-23 | +62 | 0 comments
Disagreement with bio anchors that lead to shorter timelines
by mariushobbhahn @ 2022-11-16 | +85 | 0 comments
Some advice on independent research
by mariushobbhahn @ 2022-11-08 | +65 | 0 comments
Lessons learned from talking to >100 academics about AI safety
by mariushobbhahn @ 2022-10-10 | +138 | 0 comments
What success looks like
by mariushobbhahn, MaxRa, Yannick_Muehlhaeuser, JasperGo, slg @ 2022-06-28 | +112 | 0 comments
What is the right ratio between mentorship and direct work for senior EAs?
by mariushobbhahn @ 2022-06-15 | +60 | 0 comments
EA needs to understand its “failures” better
by mariushobbhahn @ 2022-05-24 | +67 | 0 comments
How many EAs failed in high risk, high reward projects?
by mariushobbhahn @ 2022-04-26 | +90 | 0 comments
EA retreats are really easy and effective - The EA South Germany retreat 2022
by mariushobbhahn, Yannick_Muehlhaeuser @ 2022-04-14 | +24 | 0 comments
AI safety starter pack
by mariushobbhahn @ 2022-03-28 | +126 | 0 comments
EA should learn from the Neoliberal movement
by mariushobbhahn @ 2022-03-22 | +12 | 0 comments
Where would we set up the next EA hubs?
by mariushobbhahn, MaxRa, Yannick_Muehlhaeuser, JasperGo @ 2022-03-16 | +55 | 0 comments
There should be an AI safety project board
by mariushobbhahn @ 2022-03-14 | +24 | 0 comments
I want to be replaced
by mariushobbhahn @ 2022-02-01 | +57 | 0 comments
Should GMOs (e.g. golden rice) be a cause area?
by mariushobbhahn @ 2022-01-31 | +106 | 0 comments
How to write better blog posts
by mariushobbhahn @ 2022-01-25 | +79 | 0 comments
AI acceleration from a safety perspective: Trade-offs and considerations
by mariushobbhahn, Tilman @ 2022-01-19 | +12 | 0 comments
What is the role of Bayesian ML for AI alignment/safety?
by mariushobbhahn @ 2022-01-11 | +39 | 0 comments
EA megaprojects continued
by mariushobbhahn, slg, MaxRa, JasperGo, Yannick_Muehlhaeuser @ 2021-12-03 | +183 | 0 comments
When to get off the train to crazy town?
by mariushobbhahn @ 2021-11-22 | +75 | 0 comments
[Discussion] Best intuition pumps for AI safety
by mariushobbhahn @ 2021-11-06 | +10 | 0 comments
Constructive Criticism of Moral Uncertainty (book)
by mariushobbhahn @ 2021-06-04 | +28 | 0 comments
Should Chronic Pain be a cause area?
by mariushobbhahn @ 2021-05-18 | +69 | 0 comments
Thoughts on Personal Finance for Effective Altruists
by mariushobbhahn @ 2021-01-29 | +28 | 0 comments
Machine Learning and Effective Altruism
by mariushobbhahn @ 2021-01-16 | +11 | 0 comments
How much (physical) suffering is there? Part II: Animals
by mariushobbhahn @ 2021-01-10 | +13 | 0 comments
How much (physical) suffering is there? Part I: Humans
by mariushobbhahn @ 2021-01-10 | +10 | 0 comments