Akash
AI safety governance/strategy research & field-building.
Formerly a PhD student in clinical psychology @ UPenn, a college student at Harvard, and a summer research fellow at the Happier Lives Institute.
Posts
Chinese scientists acknowledge xrisk & call for international regulatory body...
by Akash @ 2023-11-01 | +31 | 0 comments
Reframing the burden of proof: Companies should prove that models are safe...
by Akash @ 2023-04-25 | +35 | 0 comments
Request to AGI organizations: Share your views on pausing AI progress
by Akash @ 2023-04-11 | +85 | 0 comments
Reliability, Security, and AI risk: Notes from infosec textbook chapter 1
by Akash @ 2023-04-07 | +15 | 0 comments
New survey: 46% of Americans are concerned about extinction from AI; 69% support...
by Akash @ 2023-04-05 | +143 | 0 comments
The Overton Window widens: Examples of AI risk in the media
by Akash @ 2023-03-23 | +112 | 0 comments
The Wizard of Oz Problem: How incentives and narratives can skew our perception...
by Akash @ 2023-03-20 | +16 | 0 comments
AI Governance & Strategy: Priorities, talent gaps, & opportunities
by Akash @ 2023-03-03 | +21 | 0 comments
Qualities that alignment mentors value in junior researchers
by Akash @ 2023-02-14 | +31 | 0 comments
How evals might (or might not) prevent catastrophic risks from AI
by Akash @ 2023-02-07 | +28 | 0 comments
Many AI governance proposals have a tradeoff between usefulness and feasibility
by Akash @ 2023-02-03 | +22 | 0 comments
An overview of some promising work by junior alignment researchers
by Akash @ 2022-12-26 | +10 | 0 comments
Podcast: Tamera Lanham on AI risk, threat models, alignment proposals,...
by Akash @ 2022-12-20 | +14 | 0 comments
12 career advising questions that may (or may not) be helpful for people...
by Akash @ 2022-12-12 | +14 | 0 comments
Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and...
by Akash @ 2022-11-25 | +14 | 0 comments
Announcing AI Alignment Awards: $100k research contests about goal...
by Akash @ 2022-11-22 | +60 | 0 comments
Apply to attend an AI safety workshop in Berkeley (Nov 18-21)
by Akash @ 2022-11-06 | +19 | 0 comments
Instead of technical research, more people should focus on buying time
by Akash @ 2022-11-05 | +107 | 0 comments
Resources that (I think) new alignment researchers should know about
by Akash @ 2022-10-28 | +20 | 0 comments
7 traps that (we think) new alignment researchers often fall into
by Akash @ 2022-09-27 | +73 | 0 comments
Criticism of EA Criticisms: Is the real disagreement about cause prio?
by Akash @ 2022-09-02 | +30 | 0 comments
Community Builder Writing Contest: $20,000 in prizes for reflections
by Akash @ 2022-03-12 | +39 | 0 comments
What are the best (brief) resources to introduce EA & longtermism?
by Akash @ 2021-12-19 | +5 | 0 comments
What are some EA-aligned statements that almost everyone agrees with?
by Akash @ 2020-11-10 | +18 | 0 comments