Akash

AI safety governance/strategy research & field-building.

Formerly a PhD student in clinical psychology at UPenn, an undergraduate at Harvard, and a summer research fellow at the Happier Lives Institute.

Posts

Verification methods for international AI agreements
by Akash @ 2024-08-31 | +20 | 0 comments
Advice to junior AI governance researchers
by Akash @ 2024-07-08 | +38 | 0 comments
Mitigating extreme AI risks amid rapid progress [Linkpost]
by Akash @ 2024-05-21 | +36 | 0 comments
OpenAI's Preparedness Framework: Praise & Recommendations
by Akash @ 2024-01-02 | +16 | 0 comments
Navigating emotions in an uncertain & confusing world
by Akash @ 2023-11-20 | +33 | 0 comments
Chinese scientists acknowledge xrisk & call for international regulatory body...
by Akash @ 2023-11-01 | +31 | 0 comments
Winners of AI Alignment Awards Research Contest
by Akash @ 2023-07-13 | +49 | 0 comments
Eisenhower's Atoms for Peace Speech
by Akash @ 2023-05-17 | +17 | 0 comments
Discussion about AI Safety funding (FB transcript)
by Akash @ 2023-04-30 | +104 | 0 comments
Reframing the burden of proof: Companies should prove that models are safe...
by Akash @ 2023-04-25 | +35 | 0 comments
DeepMind and Google Brain are merging [Linkpost]
by Akash @ 2023-04-20 | +32 | 0 comments
Request to AGI organizations: Share your views on pausing AI progress
by Akash @ 2023-04-11 | +85 | 0 comments
AI Safety Newsletter #1 [CAIS Linkpost]
by Akash @ 2023-04-10 | +38 | 0 comments
Reliability, Security, and AI risk: Notes from infosec textbook chapter 1
by Akash @ 2023-04-07 | +15 | 0 comments
New survey: 46% of Americans are concerned about extinction from AI; 69% support...
by Akash @ 2023-04-05 | +143 | 0 comments
What would a compute monitoring plan look like? [Linkpost]
by Akash @ 2023-03-26 | +61 | 0 comments
The Overton Window widens: Examples of AI risk in the media
by Akash @ 2023-03-23 | +112 | 0 comments
The Wizard of Oz Problem: How incentives and narratives can skew our perception...
by Akash @ 2023-03-20 | +16 | 0 comments
[Linkpost] Scott Alexander reacts to OpenAI's latest post
by Akash @ 2023-03-11 | +105 | 0 comments
Questions about Conjecture's CoEm proposal
by Akash @ 2023-03-09 | +19 | 0 comments
AI Governance & Strategy: Priorities, talent gaps, & opportunities
by Akash @ 2023-03-03 | +21 | 0 comments
Fighting without hope
by Akash @ 2023-03-01 | +35 | 0 comments
Qualities that alignment mentors value in junior researchers
by Akash @ 2023-02-14 | +31 | 0 comments
4 ways to think about democratizing AI [GovAI Linkpost]
by Akash @ 2023-02-13 | +35 | 0 comments
How evals might (or might not) prevent catastrophic risks from AI
by Akash @ 2023-02-07 | +28 | 0 comments
Many AI governance proposals have a tradeoff between usefulness and feasibility
by Akash @ 2023-02-03 | +22 | 0 comments
Talk to me about your summer/career plans
by Akash @ 2023-01-31 | +31 | 0 comments
Advice I found helpful in 2022
by Akash @ 2023-01-28 | +40 | 0 comments
11 heuristics for choosing (alignment) research projects
by Akash @ 2023-01-27 | +30 | 0 comments
"Status" can be corrosive; here's how I handle it
by Akash @ 2023-01-24 | +22 | 0 comments
Wentworth and Larsen on buying time
by Akash @ 2023-01-09 | +48 | 0 comments
[Linkpost] Jan Leike on three kinds of alignment taxes
by Akash @ 2023-01-06 | +29 | 0 comments
My thoughts on OpenAI's alignment plan
by Akash @ 2022-12-30 | +16 | 0 comments
An overview of some promising work by junior alignment researchers
by Akash @ 2022-12-26 | +10 | 0 comments
Podcast: Tamera Lanham on AI risk, threat models, alignment proposals,...
by Akash @ 2022-12-20 | +14 | 0 comments
12 career advising questions that may (or may not) be helpful for people...
by Akash @ 2022-12-12 | +14 | 0 comments
Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and...
by Akash @ 2022-11-25 | +14 | 0 comments
Winners of the community-building writing contest
by Akash @ 2022-11-25 | +29 | 0 comments
Announcing AI Alignment Awards: $100k research contests about goal...
by Akash @ 2022-11-22 | +60 | 0 comments
Ways to buy time
by Akash @ 2022-11-12 | +47 | 0 comments
Apply to attend an AI safety workshop in Berkeley (Nov 18-21)
by Akash @ 2022-11-06 | +19 | 0 comments
Instead of technical research, more people should focus on buying time
by Akash @ 2022-11-05 | +107 | 0 comments
Resources that (I think) new alignment researchers should know about
by Akash @ 2022-10-28 | +20 | 0 comments
Consider trying Vivek Hebbar's alignment exercises
by Akash @ 2022-10-24 | +16 | 0 comments
Possible miracles
by Akash @ 2022-10-09 | +38 | 0 comments
7 traps that (we think) new alignment researchers often fall into
by Akash @ 2022-09-27 | +73 | 0 comments
Apply for mentorship in AI Safety field-building
by Akash @ 2022-09-17 | +21 | 0 comments
AI Safety field-building projects I'd like to see
by Akash @ 2022-09-11 | +31 | 0 comments
13 background claims about EA
by Akash @ 2022-09-07 | +70 | 0 comments
Criticism of EA Criticisms: Is the real disagreement about cause prio?
by Akash @ 2022-09-02 | +30 | 0 comments
Four questions I ask AI safety researchers
by Akash @ 2022-07-17 | +30 | 0 comments
A summary of every "Highlights from the Sequences" post
by Akash @ 2022-07-15 | +47 | 0 comments
An unofficial Replacing Guilt tier list
by Akash @ 2022-07-02 | +41 | 0 comments
$500 bounty for alignment contest ideas
by Akash @ 2022-06-30 | +18 | 0 comments
A summary of every Replacing Guilt post
by Akash @ 2022-06-30 | +127 | 0 comments
Lifeguards
by Akash @ 2022-06-10 | +69 | 0 comments
Talk to me about your summer plans
by Akash @ 2022-05-04 | +87 | 0 comments
Three Reflections from 101 EA Global Conversations
by Akash @ 2022-04-25 | +128 | 0 comments
Round 1 winners of the community-builder writing contest
by Akash @ 2022-04-22 | +21 | 0 comments
Reflect on Your Career Aptitudes (Exercise)
by Akash @ 2022-04-10 | +16 | 0 comments
Time-Time Tradeoffs
by Akash @ 2022-04-01 | +61 | 0 comments
Questions That Lead to Impactful Conversations
by Akash, kuhanj @ 2022-03-24 | +63 | 0 comments
32 EA Forum Posts about Careers and Jobs (2020-2022)
by Akash @ 2022-03-19 | +30 | 0 comments
Community Builder Writing Contest: $20,000 in prizes for reflections
by Akash @ 2022-03-12 | +39 | 0 comments
We Ran a "Next Steps" Retreat for Intro Fellows
by Akash @ 2022-02-05 | +28 | 0 comments
Announcing a Student Essay Contest on The Precipice
by Akash, Neha @ 2022-01-30 | +55 | 0 comments
Advice I've Found Helpful as I Apply to EA Jobs
by Akash @ 2022-01-23 | +90 | 0 comments
We Should Run More EA Contests
by Akash @ 2022-01-12 | +35 | 0 comments
What are the best (brief) resources to introduce EA & longtermism?
by Akash @ 2021-12-19 | +5 | 0 comments
Six Takeaways from EA Global and EA Retreats
by Akash @ 2021-12-16 | +55 | 0 comments
Motivational Resources for Longtermists
by Akash @ 2021-12-08 | +6 | 0 comments
The Case for Reducing EA Jargon & How to Do It
by Akash @ 2021-11-22 | +27 | 0 comments
What Should be Taught in Workshops for Community Builders?
by Akash @ 2021-11-15 | +15 | 0 comments
Is there an EA Glossary?
by Akash @ 2021-11-02 | +9 | 0 comments
Stanley the Uncertain [Creative Writing Contest]
by Akash @ 2021-10-28 | +33 | 0 comments
Helping AMF create a system for donating stocks/securities
by Akash @ 2021-02-06 | +10 | 0 comments
EAs working at non-EA organizations: What do you do?
by Akash @ 2020-12-10 | +44 | 0 comments
What are some EA-aligned statements that almost everyone agrees with?
by Akash @ 2020-11-10 | +18 | 0 comments
Things I Learned at the EA Student Summit
by Akash @ 2020-10-27 | +148 | 0 comments