MikhailSamin

Are you interested in AI X-risk reduction and strategies? Do you have experience in comms or policy? Let’s chat!

aigsi.org develops educational materials and ads that efficiently communicate core AI safety ideas to specific demographics, with a focus on producing a correct understanding of why smarter-than-human AI poses a risk of extinction. We aim to increase and leverage public understanding of AI and existential risk to improve the chance that institutions address x-risk.

Early results include ads that achieve a cost of $0.10 per click (to a website explaining the technical details of why AI experts are worried about extinction risk from AI) and $0.05 per engagement on ads that share the simple ideas at the core of the problem.

Personally, I’m good at explaining existential risk from AI, including to policymakers. At one e/acc event, I changed the minds of three of the four people I talked to.

Previously, I got 250k people to read HPMOR and sent 1.3k copies to winners of math and computer science competitions (including dozens of IMO and IOI gold medalists); took the GWWC pledge; and created a small startup that donated >$100k to effective nonprofits.

I have a background in ML and strong intuitions about the AI alignment problem. I grew up running political campaigns and have a bit of a security mindset.

My website: contact.ms

You’re welcome to schedule a call with me before or after the conference: contact.ms/ea30 

Posts

Drexler's Nanosystems is now available online
by MikhailSamin @ 2024-06-01 | +32 | 0 comments
Claude 3 claims it's conscious, doesn't want to die or be modified
by MikhailSamin @ 2024-03-04 | +8 | 0 comments
FTX expects to return all customer money; clawbacks may go away
by MikhailSamin @ 2024-02-14 | +38 | 0 comments
An EA used deceptive messaging to advance her project; we need mechanisms to...
by MikhailSamin @ 2024-02-13 | +24 | 0 comments
NYT is suing OpenAI&Microsoft for alleged copyright infringement; some quick...
by MikhailSamin @ 2023-12-28 | +29 | 0 comments
Some quick thoughts on "AI is easy to control"
by MikhailSamin @ 2023-12-07 | +5 | 0 comments
It's OK to eat shrimp: EAs Make Invalid Inferences About Fish Qualia and Moral...
by MikhailSamin @ 2023-11-13 | –4 | 0 comments
A transcript of the TED talk by Eliezer Yudkowsky
by MikhailSamin @ 2023-07-12 | +39 | 0 comments
Please wonder about the hard parts of the alignment problem
by MikhailSamin @ 2023-07-11 | +8 | 0 comments
I have thousands of copies of HPMOR in Russian. How to use them with the most...
by MikhailSamin @ 2022-12-27 | +39 | 0 comments
You won’t solve alignment without agent foundations
by MikhailSamin @ 2022-11-06 | +14 | 0 comments
Saving lives near the precipice
by MikhailSamin @ 2022-07-29 | +18 | 0 comments
Samin's Quick takes
by MikhailSamin @ 2022-07-24 | +1 | 0 comments