I'm interviewing Nova Das Sarma about AI safety and information security. What should I ask her?

By Robert_Wiblin @ 2022-03-25T15:38 (+17)

Next week for The 80,000 Hours Podcast I'll be interviewing Nova Das Sarma.

She works to improve computer and information security at Anthropic, a recently founded AI safety and research company.

She's also helping to find ways to provide more compute for AI alignment work in general.

Here's an (outdated) LinkedIn and in-progress personal website, and an old EA Forum post from Claire Zabel and Luke Muehlhauser about the potential EA relevance of information security.

What should I ask her?


Jaime Sevilla @ 2022-03-25T17:58 (+9)

Which key companies would Nova most like to help strengthen their computer security?

MathiasKB @ 2022-03-25T19:46 (+8)

Thank you for asking this question on the forum!

It has been somewhat frustrating to follow you on Facebook and see all these great people you were about to interview, without being able to contribute anything.

Jaime Sevilla @ 2022-03-25T18:06 (+5)

Any hot takes on the recent NVIDIA hack? Was it preventable? Was it expected? Any AI Safety implications?

Jaime Sevilla @ 2022-03-25T18:00 (+5)

Why is Anthropic working on computer security? What are the key computer security problems she thinks are highest priority to solve?

Jaime Sevilla @ 2022-03-25T17:56 (+5)

How worried is she about dual use of https://hofvarpnir.ai/ for capability development?

Jaime Sevilla @ 2022-03-25T17:56 (+5)

What AI Safety research lines are most bottlenecked on compute?

Jaime Sevilla @ 2022-03-26T15:01 (+4)

Is there any work on historical studies of leaks in the ML field?

Would you like such a project to exist? What sources of information are there?

Erich_Grunewald @ 2022-03-25T20:29 (+4)

How large a portion of infosec risk is due to software/hardware issues and how large due to social engineering?

JulianHazell @ 2022-03-25T19:39 (+3)

How important is compute for AI development relative to other inputs? How certain are you of this?

JulianHazell @ 2022-03-25T19:37 (+3)

There have been estimates that there are around 100 AI researchers & engineers focused on AI alignment. This seems quite small given the scale of the problem. What are some of the bottlenecks for scaling up, and what is being done to alleviate this?

Erich_Grunewald @ 2022-03-25T20:30 (+1)

To what extent, if any, is centralisation/decentralisation useful in improving infosec?

Erich_Grunewald @ 2022-03-25T20:30 (+1)

The obvious way to reduce infosec risk is to beef up security. Another is to disincentivise actors from attacking in the first place. Are there any good ways of doing that (other than maybe criminal justice)?

JulianHazell @ 2022-03-25T19:36 (+1)

What opportunities, if any, do individual donors (or people who might not have suitable backgrounds for safety/governance careers) have to positively shape the development of AI?