Sam Clarke's Quick takes

By Sam Clarke @ 2021-10-04T08:10 (+3)


Sam Clarke @ 2023-06-09T14:56 (+16)

(Post 3/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)

Some hot takes on AI governance field-building strategy

SamClarke @ 2021-10-04T08:10 (+16)

An effective mental health intervention, for me, is listening to a podcast which ideally (1) discusses the thing I'm struggling with and (2) has EA, Rationality or both in the background. I gain both in-the-moment relief, and new hypotheses to test or tools to try.

Especially since it would be scalable, this makes me think that creating an EA mental health podcast would be an intervention worth testing - I wonder if anyone is considering this?

In the meantime, I'm on the lookout for good mental health podcasts in general.

MichaelA @ 2021-10-04T09:38 (+8)

This does sound like an interesting idea. And my impression is that many people found the recent mental health related 80k episode very useful (or at least found that it "spoke to them"). 

Maybe many episodes of Clearer Thinking could also help fill this role? 

Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?

Though starting a podcast is pretty low-cost, so it'd be quite reasonable to just try it without doing that sort of research first.

SamClarke @ 2021-10-04T15:54 (+8)

Incidentally, that 80k episode and some from Clearer Thinking are the exact examples I had in mind!

Maybe one could promote specific podcast episodes of this type, see if people found them useful in that way, and if so then encourage those podcasts to have more such eps or a new such podcast to start?

As a step towards this, and in case anyone else finds it independently useful, here are the episodes of Clearer Thinking that I recall finding helpful for my mental health (along with the issues they helped with).

  • #11 Comfort Languages and Nuanced Thinking (for thinking through what I need, and what loved ones need, in difficult times)
  • #21 Antagonistic Learning and Civilization (had some useful thoughts about how education has taught me that breaking rules makes me bad, whereas in reality, breaking rules is just a cost to include in my calculation of what the best action is)
  • #22 Self-Improvement and Research Ethics (getting more traction on why my attempts at self-improvement often don't work)
  • #25 Happiness and Hedonic Adaptation (hedonic adaptation seems like a very important concept for living a happier life, and this is the best discussion of it that I've heard)
  • #26 Past / Future Selves and Intrinsic Values (I recall something being useful about how I relate to past and future me)
  • #43 Online and IRL Relationships (relationships are a big part of my happiness, and this had a very dense collection of insights about how to do relationships well - other dense insights have come from reading Nonviolent Communication and doing Circling with partners)
  • #54 Self-Improvement and Behavior Change (lots of stuff; most important was realising that many "negative" behaviour patterns are actually bringing you some benefit in a convoluted way, and until you find a substitute for that benefit, they'll be very hard to change)
  • #60 Heaven and hell on earth (thinking about the value of "bad" mental states like anxiety and depression)
  • #65 Utopia on earth and morality without guilt (thinking through how I relate to my desire to do good, guilt vs bright desire; the handle of "clingy-ness" for a certain flavour of mental experiences)
  • #68 How to communicate better with the people in your life (getting more traction on why some social interactions leave me feeling disconnected/isolated)
David_Althaus @ 2021-12-30T15:33 (+7)

I've been thinking about starting such an EA mental health podcast for a while now (each episode would feature a guest describing their history with EA and mental health struggles, similar to the 80k episode with Howie).

However, every EA whom I've asked to interview—only ~5 people so far, to be fair—was concerned that such an episode would be net negative for their career (by, e.g., becoming less attractive to future employers or collaborators). I think such concerns are not unreasonable though it seems easy to overestimate them.

Generally, there seems to be a tradeoff between how personal the episode is and how likely the episode is to backfire on the interviewee.

One could mitigate such concerns by making episodes anonymous (and perhaps anonymizing the voice as well). Unfortunately, my sense is that this would make such episodes considerably less valuable.

I'm not sure how to navigate this; perhaps there are solutions I don't see. I also wonder how Howie feels about having done the 80k episode. My guess is that he's happy that he did it; but if he regrets it that would make me even more hesitant to start such a podcast.

HowieL @ 2021-12-31T00:53 (+8)

I thought about this a bunch before releasing the episode (including considering various levels of anonymity). Not sure that I have much to say that's novel but I'd be happy to chat with you about it if it would help you decide whether to do this.[1]

The short answer is:

  1. Overall, I'm very glad we released my episode. It ended up getting more positive feedback than I expected and my current guess is that in expectation it'll be sufficiently beneficial to the careers of other people similar to me that any damage to my own career prospects will be clearly worth it.
  2. It was obviously a bit stressful to put basically everything I've ever been ashamed of onto the internet :P, but overall releasing the episode has not been (to my knowledge) personally costly to me so far. 
    1. My guess is that the episode didn't do much harm to my career prospects within EA orgs (though this is in part because a lot of the stuff I talked about in the episode was already semi-public knowledge w/in EA, and any future EA employer would have learned about it before deciding to hire me anyway). 
    2. My guess is that if I want to work outside of EA in the future, the episode will probably make some paths less accessible. For example, I'm less sure the episode would have been a good idea if it was very important to me to keep U.S. public policy careers on the table.

[1] Email me if you want to make that happen since the Forum isn't really integrated into my workflow. 

David_Althaus @ 2022-01-01T11:19 (+1)

Thanks, Howie! Sent you an email.

Sam Clarke @ 2023-06-09T14:59 (+13)

(Post 4/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)

Some exercises for developing good judgement

I’ve spent a bit of time over the last year trying to form better judgement. Dumping some notes here on things I tried or considered trying, for future reference.

  1. ^

    I think this framing of the exercise might have been mentioned to me by Michael Aird.

michel @ 2023-06-09T17:57 (+2)
  • Find Google Docs where people (whose judgement you respect) have left comments and an overall take on how promising the idea is. Hide their comments and form your own take. Compare. (To make this a faster process, pick a doc/idea where you have enough background knowledge to answer without looking up loads of things)

This is a good tip! Hadn't thought of this.

Sam Clarke @ 2022-04-25T15:08 (+6)

Ways of framing EA that (extremely anecdotally*) make it seem less ick to newcomers. These are all obvious/boring; I'm mostly recording them here for my own consolidation.

These frames can also apply to any specific cause area.

*like, I remember talking to a few people who became more sympathetic when I used these frames.

Stefan_Schubert @ 2022-04-25T16:06 (+6)

I like the thinking in some ways, but think there are also some risks. For instance, emphasising EA being diverse in its ways of doing good could make people expect it to be more so than it actually is, which could lead to disappointment. In some ways, it could be good to be upfront with some of the less intuitive aspects of EA.

Sam Clarke @ 2022-04-26T16:03 (+3)

Agreed, thanks for the pushback!

Sam Clarke @ 2023-06-09T14:51 (+5)

(Post 1/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)

Some key uncertainties in AI governance field-building

In my view, these are some of the key uncertainties in AI governance field-building—questions which, if we had better answers, might significantly influence decisions about how field-building should be done.

How best to find/upskill more people to do policy development work?

What are the most important profiles that aren’t currently being hired for, but nonetheless might matter?

Reasons why this seems important to get clarity on:

To what extent should talent pipeline efforts treat AI governance as a (pre-)paradigmatic field?

  1. ^

    Re: “positions with a deadline”: it seems plausible to me that there will be windows of opportunity when important positions come up, and if you haven’t built the traits you need by that time, it’s too late. E.g. more talent very skilled at public comms would probably have been pretty useful in Q1-2 2023.

  2. ^

    Counterpoint: the strongest version of this consideration assumes a kind of “efficient market hypothesis” for people building up their own skills. If people aren’t building up their own skills efficiently, then there could still be significant gains from helping them to do so, even for positions that are currently being hired for. Still, I think this consideration carries some weight.

Sam Clarke @ 2023-06-09T15:25 (+4)

(Post 6/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)

Some heuristics for prioritising between talent pipeline interventions

Explicit backchaining is one way to do prioritisation. I sometimes forget that there are other useful heuristics, like:

Sam Clarke @ 2023-06-09T14:54 (+4)

(Post 2/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)

Misc things it seems useful to do/find out

Sam Clarke @ 2023-06-09T15:18 (+2)

(Post 5/N with some rough notes on AI governance field-building strategy. Posting here for ease of future reference, and in case anyone else thinking about similar stuff finds this helpful.)

Laundry list of talent pipeline interventions

Note to self: more detailed but less structured version of these notes here.