Agustín Covarrubias's Quick takes

By Agustín Covarrubias 🔸 @ 2024-01-03T21:58 (+6)

Agustín Covarrubias 🔸 @ 2025-10-31T21:04 (+42)

Not sure who needs to hear this, but Hank Green has published two very good videos about AI safety this week: an interview with Nate Soares and a SciShow explainer on AI safety and superintelligence.

Incidentally, he appears to have also come up with the ITN framework from first principles (h/t @Mjreard).
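
For reference, the ITN framework scores a cause on importance (scale), tractability, and neglectedness, and in the 80,000 Hours formulation the three factors multiply into marginal cost-effectiveness. A rough sketch of that decomposition (my gloss, not something from the videos):

$$
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{neglectedness}}
$$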

Hopefully this is auspicious for things to come?

akash 🔸 @ 2025-10-31T21:12 (+16)

Hank Green should attend an EAG next year.

Arnold Beckham @ 2025-11-01T16:47 (+3)

Only if someone invites him, perhaps? @akash 🔸

Agustín Covarrubias 🔸 @ 2025-10-31T21:14 (+3)

so true

Lorenzo Buonanno🔸 @ 2025-11-01T12:08 (+13)

> Hopefully this is auspicious for things to come?

  • My understanding is that they already raise and donate millions of dollars per year to effective projects in global health (especially tuberculosis)
  • For what it's worth, their subreddit seems a bit ambivalent about explicit "effective altruism" connections (see here or here)

Btw, I would be surprised if the ITN framework was independently developed from first principles:

  • He says exactly the same 3 things in the same order
  • They have known about effective altruism for at least 11 years (see the top comment here)
  • There have been many effective altruism-themed videos in their "Project for Awesome" campaigns over several years
  • They have collaborated several times with 80,000 Hours and Giving What We Can
  • There are many other reasonable things you can come up with (e.g. urgency)

Agustín Covarrubias @ 2024-08-22T18:24 (+28)

SB 1047 is a critical piece of legislation for AI safety, but there haven’t been great ways of getting up to speed, especially since the bill has been amended several times. Now that the bill is finalized, better resources exist to catch up. Here are a few:

If you are working in AI safety or AI policy, I think understanding this bill is pretty important. Hopefully this helps.

Agustín Covarrubias @ 2024-01-03T21:58 (+27)

Some random appreciations (because someone nudged me to share positive feedback that I otherwise wouldn't have shared with anyone):

  1. ^ Having been in similar positions many times as a community builder (both inside and outside EA), I know just how difficult the job is and how, for example, what others see as clear failures are often just the result of lacking information that can't be shared publicly without harming others.

Agustín Covarrubias @ 2024-05-01T00:32 (+9)

Quick poll [✅ / ❌]: Do you feel like you don't have a good grasp of Shapley values, despite wanting to? 

(Context for after voting: I'm trying to figure out whether more explainers would be helpful. I still feel confused about some of their implications, despite having spent significant time trying to understand them.)
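
For concreteness, here's a minimal sketch of the standard definition, which averages each player's marginal contribution over all orderings in which a coalition could assemble. The toy donation game at the bottom is hypothetical:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution over all orderings. O(n!), so toy-sized only."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orderings) for p, t in totals.items()}

# Hypothetical example: a $1,000 project that happens only if
# both donors A and B contribute (all-or-nothing value function).
value = lambda c: 1000 if c == frozenset({"A", "B"}) else 0
print(shapley_values(["A", "B"], value))  # {'A': 500.0, 'B': 500.0}
```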

Stan Pinsent @ 2024-05-03T06:58 (+1)

I have a post that takes readers through a basic example of how to calculate Shapley values.

Agustín Covarrubias @ 2024-05-03T21:52 (+7)

I read your post while I was writing up the wiki article on Shapley values and thought it was really useful. Thanks for making that post!