Gavin's Quick takes

By Gavin @ 2022-01-29T15:46 (+5)

technicalities @ 2022-01-29T15:46 (+29)

Inspired by Jaime's charming rundown of his quarterly(!) output, I'll put something up:

In 2021, I

Charles He @ 2022-01-29T20:39 (+3)

This seems overwhelmingly awesome, congrats and I hope you are doing great in the Bahamas.

As a small point, and a sincere question: I'm curious about the "personal framework" or beliefs that led you to stop consuming even low levels of caffeine and alcohol, but at the same time start or try using the medications you indicated.

I'm curious because some people I've met who forswear alcohol and caffeine would also oppose the personal use of many medications.

To be clear, I find any combination of abstinence/use of any of those 4 things fine (and not my business unless openly discussed).

technicalities @ 2022-01-29T21:15 (+8)

Thanks Charles!

My reasoning about caffeine is here. For common genomes, I expect it to have no chronic cognitive benefit and to harm sleep quality for basically no gain. I think I'm one of those genomes. Nor do I get the pleasure or motivation others seem to. (The same reasoning probably applies to all stimulants.) Might get into fancy loose-leaf tea one day, but just for fun.

No particular reasoning about booze. Certainly not puritanism. The alleged health benefits fell apart (or rather the credibility of the field studying it did), I don't much like it, and luckily my social life doesn't need the help.

When reading up for Off Road I started to wonder if maybe I am mildly ADHD myself. I opted for the House MD method of diagnosis: suck it and see.

technicalities @ 2022-01-29T21:19 (+3)

I should mention that some clever friends of mine try "stimulant cycling" instead of quitting caffeine entirely. This might avoid the downregulation trap.

casebash @ 2022-01-29T20:59 (+2)

Wow, sounds like an amazing year!

What's the standard for AI Safety Camp these days?

technicalities @ 2022-01-29T21:22 (+2)

I should have said "median" (supply-side: participants just being really good) rather than "standard" (our setting a high bar).

Bunch of ML PhD students and people whose writing I seriously admired before they applied.

This year is interesting cos we tried hard to get non-ML people to join. We've got a pro Continental philosopher coming for instance!

Gavin @ 2022-12-14T16:24 (+18)

Looks like we have a cost-saving way to prevent 7 billion male chick cullings a year.

I snipe at accelerationist anti-welfarists in the thread, but it's an empirical question whether removing horrifying parts of the horrifying system ends up delaying abolition and being net-harmful. It seems extremely unlikely (and assumes that one-shot abolition is possible) but I haven't modelled it.

Gavin @ 2022-08-17T14:25 (+12)

So happy to see this new longtermist fellowship running in Kenya.

Gavin @ 2022-10-02T16:09 (+10)

Bostrom selects his most neglected paper here.

Hauke Hillebrandt @ 2022-10-02T17:36 (+5)

crossposted from my blog

"Nick Bostrom's 'Future of Humanity' papers"

In 2018, Nick Bostrom published an anthology of his papers in German under the title “The Future of Humanity”:

  1. The Future of Humanity
  2. Existential Risk Prevention as Global Priority
  3. In Defense of Posthuman Dignity
  4. Dignity and Enhancement
  5. Why I Want to be a Posthuman When I Grow Up
  6. Are You Living In A Computer Simulation?

Some other good papers by him:

Hauke Hillebrandt @ 2022-10-02T17:38 (+2)

Also, might be worth paging radiobostrom.com.

Gavin @ 2022-04-30T11:29 (+9)

On Frank Ramsey, the first explicit longtermist

Pablo @ 2022-04-30T16:11 (+14)

I liked your post! But I don't find the claim that Ramsey was the first "explicit" longtermist very plausible. The quote about discounting being "ethically indefensible and arises merely from the weakness of the imagination" echoes points made earlier by other economists, e.g. Pigou:

> Generally speaking, everybody prefers present pleasures or satisfactions of given magnitude to future pleasures or satisfactions of equal magnitude, even when the latter are perfectly certain to occur. But this preference for present pleasures does not -- the idea is self-contradictory -- imply that a present pleasure of given magnitude is any greater than a future pleasure of the same magnitude. It implies only that our telescopic faculty is defective, and that we, therefore, see future pleasures, as it were, on a diminished scale.

This is from The Economics of Welfare, published when Ramsey was a teenager, and eight years before the essay in which the quote appears.

Gavin @ 2022-04-30T17:02 (+10)

I was very unclear about what justifies that claim, pardon: 

Ramsey deriving the form of the intertemporal decision and then setting the discount rate to zero seems much clearer than Pigou (or Sidgwick, who waved in the direction of the position much earlier than either).

"First quantitative longtermist"? "First strong longtermist"?

Pablo @ 2022-04-30T17:36 (+6)

Ah, right. Yes, regardless of what we call him, this is undoubtedly a significant milestone in the historical development of longtermism. (I'm not personally comfortable with calling Ramsey or anyone else the "first" [qualification] longtermist because I think longtermism involves multiple claims, not just an endorsement of a zero discount rate, although that claim is clearly a central one.)

I'd love to see more posts exploring early longtermist or proto-longtermist thinking!

Gavin @ 2023-02-19T13:31 (+4)

> it is good to omit doing what might perhaps bring some profit to the living, when we have in view the accomplishment of other ends that will be of much greater advantage to posterity.

- Descartes (1637)

Gavin @ 2022-10-03T14:15 (+5)

Lovely satire of international development. 

(h/t Eva Vivalt)

Gavin @ 2022-09-13T20:07 (+5)

The ladder of EA weirdness

  1. Obligation to the global poor

  2. Obligation to farmed nonhumans

  3. Obligation to wild nonhumans

...

n. Obligation to potential humans and nonhumans

...

m. Obligation to take psychedelics / dissolve the self

o. Obligation to electrons

...

p. Obligation to acausally trade with those outside the light cone

q. Obligation to acausally trade with those elsewhere in the multiverse

r. Obligation to entities somewhere inside the universal prior

Gavin @ 2022-12-01T18:29 (+4)

On AI quietism. Distinguish four things:

  1. Not believing in AGI takeover.
  2. Not believing that AGI takeover is near. (Ng)
  3. Believing in AGI takeover, but thinking it'll be fine for humans. (Schmidhuber)
  4. Believing that AGI will extinguish humanity, but this is fine. 
    1. because the new thing is superior (maybe by definition, if it outcompetes us). 
    2. because scientific discovery is the main thing.

(4) is not a rational lack of concern about an uncertain or far-off risk: it's lack of caring, conditional on the risk being real.

Can there really be anyone in category (4)?


I expect this cope to become more common over the next few years.

RyanCarey @ 2022-12-01T18:42 (+2)

(4) was definitely the story with Ben Goertzel and his "Cosmism". I expect some "e/acc" libertarian types will also go for it. But it is and will stay pretty fringe imo.

Gavin @ 2022-09-28T12:28 (+4)

There is a vast amount of philosophical progress. But almost all of it is outside philosophy. A jaw-dropping list, just on the topic of democracy, of things that Rousseau's writing on democracy suffers from lacking:

https://www.tandfonline.com/doi/full/10.1080/0020174X.2022.2124542

Pablo @ 2022-09-28T15:25 (+2)

Great epigraph!

Gavin @ 2022-08-10T15:14 (+4)

Review of the New Yorker piece. It's a model of its type, for good and ill but mostly good. 

The good: The essence is correct. EA is now powerful enough that public scrutiny is fully justified. Lewis-Kraus engages with the ideas, and skips tabloid cheap shots. (The house style always involves little gossipy comments about fashion and eye colour, but here it's more about scruffy clothing than physical appearance). 

For instance, it's extremely easy to caricature utilitarianism. Certainly many professional philosophers do. But Lewis-Kraus chooses the neutral definition: no cavilling about hedonism, reductionism, Gradgrind, nor very much about honor. Similarly, AI risk is oddly underemphasised, and we all know how easy that is to piss on. 

The hypothesis of MacAskill's bad faith is entertained and rejected. So too with Bernard Williams' quietism: looked at and put back on the shelf. "perhaps one thought too few".

The bad: gossip and false balance. Girlfriends and buildings are named, needlessly, privacy and risk be damned. The dissident's gender is revealed for absolutely no reason. Journalists as a class have an underdeveloped sense of the risks they are exposing people to. The house style demands irrelevant detail, and apparently places style above potential impacts.

I can't help but admire the symbols he picks out of real life, even though they are the nonfiction equivalent of puns or entrail reading:

* Of xrisk research: "an Oxford building that overlooks a graveyard."

* "The room featured a series of ornately carved wooden clocks, all of which displayed contrary times; an apologetic sign read “Clocks undergoing maintenance,” but it was an odd portent for a talk about the future"

* "We passed People’s Park, which had become a tent city, but his eyes flicked toward the horizon."


Some risible bits:


> abandon the world view of the “benevolent capitalist” and, just as Engels worked in a mill to support Marx, to live up to its more thoroughgoing possibilities

Incredible. Engels ran a Manchester cotton mill and inherited a fifth of it; he was a benevolent capitalist!


> the chances of human extinction during the next century stand at about one in six, or the odds of Russian roulette

That's not how odds work
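To spell out the nitpick with a line of arithmetic: "one in six" is a probability, and odds divide the chance of an event by the chance of its complement, so the corresponding odds are one to five:

```latex
\text{odds} = \frac{p}{1-p} = \frac{1/6}{1 - 1/6} = \frac{1}{5}
```

i.e. five to one against, not "one to six".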


> It does, in any case, seem convenient that a group of moral philosophers and computer scientists happened to conclude that the people most likely to safeguard humanity’s future are moral philosophers and computer scientists

jfc. If you worry that practitioners of a field are ignoring something, you're a crank and a trespasser. If you worry about the tail risks of your own field, you're suffering from convenient delusions of grandiosity.

The PR suspicion is funny ("Was MacAskill’s gambit with me—the wild swimming in the frigid lake—merely a calculation that it was best to start things off with a showy abdication of the calculus?"). GLK didn't mention any of this in his profile of Rothberg, a businessman with incentives and a presumably similarly sized filter on his speech. But mention consequentialism and suddenly everyone assumes you're a master at acting and a 4D chess player. But he was just primed for it by the dissident so nvm.


> I could see how comforting it was, when everything seemed so awful, to take refuge on the higher plane of millenarianism.

Literally backwards. I find it much more emotionally difficult to contemplate x-risk than terrible but limited events.


But overall GLK is the real deal, as good as magazine writers get. See also him on Paige Harden and Scott Alexander.

Gavin @ 2022-06-07T16:18 (+4)

TIL about the Utilitarian Fandom.

(Derives from old Felicifia, and so I guess Pablo wrote a lot of it.)

Gavin @ 2022-05-31T07:59 (+3)

Several absurd things about this video, but we could learn a lot about delivery from it.

> I want to save the world and - you know, money - money's great! I can't get enough money. And you know what I'm going to do with it? I'm going to buy wilderness areas with it!
>
> Every single cent I get goes straight into conservation. And guess what Charles: I don't give a rip whose money it is, mate. I'll use it and I'll spend it on buying land.

Passion can make even bullet-biting instrumental harm sound noble and humane. 

(Obviously this is a symmetric weapon.)

Gavin @ 2022-03-27T22:01 (+3)

Ben Franklin's diary included the daily exhortation to rise and work some "Powerful Goodness". Better name than Effective Altruism tbf.

A_lark @ 2022-03-29T02:20 (+1)

Love this!

Leftism virtue cafe @ 2022-03-28T04:33 (+1)

yeah, I never liked the name 'effective altruism'

Gavin @ 2022-08-12T09:33 (+2)

Thread for serious AI safety researchers who aren't longtermists

Gabriel 

Shoker

Gavin @ 2022-06-30T21:33 (+2)

"Effective Accelerationism"

(Kent Brockman: I for one welcome our Vile Offspring.)

Gavin @ 2022-06-28T11:40 (+2)

List of important project ideas from Alyssa Vance

niplav @ 2022-06-28T11:42 (+1)

The link is broken, I'm afraid.

Gavin @ 2022-06-28T12:31 (+2)

fixed, thanks

Gavin @ 2022-05-01T11:44 (+2)

Trevor Chow offers a simple explanation/criterion for the neartermism / longtermism / progress studies divide.

Gavin @ 2022-03-16T11:14 (+2)

PlumX is an academic web analytics service, looking at how papers are shared. It's mostly not very good, but they recently added Overton, which specifically scrapes the occasions a paper is cited in policy documents. This seems important!