Latest comments on the EA Forum

Comments on 2024-04-24

Jason @ 2024-04-24T13:16 (+2) in response to You probably want to donate any Manifold currency this week

Also, there's a statement in a publicly-accessible stand up meeting summary (speaker unknown) that "I also tentatively think Manifund wants to end the charity program after this"

https://manifoldmarkets.notion.site/Standup-de82dcce7411478fa52048c229a2eda2

(screenshot on file in my email)

Austin @ 2024-04-24T16:00 (+3)

Speaker there was me - I think there's like a ~70% chance we decide to end the charity program after this round of payments, tentatively as of May 15 or end of May.

The primary reason is that the real-money cash-outs should supersede it, and running the charity program is operationally kind of annoying. The charity program is not a core focus for either Manifold or Manifund, so we might not want to keep it up. Will make a broader announcement if this ends up being the case.

JP Addison @ 2024-04-24T13:46 (+6) in response to Nathan Young's Quick takes

I want to throw in a bit of my philosophy here.

Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.

I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants.

I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]

Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2]

Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc, and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we’re far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.

Related both to the epistemic diversity, and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.

Some points of agreement: 

Old users are owed explanations, new users are (mostly) not

Agreed. We are much more likely to make judgement calls in cases of new users. And much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality content.)

I try really hard to not build an ideological echo chamber

Most of the work I do as a moderator is reading reports and recommending no official action. I have the internal experience of mostly fighting others to keep the Forum an open platform. Obviously that is a compatible experience with overmoderating the Forum into an echo chamber, but I will at least bring this up as a strong point of philosophical agreement.

Final points:

I do think we could potentially give more “near-ban” rate limits, such as the 1 comment/3 days. The main benefit I see in this is that it allows the user to write content disagreeing with their ban.

  1. ^

    Controversial point! Maybe if everyone adopted my own epistemic practices the community would be better off. It would certainly gain in the ability to communicate smoothly with itself, and would probably spend less effort pulling in opposite directions as a result, but I think the size constraints and/or deference to authority that would be required would not be worth it.

  2. ^

    Note that Habryka has been a huge influence on me. These disagreements are what remains after that influence.

Jason @ 2024-04-24T15:27 (+2)

I do think we could potentially give more “near-ban” rate limits, such as the 1 comment/3 days. The main benefit I see in this is that it allows the user to write content disagreeing with their ban.

I think the banned individual should almost always get at least one final statement to disagree with the ban after its pronouncement. Even the Romulans allowed (will allow?) that. Absent unusual circumstances, I think they -- and not the mods -- should get the last word, so I would also allow a single reply if the mods responded to the final statement.

More generally, I'd be interested in ~"civility probation," under which a problematic poster could be placed for ~three months as an option they could choose as an alternative to a 2-4 week outright ban. Under civility probation, any "probation officer" (trusted non-mod users) would be empowered to remove content too close to the civility line and optionally temp-ban the user for a cooling-off period of 48 hours. The theory of impact comes from the criminology literature, which tells us that speed and certainty of sanction are more effective than severity. If the mods later determined after full deliberation that the second comment actually violated the rules in a way that crossed the action threshold, then they could activate the withheld 2-4 week ban for the first offense and/or impose a new suspension for the new one. 

We are seeing more of this in the criminal system -- swift but moderate "intermediate sanctions" for things like failing a drug test, as opposed to doing little about probation violations until things reach a certain threshold and then going to the judge to revoke probation and send the offender away for at least several months. As far as due process, the theory is that the offender received their due process (consideration by a judge, right to presumption of innocence overcome only by proof beyond a reasonable doubt) in the proceedings that led to the imposition of probation in the first place.

Lakin @ 2024-04-24T14:54 (+1) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Few people I talk to in these communities know this, but animals several thousand pounds in size used to roam the Earth. And not just woolly mammoths either: 8,000-lb sloths (Megatherium), armadillos "roughly the same size and weight as a Volkswagen Beetle" (Glyptodon), 7,000-lb marsupials (Diprotodon), and many more. 

Suspiciously, the megafauna on each continent mostly went extinct every time humans got on that continent.[1][2] Personally, I suspect that humans largely evolved to hunt megafauna. 

Note that megafauna meat is quite different than meat of smaller animals. It had a much larger amount (and percentage!) of fat.[3]

These days I eat 1-1.5lbs of lean beef per day, and I supplement 8-16oz of fat in the form of butter. I've been eating basically just this (also some seafood, rarely a potato, rarely some other things) for the last 3.5 years.

  1. ^

    https://ourworldindata.org/quaternary-megafauna-extinction Every time humans got on a new continent, all of the megafauna died…

  2. ^
  3. ^
Lakin @ 2024-04-24T15:17 (+1)

Someone DM'd me asking for more information. See https://www.mostly-fat.com/eat-meat-not-too-little-mostly-fat/ and https://www.youtube.com/watch?v=UOQCKEoflPc 

Vasco Grilo @ 2024-04-24T14:14 (+11) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Thanks for the follow up, Matthew! Strongly upvoted.

My best guess is also that additional GHG emissions are bad for wild animals, but it has very low resilience, so I do not want to advocate for conservationism. My views on the badness of the factory-farming of birds are much more resilient, so I am happy with people switching from poultry to beef, although I would rather have them switch to plant-based alternatives. Personally, I have been eating plant-based for 5 years.

Moreover, as Clare Palmer argues

Just flagging that this link seems broken.

I think you have misinterpreted what my article about discounting is recommending.

Sorry! It sounded so much like you were referring to Weitzman 1998 that I actually did not open the link. My bad! I have now changed "That paper says one should discount" to "One should discount".

a traditional justification for discounting is that if we didn’t, we’d be obliged to invest nearly all our income, since the number of future people could be so great.

I do not think this is a good argument for discounting. If it turns out we should invest nearly all our income to maximise welfare, then I would support it. In reality, I think the possibility of the number of future people being so great is more than offset by the rapid decay of how much we could affect such people, such that investing nearly all our income is not advisable.

I argue for discounting damages to those who would be much better off than we are at conventional rates, but giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist.

This rejects (perfect) impartiality, right? I strongly endorse expected total hedonistic utilitarianism, so I would rather maintain impartiality. At the same time, the above seems like a good heuristic for better outcomes even under fully impartial views.

Matthew Rendall @ 2024-04-24T15:10 (+3)

Thanks, Vasco! That's odd--the Clare Palmer link is working for me. It's her paper 'Does Nature Matter? The Place of the Nonhuman in the Ethics of Climate Change'--what looks like a page proof is posted on www.academia.edu.

One of the arguments in my paper is that we're not morally obliged to do the expectably best thing of our own free will, even if we reliably can, when it would benefit others who will be much better off than we are whatever we do. So I think we disagree on that point. That said, I entirely endorse your argument about heuristics, and have argued elsewhere that even act utilitarians will do better if they reject extreme savings rates.

Denis @ 2024-04-24T13:12 (+15) in response to Personal reflections on FTX

I've had quite a few disagreements with other EAs about this, but I will repeat it here, and maybe get more downvotes. But I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves whether it would be wise for us to do things differently. 

EA is part of a real world which isn't necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do - it impacts our ability to get donations, to carry out projects, to influence policy. 

We all believe we're willing to make sacrifices to help EA succeed. 

Here's the hard part: Sometimes the sacrifice we have to make is to go against our own natural desire to do what feels right. 

It feels right that Will and other people from EA should make public statements about how bad we feel about FTX and how we'll try to do better in future and so on. 

But the legal advice Will got was correct, and was also what was best for EA. 

There was zero chance that the FTX scandal could reflect positively on EA. But there were steps Will and others could take to minimise the damage to the EA movement. 

The most important of these is to distance ourselves from the crimes that SBF committed. He committed those crimes. Not EA. Not Will. SBF caused massive harm to EA and to Will. 

I see a lot of EAs soul-searching and asking what we could have done differently. Which is good in a way. But we need to be very careful. Admitting that we (EA movement) should have done better is tantamount to admitting that we did something wrong, which is quickly conflated in public opinion with "SBF and EA are closely intertwined, one and the same." (Remember how low public awareness of EA is in general). 

The communication needs to be: EA was defrauded by SBF. He has done us massive harm. We want to make sure nobody will ever do that to EA again. We need to ensure that any public communication puts SBF on one side, and EA on the other side, a victim of his crimes just like the millions of investors. 

The fact that he saw himself as an EA is not the point. Nobody in EA encouraged him to commit fraud. People in EA may have been a bit naive, but nobody in EA was guilty of defrauding millions of investors. That was SBF. 

So Will's legal advice was spot on. Any immediate statement would have seemed defensive, as if he had something to feel guilty about, which would have resulted in more harm to the public perception of EA because of association with SBF.  

  • SBF committed crimes. 
  • Will or EA did not commit crimes, or contribute to SBF's crimes. 
  • SBF defrauded and harmed millions of investors.
  • SBF also defrauded and harmed the EA movement. 
  • The EA movement is angry with SBF. We want to make sure that nobody ever does that to us again. 

As "good people", we all want to look back and ask if there was something we could have done differently that would have prevented Sam from harming those millions of innocent investors. It is natural to wonder, the same way we see any tragedy and wonder if we could have prevented it. But we need to be very careful about the PR aspects of this (and yes, we all hate PR, but it is reality - read Pirandello if you don't believe me!). If we start making statements that suggest that we did something wrong, we're just going to be directing some of the public anger away from SBF and towards EA. I don't think that's helpful. 

There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened. 

Jason @ 2024-04-24T15:05 (+5)

The communication needs to be: EA was defrauded by SBF. He has done us massive harm. We want to make sure nobody will ever do that to EA again. We need to ensure that any public communication puts SBF on one side, and EA on the other side, a victim of his crimes just like the millions of investors. 

Upvoted. 

But a problem is: I don't think many people outside of EA believe that, nor will they believe it merely because EA sources self-interestedly repeat it. They do not have priors to believe EA was not somehow responsible for what happened, and the publicly-available evidence (mainly the Time article) points in the direction of at least some degree of responsibility. The more EA proclaims its innocence without coughing up evidence that is credible to the broader world, the more guilty it looks.

But I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves if it would be wise for us to do differently. 

Consistency in Following the Usual Playbook

The usual playbook, as I see it, includes shutting up and hoping that people lose interest and move on. I accept that there's a reasonable case for deploying the usual playbook. But I don't think you can really pick and choose elements out of that playbook. 

For example, one of the standard plays is to quickly throw out most people in the splash zone of the scandal without any real adjudication of their culpability. This serves in part as propitiation to the masses, as well as a legible signal that you're taking the whole thing seriously. It obviates some of the need for a publicly-credible investigation, because you've already expelled anyone for whom there is a reasonable basis to believe culpability might exist. This is true even though the organization knows there is a substantial possibility that the sacrificed individuals were not culpable, or at least not culpable enough to warrant their termination/removal. 

Under the standard playbook, at least Will and Nick would be rendered personae non grata very early in the story. Their work is thrown down the memory hole, and neither is spoken of positively for at least several years. None of that is particularly fair or truth-seeking, of course. But I don't think you get to have it both ways -- you can't credibly decline to follow the playbook because it is not truth-seeking and is unfair to certain insiders, and then reject calls for a legible, truth-seeking investigation because it doesn't line up with the playbook. Although people have resigned from boards, and the extent of their "soft power" has been diminished, I don't think EA has followed the standard crisis-management playbook in this regard.

Who Judges the Organization's Crisis Response?

For non-profits, often the judge of the organization's crisis response is the donor base. In most cases, that donor base is much more diverse and less intertwined than it is at (say) EVF. Although donors are not necessarily well-aligned to broader public concerns, the practical requirement that organizations satisfy concerns of their donor base means that the standard playbook includes at least a proxy for taking actions to address public concerns. EVF has had, as far as I can tell, exactly one systematically important donor and that donor is also ~an insider. Compare to, e.g., universities facing heat over alleged antisemitism from various billionaire donors. There's no suggestion that Ackman, Lauder, et al. are in an insider relationship to Penn, MIT, etc. in the same way Open Phil is to EVF. Thus, the standard playbook is generally used under circumstances where there is a baseline business requirement to be somewhat willing to take actions to address a proxy for public concerns.

As I see it, at least some (but not all) of the calls for transparency and investigation are related to a desire for some sort of broader accountability that most non-profits face much more than EA organizations. As far as I can tell, the most suitable analogue to "a medium-size group of donors" for other nonprofits may be "the EA community, many members of which are making large indirect donations in terms of salary sacrifice." The challenge is that discussions with the EA community are public in a way that communications with a group of a few dozen key donors are not for many non-profits.

David Thorstad @ 2024-04-24T03:12 (+4) in response to Motivation gaps: Why so much EA criticism is hostile and lazy

I’d like to hope that academics are aiming for a level of understanding above that of a typical user on an Internet forum.

All academic works have a right to reply. Many journals print response papers and it is a live option to submit responses to critical papers, including mine. It is also common to respond to others in the context of a larger paper. The only limit to the right of academic reply is that the response must be of suitable quality and interest to satisfy expert reviewers.

Larks @ 2024-04-24T14:57 (+10)

All academic works have a right to reply. Many journals print response papers and it is a live option to submit responses to critical papers, including mine. It is also common to respond to others in the context of a larger paper. The only limit to the right of academic reply is that the response must be of suitable quality and interest to satisfy expert reviewers.

This sounds like... not having a right of reply? The right means a strong presumption if not an absolute policy that criticized people can defend themselves in the same place as they were criticized. If only many, not all, journals print response papers, and only if you jump through whatever hoops and criteria the expert reviewers put in front of you, I'm not sure how this is different to 'no right of reply'.

A serious right would mean journals would send you an email with the critical paper, the code and the underlying data, and give you time to create your response (subject to some word limit, copy-editing etc.) for them to publish. 

Joseph Lemien @ 2024-04-24T14:55 (+2) in response to Joseph Lemien's Quick takes

Ben West recently mentioned that he would be excited about a common application. It got me thinking a little about it. I don't have the technical/design skills to create such a system, but I want to let my mind wander a little bit on the topic. This is just musings and 'thinking out loud,' so don't take any of this too seriously.

What would the benefits be for some type of common application? For the applicant: send an application to a wider variety of organizations with less effort. For the organization: get a wider variety of applicants.

Why not just have the job openings posted to LinkedIn and allow candidates to use the Easy Apply function? Well, that would probably result in lots of low-quality applications. Maybe include a few questions to serve as a simple filter? Perhaps a question to reveal how familiar the candidate is with the ideas and principles of EA? Lots of low-quality applications aren't really an issue if you have an easy way to filter them out. As a simplistic example, if I am hiring for a job that requires fluent Spanish, and a dropdown prompt in the job application asks candidates to evaluate their Spanish, it is pretty easy to filter out people that selected "I don't speak any Spanish" or "I speak a little Spanish, but not much."
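
As an illustration of that screening step, here is a minimal sketch (with hypothetical field names and proficiency levels, not any organization's actual form) of how a single dropdown answer could cheaply filter applications before a human reads them:

    # Minimal sketch: filter applications by a self-rated dropdown answer.
    # The field name "spanish" and the level labels are hypothetical.
    LEVELS = ["none", "basic", "conversational", "fluent"]

    applications = [
        {"name": "Ana", "spanish": "fluent"},
        {"name": "Bo", "spanish": "none"},
        {"name": "Cy", "spanish": "conversational"},
    ]

    def passes_screen(app, minimum="conversational"):
        # Keep only candidates whose self-rating meets the required level.
        return LEVELS.index(app["spanish"]) >= LEVELS.index(minimum)

    shortlist = [app["name"] for app in applications if passes_screen(app)]
    print(shortlist)  # ['Ana', 'Cy'] -- the "none" answer is filtered out cheaply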

But the benefit of Easy Apply (from the candidate's perspective) is the ease. John Doe candidate doesn't have to fill in a dozen different text boxes with information that is already on his resume. And that ease can be gained in an organization's own application form. An application form literally can be as simple as prompts for name, email address, and resume. That might be the most minimalistic that an application form could be while still being functional. And there are plenty of organizations that have these types of applications: companies that use Lever or Ashby often have very simple and easy job application forms (example 1, example 2).

Conversely, the more that organizations prompt candidates to explain "Why do you want to work for us" or "tell us about your most impressive accomplishment", the more burdensome it is for candidates. Of course, maybe making it burdensome for candidates is intentional, and the organization believes that this will lead to higher-quality candidates. There are some things that you can't really get information about by prompting candidates to select an item from a list.

Lakin @ 2024-04-24T14:47 (+2) in response to If You're Going To Eat Animals, Eat Beef and Dairy

I suspect this may also be true for some large fraction of the population.

Lakin @ 2024-04-24T14:54 (+1)

Few people I talk to in these communities know this, but animals several thousand pounds in size used to roam the Earth. And not just woolly mammoths either: 8,000-lb sloths (Megatherium), armadillos "roughly the same size and weight as a Volkswagen Beetle" (Glyptodon), 7,000-lb marsupials (Diprotodon), and many more. 

Suspiciously, the megafauna on each continent mostly went extinct every time humans got on that continent.[1][2] Personally, I suspect that humans largely evolved to hunt megafauna. 

Note that megafauna meat is quite different than meat of smaller animals. It had a much larger amount (and percentage!) of fat.[3]

These days I eat 1-1.5lbs of lean beef per day, and I supplement 8-16oz of fat in the form of butter. I've been eating basically just this (also some seafood, rarely a potato, rarely some other things) for the last 3.5 years.

  1. ^

    https://ourworldindata.org/quaternary-megafauna-extinction Every time humans got on a new continent, all of the megafauna died…

  2. ^
  3. ^
Lakin @ 2024-04-24T14:43 (+3) in response to If You're Going To Eat Animals, Eat Beef and Dairy

I do this for health reasons. I feel significantly better and have much more energy when I do this.

Lakin @ 2024-04-24T14:47 (+2)

I suspect this may also be true for some large fraction of the population.

Lakin @ 2024-04-24T14:42 (+3) in response to If You're Going To Eat Animals, Eat Beef and Dairy

I eat (almost) only meat and butter, and by my calculations this comes out to ~1 cow/year.

Lakin @ 2024-04-24T14:43 (+3)

I do this for health reasons. I feel significantly better and have much more energy when I do this.

Lakin @ 2024-04-24T14:42 (+3) in response to If You're Going To Eat Animals, Eat Beef and Dairy

I eat (almost) only meat and butter, and by my calculations this comes out to ~1 cow/year.
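
As a rough sanity check of the ~1 cow/year figure (assuming the 1-1.5 lb of beef per day mentioned in a sibling comment, and a hypothetical boneless yield of about 450 lb per beef steer):

    # Back-of-the-envelope check of "~1 cow/year"; the 450 lb yield is an assumption.
    yield_per_cow_lb = 450
    for daily_lb in (1.0, 1.5):
        cows_per_year = daily_lb * 365 / yield_per_cow_lb
        print(f"{daily_lb} lb/day -> {cows_per_year:.1f} cows/year")
    # 1.0 lb/day -> 0.8 cows/year; 1.5 lb/day -> 1.2 cows/year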

Stephen Clare @ 2024-04-24T14:29 (+7) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Your first job out of college is the hardest to get. Later on you'll be able to apply for jobs while working, which is less stressful, and you'll have a portfolio of successful projects you can point to. So hopefully it's some small comfort that applying for jobs will probably never suck as much as it does for you right now. I know how hard it can be though, and I'm sorry. A few years ago after graduating from my Master's, I submitted almost 30 applications before getting an offer and accepting one.

I do notice that the things you're applying to all seem very competitive. Since they're attractive positions at prestigious orgs, the applicant pool is probably unbelievably strong. When there are hundreds of very strong applicants applying for a handful of places, many good candidates simply have to get rejected. Hopefully that's some more small comfort. 

It may also be worth suggesting, though, for anyone in a similar position who may be reading this, that it's also fine to look for less competitive opportunities (particularly early on in your career). Our lives will be very long and adventurous (hopefully), and you may find it easier to get jobs at the MITs and Horizons and GovAIs of the world after getting some experience at organisations which may seem somewhat less prestigious.

To speak on my own experience, among those ~30 places that rejected me were some of the same orgs you mention (e.g. GovAI, OpenPhil, etc.). The offer I ended up accepting was from Founders Pledge. I was proud to get that offer and the FP research team there was and is very strong, but I do think it's probably the case that it was a somewhat less competitive application process. But ultimately I loved working at FP. I got to do some cool and rigorous research, and I've had very interesting work opportunities since. It's probably even the case that, at that point in my career, FP was a better place for me to end up than some of the other places I applied.

Eli_Nathan @ 2024-04-24T14:26 (+4) in response to Who's hiring? (Feb-May 2024)

CEA is hiring for someone to lead the EA Global program. CEA's three flagship EAG conferences facilitate tens of thousands of highly impactful connections each year that help people build professional relationships, apply for jobs, and make other critical career decisions.

This is a role that comes with a large amount of autonomy, and one that plays a central part in shaping a key piece of the effective altruism community’s landscape. 

See more details and apply here!

Davis_Kingsley @ 2024-04-24T14:22 (+1) in response to You probably want to donate any Manifold currency this week

Thanks for the tip! Just donated my mana to GiveDirectly.

Rebecca @ 2024-04-23T20:06 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

I find it’s very rare to have to do the work test in 1 sitting, and I at least usually do better if I can split it up a bit

Joseph Lemien @ 2024-04-24T14:15 (+3)

From my own experience as an applicant for EA organizations, I'd estimate that maybe 50% to 60% of the work sample tests or the tasks that I've been assigned have either requested or required that I complete it in one sitting.

And I do think that there is a lot of benefit in limiting the time candidates can spend on it, otherwise we might end up assessing Candidate A's ten hours of work and Candidate B's three hours of work. We want to make sure it is a fair evaluation of what each of them can do when we control for as many variables as possible.

Matthew Rendall @ 2024-04-24T11:30 (+9) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Thanks, Vasco! You are welcome to list me in the acknowledgements. I’m glad to have the reference to Tomasik’s post, which Timothy Chan also cited below, and appreciate the detailed response. That said, I doubt we should be agnostic on whether the overall effects of global heating on wild animals will be good or bad.

The main upside of global heating for animal welfare, on Tomasik’s analysis, is that it could decrease wild animal populations, and thus wild animal suffering. On balance, in his view, the destruction of forests and coral reefs is a good thing. But that relies on the assumption that most wild animal lives are worse than nothing. Tomasik and others have given some powerful reasons to think this, but there are also strong arguments on the other side. Moreover, as Clare Palmer argues, global heating might increase wild animal numbers—and even Tomasik doesn’t seem sure it would decrease them.

In contrast, the main downside, in Tomasik’s analysis, is less controversial: that global heating is going to cause a lot of suffering by destroying or changing the habitats to which wild animals are adapted. ‘An “unfavorable climate”’, notes Katie McShane, ‘is one where there isn’t enough to eat, where what kept you safe from predators and diseases in the past no longer works, where you are increasingly watching your offspring and fellow group members suffer and die, and where the scarcity of resources leads to increased conflict, destabilizing group structures and increasing violent confrontations.’ Palmer isn’t so sure: ‘Even if some animals suffer and die, climate change might result in an overall net gain in pleasure, or preference satisfaction (for instance) in the context of sentient animals. This may be unlikely, but it’s not impossible.’ True. But even if it’s only unlikely that global heating’s effects will be good, it means that its effects on existing animals are bad in expectation.

There’s another factor which Tomasik mentions in passing: there is some chance that global heating could lead to the collapse of human civilisation—perhaps in conjunction with other factors. In some respects, this would be a good thing for non-humans—notably, it would put an end to factory farming. It would also preclude the possibility of our spreading wild animal suffering to other planets. On the flipside, however, it would also eliminate the possibility of our doing anything sizable to mitigate wild animal suffering on earth.

Now, while there may be more doubt about the upsides than about the downsides of our GHG emissions, that needn’t decide the issue if the upsides are big enough. But even if Tomasik and others are right that wild animal lives are bad on net, there’s also doubt about whether global heating will reduce the number of wild animal lives. And even if both of these premises are met, I’m not sure they’d outweigh the suffering global heating would inflict on those wild animals who will exist.

I think you have misinterpreted what my article about discounting is recommending. In contrast to some other writers, I’m not calling for discounting at the lowest possible rate. Even at a rate of 2%, catastrophic damages evaporate in cost-benefit analysis if they occur more than a couple of centuries hence, thus giving next to no weight to the distant future. However, a traditional justification for discounting is that if we didn’t, we’d be obliged to invest nearly all our income, since the number of future people could be so great. I argue for discounting damages to those who would be much better off than we are at conventional rates, but giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist. My approach thus has affinities with the one advocated by Geir Asheim here
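
To make the quantitative claim above concrete, here is a small sketch of the standard exponential-discounting calculation (the 2% rate is from the paragraph above; the horizons are illustrative):

    # Standard exponential discounting: PV = damage / (1 + r)**t.
    # At r = 2%, damages a few centuries out are divided by large factors,
    # which is the sense in which they "evaporate" from cost-benefit analysis.
    r = 0.02
    for years in (100, 200, 300):
        shrink_factor = (1 + r) ** years
        print(f"{years} years: damages divided by ~{shrink_factor:.0f}")
    # 100 years: ~7x; 200 years: ~52x; 300 years: ~380x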

One implication is that while we’re under no obligation to make future rich people richer, we ought to be very worried about worst-case climate change scenarios, since in those humans could be poorer. Another is that since most non-humans for the foreseeable future will be worse off than we are, we shouldn’t discount their interests away. 

Vasco Grilo @ 2024-04-24T14:14 (+11)

Thanks for the follow up, Matthew! Strongly upvoted.

My best guess is also that additional GHG emissions are bad for wild animals, but it has very low resilience, so I do not want to advocate for conservationism. My views on the badness of the factory-farming of birds are much more resilient, so I am happy with people switching from poultry to beef, although I would rather have them switch to plant-based alternatives. Personally, I have been eating plant-based for 5 years.

Moreover, as Clare Palmer argues

Just flagging that this link seems broken.

I think you have misinterpreted what my article about discounting is recommending.

Sorry! It sounded so much like you were referring to Weitzman 1998 that I actually did not open the link. My bad! I have now changed "That paper says one should discount" to "One should discount".

a traditional justification for discounting is that if we didn’t, we’d be obliged to invest nearly all our income, since the number of future people could be so great.

I do not think this is a good argument for discounting. If it turns out we should invest nearly all our income to maximise welfare, then I would support it. In reality, I think the possibility of the number of future people being so great is more than offset by the rapid decay of how much we could affect such people, such that investing nearly all our income is not advisable.

I argue for discounting damages to those who would be much better off than we are at conventional rates, but giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist.

This rejects (perfect) impartiality, right? I strongly endorse expected total hedonistic utilitarianism, so I would rather maintain impartiality. At the same time, the above seems like a good heuristic for better outcomes even under fully impartial views.

SusannaF @ 2024-04-24T12:02 (+12) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Hi Ulrik - I'm not aware of farms which have slaughter facilities on-site (is this more common in the US than in the UK maybe?) and the 'small, local high welfare farm' is also a bit of a myth. The majority of farmed animals (85% in the UK, 99% in the US) are factory-farmed (i.e. raised in the most intensive conditions), are killed at a fraction of their natural lifespans, transported and killed in high-speed slaughterhouses - whilst abuses have been documented in both large and small 'local' slaughter facilities. The 2 conditions / requirements you have stipulated in your post are hypothetical / wishful-thinking type scenarios which are, unfortunately, not borne out by the realities of farming and killing billions of animals for consumption. 

Ulrik Horn @ 2024-04-24T14:12 (+2)

Ok that's good to know - I will probably be pretty vegan going forward. By the way I love all the hard evidence here on the EAF about animal welfare. It really makes me viscerally upset about the scale of abuse we currently inflict on our feathered and four-legged friends. So thanks to you and everyone else on further opening my eyes and heart to this.

Nathan Young @ 2024-04-18T08:56 (+33) in response to Nathan Young's Quick takes

An alternate stance on moderation (from @Habryka.)

This is from this comment responding to this post about there being too many bans on LessWrong. Note how LessWrong is less moderated than here in that it (I guess) responds to individual posts less often, but more moderated in that (I guess) it rate-limits people more without giving reasons. 

I found it thought provoking. I'd recommend reading it.

Thanks for making this post! 

One of the reasons why I like rate-limits instead of bans is that it allows people to complain about the rate-limiting and to participate in discussion on their own posts (so seeing a harsh rate-limit of something like "1 comment per 3 days" is not equivalent to a general ban from LessWrong, but should be more interpreted as "please comment primarily on your own posts", though of course it shares many important properties of a ban).

This is a pretty opposite approach to the EA forum which favours bans.

Things that seem most important to bring up in terms of moderation philosophy: 

Moderation on LessWrong does not depend on effort

"Another thing I've noticed is that almost all the users are trying.  They are trying to use rationality, trying to understand what's been written here, trying to apply Baye's rule or understand AI.  Even some of the users with negative karma are trying, just having more difficulty."

Just because someone is genuinely trying to contribute to LessWrong, does not mean LessWrong is a good place for them. LessWrong has a particular culture, with particular standards and particular interests, and I think many people, even if they are genuinely trying, don't fit well within that culture and those standards. 

In making rate-limiting decisions like this I don't pay much attention to whether the user in question is "genuinely trying " to contribute to LW,  I am mostly just evaluating the effects I see their actions having on the quality of the discussions happening on the site, and the quality of the ideas they are contributing. 

Motivation and goals are of course a relevant component to model, but that mostly pushes in the opposite direction, in that if I have someone who seems to be making great contributions, and I learn they aren't even trying, then that makes me more excited, since there is upside if they do become more motivated in the future.

I sense this is quite different to the EA forum too. I can't imagine a mod saying "I don't pay much attention to whether the user in question is 'genuinely trying'". I find this honesty pretty stark. Feels like a thing moderators aren't allowed to say: "We don't like the quality of your comments and we don't think you can improve".

Signal to Noise ratio is important

Thomas and Elizabeth pointed this out already, but just because someone's comments don't seem actively bad, doesn't mean I don't want to limit their ability to contribute. We do a lot of things on LW to improve the signal to noise ratio of content on the site, and one of those things is to reduce the amount of noise, even if the mean of what we remove looks not actively harmful. 

We of course also do other things than to remove some of the lower signal content to improve the signal to noise ratio. Voting does a lot, how we sort the frontpage does a lot, subscriptions and notification systems do a lot. But rate-limiting is also a tool I use for the same purpose.

Old users are owed explanations, new users are (mostly) not

I think if you've been around for a while on LessWrong, and I decide to rate-limit you, then I think it makes sense for me to make some time to argue with you about that, and give you the opportunity to convince me that I am wrong. But if you are new, and haven't invested a lot in the site, then I think I owe you relatively little. 

I think in doing the above rate-limits, we did not do enough to give established users the affordance to push back and argue with us about them. I do think most of these users are relatively recent or are users we've been very straightforward with since shortly after they started commenting that we don't think they are breaking even on their contributions to the site (like the OP Gerald Monroe, with whom we had 3 separate conversations over the past few months), and for those I don't think we owe them much of an explanation. LessWrong is a walled garden. 

You do not by default have the right to be here, and I don't want to, and cannot, accept the burden of explaining to everyone who wants to be here but who I don't want here, why I am making my decisions. As such a moderation principle that we've been aspiring to for quite a while is to let new users know as early as possible if we think them being on the site is unlikely to work out, so that if you have been around for a while you can feel stable, and also so that you don't invest in something that will end up being taken away from you.

Feedback helps a bit, especially if you are young, but usually doesn't

Maybe there are other people who are much better at giving feedback and helping people grow as commenters, but my personal experience is that giving users feedback, especially the second or third time, rarely tends to substantially improve things. 

I think this sucks. I would much rather be in a world where the usual reasons why I think someone isn't positively contributing to LessWrong were of the type that a short conversation could clear up and fix, but it alas does not appear so, and after having spent many hundreds of hours over the years giving people individualized feedback, I don't really think "give people specific and detailed feedback" is a viable moderation strategy, at least more than once or twice per user. I recognize that this can feel unfair on the receiving end, and I also feel sad about it.

I do think the one exception here is if people are young or are non-native English speakers. Do let me know if you are in your teens or you are a non-native English speaker who is still learning the language. People do really get a lot better at communication between the ages of 14-22 and people's English does get substantially better over time, and this helps with all kinds of communication issues.

Again this is very blunt but I'm not sure it's wrong. 

We consider legibility, but it's only a relatively small input into our moderation decisions

It is valuable and a precious public good to make it easy to know which actions you take will cause you to end up being removed from a space. However, that legibility also comes at great cost, especially in social contexts. Every clear and bright-line rule you outline will have people butting right up against it, and de-facto, in my experience, moderation of social spaces like LessWrong is not the kind of thing you can do while being legible in the way that for example modern courts aim to be legible. 

As such, we don't have laws. If anything we have something like case-law which gets established as individual moderation disputes arise, which we then use as guidelines for future decisions, but also a huge fraction of our moderation decisions are downstream of complicated models we formed about what kind of conversations and interactions work on LessWrong, and what role we want LessWrong to play in the broader world, and those shift and change as new evidence comes in and the world changes.

I do ultimately still try pretty hard to give people guidelines and to draw lines that help people feel secure in their relationship to LessWrong, and I care a lot about this, but at the end of the day I will still make many from-the-outside-arbitrary-seeming-decisions in order to keep LessWrong the precious walled garden that it is.

I try really hard to not build an ideological echo chamber

When making moderation decisions, it's always at the top of my mind whether I am tempted to make a decision one way or another because they disagree with me on some object-level issue. I try pretty hard to not have that affect my decisions, and as a result have what feels to me a subjectively substantially higher standard for rate-limiting or banning people who disagree with me, than for people who agree with me. I think this is reflected in the decisions above.

I do feel comfortable judging people on the methodologies and abstract principles that they seem to use to arrive at their conclusions. LessWrong has a specific epistemology, and I care about protecting that. If you are primarily trying to... 

  • argue from authority, 
  • don't like speaking in probabilistic terms, 
  • aren't comfortable holding multiple conflicting models in your head at the same time, 
  • or are averse to breaking things down into mechanistic and reductionist terms, 

then LW is probably not for you, and I feel fine with that. I feel comfortable reducing the visibility or volume of content on the site that is in conflict with these epistemological principles (of course this list isn't exhaustive, in-general the LW sequences are the best pointer towards the epistemological foundations of the site).

It feels cringe to read that, basically, if I don't get the Sequences, LessWrong might rate-limit me. But it is good to be open about it. I don't think the EA Forum's core philosophy is as easily expressed.

If you see me or other LW moderators fail to judge people on epistemological principles but instead see us directly rate-limiting or banning users on the basis of object-level opinions that even if they seem wrong seem to have been arrived at via relatively sane principles, then I do really think you should complain and push back at us. I see my mandate as head of LW to only extend towards enforcing what seems to me the shared epistemological foundation of LW, and to not have the mandate to enforce my own object-level beliefs on the participants of this site.

Now some more comments on the object-level: 

I overall feel good about rate-limiting everyone on the above list. I think it will probably make the conversations on the site go better and make more people contribute to the site. 

Us doing more extensive rate-limiting is an experiment, and we will see how it goes. As kave said in the other response to this post, the rule that suggested these specific rate-limits does not seem like it has an amazing track record, though I currently endorse it as something that calls things to my attention (among many other heuristics).

Also, if anyone reading this is worried about being rate-limited or banned in the future, feel free to reach out to me or other moderators on Intercom. I am generally happy to give people direct and frank feedback about their contributions to the site, as well as how likely I am to take future moderator actions. Uncertainty is costly, and I think it's worth a lot of my time to help people understand to what degree investing in LessWrong makes sense for them. 

JP Addison @ 2024-04-24T13:46 (+6)

I want to throw in a bit of my philosophy here.

Status note: This comment is written by me and reflects my views. I ran it past the other moderators, but they might have major disagreements with it.

I agree with a lot of Jason’s view here. The EA community is indeed much bigger than the EA Forum, and the Forum would serve its role as an online locus much less well if we used moderation action to police the epistemic practices of its participants.

I don’t actually think this is that bad. I think it is a strength of the EA community that it is large enough and has sufficiently many worldviews that any central discussion space is going to be a bit of a mishmash of epistemologies.[1]

Some corresponding ways this viewpoint causes me to be reluctant to apply Habryka’s philosophy:[2]

Something like a judicial process is much more important to me. We try much harder than my read of LessWrong to apply rules consistently. We have the Forum Norms doc, and our public history of cases forms something much closer to a legal code + case law than LW has. Obviously we’re far away from what would meet a judicial standard, but I view much of my work through that lens. Also notable is that all nontrivial moderation decisions get one or two moderators to second the proposal.

Related both to the epistemic diversity, and the above, I am much more reluctant to rely on my personal judgement about whether someone is a positive contributor to the discussion. I still do have those opinions, but am much more likely to use my power as a regular user to karma-vote on the content.

Some points of agreement: 

Old users are owed explanations, new users are (mostly) not

Agreed. We are much more likely to make judgement calls in cases of new users. And much less likely to invest time in explaining the decision. We are still much less likely to ban new users than LessWrong. (Which, to be clear, I don’t think would have been tenable on LessWrong when they instituted their current policies, which was after the launch of GPT-4 and a giant influx of low quality content.)

I try really hard to not build an ideological echo chamber

Most of the work I do as a moderator is reading reports and recommending no official action. I have the internal experience of mostly fighting others to keep the Forum an open platform. Obviously that is a compatible experience with overmoderating the Forum into an echo chamber, but I will at least bring this up as a strong point of philosophical agreement.

Final points:

I do think we could potentially give more “near-ban” rate limits, such as the 1 comment/3 days. The main benefit I see in this is that it allows the user to write content disagreeing with their ban.

  1. ^

    Controversial point! Maybe if everyone adopted my own epistemic practices the community would be better off. It would certainly gain in the ability to communicate smoothly with itself, and would probably spend less effort pulling in opposite directions as a result, but I think the size constraints and/or deference to authority that would be required would not be worth it.

  2. ^

    Note that Habryka has been a huge influence on me. These disagreements are what remains after that influence.

JP Addison @ 2024-04-24T13:42 (+13) in response to JP Addison's Quick takes

With the US presidential election coming up this year, some of y’all will probably want to discuss it.[1] I think it’s a good time to restate our politics policy. tl;dr Partisan politics content is allowed, but will be restricted to the Personal Blog category. On-topic policy discussions are still eligible as frontpage material.

  1. ^

    Or the expected UK elections.

Denis @ 2024-04-24T13:34 (+1) in response to Announcing The New York Declaration on Animal Consciousness

Is it random that this appeared in the New York Times yesterday, or are the two related?

How Do We Know What Animals Are Really Feeling? - The New York Times (nytimes.com)

Regardless, it is great to see more realisation and communication around this topic. Most people just do not make any mental association between "food" and "animal suffering". One day this will all appear utterly barbaric, the way slavery appears barbaric to us today even though some highly reputed figures throughout history owned slaves. 

The more communication we have around animal consciousness and suffering, the faster that will happen. 

The best kind of communication may well be the kind that is not "accusatory" - just informative. Let people think about it for themselves rather than telling them what to think. 

Ultimately, maybe the best hope for ending animal suffering is alternative protein, and it is shocking how little money and effort is committed to this, given that it's also critical for climate, for hunger-reduction, for resilience. Alternative protein offers the potential to tell people "here is a cheaper, healthier, tastier, climate-friendlier... alternative to meat, which also avoids animal suffering." 

There are thousands of people who would jump on that statement and say it's unrealistic, but it's absolutely not. It's just that we're not treating it like the emergency that it is; we're not putting the same resources into it that we're putting into making more powerful iPhones. We could choose to. 

 

Jason @ 2024-04-24T13:16 (+2) in response to You probably want to donate any Manifold currency this week

Also, there's a statement in a publicly-accessible stand up meeting summary (speaker unknown) that "I also tentatively think Manifund wants to end the charity program after this"

https://manifoldmarkets.notion.site/Standup-de82dcce7411478fa52048c229a2eda2

(screenshot on file in my email)

William_MacAskill @ 2024-04-18T11:47 (+147) in response to Personal reflections on FTX

On talking about this publicly

A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.” 

Shortly after the collapse, I drafted a blog post and responses to comments on the Forum. I was also getting a lot of media requests, and I was somewhat sympathetic to the idea of doing podcasts about the collapse — defending EA in the face of the criticism it was getting. My personal legal advice was very opposed to speaking publicly, for reasons I didn’t wholly understand; the reasons were based on a general principle rather than anything to do with me, as they’ve seen a lot of people talk publicly about ongoing cases and it’s gone badly for them, in a variety of ways. (As I’ve learned more, I’ve come to see that this view has a lot of merit to it). I can’t remember EV’s view, though in general it was extremely cautious about communication at that time. I also got mixed comments on whether my Forum posts were even helpful; I haven’t re-read them recently, but I was in a pretty bad headspace at the time. Advisors said that by January things would be clearer. That didn’t seem like that long to wait, and I felt very aware of how little I knew.

The “time at which it’s ok to speak”, according to my advisors, kept getting pushed back. But by March I felt comfortable, personally, about speaking publicly. I had a blog post ready to go, but by this point the Mintz investigation (that is, the investigation that EV had commissioned) had gotten going. Mintz were very opposed to me speaking publicly. I think they said something like that my draft was right on the line where they’d consider resigning from running the investigation if I posted it. They thought the integrity of the investigation would be compromised if I posted, because my public statements might have tainted other witnesses in the investigation, or had a bearing on what they said to the investigators. EV generally wanted to follow Mintz’s view on this, but couldn’t share legal advice with me, so it was hard for me to develop my own sense of the costs and benefits of communicating. 

By December, the Mintz report was fully finished and the bankruptcy settlement was completed. I was travelling (vacation and work) over December and January, and aimed to record podcasts on FTX in February. That got delayed by a month because of Sam Harris’s schedule, so they got recorded in March. 

It’s still the case that talking about this feels like walking through a minefield. There’s still a real risk of causing unjustified and unfair lawsuits against me or other people or organisations, which, even if frivolous, can impose major financial costs and lasting reputational damage. Other relevant people also don’t want to talk about the topic, even if just for their own sanity, and I don’t want to force their hand. In my own case, thinking and talking about this topic feels like fingering an open wound, so I’m sympathetic to their decision.

Denis @ 2024-04-24T13:12 (+15)

I've had quite a few disagreements with other EAs about this, but I will repeat it here, and maybe get more downvotes. But I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves whether it would be wise for us to do things differently. 

EA is part of a real world which isn't necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do - it impacts our ability to get donations, to carry out projects, to influence policy. 

We all believe we're willing to make sacrifices to help EA succeed. 

Here's the hard part: Sometimes the sacrifice we have to make is to go against our own natural desire to do what feels right. 

It feels right that Will and other people from EA should make public statements about how bad we feel about FTX and how we'll try to do better in future and so on. 

But the legal advice Will got was correct, and was also what was best for EA. 

There was zero chance that the FTX scandal could reflect positively on EA. But there were steps Will and others could take to minimise the damage to the EA movement. 

The most important of these is to distance ourselves from the crimes that SBF committed. He committed those crimes. Not EA. Not Will. SBF caused massive harm to EA and to Will. 

I see a lot of EAs soul-searching and asking what we could have done differently. Which is good in a way. But we need to be very careful. Admitting that we (the EA movement) should have done better is tantamount to admitting that we did something wrong, which is quickly conflated in public opinion with "SBF and EA are closely intertwined, one and the same." (Remember how low public awareness of EA is in general.) 

The communication needs to be: EA was defrauded by SBF. He has done us massive harm. We want to make sure nobody will ever do that to EA again. We need to ensure that any public communication puts SBF on one side, and EA on the other side, a victim of his crimes just like the millions of investors. 

The fact that he saw himself as an EA is not the point. Nobody in EA encouraged him to commit fraud. People in EA may have been a bit naive, but nobody in EA was guilty of defrauding millions of investors. That was SBF. 

So Will's legal advice was spot on. Any immediate statement would have seemed defensive, as if he had something to feel guilty about, which would have resulted in more harm to the public perception of EA because of association with SBF.  

  • SBF committed crimes. 
  • Neither Will nor EA committed crimes or contributed to SBF's crimes. 
  • SBF defrauded and harmed millions of investors.
  • SBF also defrauded and harmed the EA movement. 
  • The EA movement is angry with SBF. We want to make sure that nobody ever does that to us again. 

As "good people", we all want to look back and ask if there was something we could have done differently that would have prevented Sam from harming those millions of innocent investors. It is natural to wonder, the same way we see any tragedy and wonder if we could have prevented it. But we need to be very careful about the PR aspects of this (and yes, we all hate PR, but it is reality - read Pirandello if you don't believe me!). If we start making statements that suggest that we did something wrong, we're just going to be directing some of the public anger away from SBF and towards EA. I don't think that's helpful. 

There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened. 

David Thorstad @ 2024-04-24T03:12 (+4) in response to Motivation gaps: Why so much EA criticism is hostile and lazy

I’d like to hope that academics are aiming for a level of understanding above that of a typical user on an Internet forum.

All academic works have a right to reply. Many journals print response papers and it is a live option to submit responses to critical papers, including mine. It is also common to respond to others in the context of a larger paper. The only limit to the right of academic reply is that the response must be of suitable quality and interest to satisfy expert reviewers.

Richard Y Chappell @ 2024-04-24T13:01 (+8)

Realistically, it is almost never in an academic's professional interest to write a reply paper (unless they are completely starved of original ideas). Referees are fickle, and if the reply isn't accepted at the original journal, very few other journals will even consider it, making it a bad time investment. (A real "right of reply" -- where the default expectation switches from 'rejection' to 'acceptance' -- might change the incentives here.)

Example: early in my career, I wrote a reply to an article that was published in Ethics. The referees agreed with my criticisms, and rejected my reply on the grounds that this was all obvious and the original paper never should have been published. I learned my lesson and now just post replies to my blog since that's much less time-intensive (and probably gets more readers anyway).

Damin Curtis @ 2024-04-24T11:19 (0) in response to You probably want to donate any Manifold currency this week

Nice, just donated my $31.23 worth of Mana to GiveWell! Wouldn't have known to do that otherwise, and took about 30 seconds. Thanks for the post :)

OllieBase @ 2024-04-24T12:44 (+1)

Maybe let the forum team know here (Toby's comment above) :)

Kerkko Pelttari @ 2024-04-24T10:37 (+5) in response to You probably want to donate any Manifold currency this week

Same, I only had ~800 mana free but wouldn't have realized to donate it otherwise, and it only took a minute.

OllieBase @ 2024-04-24T12:44 (+1)

Maybe let the forum know here (Toby's comment above) :)

tobytrem @ 2024-04-24T12:32 (+17) in response to You probably want to donate any Manifold currency this week

Thanks for sharing this on the Forum! 
If you (the reader) have donated your mana because of this post, I'd love it if you put a react on this comment. 

harfe @ 2024-04-23T04:06 (+49) in response to harfe's Quick takes

Consider donating all or most of your Mana on Manifold to charity before May 1.

Manifold is making multiple changes to the way Manifold works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.

Also this part might be relevant for people with large positions they want to sell now:

One week may not be enough time for users with larger portfolios to liquidate and donate. We want to work individually with anyone who feels like they are stuck in this situation and honor their expected returns and agree on an amount they can donate at the original 100:1 rate past the one week deadline once the relevant markets have resolved.

tobytrem @ 2024-04-24T12:32 (+2)

Thanks for sharing this on the Forum! 
If you (the reader) have donated your mana because of this quick take, I'd love it if you put a react on this comment. 

Ulrik Horn @ 2024-04-24T09:09 (+2) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Thanks for writing this, it drives home to me the point of taking a broad perspective when making ethical choices. I am wondering if you take animal product consumption a step further and look at only eating animal products where you know both of the below are true?

  1. The animals have a very high degree of welfare (think small, local farms you can visit, you know the farmer, etc.)
  2. The way they are slaughtered is the most humane possible, ideally on-farm etc. so they more or less have no idea what is coming for them until they are gone - in my mind this more or less has no suffering from a utilitarian perspective (unless the animals somehow are able to anticipate the slaughter and have increased anxiety throughout their lives because of it).

I have been pretty vegan so far, but people around me are arguing for the type of animal products above and I have a hard time pushing back on it.

SusannaF @ 2024-04-24T12:02 (+12)

Hi Ulrik - I'm not aware of farms which have slaughter facilities on-site (is this more common in the US than in the UK maybe?) and the 'small, local high welfare farm' is also a bit of a myth. The majority of farmed animals (85% in the UK, 99% in the US) are factory-farmed (i.e. raised in the most intensive conditions), are killed at a fraction of their natural lifespans, transported and killed in high-speed slaughterhouses - whilst abuses have been documented in both large and small 'local' slaughter facilities. The 2 conditions / requirements you have stipulated in your post are hypothetical / wishful-thinking type scenarios which are, unfortunately, not borne out by the realities of farming and killing billions of animals for consumption. 

Elizabeth @ 2024-04-24T03:46 (+5) in response to Personal reflections on FTX

that makes sense, sounds like it wasn't the concern for at least your group. He does describe it as "The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless", so unless there were multiple waves of management quitting it sounds like the book conflated multiple stories. 

Lorenzo Buonanno @ 2024-04-24T11:49 (+17)

Just to clarify, it seems that "The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless" is from Matt Levine, not from Michael Lewis.

I'm quickly skimming the relevant parts of Going Infinite, and it seems to me that Lewis highlights other issues as even more relevant than the missing $4M

Linch @ 2024-04-23T23:40 (+8) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

A relevant reframing here is whether having a PhD provides a large Bayes factor update on being hired. E.g., if people with and without PhDs each have a 2% chance of being hired, and ">50% of successful applicants had a PhD" only because most applicants have a PhD, then you should probably not include this. But if 1 in 50 applicants is hired overall, rising to 1 in 10 for those with a PhD and falling to 1 in 100 for those without, then the PhD is a massive evidential update even if there is no causal effect.

ixex @ 2024-04-24T11:40 (+1)

Exactly

Vasco Grilo @ 2024-04-24T07:38 (+5) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Nice points, Matthew!

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans.

I have now clarified my estimate of the harms of GHG emissions only accounts for humans. I have also added:

estimated the scale of the welfare of wild animals is 4.21 M times that of farmed animals. Nonetheless, I have neglected the impact of GHG emissions on wild animals due to their high uncertainty. According to Brian Tomasik:

“On balance, I’m extremely uncertain about the net impact of climate change on wild-animal suffering; my probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future).”

In particular, it is unclear whether wild animals have positive/negative welfare.

I have added your name to the Acknowledgements. Let me know if you would rather remain anonymous.

(b) It appears you were working with a study that employed a discount rate of 2%. That's going to discount damages in 100 years to 13% of their present value, and damages in 200 years to 1.9% of their present value--and it goes downhill from there. But that seems very hard to justify. Discounting is often defended on the ground that our descendants will be richer than we are.

Carleton 2022 presents results for various discount rates, but I used the ones for their preferred value of 2 %. I have a footnote saying:

“Our preferred estimates use a discount rate of 2 %”. This is 1.08 (= 0.02/0.0185) times the 1.85 % (= (17.5/9.72)^(1/(2022 - 1990)) - 1) annual growth rate of global real GDP per capita from 1990 to 2022. The adequate growth rate may be higher due to transformative AI, or lower owing to stagnation. I did not want to go into these considerations, so I just used Carleton 2022’s mainstream value.


But that rationale doesn’t apply to damages in worst-case scenarios. Because they could be so enduring, these damages are huge in expectation.

I used to think this was relevant, but mostly no longer do:

  • One should discount the future at the lowest possible rate, but it might still be the case that this is not much lower than 2 % (for reasons besides pure time discounting, which I agree should be 0).
  • I believe human extinction due to climate change is astronomically unlikely. I have a footnote with the following. "For donors interested in interventions explicitly targeting existential risk mitigation, I recommend donating to LTFF, which mainly supports AI safety. I guess existential risk from climate change is smaller than that from nuclear war (relatedly), and estimated the nearterm annual risk of human extinction from nuclear war is 5.93*10^-12, whereas I guess that from AI is 10^-6".
  • I guess human extinction is very unlikely to be an existential catastrophe. "For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, to be existential". You can check the details of the Fermi estimate in the post.
  • If your worldview is such that very unlikely outcomes of climate change still have meaningful expected value, the same will tend to apply to our treatment of animals. For example, I assume you would have to consider effects on digital minds.
  • I am open to indirect longterm effects dominating the expected value, but I suppose maximising more empirically quantifiable, less uncertain effects on welfare is still a great heuristic.
Matthew Rendall @ 2024-04-24T11:30 (+9)

Thanks, Vasco! You are welcome to list me in the acknowledgements. I’m glad to have the reference to Tomasik’s post, which Timothy Chan also cited below, and appreciate the detailed response. That said, I doubt we should be agnostic on whether the overall effects of global heating on wild animals will be good or bad.

The main upside of global heating for animal welfare, on Tomasik’s analysis, is that it could decrease wild animal populations, and thus wild animal suffering. On balance, in his view, the destruction of forests and coral reefs is a good thing. But that relies on the assumption that most wild animal lives are worse than nothing. Tomasik and others have given some powerful reasons to think this, but there are also strong arguments on the other side. Moreover, as Clare Palmer argues, global heating might increase wild animal numbers—and even Tomasik doesn’t seem sure it would decrease them.

In contrast, the main downside, in Tomasik’s analysis, is less controversial: that global heating is going to cause a lot of suffering by destroying or changing the habitats to which wild animals are adapted. ‘An “unfavorable climate”’, notes Katie McShane, ‘is one where there isn’t enough to eat, where what kept you safe from predators and diseases in the past no longer works, where you are increasingly watching your offspring and fellow group members suffer and die, and where the scarcity of resources leads to increased conflict, destabilizing group structures and increasing violent confrontations.' Palmer isn’t so sure: ‘Even if some animals suffer and die, climate change might result in an overall net gain in pleasure, or preference satisfaction (for instance) in the context of sentient animals. This may be unlikely, but it’s not impossible.’ True. But even if it’s only unlikely that global heating’s effects will be good, it means that its effects on existing animals are bad in expectation.

There’s another factor which Tomasik mentions in passing: there is some chance that global heating could lead to the collapse of human civilisation—perhaps in conjunction with other factors. In some respects, this would be a good thing for non-humans—notably, it would put an end to factory farming. It would also preclude the possibility of our spreading wild animal suffering to other planets. On the flipside, however, it would also eliminate the possibility of our doing anything sizable to mitigate wild animal suffering on earth.

Now, while there may be more doubt about the upsides than about the downsides of our GHG emissions, that needn’t decide the issue if the upsides are big enough. But even if Tomasik and others are right that wild animal lives are bad on net, there’s also doubt about whether global heating will reduce the number of wild animal lives. And even if both of these premises are met, I’m not sure they’d outweigh the suffering global heating would inflict on those wild animals who will exist.

I think you have misinterpreted what my article about discounting is recommending. In contrast to some other writers, I’m not calling for discounting at the lowest possible rate. Even at a rate of 2%, catastrophic damages evaporate in cost-benefit analysis if they occur more than a couple of centuries hence, thus giving next to no weight to the distant future. However, a traditional justification for discounting is that if we didn’t, we’d be obliged to invest nearly all our income, since the number of future people could be so great. I argue for discounting damages to those who would be much better off than we are at conventional rates, but giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist. My approach thus has affinities with the one advocated by Geir Asheim here

One implication is that while we’re under no obligation to make future rich people richer, we ought to be very worried about worst-case climate change scenarios, since in those humans could be poorer. Another is that since most non-humans for the foreseeable future will be worse off than we are, we shouldn’t discount their interests away. 

Damin Curtis @ 2024-04-24T11:19 (0) in response to You probably want to donate any Manifold currency this week

Nice, just donated my $31.23 worth of Mana to GiveWell! Wouldn't have known to do that otherwise, and took about 30 seconds. Thanks for the post :)

OscarD @ 2024-04-24T11:05 (+3) in response to Priors and Prejudice

Great post, and an interesting counterfactual history!

Hooray for moral trade.

Evolutionary debunking arguments feel relevant re the causal history of our beliefs.

squeezy @ 2024-04-24T11:03 (+4) in response to Three Reasons Early Detection Interventions Are Not Obviously Cost-Effective

Thank you for writing this article! As a complete newcomer to pandemic preparedness at large, I found this extremely useful and a great example of work that surfaces and questions often unstated assumptions.

Although I don't have enough expertise to provide much meaningful feedback, I did want to bring up some thoughts I had regarding your arguments in Reason 2. Your 44 hospitalizations threshold in the numerical examples strikes me as reasonable, but it does also seem to me that the metagenomic sequencing of COVID-19 was related to ― if not a critical precondition for ― China confirming its finding of a novel pathogen (source). I recognize that the early detection interventions you are calling into question here may be more of the form of mass/representative sampling programs, but it seems plausible to me that merely having the means to isolate and sequence a pathogen near the site of the outbreak in question could substantially affect time to confirmation.

My prior is that China is likely quite capable in that regard, but other countries may have fewer capabilities. All this to say that investing in more conventional metagenomic sequencing capacity in "sequencing deserts" could still be very cost effective. But note that this is all conjectural; I don't know anything about the distribution of sequencing capacity, nor even what it takes to identify, isolate and sequence a pathogen.

Thanks again for this brilliant piece!

Angelina Li @ 2024-04-24T09:31 (+2) in response to You probably want to donate any Manifold currency this week

Thanks, I found this a helpful nudge, and wouldn't have known about this otherwise :)

Kerkko Pelttari @ 2024-04-24T10:37 (+5)

Same, I only had ~800 mana free but wouldn't have realized to donate it otherwise, and it only took a minute.

yanni kyriacos @ 2024-04-23T23:55 (+3) in response to Yanni Kyriacos's Quick takes

This is an extremely "EA" request from me but I feel like we need a word for people (i.e. me) who are Vegans but will eat animal products if they're about to be thrown out. OpportuVegan? UtilaVegan?

tobytrem @ 2024-04-24T10:35 (+3)

If you predictably do this, you raise the odds that people around you will cook or buy some extra food so that it will be "thrown out", or offer you food they haven't quite finished (and that they'll replace with a snack later).
So I'd recommend going with "Vegan" as your label, for practical as well as signalling reasons. 

OscarD @ 2024-04-24T10:35 (+3) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

One thing I have heard is that having long-ish application stages provides value by getting more people to think about relevant topics (I have heard this from at least two orgs, I think). E.g. having several hundred people spend an hour writing a paragraph about an AI safety topic might be good simply by virtue of getting more people to think about it. I haven't seen a write-up weighing up the pros and cons of this though. I agree this can be bad for applicants.

Heramb Podar @ 2024-04-24T10:31 (+1) in response to Heramb Podar's Quick takes

I don't think we have a good answer to what happens after we do auditing of an AI model and find something wrong.

 

Given that our current understanding of AI's internal workings is at least a generation behind, it's not exactly like we can isolate what mechanism is causing certain behaviours. (Would really appreciate any input here- I see very little to no discussion on this in governance papers; it's almost as if policy folks are oblivious to the technical hurdles which await working groups)

yanni kyriacos @ 2024-04-23T23:55 (+3) in response to Yanni Kyriacos's Quick takes

This is an extremely "EA" request from me but I feel like we need a word for people (i.e. me) who are Vegans but will eat animal products if they're about to be thrown out. OpportuVegan? UtilaVegan?

Bella @ 2024-04-24T10:18 (+3)

I think the term I've heard (from non-EAs) is 'freegan' (they'll eat it if it didn't cause more animal products to be purchased!)

Ulrik Horn @ 2024-04-24T09:09 (+2) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Thanks for writing this, it drives home to me the point of taking a broad perspective when making ethical choices. I am wondering if you take animal product consumption a step further and look at only eating animal products where you know both of the below are true?

  1. The animals have a very high degree of welfare (think small, local farms you can visit, you know the farmer, etc.)
  2. The way they are slaughtered is the most humane possible, ideally on-farm etc. so they more or less have no idea what is coming for them until they are gone - in my mind this more or less has no suffering from a utilitarian perspective (unless the animals somehow are able to anticipate the slaughter and have increased anxiety throughout their lives because of it).

I have been pretty vegan so far, but people around me are arguing for the type of animal products above and I have a hard time pushing back on it.

Bella @ 2024-04-24T09:57 (+4)

Not opining on the overall question, but FWIW I'm not sure on-farm slaughter is better. Reason being — I think that large slaughterhouses have "smoother" processes and (per animal killed) are less likely to end up with e.g. no stunning, stunning but resuscitation before being killed, etc.

But this does have to be weighed against the stress of transport, and I bet in a lot of cases it'd have been better to have on-farm slaughter given the length & conditions of transport.

Angelina Li @ 2024-04-24T09:31 (+2) in response to You probably want to donate any Manifold currency this week

Thanks, I found this a helpful nudge, and wouldn't have known about this otherwise :)

Ulrik Horn @ 2024-04-24T09:09 (+2) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Thanks for writing this, it drives home to me the point of taking a broad perspective when making ethical choices. I am wondering if you take animal product consumption a step further and look at only eating animal products where you know both of the below are true?

  1. The animals have a very high degree of welfare (think small, local farms you can visit, you know the farmer, etc.)
  2. The way they are slaughtered is the most humane possible, ideally on-farm etc. so they more or less have no idea what is coming for them until they are gone - in my mind this more or less has no suffering from a utilitarian perspective (unless the animals somehow are able to anticipate the slaughter and have increased anxiety throughout their lives because of it).

I have been pretty vegan so far, but people around me are arguing for the type of animal products above and I have a hard time pushing back on it.

Matthew Rendall @ 2024-04-23T09:49 (+15) in response to If You're Going To Eat Animals, Eat Beef and Dairy

So far as it goes, your argument seems correct. But you're leaving out a significant factor here--carbon emissions. Beef cattle are extraordinarily carbon intensive even compared to other animals raised for food. If you eat them, your emissions, combined with other people's emissions, are going to cause a huge amount of both human and non-human suffering.

There's a complication. You could, in principle, offset the damage from your carbon emissions. But you could also, in principle, eat animals who have been raised free range, and whose lives have probably been worth living up to the time they're killed. 

Both of these will require you to spend extra money, and investigate whether you're really getting what you pay for. Rather than going to all this trouble--and here we'll agree--it seems a lot better simply to eat an Impossible Burger. 

Ulrik Horn @ 2024-04-24T09:05 (+3)

Perhaps it is included in existing comments, but there is a forum post on climate change vs global development showing that one should hesitate about always prioritizing the former. Then, as I understand it, if one gives only some weight to animals compared to people, I would expect it very roughly follows that one should definitely be cautious about prioritizing climate change over animal welfare. Hopefully we can find a solution that lets us avoid this trade-off though!

Linch @ 2024-04-23T23:40 (+8) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

A relevant reframing here is whether having a PhD provides a large Bayes factor update on being hired. E.g., if people with and without PhDs each have a 2% chance of being hired, and ">50% of successful applicants had a PhD" only because most applicants have a PhD, then you should probably not include this. But if 1 in 50 applicants is hired overall, rising to 1 in 10 for those with a PhD and falling to 1 in 100 for those without, then the PhD is a massive evidential update even if there is no causal effect.

David_Moss @ 2024-04-24T08:48 (+2)

I think this is one piece of information you would need to include to stop such a statement being misleading, but as I argue here, there are potentially lots of other pieces of information which would need to be included to make it non-misleading (i.e. information about any and all other confounders which explain the association).

Otherwise, applicants will not know that conditional on X, they are not less likely to be successful, if they do not have a PhD (even though disproportionately many people with X have a PhD).

ixex @ 2024-04-23T22:33 (+3) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Even if your current best guess is that it's not causal, if having a PhD meaningfully increases your chances of getting hired conditional on having applied, that information would help candidates get a better sense of their probability of getting hired

[edited to specify that I meant conditional on applying]

David_Moss @ 2024-04-24T08:43 (+3)

As I suggested in my first comment, you could do the same "by reporting other characteristics which play no role in selection, but which are heavily over-represented in successful applicants": for example, you could report that >50% of successful applicants are male,[1] white, live in certain countries, >90% have liberal political beliefs, and probably a very disproportionately large number have read Harry Potter fan fic.[2] Presumably one could identify other traits which are associated with success via their association with these other traits e.g. if most successful applicants have PhDs and PhDs disproportionately tend to [drink red wine, ski etc.], then successful applicants may also disproportionately have these traits.

Of course, different people can disagree about whether or not each of these are causal. But even if they are predictive, I imagine that we would agree that at least one of these would likely mislead people. For example, having read Harry Potter fan fic is associated with being involved with communities interested in EA-related jobs for largely arbitrary historical reasons.[3] 

This concern is particularly acute when we take into account the pragmatics of employers highlighting some specific fact.[4] People typically don't offer irrelevant information for no reason. So if orgs go out of their way to say ">50% of successful applicants have PhDs", even with the caveat about this not being causal, applicants will still reasonably wonder "Why are they telling me this?" and many will reasonably infer "What they want to convey is that this is a very competitive position and I should not apply."

As I mentioned in the footnote of my comment above, there are jobs where this would be a reasonable inference. But I think most EA jobs are not like this.

If one wanted to provide applicants with full, non-misleading information, I think you would need to distinguish which of the cases applies, and provide a full account of the association which explains why successful applicants might often have PhDs, but that this is not the case when you control for x, y, z. That way (in theory), applicants would be able to know that conditional on them being a person who meets the requirements specified in the application (e.g. they can complete the coding test task), the fact that they don't have a PhD does or does not imply anything about their chances of success. But I think that in practice, providing such an account for any given trait is either very difficult or impossible.[5] 

  1. ^

    Though in EA Survey data, there is no significant gender difference in likelihood of having an EA job. In fact, a slightly larger proportion of women tend to have EA jobs.

  2. ^

    None of these reflect real numbers from any actual hiring rounds, though they do reflect general disparities observed in the wider community.

  3. ^

    Of course, you could describe a situation where having read Harry Potter fan fic actually serves as a useful indicator of some relevant trait like involvement in the EA community. But, again, I'm not referring to cases like this. Even in cases where involvement in the EA community is of no relevance to the role at all (e.g. all you need to do to be hired is to perform some technical, testable skill, like coding very well), applicants are likely to be disproportionately interested in EA, and successful applicants may be yet further disproportionately interested in EA, even if it has nothing to do with selection.

    This can happen if, for example, 50% of the applications are basically spam (e.g. applications from a large job site, who have barely read the job advert and don't have any relevant skills but are applying for everything they can click on). In such cases, the subset of applications who are actually vaguely relevant, will be disproportionately people with an interest in EA, people with degrees etc.

  4. ^

    In some countries there may be a norm of releasing information about certain characteristics, in which case this consideration doesn't apply for those characteristics, but would for others.

  5. ^

    And that is not taking into account the important question of whether all applicants would actually update on such information completely rationally, or whether many would be irrationally inclined to be negative about their chances, and just conclude that they aren't good enough to apply if they don't have a PhD from a fancy institution.  
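
To illustrate the confounding point with a toy model (all parameters hypothetical and not drawn from any real hiring round), here is a minimal simulation in which selection depends only on a test score, yet PhD holders still end up heavily over-represented among hires:

```python
# A minimal simulation of the confounding point above (all parameters
# hypothetical): selection is purely on a test score, yet PhDs end up
# over-represented among hires because PhD status correlates with the score.
import random

random.seed(0)
applicants = []
for _ in range(10_000):
    has_phd = random.random() < 0.3                    # 30% of applicants
    # PhD holders score a bit higher on average, but the score is all
    # the (hypothetical) employer looks at.
    score = random.gauss(1.0 if has_phd else 0.0, 1.0)
    applicants.append((has_phd, score))

applicants.sort(key=lambda a: a[1], reverse=True)
hires = applicants[:200]                                # top 2% hired

phd_share_hired = sum(phd for phd, _ in hires) / len(hires)
print(f"PhD share among hires: {phd_share_hired:.0%}")  # well above 30%
```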

Matthew Rendall @ 2024-04-23T16:00 (+12) in response to If You're Going To Eat Animals, Eat Beef and Dairy

Vasco, I've read your post to which the first link leads quickly, so please correct me if I'm wrong. However, it left me wondering about two things:

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans.  The references to DALYs and 'climate change affecting more people with lower income' lead me to suspect you're not. But non-humans will surely be the vast majority of the victims of global heating--as well as, in some cases, its beneficiaries. While Timothy Chan is quite right to point out below that this is a complex matter, it certainly isn't going to be a wash, and if the effects are negative, they're likely to be very bad.

(b) It appears you were working with a study that employed a discount rate of 2%. That's going to discount damages in 100 years to 13% of their present value, and damages in 200 years to 1.9% of their present value--and it goes downhill from there. But that seems very hard to justify. Discounting is often defended on the ground that our descendants will be richer than we are. But that rationale doesn’t apply to damages in worst-case scenarios. Because they could be so enduring, these damages are huge in expectation. Second, future non-humans won’t be richer than we are, so benefits to them don't have diminishing marginal utility compared with benefits to us.

The US government--including, so far as I know, the EPA--uses a discount rate that is higher than two percent, which makes future damages from global heating evaporate even more quickly. What's more, I'd be surprised if it's trying to value damages to wild animals in terms of the value they would attach to avoiding them, as opposed to the value that American human beings do. The latter approach, as Dale Jamieson has observed, is rather like valuing harm to slaves by what their masters would pay to avoid it.

Vasco Grilo @ 2024-04-24T07:38 (+5)

Nice points, Matthew!

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans.

I have now clarified my estimate of the harms of GHG emissions only accounts for humans. I have also added:

estimated the scale of the welfare of wild animals is 4.21 M times that of farmed animals. Nonetheless, I have neglected the impact of GHG emissions on wild animals due to their high uncertainty. According to Brian Tomasik:

“On balance, I’m extremely uncertain about the net impact of climate change on wild-animal suffering; my probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future).”

In particular, it is unclear whether wild animals have positive/negative welfare.

I have added your name to the Acknowledgements. Let me know if you would rather remain anonymous.

(b) It appears you were working with a study that employed a discount rate of 2%. That's going to discount damages in 100 years to 13% of their present value, and damages in 200 years to 1.9% of their present value--and it goes downhill from there. But that seems very hard to justify. Discounting is often defended on the ground that our descendants will be richer than we are.

Carleton 2022 presents results for various discount rates, but I used the ones for their preferred value of 2 %. I have a footnote saying:

“Our preferred estimates use a discount rate of 2 %”. This is 1.08 (= 0.02/0.0185) times the 1.85 % (= (17.5/9.72)^(1/(2022 - 1990)) - 1) annual growth rate of global real GDP per capita from 1990 to 2022. The adequate growth rate may be higher due to transformative AI, or lower owing to stagnation. I did not want to go into these considerations, so I just used Carleton 2022’s mainstream value.


But that rationale doesn’t apply to damages in worst-case scenarios. Because they could be so enduring, these damages are huge in expectation.

I used to think this was relevant, but mostly no longer do:

  • One should discount the future at the lowest possible rate, but it might still be the case that this is not much lower than 2 % (for reasons besides pure time discounting, which I agree should be 0).
  • I believe human extinction due to climate change is astronomically unlikely. I have a footnote with the following. "For donors interested in interventions explicitly targeting existential risk mitigation, I recommend donating to LTFF, which mainly supports AI safety. I guess existential risk from climate change is smaller than that from nuclear war (relatedly), and estimated the nearterm annual risk of human extinction from nuclear war is 5.93*10^-12, whereas I guess that from AI is 10^-6".
  • I guess human extinction is very unlikely to be an existential catastrophe. "For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, to be existential". You can check the details of the Fermi estimate in the post.
  • If your worldview is such that very unlikely outcomes of climate change still have meaningful expected value, the same will tend to apply to our treatment of animals. For example, I assume you would have to consider effects on digital minds.
  • I am open to indirect longterm effects dominating the expected value, but I suppose maximising more empirically quantifiable, less uncertain effects on welfare is still a great heuristic.
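
As a quick sanity check of the figures traded in this exchange (the 2 % discount factors, the 1.85 % growth rate, and the e^(-10^9/(132*10^6)) estimate), here is a minimal script that simply re-computes the arithmetic as quoted, without adding anything to the underlying assumptions:

```python
# A quick numerical check of the figures quoted in the exchange above.
import math

# Present value of 1 unit of damages occurring t years from now at a 2% rate.
for t in (100, 200):
    print(t, round(1.02 ** -t, 3))   # ~0.138 and ~0.019, i.e. ~13% and ~1.9%

# Annual growth rate of global real GDP per capita, 1990 -> 2022,
# using the figures as quoted in the footnote.
growth = (17.5 / 9.72) ** (1 / (2022 - 1990)) - 1
print(round(growth, 4))              # ~0.0185, so 2% / 1.85% ~= 1.08

# Chance quoted for a repetition of the Cretaceous-Paleogene event
# being existential: e^(-10^9 / (132*10^6)).
print(math.exp(-1e9 / 132e6))        # ~5.1e-4, i.e. ~0.0513%
```
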
BrownHairedEevee @ 2024-04-24T03:52 (+2) in response to AnonymousTurtle's Quick takes

Cool! For context, Malengo is helping students from Uganda attend university in Germany, and it also has a program to support students from French-speaking African countries [link in French]. I'm excited about this program not only for its economic benefits, but also for its potential to enable more people to live in liberal democratic countries, and in the long term, increase support for liberal democracy around the globe.

NickLaing @ 2024-04-24T06:27 (+5)

As a quick reply, I'm wondering what evidence you have that education in democratic liberal countries increases support for liberal democracy accross the globe? There's arguments for and against this thesis, but I don't think there's good evidence that it helps. 

Many dictators in Africa, for example, were educated in top universities, which gave them better connections and influence that might have helped them oppress their people. Also, during the 20th century a growing, intelligent and motivated middle class seems correlated with a higher chance of democracy - it's unclear whether highly skilled migration helps grow this middle class through increasing remittances and a growing economy, or removes the most capable people who could be starting businesses and making their home country a better place. It's worth noting that programs like this don't just take high school graduates; they usually take the cream of the crop, who were likely to do very well in their home country as well.

I'm not saying you're wrong, just that it's complicated and far from a slam dunk that this will increase support for liberal democracies.

Chris Leong @ 2024-04-24T04:50 (+5) in response to Should we break up Google DeepMind?

At least from an AI risk perspective, it's not at all clear to me that this would improve things as it would lead to a further dispersion of this knowledge outward.

Matthew Rendall @ 2024-04-23T09:49 (+15) in response to If You're Going To Eat Animals, Eat Beef and Dairy

So far as it goes, your argument seems correct. But you're leaving out a significant factor here--carbon emissions. Beef cattle are extraordinarily carbon intensive even compared to other animals raised for food. If you eat them, your emissions, combined with other people's emissions, are going to cause a huge amount of both human and non-human suffering.

There's a complication. You could, in principle, offset the damage from your carbon emissions. But you could also, in principle, eat animals who have been raised free range, and whose lives have probably been worth living up to the time they're killed. 

Both of these will require you to spend extra money, and investigate whether you're really getting what you pay for. Rather than going to all this trouble--and here we'll agree--it seems a lot better simply to eat an Impossible Burger. 

Robi Rahman @ 2024-04-24T04:08 (+6)

Beef cattle are not that carbon-intensive. If you're concerned about the climate, the main problem with cattle is their methane emissions.

If you eat them, your emissions, combined with other people's emissions, are going to cause a huge amount of both human and non-human suffering.

If I eat beef, my emissions combined with other people's emissions do some amount of harm. If I don't eat beef, other people's emissions do approximately the same amount of harm as they would have if I had eaten it. The marginal harm from my food-based carbon emissions is really small compared to the marginal harm from my food-based contribution to animal suffering.

AnonymousTurtle @ 2024-04-21T10:41 (+27) in response to AnonymousTurtle's Quick takes

GiveWell and Open Philanthropy just made a $1.5M grant to Malengo!

Congratulations to @Johannes Haushofer and the whole team, this seems such a promising intervention from a wide variety of views

BrownHairedEevee @ 2024-04-24T03:52 (+2)

Cool! For context, Malengo is helping students from Uganda attend university in Germany, and it also has a program to support students from French-speaking African countries [link in French]. I'm excited about this program not only for its economic benefits, but also for its potential to enable more people to live in liberal democratic countries, and in the long term, increase support for liberal democracy around the globe.

Ben_West @ 2024-04-23T23:58 (+25) in response to Personal reflections on FTX

Thanks, that makes sense. I didn't remember Going Infinite as having made such a strong claim, but maybe I was projecting my own knowledge into the book.

I looked back at the agenda for our resignation/buyout meeting and I don't see anything like "didn't disclose misplaced transfer money to investors". Which doesn't mean that no one had this concern, only that they didn't add it to the agenda, but I do think it would be misleading to describe this as the central concern of the management team, given that we listed other things in the agenda instead of that.[1]

  1. ^

    To preempt a question about what concerns I did have, if not the transfer thing: see my post from last year

    I thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.

    I'm not sure if I would describe the above as a "benign management dispute" (it certainly didn't feel benign to me at the time), but I think it's even less accurate to describe it as being about the misplaced transfers

Elizabeth @ 2024-04-24T03:46 (+5)

that makes sense, sounds like it wasn't the concern for at least your group. He does describe it as "The rest of the management team was horrified and quit in a huff, loudly telling the investors that Bankman-Fried was dishonest and reckless", so unless there were multiple waves of management quitting it sounds like the book conflated multiple stories. 

BrownHairedEevee @ 2024-04-24T03:26 (+4) in response to BrownHairedEevee's Quick takes

Maybe EA philanthropists should invest more conservatively, actually

The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:

  • Funding stability in a cause area makes it easier for employees to advance their careers because they can count on stable employment. It also makes it easier for nonprofits to hire, retain, and develop talent. This allows both nonprofits and their employees to have greater impact in the long run. Whereas a higher but more volatile stream of funding might not lead to as much impact.
  • It becomes more politically difficult to make progress in some causes during a recession. For example, politicians may have lower appetite for farm animal welfare regulations and might even be more willing to repeal existing regulations if they believe the regulations stifle economic growth. This makes it especially important for animal welfare orgs to retain funding.
Jason @ 2024-04-24T03:43 (+4)

These are good arguments for providing stable levels of funding per year, but there are often ways to further that goal without materially dialing back the riskiness of one's investments (probable exception: crypto, because the swings can be so wild and because other EA donors may be disproportionately in crypto). One classic approach is to set a budget based on a rolling average of the value of one's investments -- for universities, that is often a rolling three-year average, but it apparently goes back much further than that at Yale using a weighted-average approach. And EA philanthropists probably have more flexibility on this point than universities, whose use of endowments is often constrained by applicable law related to endowment spending.
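
As a minimal sketch of the rolling-average rule described above (hypothetical portfolio values and a 5 % payout rate; the weighted Yale-style variant is not modelled):

```python
# A minimal sketch of a rolling-average budgeting rule (all figures hypothetical).
portfolio_values = [100, 140, 90, 120, 160, 110]  # end-of-year values, $M
payout_rate = 0.05
window = 3

for year in range(window - 1, len(portfolio_values)):
    avg = sum(portfolio_values[year - window + 1 : year + 1]) / window
    budget = payout_rate * avg
    print(f"Year {year}: spend ${budget:.1f}M "
          f"(vs ${payout_rate * portfolio_values[year]:.1f}M unsmoothed)")
```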

MHR @ 2024-04-24T02:55 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Hm I don't obviously see the analogy with the common app - hiring employees and admitting students seem quite different.

Jason @ 2024-04-24T03:31 (+2)

(Disclaimer: I'm not an antitrust lawyer, not that I can give anyone legal advice on the Forum anyway. Also, this is a US perspective.) 

The basic principle is that agreements "in restraint of trade" are illegal, with that term interpreted by reference to a "rule of reason" developed through over a century of caselaw. Neither student admissions nor employee hiring are really in the heartland of antitrust, although it has been applied to both in the past. 

I don't see how admissions and hiring are that different when it comes to determining whether use of a common application form would be in restraint of trade (i.e., whether it unreasonably impedes fair competition). I'm also unclear on what a good argument would be for an assertion that using the same first-stage application would unreasonably impede fair competition for employees in the first place. I'd argue that it would promote competition in the market for employees, by making it easier for employees to apply to more potential employers. But I didn't dig into any caselaw on that.

BrownHairedEevee @ 2024-04-24T03:26 (+4) in response to BrownHairedEevee's Quick takes

Maybe EA philanthropists should invest more conservatively, actually

The pros and cons of unusually high risk tolerance in EA philanthropy have been discussed a lot, e.g. here. One factor that may weigh in favor of higher risk aversion is that nonprofits benefit from a stable stream of donations, rather than one that goes up and down a lot with the general economy. This is for a few reasons:

  • Funding stability in a cause area makes it easier for employees to advance their careers because they can count on stable employment. It also makes it easier for nonprofits to hire, retain, and develop talent. This allows both nonprofits and their employees to have greater impact in the long run. Whereas a higher but more volatile stream of funding might not lead to as much impact.
  • It becomes more politically difficult to make progress in some causes during a recession. For example, politicians may have lower appetite for farm animal welfare regulations and might even be more willing to repeal existing regulations if they believe the regulations stifle economic growth. This makes it especially important for animal welfare orgs to retain funding.
ElliotJDavies @ 2024-04-23T21:00 (+6) in response to Motivation gaps: Why so much EA criticism is hostile and lazy

To a large extent I don't buy this. Academics and Journalists could interview an arbitrary EA forum user on a particular area if they wanted to get up to speed quickly. The fact they seem not to do this, in addition to not giving a right to reply, makes me think they're not truth-seeking. 

David Thorstad @ 2024-04-24T03:12 (+4)

I’d like to hope that academics are aiming for a level of understanding above that of a typical user on an Internet forum.

All academic works have a right to reply. Many journals print response papers and it is a live option to submit responses to critical papers, including mine. It is also common to respond to others in the context of a larger paper. The only limit to the right of academic reply is that the response must be of suitable quality and interest to satisfy expert reviewers.

Jason @ 2024-04-24T00:08 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Many elite US universities do this -- the Common Application to which my comment indirectly alludes -- and law schools did something vaguely similar at least in the mid-2000s (showing my age here). So I am expecting the answer is negative.

Common evaluation would be trickier -- e.g., I vaguely remember some universities getting into trouble with allegations that they were divvying up choice applicants rather than competing for them. [Edit: It may have been this -- they were apparently colluding about financial aid offers, and reached a settlement with DOJ Antitrust to stop doing this.]

MHR @ 2024-04-24T02:55 (+2)

Hm I don't obviously see the analogy with the common app - hiring employees and admitting students seem quite different.

Jason @ 2024-04-24T01:13 (+11) in response to You probably want to donate any Manifold currency this week

Relatedly, I would note that past comments on the Forum about trading on Manifold as a potentially effective way to steer money to charity may no longer be valid (or may be less valid) after May 1 than they were prior to the announcement being made. The reason is that Manifold's "pivot" requires a much more controlled supply of its play money (mana), making Manifold at least close to a zero-sum game for traders. In contrast, a past comment may have been written when the mana printing presses were in high gear, at which time the expected value (in mana) of trading on Manifold was meaningfully positive due to subsidies, free mana, etc.

Jason @ 2024-04-24T00:08 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Many elite US universities do this -- the Common Application to which my comment indirectly alludes -- and law schools did something vaguely similar at least in the mid-2000s (showing my age here). So I am expecting the answer is negative.

Common evaluation would be trickier -- e.g., I vaguely remember some universities getting into trouble with allegations that they were divvying up choice applicants rather than competing for them. [Edit: It may have been this -- they were apparently colluding about financial aid offers, and reached a settlement with DOJ Antitrust to stop doing this.]

Larks @ 2024-04-24T00:54 (+2)

I vaguely remember some universities getting into trouble with allegations that they were divvying up choice applicants rather than competing for them.

This is explicitly the policy in the UK, and (I would guess) almost entirely eliminates offer acceptance uncertainty for Oxford and the other place:

... you can't apply to Oxford and Cambridge in the same year.

MHR @ 2024-04-23T23:56 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Are there antitrust concerns with multiple orgs (even if nonprofit) using a common screener? 

Jason @ 2024-04-24T00:08 (+2)

Many elite US universities do this -- the Common Application to which my comment indirectly alludes -- and law schools did something vaguely similar at least in the mid-2000s (showing my age here). So I am expecting the answer is negative.

Common evaluation would be trickier -- e.g., I vaguely remember some universities getting into trouble with allegations that they were divvying up choice applicants rather than competing for them. [Edit: It may have been this -- they were apparently colluding about financial aid offers, and reached a settlement with DOJ Antitrust to stop doing this.]



Comments on 2024-04-23

Elizabeth @ 2024-04-22T21:52 (+23) in response to Personal reflections on FTX

Matt Levine is quoting from Going Infinite. I do not know who Michael Lewis's source is. I've heard confirming bits and pieces privately, which makes me trust this public version more. Of course that doesn't mean that was everyone's motivation: I'd be very interested to hear whatever you're able to share. 

Ben_West @ 2024-04-23T23:58 (+25)

Thanks, that makes sense. I didn't remember Going Infinite as having made such a strong claim, but maybe I was projecting my own knowledge into the book.

I looked back at the agenda for our resignation/buyout meeting and I don't see anything like "didn't disclose misplaced transfer money to investors". Which doesn't mean that no one had this concern, only that they didn't add it to the agenda, but I do think it would be misleading to describe this as the central concern of the management team, given that we listed other things in the agenda instead of that.[1]

  1. ^

    To preempt a question about what concerns I did have, if not the transfer thing: see my post from last year

    I thought Sam was a bad CEO. I think he literally never prepared for a single one-on-one we had, his habit of playing video games instead of talking to you was “quirky” when he was a billionaire but aggravating when he was my manager, and my recollection is that Alameda made less money in the time I was there than if it had just simply bought and held bitcoin.

    I'm not sure if I would describe the above as a "benign management dispute" (it certainly didn't feel benign to me at the time), but I think it's even less accurate to describe it as being about the misplaced transfers

Jason @ 2024-04-22T22:33 (+4) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

At least where the acceptance rate is 3-5 percent, it seems plausible that there could be something like the "AI Safety Common Pre-Application" that would reduce the time burden for many applicants. In many cases it would seem possible to say, on information not customized to a specific program, that an applicant just isn't going to make that top 3-5%.

(Applicants meeting specified criteria would presumably be invited to skip the pre-app stage, eliminating the risk of those applicants being erroneously screened out on common information.)

By analogy: In some courts, you have to seek permission from the court of appeals prior to appealing. The bar for being allowed is much lower than for succeeding, which means that denials at permission stage save disappointed litigants the resources they'd otherwise use to prepare full appeals.

MHR @ 2024-04-23T23:56 (+2)

Are there antitrust concerns with multiple orgs (even if nonprofit) using a common screener? 

yanni kyriacos @ 2024-04-23T23:55 (+3) in response to Yanni Kyriacos's Quick takes

This is an extremely "EA" request from me but I feel like we need a word for people (i.e. me) who are Vegans but will eat animal products if they're about to be thrown out. OpportuVegan? UtilaVegan?

ixex @ 2024-04-23T22:33 (+3) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Even if your current best guess is that it's not causal, if having a PhD meaningfully increases your chances of getting hired conditional on having applied, that information would help candidates get a better sense of their probability of getting hired

[edited to specify that I meant conditional on applying]

Linch @ 2024-04-23T23:40 (+8)

A relevant reframing here is whether having a PhD provides a high Bayes factor update to being hired. Eg, if people with and without PhDs have a 2% chance of being hired, but ">50% of successful applicants had a PhD" because most applicants have a PhD, then you should probably not include this, but if 1 in 50 applicants are hired, but it rises to 1 in 10 people if you have a PhD and falls to 1 in 100 if you don't, then the PhD is a massive evidential update even if there is no causal effect.
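
To make the framing concrete, here is a minimal Python sketch (the hire rates are made up, just to illustrate the Bayes-factor comparison rather than any real hiring data):

```python
# All rates below are made up, purely to illustrate the Bayes-factor framing.
p_hire_given_phd = 1 / 10      # hypothetical: 1 in 10 applicants with a PhD are hired
p_hire_given_no_phd = 1 / 100  # hypothetical: 1 in 100 applicants without a PhD are hired

# Bayes factor: how much more likely hiring is for PhD holders,
# regardless of whether the PhD plays any causal role.
bayes_factor = p_hire_given_phd / p_hire_given_no_phd
print(f"Evidential update from having a PhD: {bayes_factor:.0f}x")  # 10x

# Contrast: if both groups are hired at the same 2% rate, ">50% of hires
# had a PhD" can still be true simply because most applicants have PhDs,
# but the Bayes factor is 1 and the statistic carries no evidential weight.
print(0.02 / 0.02)  # 1.0
```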

Nathan Young @ 2024-04-21T11:20 (–3) in response to An instance of white supremacist and Nazi ideology creeping onto the EA Forum

I don't fully read or respond to posts that are this much longer than my own posts. 

Allowing debate of, e.g., white supremacy on the EA Forum, besides being simply off-topic in most cases, creates a no-win situation for the people whose rights and value are being debated and for other people who care a lot about them. If you engage in the debate, it will exhaust you and distress you, which your interlocutors may very well enjoy. If you avoid the debate or debate a bit and then disengage, this can create the impression that your views can’t be reasonably defended. It can also create the impression that your interlocutors’ views are the dominant ones in the community, which can become a self-fulfilling prophecy. (See: "Nazi death spiral".)

Let's note that white supremacy is not regularly discussed on the EA forum and you, yes literally you, are the cause of most of this previous discussion. 3 or 4 times now I have seen people say they didn't see this discussion until they read this post. 

I am not particularly worried about people thinking that sterilisation is a good solution, for instance. Perhaps if there was a post every week arguing for it, then I would want to talk about that.

I get that there are people who don't want this to be a space where these ideas are regularly discussed. There are large costs to them, similar to the costs around engaging with Bostrom's email. These costs are real. I sense people want confidence that others will join them in pushing back when stuff like this happens. 

I don't know how to give people that confidence, except to say, I see the costs of othering groups in society. How what starts as "wouldn't it be convenient if" can end up as "let's put them in camps". I don't really know how to convey this to you, but a part of me is very concerned by this. 

But again, right now, the equilibrium feels okay and not a high priority to change. Let's come back to it in 6 months.

Jason @ 2024-04-23T23:36 (+2)

I am not particularly worried about people thinking that sterilisation is a good solution, for instance. Perhaps if there was a post every week arguing for it, then I would want to talk about that.

I think that what the voting dynamics may suggest would be a bigger problem than the frequency of posts like Mr. Parr's per se. His lead post got to +24 at one point (and stayed there for a while), while the post on which we are commenting sits at -12 (despite my +9 strong upvote). If I were in a group for which people were advocating for sterilization, and had good reason to think a significant fraction of the community supported that view, it would be cold comfort that the posts advocating for my sterilization only came by every few months!

CalebW @ 2024-04-23T23:25 (+9) in response to You probably want to donate any Manifold currency this week

From the discord: "Manifold can provide medium-term loans to users with larger invested balances to donate to charity now provided they agree to not exit their markets in a disorderly fashion or engage in any other financial shenanigans (interpreted very broadly). Feel free to DM for more details on your particular case."

I DM'd yesterday; today I received a mana loan for my invested amount, for immediate donation, due for repayment Jan 2, 202, with a requirement to not sell out of large positions before May.

There's now a Google form: https://forms.gle/XjegTMHf7oZVdLZF7

Arepo @ 2024-04-23T23:02 (+6) in response to Hiring retrospective: Research Communicator for Giving What We Can

I hadn't seen this until now. I still hope you'll do a follow-up on the most recent round, since as I've said (repeatedly) elsewhere, I think you guys are the gold standard in the EA movement for how to do this well :)

One not necessarily very helpful thought:

Our work trial was overly intense and stressful, and unrepresentative of working at GWWC.

is a noble goal, but somewhat in tension with this goal:

In retrospect, we could have ensured this was done on a time-limited basis, or provided a more reasonable estimate.

It's really hard to make a strictly timed test, especially a sub-one-day one, feel unstressful and low-intensity.

This isn't to say you shouldn't do the latter, just to recognise that there's a natural tradeoff between two imperatives here. 

Another problem with timing is that you don't get to equalise across all axes, so you can trade one bias for another. For example, you're going to bias towards people who have access to an extra monitor or two at the time of taking the test, whose internet is faster or who are just in a less distracting location.

I don't know that that's really a solvable problem, and if not, the timed test seems probably the least of all evils, but again it seems like a tradeoff worth being aware of.

The dream is maybe some kind of self-contained challenge where you ask them to showcase some relevant way of thinking in a way where time isn't super important, but I can't think of any good version of that.

Nathan_Barnard @ 2024-04-23T22:54 (+9) in response to Priors and Prejudice

I essentially agree with the basic point of this post - and think it was a great post!

I have what feel like nitpicks about the specific story that you told, and I'm sort of confused about how much they matter. My guess is that this actually is a counterargument to the point being made in the post and implies that trapped priors are less of a problem than the example used in the post would suggest. 

I think that the broadly libertarian view and Scandinavian-style social democracy views are much more similar than this post gives them credit for. In particular, they agree on the crucial importance of liberal democracy that prevents elites (in the 19th century, traditional agricultural elites) from using the state to engage in rent-seeking. I remember reading a list of demands of the German Social Democratic party in the 1870s (before it had moderated) that read as a list of liberal democratic demands - secret ballot, free speech, expansion of the power of the democratically elected Reichstag, etc.  These two strands of modern liberal thought also agreed on a liberal epistemology that should be used to try to systematically improve society from a broadly utilitarian perspective - the London School of Economics was founded by 4 Fabian Society members to further this aim! 

I think this cashes out in the Effective Samaritans and the libertarian side of EA (although the libertarian side of EA is pretty unusually libertarian) pursuing pretty similar projects when trying to use non-randomista means for development. For instance, my guess is that both would support increasing state capacity in low-income countries to improve the basic nightwatchman functions of the state, reducing corruption, protecting liberties and the integrity of elections, and removing regulations that represent elite rent-seeking. Of course there'll be some differences in emphasis - the Effective Samaritans might have a particular theory of change around using unions to coordinate labour to push for political change - but these seem relatively minor compared to the core things both agree are important. Bryan Caplan and Robin Hanson are genuinely unusually libertarian even amongst broadly free-market economists, but typically both utilitarian-motivated libertarians and social democrats would be interested in building at least a basic welfare state in low-income countries. 

I think we actually see this convergence in practice between liberal social democrats and broadly utilitarian libertarians, in the broadly unified policy agendas of Ezra Klein's abundance agenda and lots of EA/Rationalist-adjacent libertarians: a focus on making it easier to build houses in highly productive cities, reducing barriers to immigration to rich countries, increasing public funding of R&D, and improving state capacity, particularly around extremely ambitious projects like Operation Warp Speed. 

Jason @ 2024-04-23T16:28 (+4) in response to Personal reflections on FTX

It's unclear from that whether the due diligence scaled appropriately with size of donation. I doubt ~anyone is batting an eye at charities that took 25K-50K from SBF, due diligence or no. The process at the tens of millions per year level needs to be bespoke, though.

AnonymousEAForumAccount @ 2024-04-23T22:34 (+3)

Yeah, fully agree with this. I hope now that EV and/or EV-affiliated people are talking more about this matter that they'll be willing to share what specific due diligence was done before accepting SBF's gifts and what their due diligence policies look like more generally. 

David_Moss @ 2024-04-23T21:02 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

I broadly agree that such a statement, taken completely literally, would not be misleading in itself. But it raises the questions:

  •  What useful information is being conveyed by such a statement?
    • If interpreted correctly, I don't think the applicant should take anything actionable away from the statement. But what I suspect many will do is conclude that they probably shouldn't apply if they don't have a PhD, even if they meet all the requirements.
  • What is pragmatically implied by such a statement?
    • People don't typically go out of their way to state things which they don't think are relevant (without some reason). So if employers go out of their way to state ">50% of successful applicants had a PhD...", even with the caveat, people are reasonably going to wonder "Why are they telling me this?" and a natural interpretation is "They want to communicate that if I don't have a PhD, I'm probably not suited to the role, even if I meet all the requirements", which is exactly what employers don't want to communicate (and is not true) in the cases I'm describing.[1] 
  1. ^

    I think there are roles where unless you have a PhD, you are unlikely to meet the requirements of the role. In such cases, communicating that would be useful. But the cases I'm describing are not like that: in these cases, PhDs are really not relevant to the roles, but applicants will have very commonly undertaken PhDs. I imagine that part of the motivation for wanting to see the information is that people think things are really like the former case, not the latter.

ixex @ 2024-04-23T22:33 (+3)

Even if your current best guess is that it's not causal, if having a PhD meaningfully increases your chances of getting hired conditional on having applied, that information would help candidates get a better sense of their probability of getting hired

[edited to specify that I meant conditional on applying]

Maxwell Tabarrok @ 2024-04-23T20:11 (+1) in response to AI Regulation is Unsafe

I do make the "by default" claim but I also give reasons why advocating for specific regulations can backfire. E.g the environmentalist success with NEPA. Environmentalists had huge success in getting the specific legal powers and constraints on govt that they asked for but those have been repurposed in service of default govt incentives. Also, advocacy for a specific set of regulations has spillovers onto others. When AI safety advocates make the case for fearing AI progress they provide support for a wide range of responses to AI including lots of nonsensical ones.

tlevin @ 2024-04-23T21:48 (+12)

Yes, some regulations backfire, and this is a good flag to keep in mind when designing policy, but to actually make the reference-class argument here work, you'd have to show that this is what we should expect from AI policy, which would include showing that failures like NEPA are either much more relevant for the AI case or more numerous than other, more successful regulations, like (in my opinion) the Clean Air Act, Sarbanes-Oxley, bans on CFCs or leaded gasoline, etc. I know it's not quite as simple as "I would simply design good regulations instead of bad ones," but it's also not as simple as "some regulations are really counterproductive, so you shouldn't advocate for any." Among other things, this assumes that nobody else will be pushing for really counterproductive regulations!

Jonas Hallgren @ 2024-04-23T15:47 (+7) in response to Bryan Johnson seems more EA aligned than I expected

I appreciate you putting out a support post of someone who might have some EA leanings that would be good to pick up on. I may or may not have done so in the past and then removed the post because people absolutely shat on it on the forum 😅 so respect.

PeterSlattery @ 2024-04-23T21:22 (+2)

Thanks, Jonas! I appreciate the support :P 

AnonymousTurtle @ 2024-04-22T21:24 (+3) in response to Bryan Johnson seems more EA aligned than I expected

https://forum.effectivealtruism.org/posts/nb6tQ5MRRpXydJQFq/ea-survey-2020-series-donation-data#Donation_and_income_for_recent_years, and personal conversations which make me suspect the assumption of non-respondents donating as much as respondents is excessively generous.

Not donating any of their money is definitely an exaggeration, but it's not more than the median rich person donates https://www.philanthropyroundtable.org/almanac/statistics-on-u-s-generosity/

PeterSlattery @ 2024-04-23T21:22 (+4)

Thanks for following up! The evidence you offer doesn't persuade me that most EAs are extremely rich guys, because it's not arguing that. Did you mean to claim that most EAs who are rich guys are not donating any of their money, or not donating more than the median rich person? 

I also don't feel particularly persuaded by that claim based on the evidence shared. What are the specific points in the links that are persuasive? I couldn't see anything particularly relevant from scanning them - as in, nothing that I could use to make an easy comparison between EA donors and median rich people. 

I see that "Mean share of total (imputed) income donated was 9.44% (imputing income where below 5k or missing) or 12.5% without imputation." for EAs and "around 2-3 percent of income" for US households" which seems opposed to your position. But I haven't checked carefully and I am not the kind of person who makes these sorts of careful comparisons very well.

I don't have evidence to link to here, or time to search for it, but my current belief is that most of EA's funding comes from rich and extremely rich people (often men) donating their money.  

huw @ 2024-04-22T12:57 (+15) in response to Bryan Johnson seems more EA aligned than I expected

This is an extremely rich guy who isn't donating any of his money. I wouldn't call him 'aligned' at all to EA.

I would also just be careful about taking him at his word. He's only started talking about this framing recently (I've followed him for a while because of a passing interest in Kernel). He may well just be a guy who's very scared of dying with an incomprehensible amount of money to spend on it, who's looking for some admirers.

PeterSlattery @ 2024-04-23T21:11 (+4)

Thanks for the input!

I think of EA as a cluster of values and related actions that people can hold/practice to different extents. For instance, caring about social impact, seeking comparative advantage, thinking about long term positive impacts, and being concerned about existential risks including AI. He touched on all of those.

It's true that he doesn't mention donations. I don't think that discounts his alignment in other ways.

Useful to know he might not be genuine though.

ixex @ 2024-04-23T20:21 (+4) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Right, but there is definitely a way you can communicate this information without being misleading. You could say, "in previous rounds, >50% of successful applicants had a PhD, but we do not assign weight to PhDs and do not believe there is a direct causal relationship between having a PhD and receiving an offer".

David_Moss @ 2024-04-23T21:02 (+2)

I broadly agree that such a statement, taken completely literally, would not be misleading in itself. But it raises the questions:

  •  What useful information is being conveyed by such a statement?
    • If interpreted correctly, I don't think the applicant should take anything actionable away from the statement. But what I suspect many will do is conclude that they probably shouldn't apply if they don't have a PhD, even if they meet all the requirements.
  • What is pragmatically implied by such a statement?
    • People don't typically go out of their way to state things which they don't think are relevant (without some reason). So if employers go out of their way to state ">50% of successful applicants had a PhD...", even with the caveat, people are reasonably going to wonder "Why are they telling me this?" and a natural interpretation is "They want to communicate that if I don't have a PhD, I'm probably not suited to the role, even if I meet all the requirements", which is exactly what employers don't want to communicate (and is not true) in the cases I'm describing.[1] 
  1. ^

    I think there are roles where unless you have a PhD, you are unlikely to meet the requirements of the role. In such cases, communicating that would be useful. But the cases I'm describing are not like that: in these cases, PhDs are really not relevant to the roles, but applicants will have very commonly undertaken PhDs. I imagine that part of the motivation for wanting to see the information is that people think things are really like the former case, not the latter.

David Thorstad @ 2024-04-23T15:20 (+13) in response to Motivation gaps: Why so much EA criticism is hostile and lazy

Strongly agree. I think there's also a motivation gap in knowledge acquisition. If you don't think there's much promise in an idea or a movement, it usually doesn't make sense to spend years learning about it. This leads to large numbers of very good academics writing poorly-informed criticisms. But this shouldn't be taken to indicate that there's nothing behind the criticisms. It's just that it doesn't pay off career-wise for these people to spend years learning enough to press the criticisms better.

ElliotJDavies @ 2024-04-23T21:00 (+6)

To a large extent I don't buy this. Academics and journalists could interview an arbitrary EA Forum user on a particular area if they wanted to get up to speed quickly. The fact that they seem not to do this, in addition to not giving a right to reply, makes me think they're not truth-seeking. 

David_Moss @ 2024-04-23T15:50 (+4) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Whether this information is good to include depends crucially on the causal relationships though. 

In the simple test score case, academic ability causes both test scores and admission success, test scores serve as a strong proxy for academic ability, and we assume no other causal relationships complicating matters. Here test scores are relatively innocent as an indicator of likelihood of admission (i.e. they serve as a pretty good indicator of whether one is likely to succeed).

But telling people about something which was strongly associated with success, but not causally connected with the factors which determine success in the right way would be misleading. 

In a more complex (but perhaps more realistic) case, where completing a PhD is causally related to a bunch of other factors, saying that most successful applicants have PhDs risks being misleading about one's chances of success / suitability for the role and about the practical utility of getting a PhD for success. 
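
As a rough illustration of that risk, here is a small simulated sketch (the applicant pool, the "research experience" variable, and all probabilities are invented for the example) where hiring depends only on a confounded factor, yet PhD holders are still hired far more often:

```python
import random

random.seed(0)

def simulate_applicant():
    # Hiring depends only on (hypothetical) research experience.
    experienced = random.random() < 0.3
    # PhDs are more common among experienced applicants,
    # but have no direct effect on the hiring decision below.
    has_phd = random.random() < (0.8 if experienced else 0.3)
    hired = experienced and random.random() < 0.2
    return has_phd, hired

applicants = [simulate_applicant() for _ in range(100_000)]

def hire_rate(group):
    return sum(hired for _, hired in group) / len(group)

with_phd = [a for a in applicants if a[0]]
without_phd = [a for a in applicants if not a[0]]

print(f"Hire rate with a PhD:    {hire_rate(with_phd):.2%}")
print(f"Hire rate without a PhD: {hire_rate(without_phd):.2%}")
# PhD holders are hired several times as often despite the PhD having
# no causal effect, so "most successful applicants had a PhD" can
# mislead applicants about the value of getting one.
```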


 

ixex @ 2024-04-23T20:21 (+4)

Right, but there is definitely a way you can communicate this information without being misleading. You could say, "in previous rounds, >50% of successful applicants had a PhD, but we do not assign weight to PhDs and do not believe there is a direct causal relationship between having a PhD and receiving an offer".

niplav @ 2024-04-03T18:30 (+7) in response to Killing the moths

I wonder whether the lives of those moths were net negative. If the population was rising, then the number of moths dying as larvae might've been fairly small. I assume that OP's apartment doesn't have many predatory insects or animals that eat insects, so the risk of predation was fairly small. That leaves five causes of death: old age, hunger, thirst, disease and crushing.

Death by old age for moths is probably not that bad? They don't have a very long life, so their duration of death also doesn't seem very long to me, and couldn't offset the quality of their life.

Hunger and thirst are likely worse, but I don't know by how much, do starved moths die from heart problems? (Do moths have hearts?)

Disease in house moth colonies is probably fairly rare.

Crushing can be very fast or lead to long painful death. Seems the worst of those options.

I think those moths probably had a better life than outside, just given the number of predatory insects; but I don't think that this was enough to make their lives net-positive. But it's been a while since I've read into insect welfare, so if most young insects die by predation, I'd increase my credence in those moths having had net-positive lives.


Tristan @ 2024-04-23T20:16 (+2)

I think it's right to at least be open minded about the possibility that their lives might be generally good, all things considered.

To answer your question: insects don't have hearts because they don't have blood. Oxygen is transported to their cells by many tiny tubes (tracheae) extending from holes (spiracles) all over their thorax and abdomen.

Leopold Brown @ 2024-04-23T18:21 (+7) in response to Things EA Group Organisers Need To Hear

University of Arizona group organizer here; everything you've talked about is something that we have tried to grapple with. But, having not yet faced a lot of those extreme changes in leadership, significant burnout, etc., I believe we are struggling to fully internalize the consequences. And just because the symptoms haven't been made readily apparent doesn't mean that the same underlying conditions aren't there in our organization.

The largest thing we have tried (and to a large extent, I believe failed in) is prioritizing the organizers themselves as an end. We have always had the strong belief that our organizers were going to be some of the most impactful members of the club; but the allure of new members and the demands of organizing have (I believe) put our priorities in a biased order. This semester has been much better, and I really appreciate you sharing your thoughts; they will play a role in how we move forward into our next semester and potential culture/workload changes that need to be made.

Ironically, I have only been talking about our student club and not really myself. I am likely not going to be organizing next semester, but that is because I will be working on a riskier, more demanding, and very grandiose project for much of next semester (the irony is staggering). Your thoughts have definitely given me pause in regards to this new project, but I strongly believe it is something that I want to/should do. That being said, it is really nice to hear this from another student doing organizing, and I'm no hyper-agentic organizing savant (far from it), so I will keep your words and thoughts in the forefront as I move forward. Thank you very much,

Best regards,
Leopold

Kenneth_Diao @ 2024-04-23T20:16 (+1)

Hi Leopold,

Thank you for the thoughtful comment! I appreciate that my experience has informed your decision-making, but in the end it’s just my experience, so take it with a grain of salt. I also appreciate your caution; I would say that I’m also a pretty cautious person (especially for an EA; I personally think we sometimes need a little more of that).

I will say that big and risky projects aren’t necessarily a bad thing; they’re just big and risky. So if you’ve carefully considered the risks and acknowledged that you’re committing to a big project that might not pay off and you have some contingency plans, then I think it’s fine to do. I just think that sometimes we get caught up in the vision and end up goodharting for bigger and more visionary projects rather than more actually effective ones (my failure mode in Spring 2023).

Best, Kenneth

Mjreard @ 2024-04-23T13:21 (+4) in response to AI Regulation is Unsafe

I think you've failed to think on the margin here. I agree that the broad classes of regulation you point to here have *netted out* badly, but this says little about what the most thoughtful and determined actors in these spaces have achieved. 

Classically, Germany's early 2000s investments in solar R&D had enormous positive externalities on climate, and the people who pushed for those didn't have to also support restricting nuclear power. The option space for them was not "the net-bad energy policy that emerged" vs "libertarian paradise;" it was: "the existing/inevitable bad policies with a bet on solar R&D" vs "the existing/inevitable bad policies with no bet on solar R&D."

I believe most EAs treat their engagement with AI policy as researching and advocating for narrow policies tailored to mitigate catastrophic risk. In this sense, they're acting as an organized/expert interest group motivated by a good (and, per some polls, even popular) view of the public interest. They are competing with rather than complementing the more selfishly motivated interest groups seeking the kind of influence the oil & gas industry did in the climate context. On your model of regulation, this seems like a wise strategy, perhaps the only viable one. Again, the alternative is not no regulation, but regulation that leaves out the best, most prosocial ideas. 

To the extent you're trying to warn EAs not to indiscriminately cheer any AI policy proposal assuming it will help with x-risk, I agree with you. I don't however agree that's reflective of how they're treating the issue. 

Maxwell Tabarrok @ 2024-04-23T20:13 (+1)

Yes, that's fair. I do think that even specific advocacy can have risks, though. Most advocacy is motivated by AI fear, which can be picked up and used to support lots of other bad policies, e.g. how Sam Altman was received in Congress.

tlevin @ 2024-04-23T15:23 (+16) in response to AI Regulation is Unsafe

This post correctly identifies some of the major obstacles to governing AI, but ultimately makes an argument for "by default, governments will not regulate AI well," rather than the claim implied by its title, which is that advocating for (specific) AI regulations is net negative -- a type of fallacious conflation I recognize all too well from my own libertarian past.

Maxwell Tabarrok @ 2024-04-23T20:11 (+1)

I do make the "by default" claim but I also give reasons why advocating for specific regulations can backfire. E.g the environmentalist success with NEPA. Environmentalists had huge success in getting the specific legal powers and constraints on govt that they asked for but those have been repurposed in service of default govt incentives. Also, advocacy for a specific set of regulations has spillovers onto others. When AI safety advocates make the case for fearing AI progress they provide support for a wide range of responses to AI including lots of nonsensical ones.

Ben Millwood @ 2024-04-23T11:32 (+4) in response to New org announcement: Would your project benefit from OSINT, satellite imagery analysis, or international security-related research support?

This seems like an impressive set of capabilities, exciting to hear about the new org :)

Did CSER write more about your work for them anywhere? Interested to read more about it.

Christina @ 2024-04-23T20:10 (+1)

It was a fairly preliminary project and didn't result in any immediate publications, but if you DM or email me, I'd be happy to chat about some of my findings! 

Jason @ 2024-04-23T19:50 (+4) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

At a minimum, candidates should be invited to seek a waiver of any "complete in one sitting" requirement on an early-round work task for good cause, without any adverse consequences whether the waiver is granted or not. Speaking as an employed individual with a preschooler, three hours of uninterrupted time is a big ask for an early-round job application process!

Rebecca @ 2024-04-23T20:06 (+2)

I find it’s very rare to have to do the work test in 1 sitting, and I at least usually do better if I can split it up a bit

harfe @ 2024-04-23T04:06 (+49) in response to harfe's Quick takes

Consider donating all or most of your Mana on Manifold to charity before May 1.

Manifold is making multiple changes to the way Manifold works. You can read their announcement here. The main reason for donating now is that Mana will be devalued from the current 1 USD:100 Mana to 1 USD:1000 Mana on May 1. Thankfully, the 10k USD/month charity cap will not be in place until then.
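
To make the devaluation concrete, a minimal sketch of the arithmetic (the 10,000 Mana balance is hypothetical):

```python
# Hypothetical balance, purely to illustrate the rate change described above.
mana_balance = 10_000

usd_to_charity_now = mana_balance / 100            # at 1 USD : 100 Mana
usd_to_charity_after_may_1 = mana_balance / 1_000  # at 1 USD : 1000 Mana

print(f"Charity value now:         ${usd_to_charity_now:,.0f}")          # $100
print(f"Charity value after May 1: ${usd_to_charity_after_may_1:,.0f}")  # $10
```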

Also this part might be relevant for people with large positions they want to sell now:

One week may not be enough time for users with larger portfolios to liquidate and donate. We want to work individually with anyone who feels like they are stuck in this situation and honor their expected returns and agree on an amount they can donate at the original 100:1 rate past the one week deadline once the relevant markets have resolved.

BrownHairedEevee @ 2024-04-23T19:59 (+7)

I just donated $65 to Shrimp Welfare Project :)

Joseph Lemien @ 2024-04-23T19:05 (+4) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

This is an aspect that I don't think of as often, but I do think it is very important. Some people have several hours free and can set aside three uninterrupted hours to focus on a single task. But not everyone can. I'm especially thinking of people who have children and work commitments. So in a sense it is unintentionally exclusionary.

Probably every hiring round is unintentionally exclusionary to some extent, but I think that requiring candidates to spend three hours of uninterrupted time is a type of unintentional exclusion that can be relatively easily avoided. It is filtering out candidates based on something that is unrelated to how well or poorly they would perform on the job.

Jason @ 2024-04-23T19:50 (+4)

At a minimum, candidates should be invited to seek a waiver of any "complete in one sitting" requirement on an early-round work task for good cause, without any adverse consequences whether the waiver is granted or not. Speaking as an employed individual with a preschooler, three hours of uninterrupted time is a big ask for an early-round job application process!

Patrick Liu @ 2024-04-23T19:11 (+1) in response to EA Data Science - Community call /Speed Friending

Unfortunately there were some technical difficulties with the Zoom link.  Thanks for those who found the new room.  Also keep the conversation going on our slack channel - https://join.slack.com/share/enQtNzAyMjc2MTYzNTUyMS01NTg3MmVjODc3ZDk4ZTlkNzg3ZmMyNmE0NTc3ZjdjMWJiMWI0ODliOTJkYTYyOWNmNDhkZWU5NGIyMTVhYWEw (expires in 14 days).

Ávila Carmesí @ 2024-04-23T15:21 (+9) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

I just don't have the time. I'm often at absolute capacity with college assignments, clubs, work, etc.

Especially if many orgs decided to have longer work trials, I might be unable to apply to many or may end up submitting subpar work tests because of this. 

Also, I'll point out that oftentimes EA orgs have *initial applications* that take 2-4 hours. This seems like clearly too much. I think a quick first round that verifies the following would be best:

  • The applicant is legally able to work for us.
  • The applicant satisfies our minimum experience/knowledge cutoff (looking at the resume and asking one question about their degree of engagement with, e.g., AI safety).
  • The applicant seems value-aligned or understands our mission (one question about why they want to apply or work in X area).

Longer questions about career plans, work-trial-y questions, reasoning and IQ-test-y questions, research proposals, and everything else should belong in later stages, when you've already filtered out people that just really did not belong in the application process.

Joseph Lemien @ 2024-04-23T19:05 (+4)

This is an aspect that I don't think of as often, but I do think it is very important. Some people have several hours free and can set aside three uninterrupted hours to focus on a single task. But not everyone can. I'm especially thinking of people who have children and work commitments. So in a sense it is unintentionally exclusionary.

Probably every hiring round is unintentionally exclusionary to some extent, but I think that requiring candidates to spend three hours of uninterrupted time is a type of unintentional exclusion that can be relatively easily avoided. It is filtering out candidates based on something that is unrelated to how well or poorly they would perform on the job.

Stan Pinsent @ 2024-04-23T16:34 (+3) in response to Saving lives in normal times is better to improve the longterm future than doing so in catastrophes?

Thanks for the detailed response, Vasco! Apologies in advance that this reply is slightly rushed and scattershot.

I agree that you are right with the maths - it is 251x, not 63,000x.

  • I am not comparing the cost-effectiveness of preventing events of different magnitudes.
  • Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.

OK, I did not really get this!

In your example on wars you say

  • As a consequence, if the goal is minimising war deaths[2], spending to save lives in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large.

Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context? 

I am guessing it is spending money now on things that would save lives in very deadly wars. Something like building a nuclear bunker vs making a bulletproof vest? Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?

When you are thinking about the PDF of the death toll, are you forgetting that the absolute width of a death-toll range is not proportional to its width on a log scale?

To give a toy example: the ranges 100-1000 deaths and 100k-1M deaths each span one order of magnitude, but the first spans 900 deaths while the second spans 900,000 deaths.

The "height of the PDF graph" will not capture these differences in width. This won't matter much for questions of 100 vs 100k deaths, but it might be relevant for near-existential mortality levels.

Vasco Grilo @ 2024-04-23T18:50 (+2)

Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context?

For example, if one was comparing wars involving 10 k or 10 M deaths, the latter would be more likely to involve multiple great powers, in which case it would make more sense to improve relationships between NATO, China and Russia.

Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?

You may be right! Interventions to decrease war deaths may be better conceptualised as preventing deaths within a given severity range, in which case I should not have interpreted literally the example in Founders Pledge’s report Philanthropy to the Right of Boom. In general, I think one has to rely on cost-effectiveness analyses to decide what to prioritise.

When you are thinking about the PDF of the death toll, are you forgetting that the absolute width of a death-toll range is not proportional to its width on a log scale?

I am not sure I got the question. In my discussion of Founders Pledge's example about war deaths, I assumed the value of saving one life to be the same regardless of population size (because this is what they were doing). So I did not use the ratio between the initial and final population.

huw @ 2024-04-22T12:57 (+15) in response to Bryan Johnson seems more EA aligned than I expected

This is an extremely rich guy who isn't donating any of his money. I wouldn't call him 'aligned' at all to EA.

I would also just be careful about taking him at his word. He's only started talking about this framing recently (I've followed him for a while because of a passing interest in Kernel). He may well just be a guy who's very scared of dying with an incomprehensible amount of money to spend on it, who's looking for some admirers.

Habryka @ 2024-04-23T18:49 (+12)

This is an extremely rich guy who isn't donating any of his money.

FWIW, I totally don't consider "donating" a necessary component of taking effective altruistic action. Most charities seem much less effective than the most effective for-profit organizations, and most of the good in the world seems achieved by for-profit companies. 

I don't have a particularly strong take on Bryan Johnson, but using "donations" as a proxy seems pretty bad to me.

Karthik Tadepalli @ 2024-04-23T18:27 (+9) in response to Should we break up Google DeepMind?

I read it as aiming to reduce AI risk by increasing the cost of scaling.

I also don't see how breaking deepmind off from Google would increase competitive dynamics. Google, Microsoft, Amazon and other big tech partners are likely to be pushing their subsidiaries to race even faster since they are likely to have much less conscientiousness about AI risk than the companies building AI. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.

Habryka @ 2024-04-23T18:46 (+19)

Less than a year ago Deepmind and Google Brain were two separate companies (both making cutting-edge contributions to AI development). My guess is if you broke off Deepmind from Google you would now just pretty quickly get competition between Deepmind and Google Brain (and more broadly just make the situation around slowing things down a more multilateral situation).

But more concretely, anti-trust action makes all kinds of coordination harder. After an anti-trust action that destroyed billions of dollars in economic value, the ability to get people in the same room and even consider coordinating goes down a lot, since that action itself might invite further anti-trust action.

Habryka @ 2024-04-23T15:35 (+13) in response to Should we break up Google DeepMind?

Huh, fwiw I thought this proposal would increase AI risk, since it would increase competitive dynamics (and generally make coordinating on slowing down harder). I at least didn't read this post as x-risk motivated (though I admit I was confused about what its primary motivation was).

Karthik Tadepalli @ 2024-04-23T18:27 (+9)

I read it as aiming to reduce AI risk by increasing the cost of scaling.

I also don't see how breaking deepmind off from Google would increase competitive dynamics. Google, Microsoft, Amazon and other big tech partners are likely to be pushing their subsidiaries to race even faster since they are likely to have much less conscientiousness about AI risk than the companies building AI. Coordination between DeepMind and e.g. OpenAI seems much easier than coordination between Google and Microsoft.

Leopold Brown @ 2024-04-23T18:21 (+7) in response to Things EA Group Organisers Need To Hear

University of Arizona group organizer here; everything you've talked about is something that we have tried to grapple with. But, having not yet faced a lot of those extreme changes in leadership, significant burnout, etc., I believe we are struggling to fully internalize the consequences. And just because the symptoms haven't been made readily apparent doesn't mean that the same underlying conditions aren't there in our organization.

The largest thing we have tried (and to a large extent, I believe failed in) is prioritizing the organizers themselves as an end. We have always had the strong belief that our organizers were going to be some of the most impactful members of the club; but the allure of new members and the demands of organizing have (I believe) put our priorities in a biased order. This semester has been much better, and I really appreciate you sharing your thoughts; they will play a role in how we move forward into our next semester and potential culture/workload changes that need to be made.

Ironically, I have only been talking about our student club and not really myself. I am likely not going to be organizing next semester, but that is because I will be working on a riskier, more demanding, and very grandiose project for much of next semester (the irony is staggering). Your thoughts have definitely given me pause in regards to this new project, but I strongly believe it is something that I want to/should do. That being said, it is really nice to hear this from another student doing organizing, and I'm no hyper-agentic organizing savant (far from it), so I will keep your words and thoughts in the forefront as I move forward. Thank you very much,

Best regards,
Leopold

Rebecca @ 2024-04-23T17:58 (+2) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

It sounds like you would benefit from greater prioritisation and focus. (Eg see: https://calnewport.com/dangerous-ideas-college-extracurriculars-are-meaningless/).

Ávila Carmesí @ 2024-04-23T18:16 (+1)

Thank you for your advice! I will say that my part-time job was research, which is crucial if I want to get research positions or into PhD programs in the near future. The clubs I lead are also very relevant to the jobs I'm applying to, and I think they may be quite impactful (so I'm willing to do them even if they harm my own odds). 

Regardless of my specific situation, I think EA orgs should conduct hiring under the assumption that a significant portion of their applicants don't have the time for multiple multi-hour work tests in early stages of the application process (where most will be weeded out).

Patrick Liu @ 2024-04-23T18:00 (+1) in response to EA Data Science - Community call /Speed Friending

I'm getting an error from the zoom link.  Please use this room instead: https://meet.google.com/avh-vdvw-wub

Ávila Carmesí @ 2024-04-23T15:21 (+9) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

I just don't have the time. I'm often at absolute capacity with college assignments, clubs, work, etc.

Especially if many orgs decided to have longer work trials, I might be unable to apply to many or may end up submitting subpar work tests because of this. 

Also, I'll point out that oftentimes EA orgs have *initial applications* that take 2-4 hours. This seems like clearly too much. I think a quick first round that verifies the following would be best:

  • The applicant is legally able to work for us.
  • The applicant satisfies our minimum experience/knowledge cutoff (looking at the resume and asking one question about their degree of engagement with, e.g., AI safety).
  • The applicant seems value-aligned or understands our mission (one question about why they want to apply or work in X area).

Longer questions about career plans, work-trial-y questions, reasoning and IQ-test-y questions, research proposals, and everything else should belong in later stages, when you've already filtered out people that just really did not belong in the application process.

Rebecca @ 2024-04-23T17:58 (+2)

It sounds like you would benefit from greater prioritisation and focus. (Eg see: https://calnewport.com/dangerous-ideas-college-extracurriculars-are-meaningless/).

Rebecca @ 2024-04-23T17:09 (+14) in response to Motivation gaps: Why so much EA criticism is hostile and lazy

I don’t think it requires years of learning to write a thoughtful op-ed-level critique of EA. I’d be surprised if that’s true for an academic paper-level one either.

David Thorstad @ 2024-04-23T17:39 (+9)

That's fair! But I also think most op-eds on any topic are pretty bad. As for academic papers, I have to say it took me at least a year to write anything good about EA, and that was on a research-only postdoc with 50% of my research time devoted to longtermism. 

There's an awful lot that has been written on these topics, and catching up on the state of the art can't be rushed without bad results. 

David Thorstad @ 2024-04-23T15:20 (+13) in response to Motivation gaps: Why so much EA criticism is hostile and lazy

Strongly agree. I think there's also a motivation gap in knowledge acquisition. If you don't think there's much promise in an idea or a movement, it usually doesn't make sense to spend years learning about it. This leads to large numbers of very good academics writing poorly-informed criticisms. But this shouldn't be taken to indicate that there's nothing behind the criticisms. It's just that it doesn't pay off career-wise for these people to spend years learning enough to press the criticisms better.

Rebecca @ 2024-04-23T17:09 (+14)

I don’t think it requires years of learning to write a thoughtful op-ed-level critique of EA. I’d be surprised if that’s true for an academic paper-level one either.

Ávila Carmesí @ 2024-04-23T15:22 (+1) in response to On failing to get EA jobs: My experience and recommendations to EA orgs

Yes, I've had two calls with them. Maybe it wasn't very clear from my background but I've been pretty deeply involved with EA for about 2 years (also went to multiple EAGs). 

How do you think 80k career advice would help in my situation?

Rebecca @ 2024-04-23T17:04 (+2)

Mainly advice on intermediate steps to get more domain-relevant experience.

Vasco Grilo @ 2024-04-22T17:20 (+3) in response to Saving lives in normal times is better to improve the longterm future than doing so in catastrophes?

Thanks for the comment, Stan!

Using PDF rather than CDF to compare the cost-effectiveness of preventing events of different magnitudes here seems off.

Technically speaking, the way I modelled the cost-effectiveness:

  • I am not comparing the cost-effectiveness of preventing events of different magnitudes.
  • Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.

Using the CDF makes sense for the former, but the PDF is adequate for the latter.

You show that preventing (say) all potential wars next year with a death toll of 100 is 1000^1.6 = 63,000 times better in expectation than preventing all potential wars with a death toll of 100k.

I agree the above follows from using my tail index of 1.6. It is just worth noting that the wars have to involve exactly, not at least, 100 and 100 k deaths for the above to be correct.

More realistically, intervention A might decrease the probability of wars of magnitude 10-100 deaths and intervention B might decrease the probability of wars of magnitude 100,000 to 1,000,000 deaths. Suppose they decrease the probability of such wars over the next n years by the same amount. Which intervention is more valuable? We would use the same methodology as you did except we would use the CDF instead of the PDF. Intervention A would be only 1000^0.6 = 63 times as valuable.

This is not quite correct. The expected deaths from wars with D_min to D_max deaths is proportional to D_min^(1 - alpha) - D_max^(1 - alpha), where alpha is the tail index. So, for a tail index of 1.6, intervention A would be 251 (= (10^-0.6 - 100^-0.6)/((10^5)^-0.6 - (10^6)^-0.6)) times as cost-effective as B. As the upper bounds of the severity ranges of A and B get increasingly close to their lower bounds, the cost-effectiveness of A tends to 63 k times that of B. In any case, the qualitative conclusion is the same. Preventing smaller wars averts more deaths in expectation, assuming war deaths follow a power law.
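
As a small numerical check of the comparison above (assuming war deaths follow a power law with tail index 1.6 and using the severity ranges from the earlier example; this is only a sketch of the ratio, not a full cost-effectiveness analysis):

```python
ALPHA = 1.6  # assumed tail index of the war-deaths power law

def expected_deaths_in_range(d_min, d_max, alpha=ALPHA):
    """Expected deaths from wars with d_min to d_max deaths, up to a
    constant factor, for a power-law PDF f(d) proportional to d**-(alpha + 1)."""
    return d_min ** (1 - alpha) - d_max ** (1 - alpha)

# Intervention A targets wars with 10 to 100 deaths,
# intervention B targets wars with 100k to 1M deaths.
ratio = expected_deaths_in_range(10, 100) / expected_deaths_in_range(1e5, 1e6)
print(f"A averts roughly {ratio:.0f} times the expected deaths of B")  # ~251

# Comparing the PDF at single points (exactly 100 vs exactly 100k deaths)
# instead gives the much larger factor of (10^3)^1.6, roughly 63,000.
print((10 ** 3) ** 1.6)
```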

As an intuition pump we might look at the distribution of military deaths in the 20th century. Should the League of Nations/UN have spent more effort preventing small wars and less effort preventing large ones?

I do not know. Instead of relying on past deaths alone, I would rather use cost-effectiveness analyses to figure out what is more cost-effective, as the Centre for Exploratory Altruism Research (CEARCH) does. I just think it is misleading to directly compare the scale of different events without accounting for their likelihood, as in the example from Founders Pledge’s report Philanthropy to the Right of Boom I mention in the post.

When it comes to things that could be even deadlier than WWII, like nuclear war or a pandemic, it's obvious to me that the uncertainty about the death toll of such events increases at least linearly with the expected toll, and hence the "100-1000 vs 100k-1M" framing is superior to the PDF approach.

I am also quite uncertain about the death toll of catastrophic events! I used the PDF to remain consistent with Founders Pledge's example, which compared discrete death tolls (not ranges).

Stan Pinsent @ 2024-04-23T16:34 (+3)

Thanks for the detailed response, Vasco! Apologies in advance that this reply is slightly rushed and scattershot.

I agree that you are right with the maths - it is 251x, not 63,000x.

  • I am not comparing the cost-effectiveness of preventing events of different magnitudes.
  • Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.

OK, I did not really get this!

In your example on wars you say

  • As a consequence, if the goal is minimising war deaths[2], spending to save lives in wars 1 k times as deadly should be 0.00158 % (= (10^3)^(-1.6)) as large.

Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context? 

I am guessing it is spending money now on things that would save lives in very deadly wars. Something like building a nuclear bunker vs making a bulletproof vest? Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?

When you are thinking about the PDF of the death toll, are you forgetting that the absolute width of a death-toll range is not proportional to its width on a log scale?

To give a toy example: the ranges 100-1000 deaths and 100k-1M deaths each span one order of magnitude, but the first spans 900 deaths while the second spans 900,000 deaths.

The "height of the PDF graph" will not capture these differences in width. This won't matter much for questions of 100 vs 100k deaths, but it might be relevant for near-existential mortality levels.