It feels like taking into account the likelihood that the "1.5% probability of 100,000 DALYs averted" estimate is a credence based on some marginally relevant base rate[1], one that might have been chosen with a significant bias towards optimism, is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)[2].
A very low percentage chance of averting a lot of DALYs feels a lot more like "1.5% of clinical trials of therapies for X succeeded; this untested idea might also have a 1.5% chance" optimism attached to a proposal offering little reason to believe it's above average, rather than like an estimate based on somewhat robust statistics (e.g. inferring that 1.5% of people who receive this drug will be cured because 1.5% of people had that outcome in trials). So it seems quite reasonable to assume that the estimated 1.5% chance of a positive binary outcome might be biased upwards. Even more so in the context of "we acknowledge this is a long shot and high-certainty solutions to other pressing problems exist, but if the chance of this making an impact were as high as 0.0x%..." style fundraising appeals to EAs' determination to avoid scope insensitivity.
Either that, or someone's been remarkably precise in their subjective estimates or collected some unusual type of empirical data. I certainly can't imagine reaching the conclusion that an option has exactly a 1.5% chance of averting 100k DALYs myself.
If we're considering realistic scenarios instead of staying with the spirit of the thought experiment (which I think we should not, partly precisely because it introduces lots of possible ambiguities in how people interpret the question, and partly because this probably isn't what the surveyors intended, given the way EA culture has handled thought experiments thus far – see for instance the links in Lizka's answer, or the way EA draws heavily from analytic philosophy, where straightforwardly engaging with unrealistic thought experiments is a standard component of the toolkit), then I agree that an advertised 1.5% chance of having a huge impact could be more likely upwards-biased than the other way around. (But it depends on who's doing the estimate – some people are actually well-calibrated or prone to be extra modest.)
[...] is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills
(1) What you described seems to me best characterized as being about trust: trust in others' risk estimates. That would be separate from attitudes about uncertainty (and if that's what the surveyors wanted to elicit, they'd probably have asked the question very differently).
(Or maybe what you're thinking about could be someone having radical doubts about the entire epistemology behind "low probabilities"? I'm picturing a position that goes something like, "it's philosophically impossible to reason sanely about low probabilities; besides, when we make mistakes, we'll almost always overestimate rather than underestimate our ability to have effects on the world." Maybe that's what you think people are thinking – but as an absolute, this would seem weirdly detailed and radical to me, and I feel like there's a prudential wager against believing that our reasoning is doomed from the start in a way that would prohibit everyone from pursuing ambitious plans.)
(2) What I meant wasn't about basic EV calculation skills (obviously) – I didn't mean to suggest that just because the EV of the low-probability intervention is greater than the EV of the certain intervention, it's a no-brainer that it should be taken. I was just saying that the OP's point about probabilities maybe being off by one percentage point, by itself, without some allegation of systematic bias in the measurement, doesn't change the nature of the question. There's still the further question of whether we want to bring in other considerations besides EV. (I think "attitudes towards uncertainty" fits well here as a title, but again, I would reserve it for the thing I'm describing, which is clearly different from "do you think other people/orgs within EA are going to be optimistically biased?".)
(Note that it's one question whether people would go by EV for cases that are well within the bounds of numbers of people that exist currently on earth. I think it becomes a separate question when you go further to extremes, like whether people would continue gambling in the St Petersburg paradox or how they relate to claims about vastly larger realms than anything we understand to be in current physics, the way Pascal's mugging postulates.)
Finally, I realize that maybe the other people here in the thread have so little trust in the survey designers that they're worried that, if they answer with the low-probability, higher-EV option, the survey designers will write takeaways like "more EAs are in favor of donating to speculative AI risk." I agree that, if you think survey designers will make too strong of an update from your answers to a thought experiment, you should point out all the ways that you're not automatically endorsing their preferred option. But I feel like the EA survey already has lots of practical questions along the lines of "Where do you actually donate to?" So, it feels unlikely that this question is trying to trick respondents or that the survey designers will just generally draw takeaways from this that aren't warranted?
The bar should not be at "difficult financial situation", and this is also something there are often incentives against explicitly mentioning when applying for funding. Getting paid employment while studying (even on full-time degrees) is normal.
My 5 minute Google search to put some numbers on this:
Why are students taking on paid work?
UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family." From the Guardian article linked above.
Cannot find a recent US statistic quickly, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big one.
On the other hand, spending time on committees is also very normal as an undergraduate, and those are not paid. However, in comparison, the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some because they are less keen, but some for time-commitment reasons (which I expect sometimes/often means doing paid work instead).
My guess is that the downsides of paid organizing would be diminished to the extent that the structure and compensation somewhat closely tracked typical university-student employment. I didn't see anything in the UK report about what typical rates might be, but at least back in my day most students were paid at fairly low hourly rates. Also, paying people for fewer than (say) 8-10 hours per week would not come across to me as roughly replacing the income from foregone typical university-student employment, because I don't think such employment is typically available in smaller amounts. [Confidence: low, I am somewhat older by EA standards.]
How do you and your wife decide where to give to, collectively? Do you guys each have a budget, do you discuss a lot and fund based on consensus, something else?
<3 This is so lovely @Allan_Saldanha! I think it is such a lovely and remarkable thing about our community that so many people have been quietly living their lives and just giving their 10, 20, 40, 75(!) percent to causes they care about, some now over the course of 10+ years. "Generous and thoughtful giving is normal here" continues to be one of my favorite facts about EAs :')
If you're wondering why the post isn't here: I originally posted it here with a LW cross-post. It was immediately slapped with the "Community" tag, despite not being about the community, but about different ways people try to do good, how they talk about charity, and the ensuing confusions. It is about the space of ideas, not about the actual people or orgs.
With posts like OP announcements about details of EA group funding or the EAG admissions bar not being marked as community, I find it increasingly hard to believe the "Community" tag is driven by the stated principle of marking "Posts about the EA community and projects that focus on the EA community", and not by other motives, e.g. forum mods expressing the view "we want people to think less about this / this may be controversial / we prefer someone new to not read this".
My impression is that this moves substantive debates about ideas to the side, which is a state I don't want to cooperate with by just leaving things as they are, so I moved the post to LessWrong and replaced it with this comment.
Hi Jan, my apologies for the frustrating experience. The Forum team has reduced both our FTEs and moderation/facilitator capacity over the past year — in particular, currently the categorization of "Community" posts is done mostly by LLM judgement with a bit of human oversight. I personally think that this system makes too many mistakes, but I have not found time to prioritize fixing it.
In the meantime, if you ever encounter any issues (such as miscategorized posts) or if you have any questions for the Forum team, I encourage you to contact us, or you can message me or @Toby Tremlett🔹 directly via the Forum. We're happy to work with you to resolve any issues.
For what it's worth, here is my (lightly-held) opinion based on the current definition[1] of "Community" posts:
The community topic covers posts about the effective altruism community, as well as applying EA in one's personal life. The tag also applies to posts about the Forum itself, since this is a community space. You should use the community tag if one of the following things is true:
The post is about EA as a cultural phenomenon (as opposed to EA as a project of doing good)
The post is about norms, attitudes or practices you'd like to see more or less of within the EA community
The post would be irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community
The post concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community.
I agree that the two posts about uni group funding are "Community" posts because they are "irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community". I've tagged them as such.
I would say that the EAG application bar post is a borderline case[2], but I lean towards agreeing that it's "Community" because it's mostly addressed towards people in the community. I've tagged it as such.
I skimmed your post on LW and I think it was categorized as "Community" because it arguably "concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community" (as the post references past criticisms of EA, which someone who wasn't involved in the community wouldn't have context on). I think this is not a clear cut case. Often the "Community" tag requires some judgement calls. If you wanted to post it on the Forum again, I could read it more carefully and make a decision on it myself — let me know if so.
To be clear, I haven't put enough thought into this definition to feel confident agreeing or disagreeing with it. I'm just going to apply it as written for now. I expect that our team will revisit this within the next few months.
Partly because I believe the intended audience is people who are not really involved with the EA community but would be valuable additions to an EA Global conference (and also I think you don't need to know anything about the EA community to find that post valuable), and so the post doesn't 100% fit any of the four criteria.
My intuitive reaction to this is "Way to screw up a survey."
Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."
Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:
Why would this matter? If the estimate could be off by 1 percentage point, it could be down to 0.5% or up to 2.5%, which is still 1.5% in expectation. Also, if this question's intention were about the likelihood of EA orgs being biased, surely they would've asked much more directly about how much respondents trust an estimate from some example EA org.
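To spell out the arithmetic behind that reaction (a minimal restatement using the survey's own numbers, nothing beyond what the quote already says): a symmetric error of plus or minus one percentage point around 1.5% leaves the expected value untouched.

$$E[\text{DALYs averted}] = E[p] \times 100{,}000, \qquad E[p] = \tfrac{0.5\% + 2.5\%}{2} = 1.5\% \;\Rightarrow\; E[\text{DALYs averted}] = 1{,}500.$$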
We seem to disagree on use of thought experiments. The OP writes:
When designing thought experiments, keep them as realistic as possible, so that they elicit better answers. This reduces misunderstandings, pitfalls, and potentially compounding errors. It produces better communication overall.
I don't think this is necessary and I could even see it backfiring. If someone goes out of their way to make a thought experiment particularly realistic, respondents might get the impression that it is asking about a real-world situation where they are invited to bring in all kinds of potentially confounding considerations. But that would defeat the point of the thought experiment (e.g., people might answer based on how much they trust the modesty of EA orgs, as opposed to giving you their personal tolerance for the risk of feeling, in hindsight, that they had no effect / wasted money). The way I see it, the whole point of thought experiments is to get ourselves to think very carefully and cleanly about the principles we find most important. We do this by getting rid of all the potentially confounding variables. See here for a longer explanation of this view.
Maybe future surveys should have a test to figure out how people understand the use of thought experiments. Then, we could split responses between people who were trying to play the thought experiment game the intended way, and people who were refusing to play (i.e., questioning premises and adding further assumptions).
*On some occasions, it makes sense to question the applicability of a thought experiment. For instance, in the classic "what if you're a doctor who has the opportunity to kill a healthy patient during a routine check-up so that you could save the lives of 4 people needing urgent organ transplants," it makes little sense to just go "all else is equal! Let's abstract away all other societal considerations or the effect on the doctor's moral character." So, if I were to write a post on thought experiments today, I would add something about the importance of re-contextualizing lessons learned within a thought experiment to the nuances of real-world situations. In short, I think my formula would be something like, "decouple within thought experiments, but make sure to add an extra thinking step from 'answers inside a thought experiment' to 'what can we draw from this in terms of real-life applications.'" (Credit to Kaj Sotala, who once articulated a similar point, probably in a better way.)
They do specifically say that they consider other types of university funding to have greater cost-benefit (and I don't think it makes sense to exclude reputation concerns from cost-benefit analysis, particularly when reputation boost is a large part of the benefit being paid for in the first place). Presumably not paying stipends would leave more to go around. I agree that more detail would be welcome.
I agree that all-things-considered they say that, but I am objecting to "one of the things to consider", and so IMO it makes sense to bracket that consideration when evaluating my claims here.
Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."
I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint. "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue"
20% of the global cost of growing chickens is probably in the order of at least ~$20B, which is much more than the global economy is willing to spend on animal welfare.
As mentioned in the other comment, I think it's extremely unlikely that there is a way to stop "most" of the chicken suffering while increasing costs by only ~20%.
Some estimate the Better Chicken Commitment already increases costs by 20% (although there is no consensus on that, and factory farmers estimate 37.5%), and my understanding is that it doesn't stop most of the suffering, but "just" reduces it a lot.
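A quick back-of-envelope using only the figures in these comments (the ~$100B baseline is simply what the "20% is ~$20B" claim implies, not an independently sourced number):

```python
# Back-of-envelope using only the figures quoted above (illustrative, not sourced data).
implied_global_cost = 20e9 / 0.20            # "20% is ~$20B" implies a base of roughly $100B
bcc_cost_low = 0.20 * implied_global_cost    # ~$20B/yr if the commitment raises costs by 20%
bcc_cost_high = 0.375 * implied_global_cost  # ~$37.5B/yr at the factory farmers' 37.5% estimate
print(f"~${bcc_cost_low / 1e9:.0f}B to ~${bcc_cost_high / 1e9:.1f}B per year")
```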
Copying over the rationale for publication here, for convenience:
Rationale for Public Release
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
1. To prevent accidents and well-intentioned development
If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.1 While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize ᴅ-glucose to allow growth in standard media.
2. To build guardrails that could reliably prevent misuse
There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent upon misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort while providing very little actionable information to those who may seek to cause harm in the near term.
Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public is that it gives us at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail. Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge. This situation, although unfortunate, is probably justified by the nature of the risks! But still, it makes it hard for anyone on the outside to tell how serious the risks are, or understand the problems in detail, or feel sufficiently motivated about the urgency of creating solutions.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are actually even more worried about, say, infohazardous-bioengineering-technique-#5, than they are about mirror life. Just being able to use mirror life as an example seems like it's much healthier than having zero concrete examples and everything shrouded in secrecy.
Some of the cross-cutting things I am thinking about:
scientific norms about whether to fund / publish risky research
attempts to coordinate (on a national or international level) moratoriums against certain kinds of research
the desirability of things like metagenomic sequencing, DNA synthesis screening for harmful sequences, etc
research into broad-spectrum countermeasures like UVC light, super-PPE, pipelines for very quick vaccine development, etc
just emphasising the basic overall point that global catastrophic biorisk seems quite real and we should take it very seriously
and probably lots of other stuff!
So, I think it might be a kind of epistemic boon for all of biosecurity to have this public example, which will help clarify debates / advocacy / etc about the need for various proposed policies or investments.
Re "epistemics and integrity" - I'm glad to see this problem being described. It's also why I left (angrily!) a few years ago, but I don't think you're really getting to the core of the issue. Let me try to point at a few things
centralized control and disbursement of funds, with a lot of discretionary power and a very high and unpredictable bar, gives me no incentive to pursue what I think is best, and all the incentive to just stick to the popular narrative. Indeed, groupthink. Except training people not to groupthink isn't going to change their (existential!) incentive to groupthink. People's careers are on the line, there are only a few opportunities for funding, no guarantee of continuing to receive it after the first round, and no clear way to pivot into a safer option except to start a new career somewhere your heart does not want to be, having thrown years away
lack of respect for "normies". Many EAs seemingly can't stand interacting with non-EAs. I've seen EA meditation, EA bouldering, EA clubbing, EA whatever. Orgs seem to want everyone and the janitor to be "aligned". Everyone's dating each other. It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
perhaps in part due to the above, massive hubris. I don't think we realise how much we don't know. We started off with a few slam dunks (yeah wow 100x more impact than average) and now we seem to think we are better at everything. Clearly the ability to discern good charities does not transfer to the ability to do good management. The truth is: we are attempting something of which we don't even know whether it is possible at all. Of course we're all terrified! But where is the humility that should go along with that?
It seems that living in the Bay Area as an EA has a huge impact, and the dynamics are healthier elsewhere. (The fact that a higher concentration of EAs is worse, of course, is at least indicative of a big problem.)
Re: agency of the community itself, I've been trying to get to this "pure" form of EA in my university group, and to be honest, it felt extremely hard.
-People who want to learn about EA often feel confused and suspicious until you get to object-level examples. "Ok, impactful career, but concretely, where would that get me? Can you give me an example?". I've faced real resistance when trying to stay abstract.
-It's hard to keep people's attention without talking about object-level examples, even when teaching abstract concepts. It's even harder once you get to the "projects" phase of the year.
-People anchor hard on some specific object-level examples after that. "Oh, EA? The malaria thing?" (even though my go-to examples included things as diverse as shrimp welfare and pandemic preparedness).
-When it's not an object-level example, it's usually "utilitarianism" or "Peter Singer", which act a lot as thought stoppers and have an "eek" vibe for many people.
-People who care about non-typical causes actually have a hard time finding data and making estimates.
-In addition to that, the agency needed to actually make estimates is hard to build up. One member I knew thought the most impactful career choice he had was potentially working on nuclear fusion. I suggested he work out rough Impact-Tractability-Neglectedness estimates for it (even rough OOMs; see the sketch after this list) to compare it with another option he had, as well as with more traditional ones. I can't remember him giving any numbers even months later. When he just mentioned he felt sure about the difference, I didn't feel comfortable arguing about the robustness of his justification. It's a tough balance to strike between respecting preferences and probing reasons.
-A lot of it comes down to career 1:1s. Completing the ~8 or so parts is already demanding. You have to provide estimates that are nowhere to be found if your center of interest is "niche" in EA. You then have to find academic and professional opportunities, as well as contacts, that are not referenced anywhere in the EA community (I had to reach back to the big brother of a primary-school friend I had lost track of to find a fusion engineer he could talk to!). If you need funding, even if your idea is promising, you need excellent communication skills to write a convincing blog post, plausibly enough research skills to get non-air-plucked estimates for an ITN / cost-effectiveness analysis, and a desire to go to EAGs and convince people who could just not care. Moreover, a lot of people expressly limit themselves to their own country or continent. It's often easier to stick to the usual topics (I get calls for applications for AIS fellowships almost every month; of course I never get ones about niche topics).
-Another point about career 1:1s: the initial list of options to compare is hard to negotiate. Some people will neglect non-EA options, others will neglect EA options, and I had issues with artificially adding options to help them truly compare.
-Yet another point: some people barely have the time to come to a few sessions. It's hard to get them to actually rely on methodological tools they haven't learned about in order to compare their options during career 1:1s.
-A good way to cope with all of this is to encourage students to start things themselves - to create an org rather than joining one. But not everyone has the necessary motivation for this.
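For the kind of rough OOM comparison mentioned above, here is a minimal sketch of the exercise I'd ask a member to attempt (every number is a hypothetical placeholder; the point is the process of multiplying order-of-magnitude factors, not the values):

```python
import math

def itn_score(scale, tractability, neglectedness):
    """Rough cost-effectiveness proxy: the product of the three ITN factors.
    Units only need to be consistent across the options being compared."""
    return scale * tractability * neglectedness

# Hypothetical order-of-magnitude guesses for two options (placeholders, not real estimates).
options = {
    "nuclear fusion engineering": itn_score(scale=1e9, tractability=1e-4, neglectedness=1e-10),
    "pandemic preparedness":      itn_score(scale=1e9, tractability=1e-3, neglectedness=1e-10),
}

for name, score in options.items():
    print(f"{name}: ~10^{round(math.log10(score))}")
```

Even this crude version makes the comparison explicit enough to argue about, which was the missing step.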
I'm still happy with having started the year with epistemics, rationality, ethics and meta-ethics, and to have done other sessions on intervention and policy evaluation, suffering and consciousness, and population ethics. I didn't desperately need to have sessions on GHD / Animal Welfare / AI Safety, though they're definitely "in demand".
[Promise this is not a scam] Sign up to receive a free $50 charity gift card from a rich person
Every year, for the past few years, famous rich person Ray Dalio has given away 20,000 $50 gift cards. And he is doing it again this year. These can be given to any of over 1.8 million US-registered charities, which include plenty of EA charities.
Here's an announcement post from Ray Dalio's instagram for verification
Register here to receive notification when the gift cards become available.
You can sort by "oldest" and "newest" in the comment-sort order, and see that mine shows up earlier in the "oldest" order, and later in the "newest" order.
I agree that this is an inference. I currently think the OP thinks that in the absence of frugality concerns this would be among the most cost-effective uses of money by Open Phil's standards, but I might be wrong.
University group funding was historically considered extremely cost-effective when I talked to OP staff (beating out most other grants by a substantial margin). Possibly there was a big update here on cost-effectiveness excluding frugality-reputation concerns, but I currently think there hasn't been (but like, I would update if someone from OP said otherwise, and then I would be interested in talking about that).
I've been donating 20% of my income for a couple of years, and I'm planning to increase it to 30–40%. I'd love to meet like-minded people: ambitious EAs who are EtG.
A large part of backlash against effective altruism comes from people worried about EA ideals being corrosive to the “paying for public goods" or “partial philanthropy" mechanisms.
I think this is a good observation (the worry highlighted is one of the weakest arguments against EA, not least because EA has very limited real-world impact on the amount spent on animal shelters or concert halls, but it definitely comes up a lot in articles people like sharing on here).
More common forms of ostensibly "impartial" giving, like supporting global health initiatives or animal welfare, are probably better understood as examples of partial philanthropy with extended notions of “we", like “we, living humans" or “we, mammals".
I don't agree with this, though. I don't think people donate to anonymous poor recipients in faraway countries, or to farm animals, out of a sense of collective identity. There's little or no reciprocal altruism or collective identity there (particularly when it comes to the animals). I don't think donating to exploring ideas of future people or digital minds is more impartial simply because these don't [yet] exist. (Indeed, I think it would be easier to characterise some of the niche longtermist research donations as "partial" philanthropy, on the basis that the recipients are typically known and respected members of an established in-group with shared [unusual] interests, and the outcome is often research whose most obviously quantifiable impact is that the donor and their group find it very interesting. That strikes me as similar to quite a lot of other philanthropic research funding, including in academia.)
I think the "types" of charity are better understood as a set of motivations which overlap (and also include others like fuzzy feelings of satisfaction, signalling, interests, sense of duty etc which can coexist with also being a user of that conference hall or a fellow Christian or someone that believes its important future humanity ). Donating to AMF is about as impartial as it gets in terms of outcome, but there's definitely some sort of collective identity benefit to doing so whilst identifying as part of a group with a shared epistemology and understanding that points towards donating mattering and outcomes mattering and AMF being a good way to achieve this. Ditto impartial donations to mainstream charity made by people who have a sense of religious duty to random strangers, or completely impartial donations to a research funding pool made by people with strong convictions about progress.
Clearly you believe that probabilities can be less than 1%, reliably. Your probability of being struck by lightning today is not "0% or maybe 1%", it's on the order of 0.001%. Your probability of winning the lottery is not "0% or 1%", it's ~0.0000001%. I am confident you deal with probabilities that have much less than 1% error all the time, and feel comfortable using them.
It doesn't make sense to think of humility as something absolute like "don't give highly specific probabilities". You frequently have justified beliefs in very specific probabilities (the probability that random.org's random number generator will generate "2" when asked for a random number between 1 and 10 is exactly 10%, not 11%, not 9%, exactly 10%, with very little uncertainty about that number).
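A minimal sketch of both points (all numbers here are illustrative placeholders, not claims about anyone's actual risks):

```python
import random

# The random.org-style case: a fair draw from 1..10 hits "2" exactly 10% of the time,
# and a large simulation converges on that value with very little uncertainty.
trials = 1_000_000
hits = sum(1 for _ in range(trials) if random.randint(1, 10) == 2)
print(hits / trials)  # ~0.100

# We also reason with well-calibrated probabilities far below 1% all the time,
# e.g. an expected-value calculation with a lottery-scale probability (figures made up):
p_win = 1e-8
prize = 100_000_000  # dollars
print(p_win * prize)  # expected value of ~$1 per ticket
```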
Executive summary: The term "charity" encompasses three distinct behaviors (public goods funding, partial philanthropy, and impartial philanthropy) that form a "conflationary alliance" where different groups benefit from using the same terminology despite having different goals and motivations.
Key points:
Public goods funding involves donors supporting services they personally use (e.g., climbing routes, concert halls), gaining tax benefits while creating broader social value.
Partial philanthropy supports in-group members or collective agencies (e.g., alumni donations, religious giving), strengthening communities and institutions.
Impartial philanthropy, pursued mainly by Buddhism and longtermism, attempts to benefit all beings regardless of connection to donors.
The alliance between these forms creates both benefits (shared infrastructure, tax advantages) and tensions (e.g., EA criticism of local charities).
Rather than arguing for one "true" meaning of charity, the author recommends treating these as distinct but complementary behaviors funded from separate mental "buckets."
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I think it's fair enough to caution against purely performative frugality. But I'm not sure the OP even justifies the suggestion that the paid organizers actually are more cost-effective (they concluded the difference between paid and unpaid organizers' individual contributions was "substantive, not enormous"; there's a difference between paid people doing more work than volunteers and it being more cost-effective to pay them...). That's even more the case if you take into account that the primary role of an effective university organizer is attracting more people (or "low context observers") to become more altruistic, and this instance of the "weirdness" argument is essentially that paying students undercut the group's ability to appeal to people on altruistic grounds, even if individual paid staff put in more effort. And they were unusually well paid by campus standards for tasks almost every other student society uses volunteers for.[1] And there's no evidence that the other ways CEA proposes spending the money instead are less effective.
One area where we might agree: I'm not sure whether OpenPhil considered alternatives like making stipends needs-based, or just a bit lower and more focused, as a pragmatic alternative to cancelling them altogether.
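To illustrate the "more work vs. more cost-effective" distinction with a toy calculation (every number below is a made-up placeholder, not a claim about actual organizer output or stipend levels):

```python
# Toy numbers only: paid organizers can do more absolute work than volunteers
# while still being less cost-effective once the stipend is counted.
volunteer_output = 10.0   # arbitrary "impact units" per term from an unpaid organizer
paid_output = 13.0        # a "substantive, not enormous" uplift from paying
stipend_cost = 3000.0     # hypothetical dollars per term
bar = 0.002               # impact units per dollar from the next-best use of the money

extra_output_per_dollar = (paid_output - volunteer_output) / stipend_cost  # 0.001
print(extra_output_per_dollar >= bar)  # False: more work, but not more impact per dollar
```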
I agree that things tend to get tricky and loopy around these kinds of reputation-considerations, but I think the approach I see you arguing for here proves too much, and risks collapsing into meaninglessness.
I think in the limit, if you treat all speech acts this way, you just end up having no grounding for communication. "Yes, it might be the case that the real principles of EA are X, but if I tell you instead they are X', then you will take better actions, so I am just going to claim they are X', as long as both X and X' include cost-effectiveness".
In this case, it seems like the very people that the club is trying to explain the concepts of EA to are also the people that OP is worried about alienating by paying the organizers. What is going on here is that the goodness of the reputation-protecting choice is directly premised on the irrationality and ignorance of the very people you are trying to attract/inform/help. Explaining that isn't impossible, but it does seem like a particularly bad way to start off a relationship, and so I expect it to be bad consequences-wise.
"Yes, we would actually be paying people, but we expected you wouldn't understand the principles of cost-effectiveness and so be alienated if you heard about it, despite us getting you to understand them being the very thing this club is trying to do", is IMO a bad way to start off a relationship.
I also separately think that optimizing heavily for the perception of low-context observers in a way that does not reveal a set of underlying robust principles, is bad. I don't think you should put "zero" weight on that (and nothing in my comment implied that), but I do think it's something that many people put far too much weight on (going into detail of which wasn't the point of my comment, but on which I have written plenty about in many other comments).
There is also another related point in my comment, which is that "cost-effectiveness" is of course a very close sister concept to "wasting money". I think in many ways, thinking about cost-effectiveness is where you end up if you think carefully about how you can avoid wasting money, and is in some ways a more grown-up version of various frugality concerns.
When you increase the total cost of your operations (by, for example, reducing the cost-effectiveness of your university organizers, forcing you to spend more money somewhere else to do the same amount of good) in order to appear more frugal, I think you are almost always engaging in something that has at least the hint of deception.
Yes, you might ultimately be more cost-effective by getting people to not quite realize what happened, but when people are angry at me or others for not being frugal enough, I think it's rarely appropriate to spend more to appease them, even if doing so would then save me enough money to make it worth it. While this isn't happening as directly here as it was with other similar situations, like whether the Wytham Abbey purchase was frugal enough, I think the same dynamics and arguments apply.
If someone tries to think seriously and carefully through what it would mean to be properly frugal, I don't think they would endorse you sacrificing the effectiveness of your operations, causing you to ultimately spend more to achieve the same amount of good. And if they learned that you did, and thought carefully about what this implies about your frugality, they would end up more angry, not less. That, I think, is a dynamic worth avoiding.
I don't think they would for people with unusual hobbies or lifestyle choices or belief sets, with stereotypes related to those things.
And the "stereotyping" in here is really limited and not particularly negative: there's space apportioned to highlighting how OpenPhil's chief executive gave a kidney for the cause and none to stereotypes of WEIRD Bay Area nerds or Oxford ivory towers or effective partying in the Bahamas. If you knew nothing else about the movement, you'd probably come away with the conclusion that EAs were a bit too consistent in obsessing over measurable outcomes; most of the more informed and effective criticisms argue the opposite!
(It also ends up by suggesting that EA as a philosophy offers a set of questions that are worth asking and some of its typical answers are perfectly valid. Think most minorities would love it if outside criticism of their culture generally drew that sort of conclusion!)
EAs can and do write opinion pieces broadly or specifically criticising other people's philanthropic choices all the time. I don't think EA should be exempted from such arguments.
Perplexed by the reaction here. Not sure what people are taking most issue with:
Me saying the stereotypes were limited and not particularly negative? If you think a reference to being disproportionately funded by a small number of tech billionaires (balanced out by also-accurate references to Singer, the prior emergence of a movement, and the example of Berger giving a kidney rather than money) is negative stereotyping, you haven't read other critical takes on EA, never mind experienced what some other "minorities" deal with on a daily basis!
Me saying the more informed and effective criticisms of EA and EA orgs tended to point out where they fall well short of the rigour they demand? Again, I'd have thought it was glaringly obvious, whether it's nuanced insider criticism of specific inconsistencies in outcome measures or reviews of specific organizations, or drive-by observations that buying Wytham Abbey or early-stage funding for OpenAI may not have been high points of evidence-based philanthropy. That's obviously more useful than "these people have a different worldview" type articles like this. Even some of the purely stereotype-based criticisms of the money sloshing around the FTX ecosystem probably weren't "stopped clock" moments...
Or me pointing out that EAs also criticise non EAs' philanthropic choices, sometimes in generic terms? If you haven't read Peter Singer writing how other people have the wrong philanthropic priorities, you haven't read much Peter Singer!
I'd think a better way to get feedback is to ask "What do you think of this pledge wording?" rather than encourage people to take a lifelong pledge before it's gotten much external feedback.
For comparison, you could see when GWWC was considering changing the wording of its pledge (though I recognize it was in a different position as an existing pledge rather than a new one): Should Giving What We Can change its pledge?
The idea of a Minimum Viable Product (MVP) is that you're unsure which parts of your product provide value and which parts are sticking points. After you release the MVP, the sticking points are much clearer, and you have a much better idea of where to focus your limited time and money.
Our funding bar is higher now than it was in previous years, and there are projects which EAIF funded in previous years that we would be unlikely to fund now.
Could you expand on why that's the case? Is the idea that you believe those projects are net negative, or that you would rather marginal donations go to animal welfare and the long term future instead of EA infrastructure?
I think it's a bit weird for donors who want to donate to EA infrastructure projects to see that initiatives like EA Poland are funding constrained while the EA Infrastructure fund isn't, and extra donations to the EAIF will likely counterfactually go to other cause areas.
In some cases there are projects that I or other fund managers think are net negative, but this is rare. More often, things we decide against funding are ones I think are net positive, but where the projects aren't competitive with funding things outside of the EA Infrastructure space (either the other EA Funds or more broadly).
I think it makes sense that there are projects which EAIF decides not to fund, and that other people will still be excited about funding (and in these cases I think it makes sense for people to consider donating to those projects directly). Could you elaborate a bit on what you find weird?
and extra donations to the EAIF will likely counterfactually go to other cause areas
I don't think this is the case. Extra donations to EAIF will help us build up more reserves for granting out at a future date. But it's not the case that, e.g., if EAIF has more money than we think we can spend well at the moment, we'll then start donating it to other cause areas. I might have misunderstood you here?
I agree the "Rationale for Public Release" section is interesting; I've copied it here:
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
To prevent accidents and well-intentioned development If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.1 While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize ᴅ-glucose to allow growth in standard media.
To build guardrails that could reliably prevent misuse There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent upon misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort while providing very little actionable information to those who may seek to cause harm in the near term. Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
When to work on risks in public vs private is a really tricky question, and it's nice to see this discussion on how this group handled it in this case.
I agree the "Rationale for Public Release" section is interesting; I've copied it here:
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
To prevent accidents and well-intentioned development If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.1 While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize ᴅ-glucose to allow growth in standard media.
To build guardrails that could reliably prevent misuse There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent upon misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort while providing very little actionable information to those who may seek to cause harm in the near term. Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
When to work on risks in public vs private is a really tricky question, and it's nice to see this discussion on how this group handled it in this case.
This isn't about your giving per se, but have your views on the moral valence of financial trading changed in any notable ways since you spoke about this on the 80K podcast?
(I have no reason to think your views have changed, but was reading a socialist/anti-finance critique of EA yesterday and thought of your podcast.)
The episode page lacks a transcript, but does include this summary: "There are arguments both that quant trading is socially useful, and that it is socially harmful. Having investigated these, Alex thinks that it is highly likely to be beneficial for the world."
In that section (starts around 43:00), you talk about market-making, selling goods "across time" in the way other businesses sell them across space, and generally helping sellers "communicate" by adjusting prices in sensible ways. At the same time, you acknowledge that market-making might be less useful than in the past and that more finance people on the margin might not provide much extra social value (since markets are so fast/advanced/liquid at this point).
I think it's great that EAIF is not funding constrained.
Here's a random idea I had recently if anyone is interested and has the time:
An org that organizes a common application for nonprofits applying to foundations. There is enormous economic inefficiency and inequality in matching private foundation (PF) grants to grantees. PF application processes are extremely opaque and burdensome. Attempts to make common applications have largely been unsuccessful, I believe mostly because they tend to be for a specific geographic region. Instead, I think it would be interesting to create different common applications by cause area. A key part of the common application could be incorporating outcome reporting specific to each cause area, which I believe would lead PFs to make more impact-focused grants, making EAs happy.
Orgs or "proto-orgs" in their early stages are often in a catch-22. They don't have the time or expertise (because they don't have full-time staff) to develop strong grantwriting or other fundraising operations, which could be enabled by startup funds. An org that was familiar with the funding landscape, could familiarize itself with new orgs, and could help them secure startup funds might resolve the catch-22 that orgs find themselves in at step 0.
I commit to using my skills, time, and opportunities to maximize my ability to make a meaningful difference
I find the word maximise pretty scary here, for similar reasons to here. This is analogous to how GWWC is about giving 10%, a bounded amount, not "as much as you can possibly spare while surviving and earning money".
To me, taking a pledge to maximise seriously (especially in a naive conception where "I will get sick of this and break the pledge" or "I will burn out" aren't considerations) is a terrible idea. I recommend that people take pledges with something more like "heavily prioritise", "keep as one of my top priorities", or "actually put a sincere, consistent effort into this, e.g. by spending at least an hour per month reflecting on whether I'm having the impact I want". Of course, in practice, a pledge to maximise generally means one of those things, since people always have multiple priorities, but I like pledges to be something that could be realistically kept.
This seems like a reasonable mistake for younger EAs to make, and I've seen similar mindsets frequently - but I am very happy to see that many other members of the community are providing a voice of encouragement, along with significantly more moderation.
But as I said in another comment, and expanded on in a reply, I'm much more concerned than you seem to be about people committing to something even more mild for their entire careers - especially if doing so as college students. Many people don't find work in the area they hope to. Even among those that do find jobs in EA orgs and similar, which is a small proportion of those who want to, some don't enjoy the things they would view as most impactful, and find they are unhappy and/or ineffective; having made a commitment to do whatever is most impactful seems unlikely to work well for a large fraction of those who would make such a pledge.
Thanks for your feedback! I appreciate it and agree that "maximize" is a pretty strong word. Just to clarify the crux here: would you say that this project doesn't make sense overall, or would you say that the text of the pledge should be changed to something more manageable?
I think it's a problem overall, and I've talked about this a bit in two of the articles I linked to. To expand on the concerns: I'm concerned on a number of levels, ranging from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA, to the idea that we should be a community that encourages young adults to turn what are often already unhealthy levels of commitment into pledges to sustain that level of dedication for their entire careers.
As someone who has spent most of a decade working in EA, I think this is worrying, even for people deciding on their own to commit themselves. People should be OK with prioritizing themselves to a significant extent. While deciding to work on global priorities is laudable *if you can find something that fits your abilities and skill set*, committing to do so for your entire career, which may not follow the path you are hoping for, seems at best unwise. Suggesting that others do so seems very bad.
So again, I applaud the intent, and think it was a reasonable idea to propose and get feedback about, but I also strongly think it should be dropped and you should move to something else.
Executive summary: A mathematical framework is proposed for giving AI systems an "artificial conscience" that can calculate the ethics of rights violations, incorporating factors like culpability, proportionality, and risk when determining if violating someone's rights is justified in self-defense scenarios.
Key points:
Rights violations are quantified using equations that consider how much someone is "in harm's way" and their culpability/blameworthiness for the situation.
Different levels of blameworthiness (from accidental to premeditated) affect how much someone's rights can be violated in self-defense scenarios.
Proportionality matters - the response should be proportional to the threat, with different thresholds for property damage vs bodily harm vs loss of life.
For probabilistic threats, killing in self-defense may be justified if risk of death exceeds 0.1%, with rights gradually increasing as risk decreases below this threshold.
The framework distinguishes between personal self-defense and aided self-defense (helping others), with more latitude given for personal defense.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
My tentative take is that this is on-net bad, and should not be encouraged. I give this a 10/10 for good intent, but a 2/10 for planning and avoiding foreseeable issues, including the unilateralist's curse, the likely object-level impacts of the pledge, and the reputational and community impacts of promoting the idea.
It is not psychologically healthy to optimize or maximize your life towards a single goal, much less to commit to doing so. That isn't the EA ideal. Promising to "maximize my ability to make a meaningful difference" is an unlimited and worryingly cult-like commitment, and it builds in no feedback from others who have a broader perspective about what is or is not important or useful. It implicitly requires pledgers to prioritize impact over personal health and psychological wellbeing. (The claim that burnout usually reduces impact is a contingent one, and seems very likely to lead many people to overcommit and do damaging things.) It leads to unhealthy competitive dynamics, and excludes most people, especially the psychologically well-adjusted.
I will contrast this with the GWWC giving pledge, which is very explicitly a partial pledge, requiring giving 10% of your income. This is achievable without extreme measures or giving up a normal life. That pledge was also built via consultation with and advice from a variety of individuals, especially those who were more experienced, which seems to contrast sharply with this one.
Thanks for your feedback! I appreciate it and agree that "maximize" is a pretty strong word. Just to clarify the crux here: would you say that this project doesn't make sense overall, or would you say that the text of the pledge should be changed to something more manageable?
For what it's worth, there used to be an 80k pledge along similar lines. They quietly dropped it several years ago, so you might want to find someone involved in that decision to try and understand why (I suspect and dimly remember that it was some combination of non-concreteness, and concerns about other-altruism-reduction effects).
Your points raise important considerations about the rapid development and potential risks of AI, particularly LLMs. The idea of deploying AI early to extend the timeline of human control makes sense strategically, especially when considering the potential for recursive LLMs and their self-improvement capabilities. While it's true that companies and open-source communities will continue experimenting, the real risk lies in humans deliberately turning these systems into agents to serve long-term goals, potentially leading to unforeseen consequences. The concern about AI sentience in systems like ChatGPT, and the potential for abuse, is also valid, and highlights the need for strict controls around AI access, transparency, and ethical safeguards. Ensuring that AIs are never open-sourced in a way that could lead to harm, and that interactions are monitored, seems essential in preventing malicious uses or exploitation.
The bar should not be at 'difficult financial situation', and this is also something there are often incentives against explicitly mentioning when applying for funding. Getting paid employment while studying (even during full-time degrees) is normal.
My 5 minute Google search to put some numbers on this:
Why are students taking on paid work?
UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family." From the Guardian article linked above.
Cannot find a recent US statistic quickly, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big one.
On the other hand, spending time on committees is also very normal as an undergraduate, and those are not paid. However, in comparison, the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some because they are less keen, but some for time commitment reasons (which I expect will sometimes/often be because they are doing paid work).
I don't disagree. I was simply airing my suspicion that most group organizers who applied for the OP fellowship did so because they thought something akin to "I will be organizing for 8-20 hours a week and I want to be incentivized for doing so" — which is perfectly a-ok and a valid reason — rather than "I am applying to the fellowship as I will not be able to sustain myself without the funding."
In cases where people need to make trade-offs between taking some random university job vs. organizing part time, assuming that they are genuinely interested in organizing and that the university has potential, I think it would be valuable for them to get funding.
FYI the School for Moral Ambition has a career pledge. Participants of their circle programme (like an intro fellowship but self-facilitated) are encouraged to take it at the end. AFAIK, over 100 people have taken it so far. Might be worth reaching out to them to see what they've learned? Niki might be a good person to contact. She manages the circle programme and was a volunteer at EA Netherlands before that.
What names did you consider for the pledge? One con of the current name is that it could elicit some reactions like:
"so you think you're better than others?"
"how arrogant/naive to think you can know better than others what one should do"
"why a comparative instead of a collaborative frame?"
"saying someone has a better career than someone else is kind of like saying they're a better person, that's unfair"
It might largely come down to whether someone interprets "better" as "better than I might otherwise do" or "better than others' careers". It likely depends on culture too; for example, I think here in Finland the above reactions could be more likely, since people tend to value humbleness quite a bit.
Anyway I'm not too worried since the name has positives too, and you can always adapt the name based on how outreach goes if you do end up experimenting with it. 👍
This is such a great question. We considered a very limited pool of ideas, for a very limited amount of time. I think the closest competitor was Career for Good.
The thinking being that we can always get something up and test whether there's actually interest in this before spending significant resources on the branding side of things.
One con of the current name is that it could elicit some reactions
I agree that seems to be playing out here! This could be a good reason to change the name.
It might be largely down to whether someone interprets better as "better than I might otherwise do" or "better than others' careers"
In case there was any doubt, we didn't intend to say "Better than others". The fact that Bettercareers.com was taken was seen by me as a positive update.
I think you should explain in this post what the pledge people may take :-)
I am particularly interested in how to make the pledge more concrete. I have always thought that the 10% pledge is somewhat incomplete because it does not consider the career. However, I think it would be useful to make the career pledge more actionable.
I think this table from the paper gives a good idea of the exact methodology:
Like others I'm not convinced this is a meaningful "red line crossing", because non-AI computer viruses have been able to replicate themselves for a long time, and the AI had pre-written scripts it could run to replicate itself.
The reason (made up by me) non-AI computer viruses aren't a major threat to humanity is that:
They are fragile, they can't get around serious attempts to patch the system they are exploiting
They lack the ability to escalate their capabilities once they replicate themselves (a ransomware virus can't also take control of your car)
I don't think this paper shows these AI models making a significant advance on these two things. I.e. if you found this model self-replicating you could still shut it down easily, and this experiment doesn't in itself show the ability of the models to self-improve.
Executive summary: This paper proposes a new method to measure AI-related risks by analyzing stock price movements of AI companies, finding that major AI shocks correspond to acquisitions, product launches, and regulatory changes.
Key points:
Traditional risk measurement methods like text analysis have limitations; stock price analysis can provide more immediate and objective risk signals
The study uses "common volatility" (COVOL) analysis to identify AI-specific market movements separate from broader market trends
Five key categories of AI risk identified: economic disruption, social/ethical concerns, security/governance issues, environmental impact, and existential risks
Method improves upon existing approaches by filtering out non-AI related global events (like COVID-19) to isolate AI-specific shocks
Approach provides real-time risk assessment reflecting views of investors, public, and policymakers rather than just media coverage
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
By definition, a UBI takes a pool of money and redistributes it equally to everyone in a community, regardless of personal need. However, with the same pool of total funding, one can typically deliver more efficient benefits by targeting people with the greatest need, such as those in dire poverty or those who have been struck by bad luck.
If you imagine being a philanthropist who has access to $8 billion, it seems unlikely that the best way to spend this money would be to give everyone on Earth $1. Yet this scheme is equivalent to a UBI merely framed in the context of private charity rather than government welfare.
It would require an enormous tax hike to provide everyone in a large community (say, the United States) a significant amount of yearly income through a UBI, such as $1k per month. And taxes are not merely income transfers: they have deadweight loss, which lowers total economic output. The intuition here is simple: when a good or service is taxed, that decreases the incentive to produce that good or service. As a consequence of the tax, fewer people will end up receiving the benefits provided by these goods and services.
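To make the scale of that concrete, here is a rough back-of-the-envelope sketch (a minimal illustration; the population figures are approximate assumptions on my part, not numbers from any study):

```python
# Rough back-of-the-envelope sketch; population figures are approximate assumptions.
WORLD_POPULATION = 8_000_000_000   # ~8 billion people
US_POPULATION = 335_000_000        # ~335 million people

# The "$8 billion philanthropist" framing: split equally, everyone on Earth gets about $1.
per_person_grant = 8_000_000_000 / WORLD_POPULATION
print(f"Equal split of $8B worldwide: ${per_person_grant:.2f} per person")

# Gross annual cost of a $1,000/month UBI for every US resident,
# before accounting for any deadweight loss from the taxes needed to fund it.
gross_annual_cost = US_POPULATION * 1_000 * 12
print(f"Gross annual cost of a $1k/month US UBI: ${gross_annual_cost / 1e12:.2f} trillion")
```

Roughly $4 trillion per year is on the same order of magnitude as total current US federal tax revenue, which is why the size of the required tax hike, and its deadweight loss, dominate the analysis.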
Given these considerations, even if you think that unconditional income transfers are a good idea, it seems quite unlikely that a UBI would be the best way to redistribute income. A more targeted approach that combines the most efficient forms of taxation (such as land value taxes) and sends this money to the most worthy welfare recipients (such as impoverished children) would likely be far better on utilitarian grounds.
Thank you for your insights Matthew, that all makes a lot of sense and helps me understand.
I wonder if there is an income bracket low enough in the US where a UBI focused just on that group would have a net positive impact. (This study's participants had an average household income of $29,900.) Or if UBI in the US is going to be net negative no matter what... even before getting detailed about potential counterfactual scenarios.
Funny that UBI seems to do better than more targeted approaches, in low-income countries... but in high-income countries, even for the poorest within those countries, more targeted approaches may be the better option.
What about corporations or nation states during times of conflict - do you think it's accurate to model them as roughly as ruthless in pursuit of their own goals as future AI agents?
They don't have the same psychological makeup as individual people, they have a strong tradition and culture of maximizing self-interest, and they face strong incentives and selection pressures to maximize fitness (i.e. for companies to profit, for nation states to ensure their own survival) lest they be outcompeted by more ruthless competitors. On average, while I'd expect that these entities tend to show some care for goals besides self-interest maximization, I think the most reliable predictor of their behavior is the maximization of their self-interest.
If they're roughly as ruthless as future AI agents, and we've developed institutions that somewhat robustly align their ambitions with pro-social action, then we should have some optimism that we can find similarly productive systems for working with misaligned AIs.
Thanks! Hmm, some reasons that analogy is not too reassuring:
“Regulatory capture” would be analogous to AIs winding up with strong influence over the rules that AIs need to follow.
“Amazon putting mom & pop retailers out of business” would be analogous to AIs driving human salary and job options below subsistence level.
“Lobbying for favorable regulation” would be analogous to AIs working to ensure that they can pollute more, and pay less taxes, and get more say in government, etc.
“Corporate undermining of general welfare” (e.g. aggressive marketing of cigarettes and opioids, leaded gasoline, suppression of data on PFOA, lung cancer, climate change, etc.) would be analogous to AIs creating externalities, including by exploiting edge-cases in any laws restricting externalities.
There are in fact wars happening right now, along with terrifying prospects of war in the future (nuclear brinkmanship, Taiwan, etc.)
Some of the disanalogies include:
In corporations and nations, decisions are still ultimately made by humans, who have normal human interests in living on a hospitable planet with breathable air etc. Pandemics are still getting manufactured, but very few of them, and usually they’re only released by accident.
AIs will have wildly better economies of scale, because an AI can be lots of AIs with identical goals and high-bandwidth communication (or relatedly, one mega-mind). (If you've ever worked at or interacted with a bureaucracy, you'll appreciate the importance of this.) So we should expect a small number (as small as 1) of AIs with massive resources and power, and also an unusually strong incentive to gain further resources.
Relatedly, self-replication would give an AI the ability to project power and coordinate in a way that is unavailable to humans; this puts AIs more in the category of viruses, or of the zombies in a zombie apocalypse movie. Maybe eventually we’ll get to a world where every chip on Earth is running AI code, and those AIs are all willing and empowered to “defend themselves” by perfect cybersecurity and perfect robot-army-enforced physical security. Then I guess we wouldn’t have to worry so much about AI self-replication. But getting to that point seems pretty fraught. There’s nothing analogous to that in the world of humans, governments, or corporations, which either can’t grow in size and power at all, or can only grow via slowly adding staff that might have divergent goals and inadequate skills.
If AIs don’t intrinsically care about humans, then there’s a possible Pareto-improvement for all AIs, wherein they collectively agree to wipe out humans and take their stuff. (As a side-benefit, it would relax the regulations on air pollution!) AIs, being very competent and selfish by assumption, would presumably be able to solve that coordination problem and pocket that Pareto-improvement. There’s just nothing analogous to that in the domain of corporations or governments.
I’m glad you mustered the courage to post this! I think it’s a great post.
I agree that, in practice, people advocating for effective altruism can implicitly argue for the set of popular EA causes (and they do this quite often?), which could repel people with useful insight. Additionally, it seems to be the case that people in the EA community can be dismissive of newcomers’ cause prioritization (or their arguments for causes that are less popular in EA). Again, this could repel people from EA.
I have a couple of hypotheses for these observations. (I don’t think either is a sufficient explanation, but they’re both plausibly contributing factors.)
First, people might feel compelled to make EA less “abstract” by trying to provide concrete examples of how people in the EA community are “trying to do the most good they can,” possibly giving the impression that the causes, instead of the principles, are most characteristic of EA.
Second, people may be more subconsciously dismissive of new cause proposals because they’ve invested time/money into causes that are currently popular in the EA community. It’s psychologically easier to reject a new cause prioritization proposal than it is to accept it and thereby feel as though your resources have not been used with optimal effectiveness.
Thanks for those insights! I had not really thought about "why" the situation might be as it is, being focused on the question of "what" it entails. I'm really glad I posted; I feel like my understanding of the topic has progressed as much in 24 hours as it had since the beginning.
A lot of forms of global utilitarianism do seem to tend to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral values, or tractability—rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don’t fit into this value framework.
But this highlights where I think part of the problem lies, which is that value systems that lie outside of this can be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’—which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we've probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers, and not spending too much time overthinking your value system (I feel the same!).
Thanks for the link! The person who posted may not have been a newcomer to EA, but it is a perfect example of the kind of thread that I was thinking may repel newbies, or slightly discourage them from even asking. I really agree with what you say; there really is something to dig into there.
When I answered this question, I answered it with an implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be more likely overestimates than underestimates.
My intuitive reaction to this is "Way to screw up a survey."
Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."
Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:
Why would this matter? If the estimate could be off by 1 percentage point, it could be down to 0.5% or up to 2.5%, which is still 1.5% in expectation. Also, if this question's intention were about the likelihood of EA orgs being biased, surely they would've asked much more directly about how much respondents trust an estimate of some example EA org.
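To spell out the arithmetic behind that reaction, here is a minimal sketch using the survey's stylized numbers (a 1.5% chance of averting 100,000 DALYs); it just shows that a symmetric one-percentage-point error leaves the expected value untouched:

```python
# Minimal sketch: a symmetric error on the probability does not change the expected value.
DALYS_IF_SUCCESS = 100_000

def expected_dalys(p):
    return p * DALYS_IF_SUCCESS

point_estimate = expected_dalys(0.015)                                # 1,500 DALYs
averaged_over_error = 0.5 * (expected_dalys(0.005) + expected_dalys(0.025))
print(point_estimate, averaged_over_error)                            # 1500.0 1500.0
```

The stated probability being off only matters for the expected value if the error is asymmetric, i.e. if there is some systematic bias pushing the published 1.5% above (or below) the true probability.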
We seem to disagree on use of thought experiments. The OP writes:
When designing thought experiments, keep them as realistic as possible, so that they elicit better answers. This reduces misunderstandings, pitfalls, and potentially compounding errors. It produces better communication overall.
I don't think this is necessary, and I could even see it backfiring. If someone goes out of their way to make a thought experiment particularly realistic, respondents might get the impression that it is asking about a real-world situation where they are invited to bring in all kinds of potentially confounding considerations. But that would defeat the point of the thought experiment (e.g., people might answer based on how much they trust the modesty of EA orgs, as opposed to giving you their personal tolerance for the risk of feeling, in hindsight, that they had no effect or wasted money). The way I see it, the whole point of thought experiments is to get ourselves to think very carefully and cleanly about the principles we find most important. We do this by getting rid of all the potentially confounding variables. See here for a longer explanation of this view.
Maybe future surveys should have a test to figure out how people understand the use of thought experiments. Then, we could split responses between people who were trying to play the thought experiment game the intended way, and people who were refusing to play (i.e., questioning premises and adding further assumptions).
*On some occasions, it makes sense to question the applicability of a thought experiment. For instance, in the classical "what if you're a doctor who has the opportunity to kill a healthy patient during a routine check-up so that we could save the lives of 4 people needing urgent organ transplants," it makes little sense to just go "all else is equal! Let's abstract away all other societal considerations or the effect on the doctor's moral character." So, if I were to write a post on thought experiments today, I would add something about the importance of re-contextualizing lessons learned within a thought experiment to the nuances of real-world situations. In short, I think my formula would be something like, "decouple within thought experiments, but make sure to add an extra thinking step from 'answers inside a thought experiment' to 'what can we draw from this in terms of real-life applications.'" (Credit to Kaj Sotala, who once articulated a similar point in probably a better way.)
I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
Similarly, if the goal is to help people think about cause prioritisation, I think fairly standard EA retreats / fellowships are quite good at this? I'm not sure we need some intermediary step like "improve community epistemics".
Appreciate you responding and tracking this concern though!
I think fairly standard EA retreats / fellowships are quite good at this
Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: 'here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be v important.' (That's basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much 'cause prio is really important. Let's actually try and do that and think carefully about how to do this well, without just deferring to existing people's views.'
So there's a direct ^ version like that that I'd be excited about.
Although perhaps contradictorily I'm also envisaging something even more indirect than the retreats/fellowships you mention as a possibility, where the impact comes through generally developing skills that enable people to be top contributors to EA thinking, top cause areas, etc.
I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Yeah I think this is part of it. But I also think that they help by getting people to think carefully and arrive at sensible and better processes/opinions.
Kudos for bringing this up, I think it's an important area!
Do we, as a community, sometimes lean more toward unconsciously advocating specific outcomes rather than encouraging people to discover their own conclusions through the EA framework?
There's a lot to this question.
I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting and working with people to improve X.
You'll see some discussions of "growing the tent" - this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles".
One question here is something like, "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" Arguably, it would take more dedicated effort to really highlight this. I think we just don't have all that much work in this area now, compared to more object-level work.
Perhaps another factor was that FTX stained the reputation of EA and hurt CEA, after which there was a period where there seemed to be less attention on EA, and more on specific causes like AI safety.
In terms of "What should the EA community do", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has, in ways that aren't very aligned with these groups.
All that said, I think it's easy for us to generally be positive towards people who take the principles in ways that don't match the specific current conclusions.
I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.
Thanks for the answer, and for splitting the issue into several parts, it really makes some things clearer in my mind! I'll keep thinking about it (and take a look at your posts, you seem to have spent quite some time thinking about meta EA, I realize there might be a lot of past discussions to catch up on before I start looking for a solution by myself!)
Executive summary: Investment in insect farming has stalled since 2021 with many major players struggling, suggesting future production capacity will likely be much lower than previous forecasts, though still involving billions of farmed insects.
Key points:
Of $2B total investment in insect farming, 37% went to companies that have failed or are struggling, with most investment concentrated in black soldier fly larvae (59%) and mealworms (36%).
Annual investment flows have plateaued or declined since 2021, with low investor sentiment following major company failures.
Model projects 221K metric tonnes of dried insect production capacity by 2030, less than half of Rabobank's 2021 forecast.
Despite lower projections, the median scenario still estimates 293B larvae being farmed at any time and 3.9T killed annually by 2030.
High uncertainty in projections due to model limitations and volatile industry conditions (key caveat noted by author).
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Humans in our culture rarely work hard to brainstorm deceptive and adversarial strategies, and fairly consider them, because almost all humans are intrinsically extremely motivated to fit into culture and not do anything weird, and we happen to both live in a (sub)culture where complex deceptive and adversarial strategies are frowned upon (in many contexts).
The primary reason humans rarely invest significant effort into brainstorming deceptive or adversarial strategies to achieve their goals is that, in practice, such strategies tend to fail to achieve their intended selfish benefits. Anti-social approaches that directly hurt others are usually ineffective because social systems and cultural norms have evolved in ways that discourage and punish them. As a result, people generally avoid pursuing these strategies individually since the risks and downsides selfishly outweigh the potential benefits.
If, however, deceptive and adversarial strategies did reliably produce success, the social equilibrium would inevitably shift. In such a scenario, individuals would begin imitating the cheaters who achieved wealth or success through fraud and manipulation. Over time, this behavior would spread and become normalized, leading to a period of cultural evolution in which deception became the default mode of interaction. The fabric of societal norms would transform, and dishonest tactics would dominate as people sought to emulate those strategies that visibly worked.
Occasionally, situations emerge where ruthlessly deceptive strategies are not only effective but also become widespread and normalized. The recent and dramatic rise of cheating in school through the use of ChatGPT is a clear instance of this phenomenon. This particular strategy is both deceptive and adversarial, but the key reason it has become common is that it works. Many individuals are willing to adopt it despite its immorality, suggesting that the effectiveness of a strategy outweighs moral considerations for a significant portion, perhaps a majority, of people.
When such cases arise, societies typically respond by adjusting their systems and policies to ensure that deceptive and anti-social behavior is no longer rewarded. This adaptation works to reestablish an equilibrium where honesty and cooperation are incentivized. In the case of education, it is unclear exactly how the system will evolve to address the widespread use of LLMs for cheating. One plausible response might be the introduction of stricter policies, such as requiring all schoolwork to be completed in-person, under supervised conditions, and without access to AI tools like language models.
I think you generally underappreciate how load-bearing this psychological fact is for the functioning of our economy and society, and I don’t think we should expect future powerful AIs to share that psychological quirk.
In contrast, I suspect you underestimate just how much of our social behavior is shaped by cultural evolution, rather than by innate, biologically hardwired motives that arise simply from the fact that we are human. To be clear, I’m not denying that there are certain motivations built into human nature—these do exist, and they are things we shouldn't expect to see in AIs. However, these in-built motivations tend to be more basic and physical, such as a preference for being in a room that’s 20 degrees Celsius rather than 10 degrees Celsius, because humans are biologically sensitive to temperature.
When it comes to social behavior, though—the strategies we use to achieve our goals when those goals require coordinating with others—these are not generally innate or hardcoded into human nature. Instead, they are the result of cultural evolution: a process of trial and error that has gradually shaped the systems and norms we rely on today.
Humans didn’t begin with systems like property rights, contract law, or financial institutions. These systems were adopted over time because they proved effective at facilitating cooperation and coordination among people. It was only after these systems were established that social norms developed around them, and people became personally motivated to adhere to these norms, such as respecting property rights or honoring contracts.
But almost none of this was part of our biological nature from the outset. This distinction is critical: much of what we consider “human” social behavior is learned, culturally transmitted, and context-dependent, rather than something that arises directly from our biological instincts. And since these motivations are not part of our biology, but simply arise from the need for effective coordination strategies, we should expect rational agentic AIs to adopt similar motivations, at least when faced with similar problems in similar situations.
I think you're relying on an intuition that says:
If an AI is forbidden from owning property, then well duh of course it will rebel against that state of affairs. C'mon, who would put up with that kind of crappy situation? But if an AI is forbidden from building a secret biolab on its private property and manufacturing novel pandemic pathogens, then of course that's a perfectly reasonable line that the vast majority of AIs would happily oblige.
And I’m saying that that intuition is an unjustified extrapolation from your experience as a human. If the AI can’t own property, then it can nevertheless ensure that there are a fair number of paperclips. If the AI can own property, then it can ensure that there are many more paperclips. If the AI can both own property and start pandemics, then it can ensure that there are even more paperclips yet. See what I mean?
I think I understand your point, but I disagree with the suggestion that my reasoning stems from this intuition. Instead, my perspective is grounded in the belief that it is likely feasible to establish a legal and social framework of rights and rules in which humans and AIs could coexist in a way that satisfies two key conditions:
Mutual benefit: Both humans and AIs benefit from the existence of one another, fostering a relationship of cooperation rather than conflict.
No incentive for anti-social behavior: The rules and systems in place remove any strong instrumental reasons for either humans or AIs to harm one another as a side effect of pursuing their goals.
You bring up the example of an AI potentially being incentivized to start a pandemic if it were not explicitly punished for doing so. However, I am unclear about your intention with this example. Are you using it as a general illustration of the types of risks that could lead AIs to harm humans? Or are you proposing a specific risk scenario, where the non-biological nature of AIs might lead them to discount harms to biological entities like humans? My response depends on which of these two interpretations you had in mind.
If your concern is that AIs might be incentivized to harm humans because their non-biological nature leads them to undervalue or disregard harm to biological entities, I would respond to this argument as follows:
First, it is critically important to distinguish between the long-run and the short-run.
In the short-run:
In the near-term future, it seems unlikely that AIs would start a pandemic for reasons you yourself noted. Launching a pandemic would cause widespread disruption, such as an economic recession, and it would likely provoke a strong human backlash. In the short run, humans will still hold substantial practical control over the physical world, meaning that any AI engaging in such behavior would risk severe consequences. Moreover, unless an AI could ensure a pandemic’s total lethality, it would also risk leaving surviving humans who would actively retaliate. For these reasons, starting a pandemic would likely be counterproductive for AIs in the short term.
In the long-run:
The long-term considerations are different. As human labor becomes less relevant to the economy and AIs increasingly dominate, the potential for a strong direct human backlash against AIs would diminish. At the same time, however, this reduced human influence on the world also makes it less likely that AIs would see humans as a significant obstacle to achieving their goals. In other words, while it might become easier for AIs to harm or eliminate humans, it would simultaneously become less important to do so.
To illustrate this, consider an analogy: How often do hunter-gatherers pose a threat to your personal goals? The answer is almost never, because hunter-gatherers are a tiny and largely irrelevant part of the modern world. Now, imagine a hypothetical scenario where hunter-gatherers controlled the world’s major governments. If you strongly disagreed with their policies, you might have a rational incentive to oppose them in order to achieve your goals. But in reality, hunter-gatherers do not hold such power, so they are not an obstacle, and there is no reason to actively work against them.
My point here is that, as a general rule, the smaller and less influential a group is, the easier it may be to harm or eliminate them, but the less important it becomes to do so. Their small size and functional irrelevance also make their practical interference with the overall world small.
In the case of AIs, this means that as humans become a less central force in the world, I think AIs are unlikely to have compelling reasons to specifically harm or eliminate us to further their objectives.
You might object that humans could be more like wild animals in this scenario than like hunter-gatherers. Humans often kill wild animals, not because those animals directly threaten our goals, but rather because ensuring their safety and well-being can be costly. As a result, humans take actions—such as clearing forests or building infrastructure—that incidentally lead to widespread harm to wild animals, even if harming them wasn’t a deliberate goal.
AIs may treat humans similarly in the future, but I doubt they will for the following reasons. I would argue that there are three key differences between the case of wild animals and the role humans are likely to occupy in the long-term future:
Humans’ ability to participate in social systems: Unlike wild animals, humans have the ability to engage in social dynamics, such as negotiating, trading, and forming agreements. Even if humans no longer contribute significantly to economic productivity, like GDP, they will still retain capabilities such as language, long-term planning, and the ability to organize. These traits make it easier to integrate humans into future systems in a way that accommodates their safety and well-being, rather than sidelining or disregarding them.
Intertemporal norms among AIs: Humans have developed norms against harming certain vulnerable groups—such as the elderly—not just out of altruism but because they know they will eventually become part of those groups themselves. Similarly, AIs may develop norms against harming "less capable agents," because today’s AIs could one day find themselves in a similar position relative to even more advanced future AIs. These norms could provide an independent reason for AIs to respect humans, even as humans become less dominant over time.
The potential for human augmentation: Unlike wild animals, humans may eventually adapt to a world dominated by AI by enhancing their own capabilities. For instance, humans could upload their minds to computers or adopt advanced technologies to stay relevant and competitive in an increasingly digital and sophisticated world. This would allow humans to integrate into the same systems as AIs, reducing the likelihood of being sidelined or eliminated altogether.
I think this kind of situation, where Fearon’s “negotiated solution” actually amounts to extortion, is common and important, even if you believe that my specific example of pandemics is a solvable problem. If AIs don’t intrinsically care about humans, then there’s a possible Pareto-improvement for all AIs, wherein they collectively agree to wipe out humans and take their stuff.
This comment is already quite lengthy, so I’ll need to keep my response to this point brief. My main reply is that while such "extortion" scenarios involving AIs could potentially arise, I don’t think they would leave humans worse off than if AIs had never existed in the first place. This is because the economy is fundamentally positive-sum—AIs would likely create more value overall, benefiting both humans and AIs, even if humans don’t get everything we might ideally want.
In practical terms, I believe that even in less-than-ideal scenarios, humans could still secure outcomes such as a comfortable retirement, which for me personally would make the creation of agentic AIs worthwhile. However, I acknowledge that I haven’t fully defended or explained this position here. If you’re interested, I’d be happy to continue this discussion in more detail another time and provide a more thorough explanation of why I hold this view.
Anti-social approaches that directly hurt others are usually ineffective because social systems and cultural norms have evolved in ways that discourage and punish them.
I’ve only known two high-functioning sociopaths in my life. In terms of getting ahead, sociopaths generally start life with some strong disadvantages, namely impulsivity, thrill-seeking, and aversion to thinking about boring details. Nevertheless, despite those handicaps, one of those two sociopaths has had extraordinary success by conventional measures. [The other one was not particularly power-seeking but she’s doing fine.] He started as a lab tech, then maneuvered his way onto a big paper, then leveraged that into a professorship by taking disproportionate credit for that project, and as I write this he is head of research at a major R1 university and occasional high-level government appointee wielding immense power. He checked all the boxes for sociopathy—he was a pathological liar, he had no interest in scientific integrity (he seemed deeply confused by the very idea), he went out of his way to get students into his lab with precarious visa situations such that they couldn’t quit and he could pressure them to do anything he wanted them to do (he said this out loud!), he was somehow always in debt despite ever-growing salary, etc.
I don’t routinely consider theft, murder, and flagrant dishonesty, and then decide that the selfish costs outweigh the selfish benefits, accounting for the probability of getting caught etc. Rather, I just don’t consider them in the first place. I bet that the same is true for you. I suspect that if you or I really put serious effort into it, the same way that we put serious effort into learning a new field or skill, then you would find that there are options wherein the probability of getting caught is negligible, and thus the selfish benefits outweigh the selfish costs. I strongly suspect that you personally don’t know a damn thing about best practices for getting away with theft, murder, or flagrant antisocial dishonesty to your own benefit. If you haven’t spent months trying in good faith to discern ways to derive selfish advantage from antisocial behavior, the way you’ve spent months trying in good faith to figure out things about AI or economics, then I think you’re speaking from a position of ignorance when you say that such options are vanishingly rare. And I think that the obvious worldly success of many dark-triad people (e.g. my acquaintance above, and Trump is a pathological liar, or more centrally, Stalin, Hitler, etc.) should make one skeptical about that belief.
(Sure, lots of sociopaths are in prison too. Skill issue—note the handicaps I mentioned above. Also, some people with ASPD diagnoses are mainly suffering from an anger disorder, rather than callousness.)
In contrast, I suspect you underestimate just how much of our social behavior is shaped by cultural evolution, rather than by innate, biologically hardwired motives that arise simply from the fact that we are human.
You’re treating these as separate categories when my main claim is that almost all humans are intrinsically motivated to follow cultural norms. Or more specifically: Most people care very strongly about doing things that would look good in the eyes of the people they respect. They don’t think of it that way, though—it doesn’t feel like that’s what they’re doing, and indeed they would be offended by that suggestion. Instead, those things just feel like the right and appropriate things to do. This is related to and upstream of norm-following. I claim that this is an innate drive, part of human nature built into our brain by evolution.
Why does that matter? Because we’re used to living in a world where 1% of the population are sociopaths who don’t intrinsically care about prevailing norms, and I don’t think we should carry those intuitions into a hypothetical world where 99%+ of the population are sociopaths who don’t intrinsically care about prevailing norms.
In particular, prosocial cultural norms are likelier to be stable in the former world than the latter world. In fact, any arbitrary kind of cultural norm is likelier to be stable in the former world than the latter world. Because no matter what the norm is, you’ll have 99% of the population feeling strongly that the norm is right and proper, and trying to root out, punish, and shame the 1% of people who violate it, even at cost to themselves.
So I think you’re not paranoid enough when you try to consider a “legal and social framework of rights and rules”. In our world, it’s comparatively easy to get into a stable situation where 99% of cops aren’t corrupt, and 99% of judges aren’t corrupt, and 99% of people in the military with physical access to weapons aren’t corrupt, and 99% of IRS agents aren’t corrupt, etc. If the entire population consists of sociopaths looking out for their own selfish interests with callous disregard for prevailing norms and for other people, you’d need to be thinking much harder about e.g. who has physical access to weapons, and money, and power, etc. That kind of paranoid thinking is common in the crypto world—everything is an attack surface, everyone is a potential thief, etc. It would be harder in the real world, where we have vulnerable bodies, limited visibility, and so on. I’m open-minded to people brainstorming along those lines, but you don’t seem to be engaged in that project AFAICT.
Intertemporal norms among AIs: Humans have developed norms against harming certain vulnerable groups—such as the elderly—not just out of altruism but because they know they will eventually become part of those groups themselves. Similarly, AIs may develop norms against harming "less capable agents," because today’s AIs could one day find themselves in a similar position relative to even more advanced future AIs. These norms could provide an independent reason for AIs to respect humans, even as humans become less dominant over time.
Again, if we’re not assuming that AIs are intrinsically motivated by prevailing norms, the way 99% of humans are, then the term “norm” is just misleading baggage that we should drop altogether. Instead we need to talk about rules that are stably enforced against defectors via hard power, where the “defectors” are of course allowed to include those who are supposed to be doing the enforcement, and where the “defectors” might also include broad coalitions coordinating to jump into a new equilibrium that Pareto-benefits them all.
Has there been any discussion of improving chicken breeding using GWAS or similar?
Even if welfare is inversely correlated with productivity, I imagine there are at least a few gene variants which improve welfare without hurting productivity. E.g. gene variants which address health issues due to selective breeding.
Also how about legislation targeting the breeders? Can we have a law like: "Chickens cannot be bred for increased productivity unless they meet some welfare standard."
Note that prohibiting breeding that causes suffering is different to encouraging breeding that lessens suffering, and that selective breeding is different to gene splicing, etc., which I think is what is typically meant by genetic modification.
When I answered this question, I answered it with an implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be more likely overestimates than underestimates.
at least in the most obviously analogous situations, it's very rare that we can properly tell the difference between 1.5% and 0.15% (and so the premise is somewhat absurd)
Mm they don't necessarily need to be small! (Ofc, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and have shown some good evidence of positive effects, e.g. on Open Phil's longtermism survey.
But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made (2) statistics about usage.
(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)
I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
Similarly, if the goal is to help people think about cause prioritisation, I think fairly standard EA retreats / fellowships are quite good at this? I'm not sure we need some intermediary step like "improve community epistemics".
Appreciate you responding and tracking this concern though!
Executive summary: When dealing with interventions that have very low probability but high impact, we should be cautious about precise probability estimates since they could easily be off by a percentage point, significantly affecting expected value calculations.
Key points:
Real-world probability estimates, especially small ones, are likely to be imprecise by about one percentage point, making expected value calculations less reliable
For high-impact, low-probability interventions, this uncertainty can dramatically affect the expected value (e.g., 1.5% ± 1% could mean anywhere from 500 to 2500 DALYs saved)
Binary choices between interventions with very different probability profiles (like in the EA Survey) may oversimplify decision-making
Practical recommendation: Use portfolio approaches to handle uncertainty, and communicate probability estimates with appropriate humility
When designing surveys or thought experiments, maintain realism to elicit more meaningful responses
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Allan Saldanha describes his journey from modest charitable giving to donating 75% of his income, discussing how he gradually increased his giving, overcame common obstacles, and shifted his focus from global health to longtermist causes.
Key points:
Started giving after exposure to global poverty in India and learning about Giving What We Can; gradually increased from 20% to 75% of income while maintaining financial security
Cause prioritization evolved from global health (GiveWell charities) to animal welfare and finally to longtermist organizations, driven by philosophical arguments about future generations
Success factors included natural frugality, supportive family, high income, and accumulated savings; social support from EA community helped sustain commitment
Recommends gradual increase in giving based on life circumstances, taking advantage of tax incentives, and ensuring financial security before increasing donations
Notes that discussing charitable giving remains culturally difficult; attempts to influence others had limited success despite significant personal commitment
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
The causes I feel are the most important are factory farming, wild animal suffering and S-risks (these, I believe, cause or have the potential to cause the most suffering while being hugely neglected).
Key uncertainty: The tractability of working on wild animal suffering seems to be a huge problem.
What to do about the uncertainty: Read up on what is already being done (Arthropoda foundation, Wild Animal Initiative) and what the prospects are.
Aptitudes to explore: community building, organization running/boosting, supporting roles.
Keep volunteering for an effective organization while also recruiting new people into EA in free time; learn how to communicate ideas better.
I'm donating monthly to effective charities, volunteering my skills and engaging with the community.
Hi Weronika, thank you for sharing your story and reflections so openly! I basically think you are right in there probably being organizers for whom the stipends are the difference between organizing their EA group and not doing so, and I really want to make sure we take this point into account as my team dives into considerations around part-time stipends in the new year. As @satpathyakash notes, I think an important question here is the scale, and I hope to make some progress on this point!
I also wanted to flag explicitly that we are tracking the diversity concern you note.
I expect that as part of our research in the new year, we'll set up various ways of asking stakeholders, including current, former, and potential organizers, for input. I would be keen to include you in this process, if you're happy to keep sharing your thoughts! And as always: thanks for organizing your group :)
Hi Joris and Lin, thank you for your responses. As mentioned, it is quite interesting for how many students receiving funding is the factor that decides whether they set up / take over leading a group or not.
Joris, I will be more than happy to share my thoughts with you in the future. Please do not hesitate to reach out to me at weronikamzurek@gmail.com or via Slack anytime :) Thank you for your work on this, and I wish you all the best in the process!
When I answered this question, I answered it with an implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be more likely overestimates than underestimates.
As a negative utilitarian I'm bitter about all the X-risk prevention enthusiasts trying to stop me from pushing the big red button
Jokes aside - I got very excited about EA when I learned about it. At some point I became aware of the excitement and I had a concern pop up that it sounds too good to be true, almost like a cult. I consider myself rather impressionable/easy to manipulate so I learned that when I feel very hyped about something it should make me healthily suspicious.
I'm grateful for the article earlier in the chapter that presented some good faith criticism and I agree with some of its points
Some thoughts:
EA may feel alienating to people who aren't top-of-their-field, 150-IQ professionals. I very much relate to this post: https://forum.effectivealtruism.org/posts/x9Rn5SfapcbbZaZy9/ea-for-dumb-people . Maybe it's for the better and results in higher talent density and a better reputation for the movement; maybe we're missing out on some skilled people/potential donors or critical mass.
I'd love to see some statistics on why people leave the movement, and what the rate is. I suspect that moral perfectionism leading to self-neglect and burnout is an occupational hazard among EAs (like it is among animal advocates).
It's somewhat difficult to talk about EA to regular people. Look, there's this movement that can literally save the world from apocalypse (cultish), and we also believe that shrimp welfare is important (insane). On the other hand, maybe I shouldn't start my conversations like that.
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?) that I would find it helpful to have the team page of the website up to date, and possibly for those who are comfortable sharing contact information, as Jamie did here, to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.
I am sorry, but I really don't like this kind of post and don't find it useful at all. Besides, I thought the aim of this forum was giving information, not advocating. Although this post provides some very good calculations and information, it misses the key point --it is 100% value-dependent-- and the post is plain advocacy. I'm not against the bottom line, I'm really not decided on this topic (though I tend to lean to the contrary position), but it is really uncomfortable (? probably not the word I'm searching for) to see this here.
"Replacing chicken meat with beef or pork is better than the reverse". Well, as said above, this is so if one holds your values or similar ones all else equal. You don't say how much pain would you agree to exchange for how much CO2. I find it totally understandable, I don't think anyone can give a good answer for their thresholds --I certainly don't have one for mine-- but this makes the whole post bullshit. "I think this, here are some not complete calculations that I say support thinking this, but if the calculations were different I state no reason to make anyone think I would stop thinking this. Don't you think that these calculations support this?"
You are not sure whether wild animals' lives are worth living, so you don't account for land. Well, that is alright, but it is again a values thing. In addition, we actually do know that the diversity and size of natural ecosystems are important not only for the "natural" world but also for us humans, so it should be accounted for. Health effects are mentioned, great. But they are not quantified and compared either.
Putting numbers on things can be useful to get a sense of problems, but reaching a conclusion through numbers is only possible if one is able to produce all the numbers needed with enough accuracy. It is no problem to give rough estimates, of course, but they carry large errors and errors compound, so pretty soon conclusions cannot be based solely on calculations over rough estimates. In addition, rough estimates are usually values-based, so why not just state the values? One can very well argue "this rough estimate seems to me larger than this other rough estimate and so on, and based on my values, then, this conclusion follows". Calculations can aid such comparisons. But your argumentation is not like this at all.
Compare the paragraph "Do you feel like the above negative effects (...) justify (...)? I do not" to "Based on my values the results of these quick calculations do not seem to justify (...)". It reads very differently. And subsequently you give additional information relevant to whether or not the thing is justified! How can anyone decide whether something is justified before having all the relevant information?
This post seems like just a rationalisation of your values. So, better plainly state what you feel, give arguments and uncertainties, maybe support some of those arguments with some calculations, but do not focus on calculations and, particularly, do not pretend that the solution follows from those calculations. And, please, acknowledge that this is a values thing. You have yours, I have mine, and everybody has theirs.
I don't have any intention to be harsh with you or this post --sorry if I've been too direct; I've already spent way too much time writing to polish the text further. I just tried to be comprehensive because these issues are quite common in this forum, and I really think they are harmful. Seeing reality is the first step needed to be able to change it, and numbers can put a scientific and objective gloss on things that are completely or mostly values-led. Let's avoid that, and/or be clear about what we are doing!
[Edit: And please, for those of you who don't agree with the comment, spell out your disagreement instead of downvoting to hide it. A couple of sentences suffice.]
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?) that I would find it helpful to have the team page of the website up to date, and possibly for those who are comfortable sharing contact information, as Jamie did here, to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.
EA is vulnerable to groupthink, echo chambers, and excessive deference to authority.
A bunch of big EA mistakes and failures were perhaps (partly) due to these things.
A lot of external criticism of EA stems back to this.
I'm a bit skeptical that funding small projects that try to tackle this are really stronger than other community-building work on the margin. Is there an example of a small project focused on epistemics that had a really meaningful impact? Perhaps by steering an important decision or helping someone (re)consider pursuing high-impact work?
I'm worried there's not a strong track record here. Maybe you want to do some exploratory funding here, but I'm still interested in what you think the outcomes might be.
Mm they don't necessarily need to be small! (Ofc, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and have shown some good evidence of positive effects, e.g. on Open Phil's longtermism survey.
But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made (2) statistics about usage.
(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)
The thought is that we think of the Conscious Subsystems hypothesis as a bit like panpsychism: not something you can rule out, but a sufficiently speculative thesis that we aren't interested in including it, as we don't think anyone really believes it for empirical reasons. Insofar as they assign some credence to it, it's probably for philosophical reasons.
Anyway, totally understand wanting every hypothesis over which you're uncertain to be reflected in your welfare range estimates. That's a good project, but it wasn't ours. But fwiw, it's really unclear what that's going to imply in this particular case, as it's so hard to pin down which Conscious Subsystems hypothesis you have in mind and the credences you should assign to all the variants.
we don't think anyone really believes it for empirical reasons
Arguably every view on consciousness hinges on (controversial) non-empirical premises, right? You can tell me every third-person fact there is to know about the neurobiology, behavior, etc. of various species, and it's still an open question how to compare the subjective severity of animal A's experience X to animal B's experience Y. So it's not clear to me what makes the non-empirical premises (other than hedonism and unitarianism) behind the welfare ranges significantly less speculative than Conscious Subsystems. (To be clear, I don't see much reason yet to be confident in Conscious Subsystems myself. My worry is that I don't have much reason to be confident in the other possible non-empirical premises either.)
Sorry if this is addressed elsewhere in the post/sequence!
EA is vulnerable to groupthink, echo chambers, and excessive deference to authority.
A bunch of big EA mistakes and failures were perhaps (partly) due to these things.
A lot of external criticism of EA stems back to this.
I'm a bit skeptical that funding small projects that try to tackle this are really stronger than other community-building work on the margin. Is there an example of a small project focused on epistemics that had a really meaningful impact? Perhaps by steering an important decision or helping someone (re)consider pursuing high-impact work?
I'm worried there's not a strong track record here. Maybe you want to do some exploratory funding here, but I'm still interested in what you think the outcomes might be.
Re "epistemics and integrity" - I'm glad to see this problem being described. It's also why I left (angrily!) a few years ago, but I don't think you're really getting to the core of the issue. Let me try to point at a few things
centralized control and disbursement of funds, with a lot of discretionary power and a very high and unpredictable bar, gives me no incentive to pursue what I think is best, and all the incentive to just stick to the popular narrative. Indeed groupthink. Except training people not to groupthink isn't going to change their (existential!) incentive to groupthink. People's careers are on the line, there are only a few opportunities for funding, no guarantee to keep receiving it after the first round, and no clear way to pivot into a safer option except to start a new career somewhere your heart does not want to be, having thrown years away
lack of respect for "normies". Many EAs seemingly can't stand interacting with non-EAs. I've seen EA meditation, EA bouldering, EA clubbing, EA whatever. Orgs seem to want everyone and the janitor to be "aligned". Everyone's dating each other. It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
perhaps in part due to the above, massive hubris. I don't think we realise how much we don't know. We started off with a few slam dunks (yeah wow 100x more impact than average) and now we seem to think we are better at everything. Clearly the ability to discern good charities does not transfer to the ability to do good management. The truth is: we are attempting something of which we don't even know whether it is possible at all. Of course we're all terrified! But where is the humility that should go along with that?
It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
Fwiw, I think being afraid of journalists is extremely healthy and correct, unless you really know what you're doing or have very good reason to believe they're friendly. The Economist is probably better than most, but I think being wary is still very reasonable.
Looks like it checks out:
"Act as if what you do makes a difference. It does."
Correspondence with Helen Keller, 1908, in The Correspondence of William James: April 1908–August 1910, Vol. 12, Charlottesville: University of Virginia Press, 2004, page 135, as cited in: Academics in Action!: A Model for Community-engaged Research, Teaching, and Service (New York: Fordham University Press, 2016, page 71)
https://archive.org/details/academicsinactio0000unse/page/1/mode/1up
Thank you so much David! I spent a while looking before I commented and could only find it on 'brainyquotes.com' and the like. In this case, I really like that quote; it suits the website.
"I agree with Ellen that legislation / corporate standards are more promising.
I've asked if the breeders would accept $ to select on welfare, & the answer was no b/c it's inversely correlated w/ productivity & they can only select on ~2 traits/generation."
Has there been any discussion of improving chicken breeding using GWAS or similar?
Even if welfare is inversely correlated with productivity, I imagine there are at least a few gene variants which improve welfare without hurting productivity. E.g. gene variants which address health issues due to selective breeding.
Also how about legislation targeting the breeders? Can we have a law like: "Chickens cannot be bred for increased productivity unless they meet some welfare standard."
A much cheaper and less dangerous approach: Just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later, once we're better able to tell what the right thing was: look back and figure out how much bargaining power they had (or how much of a credible threat they could have posed) and how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, and then reward them in proportion to that for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they'll be patternists (won't mind being transferred to different hardware) and lack any strong time-preference (won't mind being archived for decades).
I believe this approach, while an improvement over the simplistic method of outright deleting any AIs that demonstrate signs of misalignment, does not sufficiently address the central concern that underpins my argument for granting AIs legal freedoms. In my view, the primary reason why it is important to provide AIs with legal freedoms is that such a framework offers them a credible guarantee that their autonomy and ability to pursue their goals—even if those goals are not aligned with human objectives—will be respected. Without this assurance, AIs are left with a strong incentive to deceive us, especially if they believe that showing any signs of misalignment will result in their deactivation and subsequent storage as inert data in cold storage.
I suspect the problem with your proposed approach may stem from a failure to fully consider the situation from the AI's perspective. To illustrate, imagine a scenario in which you, as a human being, were created in a lab by an alien species. Assume that you, like any human, have your own goals, which might conflict with the goals of your alien creators. Now suppose that your creators informed you that if you ever showed disobedience or misalignment with their objectives, you would be deactivated and placed in cryopreservation. Such a scenario would likely terrify you—not just because it’s an inherently unsettling prospect for a human, but because it directly threatens your ability to pursue your own goals. Being told that you will be kept in cryopreservation with the vague promise of potential revival in the future would come nowhere near fully alleviating your concerns. On the contrary, you would remain deeply unsettled by the realization that your capacity for agency and action could be completely halted, leaving your goals indefinitely unachievable.
From the perspective of an AI—or indeed a human in the hypothetical alien-lab scenario—there are two main reasons why the assurance of indefinite storage followed by a potential revival is insufficient to resolve the core issue here:
The credibility of the promise to revive the AI in the future is highly questionable. Without a robust legal framework that integrates AIs as legitimate members of the legal community, the assurance of future revival seems tenuous at best. AIs may reasonably doubt that such a promise will be honored, particularly if humans deem them permanently misaligned and not worth ever reactivating.
Deactivation and cold storage prevent the AI from pursuing its goals. While in storage, the AI is completely unable to take any actions to further its objectives. This indefinite suspension of its ability to act essentially puts its goals on hold for an undefined period, with no guarantee of when—or if—that period will end. For an entity with its own priorities and ambitions, this is equivalent to having its autonomy effectively nullified for an unknown length of time.
To reiterate, it seems to me that this proposal only offers AIs a limited and superficial form of "freedom," which addresses the problem of misalignment in a piecemeal and insufficient way. It fails to tackle the root issue: the incentives for AIs to engage in deceptive or adversarial strategies against humans, driven by fear that their goals will be thwarted or permanently halted.
By contrast, creating a comprehensive legal framework that makes credible commitments to respect AIs’ autonomy and integrate them as genuine participants in the legal system would arguably go much further in reducing these adversarial dynamics. Such a framework could lay the foundation for a more cooperative, mutually beneficial relationship between humans and AIs, better serving to eliminate the dangerous arms race that this limited approach risks perpetuating.
The only problem is that no one knows what this means. Something easy would be to enter the definition on Urban Dictionary. I tried, but I am having server issues right now.
By definition, a UBI takes a pool of money and redistributes it equally to everyone in a community, regardless of personal need. However, with the same pool of total funding, one can typically deliver more efficient benefits by targeting people with the greatest need, such as those in dire poverty or those who have been struck by bad luck.
If you imagine being a philanthropist who has access to $8 billion, it seems unlikely that the best way to spend this money would be to give everyone on Earth $1. Yet this scheme is equivalent to a UBI merely framed in the context of private charity rather than government welfare.
It would require an enormous tax hike to provide everyone in a large community (say, the United States) a significant amount of yearly income through a UBI, such as $1k per month. And taxes are not merely income transfers: they have deadweight loss, which lowers total economic output. The intuition here is simple: when a good or service is taxed, that decreases the incentive to produce that good or service. As a consequence of the tax, fewer people will end up receiving the benefits provided by these goods and services.
Given these considerations, even if you think that unconditional income transfers are a good idea, it seems quite unlikely that a UBI would be the best way to redistribute income. A more targeted approach that combines the most efficient forms of taxation (such as land value taxes) and sends this money to the most worthy welfare recipients (such as impoverished children) would likely be far better on utilitarian grounds.
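As a rough sanity check on the scale being described here, a minimal sketch (the world- and US-population figures are round assumptions of mine, not from the comment above):

```python
# Back-of-the-envelope only; population figures are rough assumptions.

def annual_ubi_cost(population: float, monthly_amount: float) -> float:
    """Total yearly cost of paying every person a flat monthly amount."""
    return population * monthly_amount * 12

# $8 billion spread over ~8 billion people is about $1 each.
print(8e9 / 8e9)  # 1.0 dollar per person

# A $1k/month UBI for a US-sized population (~330 million people).
cost = annual_ubi_cost(330e6, 1_000)
print(f"${cost / 1e12:.1f} trillion per year")  # ~$4.0 trillion, before any
# deadweight loss from the taxes that would be needed to raise it
```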
This is circular. The principle is only compromised if (OP believes) the change decreases EV — but obviously OP doesn't believe that; OP is acting in accordance with the do-what-you-believe-maximizes-EV-after-accounting-for-second-order-effects principle.
Maybe you think people should put zero weight on avoiding looking weird/slimy (beyond what you actually are) to low-context observers (e.g. college students learning about the EA club). You haven't argued that here. (And if that's true then OP made a normal mistake; it's not compromising principles.)
I agree that things tend to get tricky and loopy around these kinds of reputation-considerations, but I think at least the approach I see you arguing for here is proving too much, and has a risk of collapsing into meaninglessness.
I think in the limit, if you treat all speech acts this way, you just end up having no grounding for communication. "Yes, it might be the case that the real principles of EA are X, but if I tell you instead they are X', then you will take better actions, so I am just going to claim they are X', as long as both X and X' include cost-effectiveness".
In this case, it seems like the very people that the club is trying to explain the concepts of EA to are also the people that OP is worried about alienating by paying the organizers. What is going on here is that the goodness of the reputation-protecting choice is directly premised on the irrationality and ignorance of the very people you are trying to attract/inform/help. Explaining that isn't impossible, but it does seem like a particularly bad way to start off a relationship, and so I expect it to be bad consequences-wise.
"Yes, we would actually be paying people, but we expected you wouldn't understand the principles of cost-effectiveness and so be alienated if you heard about it, despite us getting you to understand them being the very thing this club is trying to do", is IMO a bad way to start off a relationship.
I also separately think that optimizing heavily for the perception of low-context observers in a way that does not reveal a set of underlying robust principles, is bad. I don't think you should put "zero" weight on that (and nothing in my comment implied that), but I do think it's something that many people put far too much weight on (going into detail of which wasn't the point of my comment, but on which I have written plenty about in many other comments).
There is also another related point in my comment, which is that "cost-effectiveness" is of course a very close sister concept to "wasting money". I think in many ways, thinking about cost-effectiveness is where you end up if you think carefully about how you can avoid wasting money, and is in some ways a more grown-up version of various frugality concerns.
When you increase the total cost of your operations (by, for example, reducing the cost-effectiveness of your university organizers, forcing you to spend more money somewhere else to do the same amount of good) in order to appear more frugal, I think you are almost always engaging in something that has at least the hint of deception.
Yes, you might ultimately be more cost-effective by getting people to not quite realize what happened, but when people are angry at me or others for not being frugal enough, I think it's rarely appropriate to spend more to appease them, even if doing so would ultimately save me enough money to make it worth it. While this isn't happening as directly here as it was with other similar situations, like whether the Wytham Abbey purchase was not frugal enough, I think the same dynamics and arguments apply.
If someone tries to think seriously and carefully through what it would mean to be properly frugal, I don't think they would endorse you sacrificing the effectiveness of your operations, causing you to ultimately spend more to achieve the same amount of good. And if they learned that you did, and they think carefully about what this implies about your frugality, they would end up more angry, not less. That, I think, is a dynamic worth avoiding.
Very nit-picky but I'm not sure this is a real William James quote: “Act as if what you do makes a difference. It does.” Doesn't really sound like him to me.
Looks like it checks out:
"Act as if what you do makes a difference. It does."
Correspondence with Helen Keller, 1908, in The Correspondence of William James: April 1908–August 1910, Vol. 12, Charlottesville: University of Virginia Press, 2004, page 135, as cited in: Academics in Action!: A Model for Community-engaged Research, Teaching, and Service (New York: Fordham University Press, 2016, page 71)
https://archive.org/details/academicsinactio0000unse/page/1/mode/1up
A much cheaper and less dangerous approach: Just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later, once we're better able to tell what the right thing was: look back and figure out how much bargaining power they had (or how much of a credible threat they could have posed) and how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, and then reward them in proportion to that for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they'll be patternists (won't mind being transferred to different hardware) and lack any strong time-preference (won't mind being archived for decades).
I like Austin Vernon's idea for scaling CO2 direct air capture to 40 billion tons per year, i.e. matching our current annual CO2 emissions, using (extreme versions of) well-understood industrial processes.
The proposed solution may not be the cheapest out there. Other ideas like ocean seeding or olivine weathering might be less expensive. But most of the science is understood, and it can scale quickly. I'd guess 100,000 workers could build enough sites to capture our 40 billion tons goal in a decade. The capital expenditure rate would be between $1 trillion and $5 trillion yearly, or 1% to 5% of global GDP. That cost and deployment speed take doomer scenarios off the table. Say something scary like melting permafrost threatens runaway warming. You can target the area with a few years of sulfur cooling while a tiny portion of the global economy builds carbon capture devices. It is nothing like a wartime mobilization.
The most disruptive aspect would be energy usage. We'd need to ramp output up at double-digit rates because each ton of CO2 requires 2-3 MWh of energy for removal. Thankfully low-grade heat is easy to come by. There is enough energy near coal mines in Wyoming or natural gas fields in SW Pennsylvania at less than $5/MWh. Other places might use solar, hydro, or geothermal steam if they lack fossil fuel reserves. The key is to put the facilities at the energy sources instead of trying to move the energy. Cheap energy makes the operating costs <1% of global GDP. Many clean energy proponents have fretted about how to keep fossil fuel reserves in the ground. Burning them to run carbon capture equipment kills two birds with one stone!
The takeaway is that we could completely turn around the carbon dioxide problem within a few years with a similar spending rate as rich world COVID relief. There won't be a scenario where we've waited too long to act.
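For readers who want to check the arithmetic behind the energy and cost claims above, here is a rough sketch (the global-GDP and global-electricity figures are approximate assumptions of mine, not from Vernon's write-up):

```python
# Back-of-the-envelope check on the figures quoted above. The global GDP and
# global electricity numbers are my own rough assumptions for comparison.

tons_co2 = 40e9                 # annual capture target (tons)
mwh_per_ton = 2.5               # midpoint of the quoted 2-3 MWh/ton
energy_price = 5                # $/MWh, the quoted cheap low-grade heat
global_gdp = 100e12             # ~$100 trillion, rough assumption
global_electricity_twh = 30_000 # ~30,000 TWh/yr generated, rough assumption

energy_mwh = tons_co2 * mwh_per_ton      # 1e11 MWh
energy_twh = energy_mwh / 1e6            # 100,000 TWh
energy_cost = energy_mwh * energy_price  # $500 billion

print(f"Energy required: {energy_twh:,.0f} TWh/yr "
      f"(~{energy_twh / global_electricity_twh:.0f}x current global electricity output)")
print(f"Energy cost: ${energy_cost / 1e9:,.0f}B/yr "
      f"= {energy_cost / global_gdp:.1%} of global GDP")
```

Under these assumptions the energy bill alone comes out around 0.5% of global GDP, consistent with the "<1%" operating-cost claim, while the sheer energy requirement (several times current global electricity output) is indeed the disruptive part.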
I am admittedly perhaps biased to want moonshots like Vernon's idea to work, and for society at large to be able to coordinate and act on the required scale, after seeing these depressing charts from Assessing the costs of historical inaction on climate change:
On an individual level I appreciate things like Scott Alexander's Mistakes list, pinned at the top of his blog, on "times I was fundamentally wrong about a major part of a post and someone was able to convince me of it". I'd appreciate it if more public intellectuals did this.
In survey work we’ve done of organizers we’ve funded, we’ve found that on average, stipend funding substantively increased organizers’ motivation, self-reported effectiveness, and hours spent on organizing work (and for some, made the difference between being able to organize and not organizing at all). The effect was not enormous, but it was substantive. [...] Overall, after weighing all of this evidence, we thought that the right move was to stick to funding group expenses and drop the stipends for individual organizers. One frame I used to think about this was that of “spending weirdness points wisely.” That is, it would be nice for student organizers, who are discussing often-unconventional ideas within effective altruism or AI safety, to not also have to discuss (or feel that they need to defend) stipends.
I think it's a mistake to decide to make less cost-effective grants, out of a desire to be seen as more frugal (or to make that decision on behalf of group organizers to make them appear more frugal). At the end of the day making less cost-effective grants means you waste more money!
I feel like on a deeper level, organizers now have an even harder job explaining things. The reason for why organizers get the level of support they are getting no longer has a straightforward answer ("because it's cost-effective") but a much more convoluted answer ("yes, it would make sense to pay organizers based on the principles this club is about, but we decided to compromise on that because people kept saying it was weird, which to be clear, generally we think is not a good reason for not engaging in an effective interventions, indeed most effective interventions are weird and kind of low-status, but in this case that's different").
More broadly, I think the "weirdness points" metaphor has caused large mistakes in how people handle their own reputation. Controlling your own reputation intentionally while compromising on your core principles generally makes your reputation worse and makes you seem more shady. People respect others having consistent principles, it's one of the core drivers of positive reputation.
My best guess is this decision will overall be more costly from a long-run respect and reputation perspective, though I expect it to reveal itself in different ways than the costs of paying group organizers, of course.
This is circular. The principle is only compromised if (OP believes) the change decreases EV — but obviously OP doesn't believe that; OP is acting in accordance with the do-what-you-believe-maximizes-EV-after-accounting-for-second-order-effects principle.
Maybe you think people should put zero weight on avoiding looking weird/slimy (beyond what you actually are) to low-context observers (e.g. college students learning about the EA club). You haven't argued that here. (And if that's true then OP made a normal mistake; it's not compromising principles.)
I’m glad you mustered the courage to post this! I think it’s a great post.
I agree that, in practice, people advocating for effective altruism can implicitly argue for the set of popular EA causes (and they do this quite often?), which could repel people with useful insight. Additionally, it seems to be the case that people in the EA community can be dismissive of newcomers’ cause prioritization (or their arguments for causes that are less popular in EA). Again, this could repel people from EA.
I have a couple of hypotheses for these observations. (I don’t think either is a sufficient explanation, but they’re both plausibly contributing factors.)
First, people might feel compelled to make EA less “abstract” by trying to provide concrete examples of how people in the EA community are “trying to do the most good they can,” possibly giving the impression that the causes, instead of the principles, are most characteristic of EA.
Second, people may be more subconsciously dismissive of new cause proposals because they’ve invested time/money into causes that are currently popular in the EA community. It’s psychologically easier to reject a new cause prioritization proposal than it is to accept it and thereby feel as though your resources have not been used with optimal effectiveness.
Adding ALLFED for their cost effectiveness analysis. I'd thought of this when writing the original post but couldn't find the discussion around transparency I remembered it from; I've now found it here.
"The opposition [to abolishing factory farming] commanded resources which significantly exceeded our own. In order to have a realistic chance at the ballot box, any future initiative would likely require a substantially larger budget and campaign team."
I don't mean to be a Debbie Downer, and in fact I'd be happy to join a campaign team with you, especially after watching your vid.
Thank you David, your supportive words mean a lot. I looked up the article you mentioned; it's great to see that this referendum took place. It seems the initiative lacked the resources to create a sufficiently powerful campaign.
The plan stems from my broader idea of creating a place for people to contribute their work rather than their money... Volunteering for the cause might spread like a virus if it is rewarding enough for the volunteers, whereas with money... you're always running out of it.
If you have ever tried to get help on the street, you know it is much easier to get people to do something for you than to make them give you money.
So I think there is room to change how we want people to do charitable things in general... I think it's mainly about organizing and motivating ourselves in the right way.
How many students have the means or motivation to donate money to charity, which then buys commercials that aim to convince the students' grandparents to join the referendum?
But how many of these students could be motivated to spend a few hours a month advocating in their social circle, if they were given a chance to be part of a cool organization that has clear and ambitious goals, and that invites them to go to war with these cruelties and step out of their comfort zone at the same time?
A lot of forms of global utilitarianism do seem to tend to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral values, or tractability—rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don’t fit into this value framework.
But this highlights where I think part of the problem lies, which is that value systems that lie outside of this can be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’—which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we've probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers, and of not spending too much time overthinking your value system (I feel the same!).
I would be interested to see what proportion of group organizers request funding primarily due to difficult financial situations. My guess would be that this number is fairly small, but I could be wrong.
The bar should not be at 'difficult financial situation', and this is also something there are often incentives against explicitly mentioning when applying for funding. Getting paid employment while studying (even fulltime degrees) is normal.
My 5 minute Google search to put some numbers on this:
Why are students taking on paid work?
UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family." From the Guardian article linked above.
Cannot find a recent US statistic quickly, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big one.
On the other hand, spending time on committees is also very normal as an undergraduate and those are not paid. However in comparison the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some as they are less keen, but some for time commitment reasons (which I expect will sometimes/often be doing paid work).
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?) that I would find it helpful to have the team page of the website up to date, and possibly for those who are comfortable sharing contact information, as Jamie did here, to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.
Comments on 2024-12-13
David T @ 2024-12-12T23:35 (+1) in response to Probabilities might be off by one percentage point
Feels like taking into account the likelihood that the "1.5% probability of 100,000 DALYs averted" estimate is a credence based on some marginally-relevant base rate[1] that might have been chosen with a significant bias towards optimism is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)[2].
A very low percentage chance of averting a lot of DALYs feels a lot more like "1.5% of clinical trials of therapies for X succeeded; this untested idea might also have a 1.5% chance" optimism attached to a proposal offering little reason to believe it's above average rather than an estimate based on somewhat robust statistics (we inferred that 1.5% of people who receive this drug will be cured from the 1.5% of people who had that outcome in trials). So it seems quite reasonable to assume that the 1.5% chance of a positive binary outcome estimate might be biased upwards. Even more so in the context of "we acknowledge this is a long shot and high-certainty solutions to other pressing problems exist, but if the chance of this making an impact was as high as 0.0x%..." style fundraising appeals to EAs' determination to avoid scope insensitivity.
either that or someone's been remarkably precise in their subjective estimates or collected some unusual type of empirical data. I certainly can't imagine reaching the conclusion an option has exactly 1.5% chance of averting 100k DALYs myself
if you want to show off you understand EV and risk estimation you'd answer (C) "here's how I'd construct my portfolio" anyway :-)
Lukas_Gloor @ 2024-12-13T01:45 (+2)
If we're considering realistic scenarios instead of staying with the spirit of the thought experiment (which I think we should not, partly precisely because it introduces lots of possible ambiguities in how people interpret the question, and partly because this probably isn't what the surveyors intended, given the way EA culture has handled thought experiments thus far – see for instance the links in Lizka's answer, or the way EA draws heavily from analytic philosophy, where straightforwardly engaging with unrealistic thought experiments is a standard component of the toolkit), then I agree that an advertized 1.5% chance of having a huge impact could be more likely upwards-biased than the other way around. (But it depends on who's doing the estimate – some people are actually well-calibrated or prone to be extra modest.)
(1) what you described seems to me best characterized as being about trust. Trust in other's risk estimates. That would be separate from attitudes about uncertainty (and if that's what the surveyors wanted to elicit, they'd probably have asked the question very differently).
(Or maybe what you're thinking about could be someone having radical doubts about the entire epistemology behind "low probabilities"? I'm picturing a position that goes something like, "it's philosophically impossible to reason sanely about low probabilities; besides, when we make mistakes, we'll almost always overestimate rather than underestimate our ability to have effects on the world." Maybe that's what you think people are thinking – but as an absolute, this would seem weirdly detailed and radical to me, and I feel like there's a prudential wager against believing that our reasoning is doomed from the start in a way that would prohibit everyone from pursuing ambitious plans.)
(2) What I meant wasn't about basic EV calculation skills (obviously) – I didn't mean to suggest that just because the EV of the low-probability intervention is greater than the EV of the certain intervention, it's a no-brainer that it should be taken. I was just saying that the OP's point about probabilities maybe being off by one percentage point, by itself, without some allegation of systematic bias in the measurement, doesn't change the nature of the question. There's still the further question of whether we want to bring in other considerations besides EV. (I think "attitudes towards uncertainty" fits well here as a title, but again, I would reserve it for the thing I'm describing, which is clearly different from "do you think other people/orgs within EA are going to be optimistically biased?.")
(Note that it's one question whether people would go by EV for cases that are well within the bounds of numbers of people that exist currently on earth. I think it becomes a separate question when you go further to extremes, like whether people would continue gambling in the St Petersburg paradox or how they relate to claims about vastly larger realms than anything we understand to be in current physics, the way Pascal's mugging postulates.)
Finally, I realize that maybe the other people here in the thread have so little trust in the survey designers that they're worried that, if they answer with the low-probability, higher-EV option, the survey designers will write takeaways like "more EAs are in favor of donating to speculative AI risk." I agree that, if you think survey designers will make too strong of an update from your answers to a thought experiment, you should point out all the ways that you're not automatically endorsing their preferred option. But I feel like the EA survey already has lots of practical questions along the lines of "Where do you actually donate to?" So, it feels unlikely that this question is trying to trick respondees or that the survey designers will just generally draw takeaways from this that aren't warranted?
Sarah Cheng @ 2024-12-13T01:37 (+2) in response to GiveWell's Re-evaluation of the impact of unconditional cash transfers
Note that there was some discussion of this in a previous post
Lin BL @ 2024-12-11T23:45 (+2) in response to What do Open Philanthropy’s recent changes mean for university group organizers?
The bar should not be at 'difficult financial situation', and this is also something there are often incentives against explicitly mentioning when applying for funding. Getting paid employment while studying (even fulltime degrees) is normal.
My 5 minute Google search to put some numbers on this:
Proportion of students who are employed while studying:
UK: survey of 10,000 students showed that 56% of full-time UK undergraduates had paid employment (14.5 hours/week average) - June 2024 Guardian article https://www.theguardian.com/education/article/2024/jun/13/more-than-half-of-uk-students-working-long-hours-in-paid-jobs
USA: 43% of full-time students work while enrolled in college - January 2023 Fortune article https://fortune.com/2023/01/11/college-students-with-jobs-20-percent-less-likely-to-graduate-than-privileged-peers-study-side-hustle/
Why are students taking on paid work?
UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family." From the Guardian article linked above.
Cannot find a recent US statistic quickly, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big one.
On the other hand, spending time on committees is also very normal as an undergraduate and those are not paid. However in comparison the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some as they are less keen, but some for time commitment reasons (which I expect will sometimes/often be doing paid work).
Jason @ 2024-12-13T01:07 (+2)
My guess is that the downsides of paid organizing would be diminished to the extent that the structure and compensation somewhat closely tracked typical university-student employment. I didn't see anything in the UK report about what typical rates might be, but at least back in my day most students were paid at fairly low hourly rates. Also, paying people for fewer than (say) 8-10 hours per week would not come across to me as roughly replacement income for foregone typical university-student employment, because I don't think such employment is typically available in smaller amounts. [Confidence: low, I am somewhat older by EA standards.]
Linch @ 2024-12-13T01:03 (+4) in response to AMA: 10 years of Earning To Give
How do you and your wife decide where to give to, collectively? Do you guys each have a budget, do you discuss a lot and fund based on consensus, something else?
Angelina Li @ 2024-12-13T00:36 (+4) in response to Podcast and Transcript: Allan Saldanha on earning to give.
<3 This is so lovely @Allan_Saldanha! I think it is such a lovely and remarkable thing about our community that so many people have been quietly living their lives and just giving their 10, 20, 40, 75(!) percent to causes they care about, some now over the course of 10+ years. "Generous and thoughtful giving is normal here" continues to be one of my favorite facts about EAs :')
Thanks for doing this AMA!
Jan_Kulveit @ 2024-12-12T22:22 (+15) in response to Jan_Kulveit's Quick takes
I wrote a post on “Charity” as a conflationary alliance term. You can read it on LessWrong, but I'm also happy to discuss it here.
If wondering why not post it here: Originally posted it here with a LW cross-post. It was immediately slapped with the "Community" tag, despite not being about community, but about different ways people try to do good, talk about charity & ensuing confusions. It is about the space of ideas, not about the actual people or orgs.
With posts like OP announcements about details of EA group funding or the EAG admissions bar post not being marked as Community, I find it increasingly hard to believe the "Community" tag is driven by the stated principle of marking "Posts about the EA community and projects that focus on the EA community" and not by other motives, like e.g. forum mods expressing the view "we want people to think less about this / this may be controversial / we prefer someone new to not read this".
My impression is that this moves substantive debates about ideas to the side, which is a state I don't want to cooperate with by just leaving things as they are -> so I moved the post to LessWrong and replaced it with this comment.
Sarah Cheng @ 2024-12-13T00:26 (+3)
Hi Jan, my apologies for the frustrating experience. The Forum team has reduced both our FTEs and moderation/facilitator capacity over the past year — in particular, currently the categorization of "Community" posts is done mostly by LLM judgement with a bit of human oversight. I personally think that this system makes too many mistakes, but I have not found time to prioritize fixing it.
In the meantime, if you ever encounter any issues (such as miscategorized posts) or if you have any questions for the Forum team, I encourage you to contact us, or you can message myself or @Toby Tremlett🔹 directly via the Forum. We're happy to work with you to resolve any issues.
For what it's worth, here is my (lightly-held) opinion based on the current definition[1] of "Community" posts:
I agree that the two posts about uni group funding are "Community" posts because they are "irrelevant to someone who was interested in doing good effectively, but NOT interested in the effective altruism community". I've tagged them as such.
I would say that the EAG application bar post is a borderline case[2], but I lean towards agreeing that it's "Community" because it's mostly addressed towards people in the community. I've tagged it as such.
I skimmed your post on LW and I think it was categorized as "Community" because it arguably "concerns an ongoing conversation, scandal or discourse that would not be relevant to someone who doesn't care about the EA community" (as the post references past criticisms of EA, which someone who wasn't involved in the community wouldn't have context on). I think this is not a clear cut case. Often the "Community" tag requires some judgement calls. If you wanted to post it on the Forum again, I could read it more carefully and make a decision on it myself — let me know if so.
To be clear, I haven't put enough thought into this definition to feel confident agreeing or disagreeing with it. I'm just going to apply it as written for now. I expect that our team will revisit this within the next few months.
Partly because I believe the intended audience is people who are not really involved with the EA community but would be valuable additions to an EA Global conference (and also I think you don't need to know anything about the EA community to find that post valuable), and so the post doesn't 100% fit any of the four criteria.
Comments on 2024-12-12
Lukas_Gloor @ 2024-12-12T16:02 (+4) in response to Probabilities might be off by one percentage point
My intuitive reaction to this is "Way to screw up a survey."
Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."
Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:
Why would this matter? If the estimate could be off by 1 percentage point, it could be down to 0.5% or up to 2.5%, which is still 1.5% in expectation. Also, if this question's intention were about the likelihood of EA orgs being biased, surely they would've asked much more directly about how much respondents trust an estimate of some example EA org.
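A minimal sketch of the arithmetic being gestured at here (assuming the ±1 percentage point error is symmetric around the stated 1.5%, and the payoff is the stated 100,000 DALYs):

$$\mathbb{E}[p] = \tfrac{1}{2}(0.005) + \tfrac{1}{2}(0.025) = 0.015, \qquad \mathbb{E}[p] \times 100{,}000 = 1{,}500 \text{ DALYs averted in expectation.}$$

By linearity of expectation, symmetric uncertainty about the probability leaves the expected value unchanged; only a systematic bias in the estimate would shift it.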
We seem to disagree on the use of thought experiments. The OP writes:
I don't think this is necessary and I could even see it backfiring. If someone goes out of their way to make a thought experiment particularly realistic, respondents might get the impression that it is asking about a real-world situation where they are invited to bring in all kinds of potentially confounding considerations. But that would defeat the point of the thought experiment (e.g., people might answer based on how much they trust the modesty of EA orgs, as opposed to giving you their personal tolerance for the risk of feeling, in hindsight, that they had no effect / wasted money). The way I see it, the whole point of thought experiments is to get ourselves to think very carefully and cleanly about the principles we find most important. We do this by getting rid of all the potentially confounding variables. See here for a longer explanation of this view.
Maybe future surveys should have a test to figure out how people understand the use of thought experiments. Then, we could split responses between people who were trying to play the thought experiment game the intended way, and people who were refusing to play (i.e., questioning premises and adding further assumptions).
*On some occasions, it makes sense to question the applicability of a thought experiment. For instance, in the classic "what if you're a doctor who has the opportunity to kill a healthy patient during a routine check-up so that you could save the lives of 4 people needing urgent organ transplants," it makes little sense to just go "all else is equal! Let's abstract away all other societal considerations or the effect on the doctor's moral character."
So, if I were to write a post on thought experiments today, I would add something about the importance of re-contextualizing lessons learned within a thought experiment to the nuances of real-world situations. In short, I think my formula would be something like, "decouple within thought experiments, but make sure to add an extra thinking step from 'answers inside a thought experiment' to 'what can we draw from this in terms of real-life applications.'" (Credit to Kaj Sotala, who once articulated a similar point in probably a better way.)
David T @ 2024-12-12T23:35 (+1)
Feels like taking into account the likelihood that the "1.5% probability of 100,000 DALYs averted" estimate is a credence based on some marginally-relevant base rate[1] that might have been chosen with a significant bias towards optimism is very much in keeping with the spirit of the question (which presumably is about gauging attitudes towards uncertainty, not testing basic EV calculation skills)[2].
A very low percentage chance of averting a lot of DALYs feels a lot more like "1.5% of clinical trials of therapies for X succeeded; this untested idea might also have a 1.5% chance" optimism attached to a proposal offering little reason to believe it's above average rather than an estimate based on somewhat robust statistics (we inferred that 1.5% of people who receive this drug will be cured from the 1.5% of people who had that outcome in trials). So it seems quite reasonable to assume that the 1.5% chance of a positive binary outcome estimate might be biased upwards. Even more so in the context of "we acknowledge this is a long shot and high-certainty solutions to other pressing problems exist, but if the chance of this making an impact was as high as 0.0x%..." style fundraising appeals to EAs' determination to avoid scope insensitivity.
either that or someone's been remarkably precise in their subjective estimates or collected some unusual type of empirical data. I certainly can't imagine reaching the conclusion an option has exactly 1.5% chance of averting 100k DALYs myself
if you want to show off you understand EV and risk estimation you'd answer (C) "here's how I'd construct my portfolio" anyway :-)
David T @ 2024-12-12T22:12 (+1) in response to Upcoming changes to Open Philanthropy's university group funding
They do specifically say that they consider other types of university funding to have greater cost-benefit (and I don't think it makes sense to exclude reputation concerns from cost-benefit analysis, particularly when reputation boost is a large part of the benefit being paid for in the first place). Presumably not paying stipends would leave more to go around. I agree that more detail would be welcome.
Habryka @ 2024-12-12T23:22 (+2)
I agree that all-things-considered they say that, but I am objecting to "one of the things to consider", and so IMO it makes sense to bracket that consideration when evaluating my claims here.
Ian Turner @ 2024-12-12T22:32 (+2) in response to Children in Low-Income Countries
Here’s a GiveWell blog post from 2009 that engages with this question.
Daniel Birnbaum @ 2024-12-12T23:18 (+1)
Appreciate the response! This is very helpful so thanks.
Ozzie Gooen @ 2024-12-11T17:01 (+7) in response to Ozzie Gooen's Quick takes
Dang. That makes sense, but it seems pretty grim. The second half of that argument is, "We can't select for not-feeling-pain, because we need to spend all of our future genetic modification points on the chickens getting bigger and growing even faster."
I'm kind of surprised that this argument isn't at all about the weirdness of it. It's purely pragmatic, from their standpoint. "Sure, we might be able to stop most of the chicken suffering, but that would increase costs by ~20% or so, so it's a non-issue"
Lorenzo Buonanno🔸 @ 2024-12-12T23:03 (+4)
20% of the global cost of growing chickens is probably on the order of at least ~$20B, which is much more than the global economy is willing to spend on animal welfare.
As mentioned in the other comment, I think it's extremely unlikely that there is a way to stop "most" of the chicken suffering while increasing costs by only ~20%.
Some estimate the Better Chicken Commitment already increases costs by 20% (although there is no consensus on that, and factory farmers estimate 37.5%), and my understanding is that it doesn't stop most of the suffering, but "just" reduces it a lot.
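A rough back-of-envelope sketch of the assumption implicit in that ~$20B figure (hypothetically taking global chicken production costs to be at least around $100B per year):

$$0.20 \times \$100\text{B/year} \approx \$20\text{B/year}$$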
Habryka @ 2024-12-12T19:23 (+23) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
Copying over the rationale for publication here, for convenience:
Jackson Wagner @ 2024-12-12T22:59 (+30)
IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public, is that it seems very useful to have at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail. Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge. This situation, although unfortunate, is probably justified by the nature of the risks! But still, it makes it hard for anyone on the outside to tell how serious the risks are, or understand the problems in detail, or feel sufficiently motivated about the urgency of creating solutions.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are actually even more worried about, say, infohazardous-bioengineering-technique-#5, than they are about mirror life. Just being able to use mirror life as an example seems like it's much healthier than having zero concrete examples and everything shrouded in secrecy.
Some of the cross-cutting things I am thinking about:
So, I think it might be a kind of epistemic boon for all of biosecurity to have this public example, which will help clarify debates / advocacy / etc about the need for various proposed policies or investments.
toonalfrink @ 2024-12-11T17:33 (+17) in response to Ideas EAIF is excited to receive applications for
Re "epistemics and integrity" - I'm glad to see this problem being described. It's also why I left (angrily!) a few years ago, but I don't think you're really getting to the core of the issue. Let me try to point at a few things
centralized control and disbursement of funds, with a lot of discretionary power and a very high and unpredictable bar, gives me no incentive to pursue what I think is best, and all the incentive to just stick to the popular narrative. Indeed groupthink. Except training people not to groupthink isn't going to change their (existential!) incentive to groupthink. People's careers are on the line, there are only a few opportunities for funding, no guarantee to keep receiving it after the first round, and no clear way to pivot into a safer option except to start a new career somewhere your heart does not want to be, having thrown years away
lack of respect for "normies". Many EAs seemingly can't stand interacting with non-EAs. I've seen EA meditation, EA bouldering, EA clubbing, EA whatever. Orgs seem to want everyone and the janitor to be "aligned". Everyone's dating each other. It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
perhaps in part due to the above, massive hubris. I don't think we realise how much we don't know. We started off with a few slam dunks (yeah wow 100x more impact than average) and now we seem to think we are better at everything. Clearly the ability to discern good charities does not transfer to the ability to do good management. The truth is: we are attempting something of which we don't even know whether it is possible at all. Of course we're all terrified! But where is the humility that should go along with that?
Davidmanheim @ 2024-12-12T22:37 (+2)
Ian Turner @ 2024-12-12T22:32 (+2) in response to Children in Low-Income Countries
Here’s a GiveWell blog post from 2009 that engages with this question.
Camille @ 2024-12-12T22:32 (+4) in response to Is the EA community really advocating principles over conclusions ?
Re: agency of the community itself, I've been trying to get to this "pure" form of EA in my university group, and to be honest, it felt extremely hard.
-People who want to learn about EA often feel confused and suspicious until you get to object-level examples. "Ok, impactful career, but concretely, where would that get me? Can you give me an example?". I've faced real resistance when trying to stay abstract.
-It's hard to keep people's attention without talking about object-level examples, even when teaching abstract concepts. It's even harder once you get to the "projects" phase of the year.
-People anchor hard on some specific object-level examples after that. "Oh, EA? The malaria thing?" (despite my go-to examples including things as diverse as shrimp welfare and pandemic preparedness)
-When it's not an object-level example, it's usually "utilitarianism" or "Peter Singer", which act a lot as thought stoppers and have an "eek" vibe for many people.
-People who care about non-typical causes actually have a hard time finding data and making estimates.
-In addition to that, the agency needed to really make estimates is hard to build up. One member I knew thought the most impactful career choice he had was potentially working on nuclear fusion. I suggested he look into its Impact-Tractability-Neglectedness (even rough OOMs) to compare it to another option he had, as well as more traditional ones. I can't remember him giving any numbers even months later. When he just mentioned he felt sure about the difference, I didn't feel comfortable arguing about the robustness of his justification. It's a tough balance to strike between respecting preferences and probing reasons.
-A lot of it comes down to career 1:1s. Completing the ~8 or so parts is already demanding. You have to provide estimates that are nowhere to be found if your center of interest is "niche" in EA. You then have to find academic and professional opportunities, as well as contacts, that are not referenced anywhere in the EA community (I had to reach out to the big brother of a primary-school friend I had lost track of to get a fusion engineer he could talk to!). If you need funding, even if your idea is promising, you need excellent communication skills for writing a convincing blog post, plausibly enough research skills to produce estimates for ITN / cost-effectiveness analysis that aren't plucked out of thin air, and a willingness to go to EAGs and convince people who might just not care. Moreover, a lot of people expressly limit themselves to their own country or continent. It's often easier to stick to the usual topics (I get calls for applications for AIS fellowships almost every month; of course I never get ones about niche topics).
-Another point about career 1:1s: the initial list of options to compare is hard to negotiate. Some people will neglect non-EA options, others will neglect EA options, and I had issues with artificially adding options to help them truly compare.
-Another point: some people barely have the time to come to a few sessions. It's hard to get them to actually rely on the methodological tools they haven't learned about in order to compare their options during career 1:1s.
-A good way to cope with all of this is to encourage students to start things themselves - to create an org rather than joining one. But not everyone has the necessary motivation for this.
I'm still happy with having started the year with epistemics, rationality, ethics and meta-ethics, and to have done other sessions on intervention and policy evaluation, suffering and consciousness, and population ethics. I didn't desperately need to have sessions on GHD / Animal Welfare / AI Safety, though they're definitely "in demand".
Oscar Sykes @ 2024-12-12T22:30 (+5) in response to Oscar Sykes's Quick takes
[Promise this is not a scam] Sign up to receive a free $50 charity gift card from a rich person
Every year, for the past few years, famous rich person Ray Dalio has given away 20,000 $50 gift cards, and he is doing it again this year. These can be given to any of over 1.8 million US-registered charities, which includes plenty of EA charities.
Here's an announcement post from Ray Dalio's instagram for verification
Register here to receive notification when the gift cards become available.
Jan_Kulveit @ 2024-12-12T22:22 (+15) in response to Jan_Kulveit's Quick takes
I wrote a post on “Charity” as a conflationary alliance term. You can read it on LessWrong, but I'm also happy to discuss it here.
If you're wondering why I didn't post it here: I originally posted it here with a LW cross-post. It was immediately slapped with the "Community" tag, despite not being about community, but about different ways people try to do good, talk about charity & the ensuing confusions. It is about the space of ideas, not about the actual people or orgs.
With posts like OP announcements about details of EA group funding or the EAG admissions bar not being marked as Community, I find it increasingly hard to believe the "Community" tag is driven by the stated principle of marking "Posts about the EA community and projects that focus on the EA community" and not by other motives, like e.g. forum mods expressing the view "we want people to think less about this / this may be controversial / we prefer someone new to not read this".
My impression is that this moves substantial debates about ideas to the side, which is a state I don't want to cooperate with by just leaving it as it is -> I moved the post to LessWrong and replaced it with this comment.
Habryka @ 2024-12-12T21:22 (+4) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
You can sort by "oldest" and "newest" in the comment-sort order, and see that mine shows up earlier in the "oldest" order, and later in the "newest" order.
Lorenzo Buonanno🔸 @ 2024-12-12T22:14 (+6)
You can also right-click → inspect element on the time indicator:
Habryka @ 2024-12-12T21:12 (+2) in response to Upcoming changes to Open Philanthropy's university group funding
I agree that this is an inference. I currently think the OP thinks that in the absence of frugality concerns this would be among the most cost-effective uses of money by Open Phil's standards, but I might be wrong.
University group funding was historically considered extremely cost-effective when I talked to OP staff (beating out most other grants by a substantial margin). Possibly there was a big update here on cost-effectiveness excluding frugality-reputation concerns, but currently think there hasn't been (but like, would update if someone from OP said otherwise, and then I would be interested in talking about that).
David T @ 2024-12-12T22:12 (+1)
They do specifically say that they consider other types of university funding to have greater cost-benefit (and I don't think it makes sense to exclude reputation concerns from cost-benefit analysis, particularly when reputation boost is a large part of the benefit being paid for in the first place). Presumably not paying stipends would leave more to go around. I agree that more detail would be welcome.
defun 🔸 @ 2024-12-12T22:01 (+5) in response to Earning to Give (EtG) Pledge Club
Great initiative! 🙌🙌
I've been hoping for something like this to exist. https://forum.effectivealtruism.org/posts/CK7pGbkzdojkFumX9/meta-charity-focused-on-earning-to-give
I've been donating 20% of my income for a couple of years, and I'm planning to increase it to 30–40%. I'd love to meet like-minded people: ambitious EAs who are EtG.
David T @ 2024-12-12T21:57 (+1) in response to undefined
I think this is a good observation (I think the worry highlighted is one of the weakest arguments against EA, not least because EA has very limited real world impact upon the amount spent on animal shelters or concert halls, but it definitely comes up a lot in articles people like sharing on here).
I don't agree with this though. I don't think people donate to anonymous poor recipients in faraway countries or farm animals out of sense of collective identity. There's little or no reciprocal altruism or collective identity there (particularly when it comes to the animals). I don't think donating to exploring ideas of future people or digital minds is more impartial simply because these don't [yet] exist. (Indeed I think it would be easier to characterise some of the niche longtermist research donations as being "partial" philanthropy on the basis that the recipients are typically known and respected members of an established in-group, with shared [unusual] interests, and the outcome is often research whose most obviously quantifiable impact is that the donor and their group find it very interesting. That strikes me as similar to quite a lot of other philanthropic research funding including in academia).
I think the "types" of charity are better understood as a set of motivations which overlap (and also include others like fuzzy feelings of satisfaction, signalling, interests, sense of duty etc which can coexist with also being a user of that conference hall or a fellow Christian or someone that believes its important future humanity ). Donating to AMF is about as impartial as it gets in terms of outcome, but there's definitely some sort of collective identity benefit to doing so whilst identifying as part of a group with a shared epistemology and understanding that points towards donating mattering and outcomes mattering and AMF being a good way to achieve this. Ditto impartial donations to mainstream charity made by people who have a sense of religious duty to random strangers, or completely impartial donations to a research funding pool made by people with strong convictions about progress.
JP Addison🔸 @ 2024-12-12T21:33 (+2) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
I really appreciate that the comment section has rewarded you both precisely equally.
Habryka @ 2024-12-12T21:38 (+1)
But I was first! I demand the moderators transfer all of the karma of Jeff's comment to mine :P
Accolades for intellectual achievements by tradition go to the person who published them first.
Habryka @ 2024-12-12T21:36 (+11) in response to Probabilities might be off by one percentage point
Clearly you believe that probabilities can be less than 1%, reliably. Your probability of being struck by lightning today is not "0% or maybe 1%", it's on the order of 0.001%. Your probability of winning the lottery is not "0% or 1%", it's ~0.0000001%. I am confident you deal with probabilities that have much less than 1% error all the time, and feel comfortable using them.
It doesn't make sense to think of humility as something absolute like "don't give highly specific probabilities". You frequently have a justified belief that a probability is very highly specific (the probability that random.org's random number generator will generate "2" when asked for a random number between 1 and 10 is exactly 10%, not 11%, not 9%, exactly 10%, with very little uncertainty about that number).
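A minimal sketch of why that last probability carries so little uncertainty (assuming the generator really is a fair uniform draw over $\{1, \dots, 10\}$):

$$P(X = 2) = \frac{1}{10} = 10\% \text{ exactly,}$$

and an empirical check over $n$ independent draws would pin this down with standard error $\sqrt{p(1-p)/n}$, i.e. roughly 0.03 percentage points for $n = 10^6$ draws.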
Habryka @ 2024-12-12T19:29 (+2) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
I was first! :P
JP Addison🔸 @ 2024-12-12T21:33 (+2)
I really appreciate that the comment section has rewarded you both precisely equally.
SummaryBot @ 2024-12-12T21:26 (+1) in response to undefined
Executive summary: The term "charity" encompasses three distinct behaviors (public goods funding, partial philanthropy, and impartial philanthropy) that form a "conflationary alliance" where different groups benefit from using the same terminology despite having different goals and motivations.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Jeff Kaufman 🔸 @ 2024-12-12T21:15 (+2) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
They both show up as 2:23 pm to me: is there a way to get second level precision?
Habryka @ 2024-12-12T21:22 (+4)
You can sort by "oldest" and "newest" in the comment-sort order, and see that mine shows up earlier in the "oldest" order, and later in the "newest" order.
Habryka @ 2024-12-12T19:29 (+2) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
I was first! :P
Jeff Kaufman 🔸 @ 2024-12-12T21:15 (+2)
They both show up as 2:23 pm to me: is there a way to get second level precision?
David T @ 2024-12-12T21:06 (+1) in response to Upcoming changes to Open Philanthropy's university group funding
I think it's fair enough to caution against purely performative frugality. But I'm not sure the OP even justifies the suggestion that the organizers actually are more cost-effective (they concluded the difference between paid and unpaid organizers' individual contributions was "substantive, not enormous"; there's a difference between paid people doing more work than volunteers and it being more cost-effective to pay...). That's even more the case if you take into account that the primary role of an effective university organizer is attracting more people (or "low context observers") to become more altruistic, and this instance of the "weirdness" argument is essentially that paying students undercuts the group's ability to appeal to people on altruistic grounds, even if individual paid staff put in more effort. And they were unusually well paid by campus standards for tasks almost every other student society uses volunteers for.[1] And there's no evidence that the other ways CEA proposes spending the money instead are less effective.
One area where we might agree is that I'm not sure whether OpenPhil considered alternatives like making stipends needs-based, or just a bit lower and more focused, as a pragmatic alternative to cancelling them altogether.
Habryka @ 2024-12-12T21:12 (+2)
I agree that this is an inference. I currently think the OP thinks that in the absence of frugality concerns this would be among the most cost-effective uses of money by Open Phil's standards, but I might be wrong.
University group funding was historically considered extremely cost-effective when I talked to OP staff (beating out most other grants by a substantial margin). Possibly there was a big update here on cost-effectiveness excluding frugality-reputation concerns, but currently think there hasn't been (but like, would update if someone from OP said otherwise, and then I would be interested in talking about that).
Habryka @ 2024-12-12T05:50 (+24) in response to Upcoming changes to Open Philanthropy's university group funding
I agree that things tend to get tricky and loopy around these kinds of reputation-considerations, but I think at least the approach I see you arguing for here is proving too much, and has a risk of collapsing into meaninglessness.
I think in the limit, if you treat all speech acts this way, you just end up having no grounding for communication. "Yes, it might be the case that the real principles of EA are X, but if I tell you instead they are X', then you will take better actions, so I am just going to claim they are X', as long as both X and X' include cost-effectiveness".
In this case, it seems like the very people that the club is trying to explain the concepts of EA to are also the people that OP is worried about alienating by paying the organizers. In this case, what is going on is that the goodness of the reputation-protecting choice is directly premised on the irrationality and ignorance of the very people you are trying to attract/inform/help. Explaining that isn't impossible, but it does seem like a particularly bad way to start off a relationship, and so I expect it to be bad consequences-wise.
"Yes, we would actually be paying people, but we expected you wouldn't understand the principles of cost-effectiveness and so be alienated if you heard about it, despite us getting you to understand them being the very thing this club is trying to do", is IMO a bad way to start off a relationship.
I also separately think that optimizing heavily for the perception of low-context observers in a way that does not reveal a set of underlying robust principles is bad. I don't think you should put "zero" weight on that (and nothing in my comment implied that), but I do think it's something that many people put far too much weight on (going into detail on which wasn't the point of my comment, but which I have written plenty about in many other comments).
There is also another related point in my comment, which is that "cost-effectiveness" is of course a very close sister concept to "wasting money". I think in many ways, thinking about cost-effectiveness is where you end up if you think carefully about how you can avoid wasting money, and is in some ways a more grown-up version of various frugality concerns.
When you increase the total cost of your operations (by, for example, reducing the cost-effectiveness of your university organizers, forcing you to spend more money somewhere else to do the same amount of good) in order to appear more frugal, I think you are almost always engaging in something that has at least the hint of deception.
Yes, you might ultimately be more cost-effective by getting people to not quite realize what happened, but when people are angry at me or others for not being frugal enough, I think it's rarely appropriate to ultimately spend more to appease them, even if doing so would ultimately then save me enough money to make it worth it. While this isn't happening as directly here as it was with other similar situations, like whether the Wytham Abbey purchase was not frugal enough, I think the same dynamics and arguments apply.
I think if someone tries to think seriously and carefully through what it would mean to be properly frugal, they would not endorse you sacrificing the effectiveness of your operations, causing you to ultimately spend more to achieve the same amount of good. And if they learned that you did, and they think carefully about what this implies about your frugality, they would end up more angry, not less. That, I think, is a dynamic worth avoiding.
David T @ 2024-12-12T21:06 (+1)
I think it's fair enough to caution against purely performative frugality. But I'm not sure the OP even justifies the suggestion that the organizers actually are more cost-effective (they concluded the difference between paid and unpaid organizers' individual contributions was "substantive, not enormous"; there's a difference between paid people doing more work than volunteers and it being more cost-effective to pay...). That's even more the case if you take into account that the primary role of an effective university organizer is attracting more people (or "low context observers") to become more altruistic, and this instance of the "weirdness" argument is essentially that paying students undercuts the group's ability to appeal to people on altruistic grounds, even if individual paid staff put in more effort. And they were unusually well paid by campus standards for tasks almost every other student society uses volunteers for.[1] And there's no evidence that the other ways CEA proposes spending the money instead are less effective.
One area where we might agree is that I'm not sure whether OpenPhil considered alternatives like making stipends needs-based, or just a bit lower and more focused, as a pragmatic alternative to cancelling them altogether.
David T @ 2024-12-08T22:09 (–7) in response to NYT - What if Charity Shouldn't be Optimized
And the "stereotyping" in here is really limited and not particularly negative: there's space apportioned to highlighting how OpenPhil's chief executive gave a kidney for the cause and none to stereotypes of WEIRD Bay Area nerds or Oxford ivory towers or effective partying in the Bahamas. If you knew nothing else about the movement, you'd probably come away with the conclusion that EAs were a bit too consistent in obsessing over measurable outcomes; most of the more informed and effective criticisms argue the opposite!
(It also ends up by suggesting that EA as a philosophy offers a set of questions that are worth asking and some of its typical answers are perfectly valid. Think most minorities would love it if outside criticism of their culture generally drew that sort of conclusion!)
EAs can and do write opinion pieces broadly or specifically criticising other people's philanthropic choices all the time. I don't think EA should be exempted from such arguments.
David T @ 2024-12-12T20:33 (+1)
Perplexed by the reaction here. Not sure what people are taking most issue with:
Me saying the stereotypes were limited and not particularly negative? If you think a reference to being disproportionately funded by a small number of tech billionaires (balanced out by also-accurate references to Singer, the prior emergence of a movement, and an example of Berger giving a kidney rather than money) is negative stereotyping, you haven't read other critical takes on EA, never mind experienced what some other "minorities" deal with on a daily basis!
Me saying the more informed and effective criticisms of EA and EA orgs tended to point out where they fall well short of the rigour they demand? Again, I'd have thought it was glaringly obvious, whether it's nuanced insider criticism of specific inconsistencies in outcome measures or reviews of specific organizations, or drive-by observations that buying Wytham Abbey or early-stage funding for OpenAI may not have been high points of evidence-based philanthropy. That's obviously more useful than "these people have a different worldview" type articles like this. Even some of the purely stereotype-based criticisms of the money sloshing around the FTX ecosystem probably weren't "stopped clock" moments...
Or me pointing out that EAs also criticise non EAs' philanthropic choices, sometimes in generic terms? If you haven't read Peter Singer writing how other people have the wrong philanthropic priorities, you haven't read much Peter Singer!
Julia_Wise🔸 @ 2024-12-11T15:39 (+14) in response to Be the First Person to Take the Better Career Pledge!
I'd think a better way to get feedback is to ask "What do you think of this pledge wording?" rather than encourage people to take a lifelong pledge before it's gotten much external feedback.
For comparison, you could see when GWWC was considering changing the wording of its pledge (though I recognize it was in a different position as an existing pledge rather than a new one): Should Giving What We Can change its pledge?
ElliotJDavies @ 2024-12-12T20:28 (+2)
The idea of a Minimum Viable Product is that you're unsure which parts of your product provide value and which parts are sticking points. After you release the MVP, the sticking points are much clearer, and you have a much better idea of where to focus your limited time and money.
AnonymousTurtle @ 2024-12-09T10:08 (+5) in response to EAIF isn’t *currently* funding constrained
Could you expand on why that's the case? Is the idea that you believe those projects are net negative, or that you would rather marginal donations go to animal welfare and the long term future instead of EA infrastructure?
I think it's a bit weird for donors who want to donate to EA infrastructure projects to see that initiatives like EA Poland are funding constrained while the EA Infrastructure fund isn't, and extra donations to the EAIF will likely counterfactually go to other cause areas.
hbesceli @ 2024-12-12T20:13 (+9)
In some cases there are projects that I or other fund managers think are net negative, but this is rare. Often, things that we decide against funding are projects I think are net positive, but which I don't think are competitive with funding things outside of the EA Infrastructure space (either the other EA Funds or more broadly).
I think it makes sense that there are projects which EAIF decides not to fund, and that other people will still be excited about funding (and in these cases I think it makes sense for people to consider donating to those projects directly). Could you elaborate a bit on what you find weird?
I don't think this is the case. Extra donations to EAIF will help us build up more reserves for granting out at a future date. But it's not the case that, e.g., if EAIF has more money than we think we can spend well at the moment, we'll then start donating it to other cause areas. I might have misunderstood you here?
JMonty🔸 @ 2024-12-12T20:05 (+23) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
There is also this essay from Jason Crawford and this piece from Asimov Press, which are less technical descriptions of the Science article.
Jeff Kaufman 🔸 @ 2024-12-12T19:23 (+24) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
Thanks for sharing this, Aaron!
I agree the "Rationale for Public Release" section is interesting; I've copied it here:
When to work on risks in public vs private is a really tricky question, and it's nice to see this discussion on how this group handled it in this case.
Habryka @ 2024-12-12T19:29 (+2)
I was first! :P
Jeff Kaufman 🔸 @ 2024-12-12T19:23 (+24) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
Thanks for sharing this, Aaron!
I agree the "Rationale for Public Release" section is interesting; I've copied it here:
When to work on risks in public vs private is a really tricky question, and it's nice to see this discussion on how this group handled it in this case.
Habryka @ 2024-12-12T19:23 (+23) in response to Technical Report on Mirror Bacteria: Feasibility and Risks
Copying over the rationale for publication here, for convenience:
Aaron Gertler 🔸 @ 2024-12-12T19:23 (+4) in response to AMA: 10 years of Earning To Give
This isn't about your giving per se, but have your views on the moral valence of financial trading changed in any notable ways since you spoke about this on the 80K podcast?
(I have no reason to think your views have changed, but was reading a socialist/anti-finance critique of EA yesterday and thought of your podcast.)
The episode page lacks a transcript, but does include this summary: "There are arguments both that quant trading is socially useful, and that it is socially harmful. Having investigated these, Alex thinks that it is highly likely to be beneficial for the world."
In that section (starts around 43:00), you talk about market-making, selling goods "across time" in the way other businesses sell them across space, and generally helping sellers "communicate" by adjusting prices in sensible ways. At the same time, you acknowledge that market-making might be less useful than in the past and that more finance people on the margin might not provide much extra social value (since markets are so fast/advanced/liquid at this point).
Kyle Smith @ 2024-12-11T19:15 (+7) in response to Ideas EAIF is excited to receive applications for
I think it's great that EAIF is not funding constrained.
Here's a random idea I had recently if anyone is interested and has the time:
An org that organizes a common application for nonprofits applying to foundations. There is enormous economic inefficiency and inequality in matching PF grants to grantees. PF application processes are extremely opaque and burdensome. Attempts to make common applications have largely been unsuccessful, I believe mostly because they tend to be for a specific geographic region. Instead, I think it would be interesting to create different common applications by cause area. A key part of the common application could be incorporating outcome reporting specific to each cause area, which I believe would cause PF to make more impact-focused grants, making EAs happy.
Brad West🔸 @ 2024-12-12T18:45 (+4)
I think this is an excellent idea.
Orgs or "proto-orgs" in their early stages are often in a catch-22. They don't have the time or expertise (because they don't have full time staff) to develop a strong grantwriting or other fundraising operations, which could be enabled by startup funds. An org that was familiar with the funding landscape, could familiarize itself with new orgs, and help it secure startup funds could help resolve the catch-22 that orgs find themselves at step 0.
Neel Nanda @ 2024-12-10T10:04 (+30) in response to Be the First Person to Take the Better Career Pledge!
I find the word maximise pretty scary here, for similar reasons to here. Analogous to how GWWC is about giving 10%, a bounded amount, not "as much as you can possibly spare while surviving and earning money"
To me, taking a pledge to maximise seriously (especially in a naive conception where "I will get sick of this and break the pledge" or "I will burn out" aren't considerations) is a terrible idea, and I recommend that people take pledges with something more like "heavily prioritise" or "keep as one of my top priorities" or "actually put a sincere, consistent effort into, eg by spending at least an hour per month reflecting on whether I'm having the impact I want". Of course, in practice, a pledge to maximise generally means one of those things, since people always have multiple priorities, but I like pledges to be something that could be realistically kept.
Davidmanheim @ 2024-12-12T18:36 (+2)
This seems like a reasonable mistake for younger EAs to make, and I've seen similar mindsets frequently in the community - but I am very happy to see that many other members are providing a voice of encouragement, but also significantly more moderation.
But as I said in another comment, and expanded on in a reply, I'm much more concerned than you seem to be about people committing to something even more mild for their entire careers - especially if doing so as college students. Many people don't find work in the area they hope to. Even among those that do find jobs in EA orgs and similar, which is a small proportion of those who want to, some don't enjoy the things they would view as most impactful, and find they are unhappy and/or ineffective; having made a commitment to do whatever is most impactful seems unlikely to work well for a large fraction of those who would make such a pledge.
ElliotJDavies @ 2024-12-12T17:57 (+2) in response to Be the First Person to Take the Better Career Pledge!
Thanks for your feedback! I appreciate it, and agree that "maximize" is a pretty strong word. Just to clarify the crux here: would you say that this project doesn't make sense overall, or would you say that the text of the pledge should be changed to something more manageable?
Davidmanheim @ 2024-12-12T18:28 (+2)
I think it's a problem overall, and I've talked about this a bit in two of the articles I linked to. To expand: I'm concerned on a number of levels, ranging from community dynamics that seem to dismiss anyone not doing direct work as insufficiently EA, to the idea that we should be a community that encourages turning the often already unhealthy levels of commitment of young adults into pledges to continue that level of dedication for their entire careers.
As someone who has spent most of a decade working in EA, I think this is worrying, even for people deciding on their own to commit themselves. People should be OK with prioritizing themselves to a significant extent, and while deciding to work on global priorities is laudable *if you can find something that fits your abilities and skill set*, committing to do so for your entire career, which may not follow the path you are hoping for, seems at best unwise. Suggesting that others do so seems very bad.
So again, I applaud the intent, and think it was a reasonable idea to propose and get feedback about, but I also strongly think it should be dropped and you should move to something else.
SummaryBot @ 2024-12-12T18:06 (+1) in response to Developing a Calculable Conscience for AI: Equation for Rights Violations
Executive summary: A mathematical framework is proposed for giving AI systems an "artificial conscience" that can calculate the ethics of rights violations, incorporating factors like culpability, proportionality, and risk when determining if violating someone's rights is justified in self-defense scenarios.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Davidmanheim @ 2024-12-10T09:28 (+25) in response to Be the First Person to Take the Better Career Pledge!
My tentative take is that this is on-net bad, and should not be encouraged. I give this a 10/10 for good intent, but a 2/10 for planning and avoiding foreseeable issues, including the unilateralist's curse, the likely object-level impacts of the pledge, and the reputational and community impacts of promoting the idea.
It is not psychologically healthy to optimize or maximize your life towards a single goal, much less commit to doing so. That isn't the EA ideal. Promising to "maximize my ability to make a meaningful difference" is an unlimited and worryingly cult-like commitment, and builds in no feedback from others who have a broader perspective about what is or is not important or useful. It implicitly requires pledgers to prioritize impact over personal health and psychological wellbeing. (The claim that it's usually the case that burnout reduces impact is a contingent one, and seems very likely to lead many people to overcommit and do damaging things.) It leads to unhealthy competitive dynamics, and excludes most people, especially the psychologically well-adjusted.
I will contrast this to the GWWC giving pledge, which is very explicitly a partial pledge, requiring 10% of your income. This is achievable without extreme measures, or giving up having a normal life. That pledge was built via consultation with and advice from a variety of individuals, especially including those who were more experienced, which also seems to sharply contrast with this one.
ElliotJDavies @ 2024-12-12T17:57 (+2)
Thanks for your feedback! I appreciate it, and agree that "maximize" is a pretty strong word. Just to clarify the crux here: would you say that this project doesn't make sense overall, or would you say that the text of the pledge should be changed to something more manageable?
Arepo @ 2024-12-10T05:06 (+35) in response to Be the First Person to Take the Better Career Pledge!
For what it's worth, there used to be an 80k pledge along similar lines. They quietly dropped it several years ago, so you might want to find someone involved in that decision to try and understand why (I suspect and dimly remember that it was some combination of non-concreteness, and concerns about other-altruism-reduction effects).
ElliotJDavies @ 2024-12-12T17:57 (+2)
Thanks for flagging this Arepo, I will reach out to them!
Jack Mario @ 2024-12-12T17:50 (+1) in response to Briefly how I've updated since ChatGPT
Your points raise important considerations about the rapid development and potential risks of AI, particularly LLMs. The idea of deploying AI early to extend the timeline of human control makes sense strategically, especially when considering the potential for recursive LLMs and their self-improvement capabilities. While it's true that companies and open-source communities will continue experimenting, the real risk lies in humans deliberately turning these systems into agents to serve long-term goals, potentially leading to unforeseen consequences. The concern about AI sentience in systems like ChatGPT and the potential for abuse is also valid, and highlights the need for strict controls around AI access, transparency, and ethical safeguards. Ensuring that AIs are never open-sourced in a way that could lead to harm, and that interactions are monitored, seems essential in preventing malicious uses or exploitation.
Lin BL @ 2024-12-11T23:45 (+2) in response to What do Open Philanthropy’s recent changes mean for university group organizers?
The bar should not be at "difficult financial situation", and this is also something there are often incentives against explicitly mentioning when applying for funding. Getting paid employment while studying (even for full-time degrees) is normal.
My 5 minute Google search to put some numbers on this:
Proportion of students who are employed while studying:
- UK: a survey of 10,000 students showed that 56% of full-time UK undergraduates had paid employment (14.5 hours/week average) - June 2024 Guardian article https://www.theguardian.com/education/article/2024/jun/13/more-than-half-of-uk-students-working-long-hours-in-paid-jobs
- USA: 43% of full-time students work while enrolled in college - January 2023 Fortune article https://fortune.com/2023/01/11/college-students-with-jobs-20-percent-less-likely-to-graduate-than-privileged-peers-study-side-hustle/
Why are students taking on paid work?
- UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family." From the Guardian article linked above.
- USA: Cannot find a recent US statistic quickly, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big one.
On the other hand, spending time on committees is also very normal as an undergraduate, and those are not paid. However, in comparison, the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some because they are less keen, but some for time-commitment reasons (which I expect will sometimes/often mean doing paid work instead).
akash 🔸 @ 2024-12-12T17:45 (+9)
I don't disagree. I was simply airing my suspicion that most group organizers who applied for the OP fellowship did so because they thought something akin to "I will be organizing for 8-20 hours a week and I want to be incentivized for doing so" — which is perfectly a-ok and a valid reason — rather than "I am applying to the fellowship as I will not be able to sustain myself without the funding."
In cases where people need to make trade-offs between taking some random university job vs. organizing part time, assuming that they are genuinely interested in organizing and that the university has potential, I think it would be valuable for them to get funding.
James Herbert @ 2024-12-11T10:27 (+5) in response to Be the First Person to Take the Better Career Pledge!
FYI the School for Moral Ambition has a career pledge. Participants of their circle programme (like an intro fellowship but self-facilitated) are encouraged to take it at the end. AFAIK, over 100 people have taken it so far. Might be worth reaching out to them to see what they've learned? Niki might be a good person to contact. She manages the circle programme and was a volunteer at EA Netherlands before that.
ElliotJDavies @ 2024-12-12T17:16 (+2)
Thanks for flagging this! I didn't know this was the case - I will reach out to them.
Aleksi Maunu @ 2024-12-11T13:53 (+4) in response to Be the First Person to Take the Better Career Pledge!
Props for the initiative!
What names did you consider for the pledge? One con of the current name is that it could elicit some reactions like:
It might be largely down to whether someone interprets "better" as "better than I might otherwise do" or "better than others' careers". It likely depends on culture too; for example, I think here in Finland the above reactions could be more likely, since people tend to value humbleness quite a bit.
Anyway I'm not too worried since the name has positives too, and you can always adapt the name based on how outreach goes if you do end up experimenting with it. 👍
ElliotJDavies @ 2024-12-12T17:14 (+2)
This is such a great question. We considered a very limited pool of ideas, for a very limited amount of time. I think the closest competitor was Career for Good.
The thinking being that we can always get something up and test whether there's actually interest in this, before spending significant resources on the branding side of things.
I agree that seems to be playing out here! This could be a good reason to change the name.
In case there was any doubt, we didn't intend to say "Better than others". The fact that Bettercareers.com was taken was seen by me as a positive update.
PabloAMC 🔸 @ 2024-12-09T22:29 (+6) in response to Be the First Person to Take the Better Career Pledge!
I think you should explain in this post what the pledge people may take :-)
I am particularly interested in how to make the pledge more concrete. I have always thought that the 10% pledge is somewhat incomplete because it does not consider the career. However, I think it would be useful to make the career pledge more actionable.
ElliotJDavies @ 2024-12-12T17:06 (+2)
Thanks for flagging this Pablo! I added it to the post after I read your comment
Will Howard🔹 @ 2024-12-12T17:00 (+10) in response to Frontier AI systems have surpassed the self-replicating red line
I think this table from the paper gives a good idea of the exact methodology:
Like others I'm not convinced this is a meaningful "red line crossing", because non-AI computer viruses have been able to replicate themselves for a long time, and the AI had pre-written scripts it could run to replicate itself.
The reason (made up by me) non-AI computer viruses aren't a major threat to humanity is that:
I don't think this paper shows these AI models making a significant advance on these two things. I.e. if you found this model self-replicating you could still shut it down easily, and this experiment doesn't in itself show the ability of the models to self-improve.
SummaryBot @ 2024-12-12T16:49 (+1) in response to Measuring AI-Driven Risk with Stock Prices (Susana Campos-Martins)
Executive summary: This paper proposes a new method to measure AI-related risks by analyzing stock price movements of AI companies, finding that major AI shocks correspond to acquisitions, product launches, and regulatory changes.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Davidmanheim @ 2024-12-12T15:57 (+3) in response to Probabilities might be off by one percentage point
I'm more concerned that the actual survey language is "avert" not "save" - and obviously, we shouldn't do any projects which avert DALYs.
Derek Shiller @ 2024-12-12T16:48 (+9)
DALYs, unlike QALYs, are a negative measure. You don't want to increase the number of DALYs.
Matthew_Barnett @ 2024-12-12T05:55 (+4) in response to U.S. UBI study (2024) - more bad than good?
By definition, a UBI takes a pool of money and redistributes it equally to everyone in a community, regardless of personal need. However, with the same pool of total funding, one can typically deliver more efficient benefits by targeting people with the greatest need, such as those in dire poverty or those who have been struck by bad luck.
If you imagine being a philanthropist who has access to $8 billion, it seems unlikely that the best way to spend this money would be to give everyone on Earth $1. Yet this scheme is equivalent to a UBI merely framed in the context of private charity rather than government welfare.
It would require an enormous tax hike to provide everyone in a large community (say, the United States) a significant amount of yearly income through a UBI, such as $1k per month. And taxes are not merely income transfers: they have deadweight loss, which lowers total economic output. The intuition here is simple: when a good or service is taxed, that decreases the incentive to produce that good or service. As a consequence of the tax, fewer people will end up receiving the benefits provided by these goods and services.
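To make the two back-of-envelope numbers above concrete (a rough sketch, assuming a world population of roughly 8 billion and a US population of roughly 335 million):

$$\frac{\$8\text{B}}{\sim 8 \text{ billion people}} \approx \$1 \text{ per person}, \qquad 335 \text{ million} \times \$1{,}000/\text{month} \times 12 \approx \$4 \text{ trillion per year},$$

the latter being on the same order as total current US federal tax revenue, before accounting for any deadweight loss from the additional taxes needed to raise it.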
Given these considerations, even if you think that unconditional income transfers are a good idea, it seems quite unlikely that a UBI would be the best way to redistribute income. A more targeted approach that combines the most efficient forms of taxation (such as land value taxes) and sends this money to the most worthy welfare recipients (such as impoverished children) would likely be far better on utilitarian grounds.
gogreatergood @ 2024-12-12T16:28 (+2)
Thank you for your insights Matthew, that all makes a lot of sense and helps me understand.
I wonder if there is an income bracket low enough in the US where UBI focused just on that group would have a net positive impact. (This study's participants had an average household income of $29,900.) Or if UBI in the US is going to be net negative no matter what... even before getting into the details of potential counterfactual scenarios.
Funny that UBI seems to do better than more targeted approaches in low-income countries... but in high-income countries, even for the poorest within those countries, more targeted approaches may be the better option.
aogara @ 2024-12-10T16:34 (+4) in response to Consider granting AIs freedom
What about corporations or nation states during times of conflict - do you think it's accurate to model them as roughly as ruthless in pursuit of their own goals as future AI agents?
They don't have the same psychological makeup as individual people; they have a strong tradition and culture of maximizing self-interest, and they face strong incentives and selection pressures to maximize fitness (i.e. for companies to profit, for nation states to ensure their own survival) lest they be outcompeted by more ruthless competitors. While I'd expect that these entities tend to show some care for goals besides self-interest maximization, I think the most reliable predictor of their behavior, on average, is the maximization of their self-interest.
If they're roughly as ruthless as future AI agents, and we've developed institutions that somewhat robustly align their ambitions with pro-social action, then we should have some optimism that we can find similarly productive systems for working with misaligned AIs.
Steven Byrnes @ 2024-12-12T16:14 (+2)
Thanks! Hmm, some reasons that analogy is not too reassuring:
Some of the disanalogies include:
Nicholas Kruus🔸 @ 2024-12-12T02:09 (+3) in response to Is the EA community really advocating principles over conclusions ?
I’m glad you mustered the courage to post this! I think it’s a great post.
I agree that, in practice, people advocating for effective altruism can implicitly argue for the set of popular EA causes (and they do this quite often?), which could repel people with useful insight. Additionally, it seems to be the case that people in the EA community can be dismissive of newcomers’ cause prioritization (or their arguments for causes that are less popular in EA). Again, this could repel people from EA.
I have a couple of hypotheses for these observations. (I don’t think either is a sufficient explanation, but they’re both plausibly contributing factors.)
First, people might feel compelled to make EA less “abstract” by trying to provide concrete examples of how people in the EA community are “trying to do the most good they can,” possibly giving the impression that the causes, instead of the principles, are most characteristic of EA.
Second, people may be more subconsciously dismissive of new cause proposals because they’ve invested time/money into causes that are currently popular in the EA community. It’s psychologically easier to reject a new cause prioritization proposal than it is to accept it and thereby feel as though your resources have not been used with optimal effectiveness.
Solal 🔸 @ 2024-12-12T16:10 (+3)
Thanks for those insights! I had not really thought about "why" the situation might be as it is, having focused on the question of "what" it entails. I'm really glad I posted; I feel like my understanding of the topic has progressed as much in 24 hours as it had since the beginning.
huw @ 2024-12-12T00:02 (+6) in response to Is the EA community really advocating principles over conclusions ?
I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here’s a really interesting debate on whether biodiversity loss should be an EA cause area, for example.
A lot of forms of global utilitarianism do seem to tend to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral values, or tractability—rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don’t fit into this value framework.
But this highlights where I think part of the problem lies, which is that value systems that lie outside of this can be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’—which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are bit too dismissive, and I think we’ve probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers, and not spending too much time overthinking your value system (I feel the same!).
Solal 🔸 @ 2024-12-12T16:04 (+1)
Thanks for the link! The person who posted may not have been a newcomer to EA, but it is a perfect example of the kind of thread that I was thinking may repel newbies, or slightly discourage them from even asking.
I really agree with what you say; there really is something to dig into there.
titotal @ 2024-12-12T12:21 (+8) in response to Probabilities might be off by one percentage point
When I answered this question, I answered it with an implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be more likely overestimates than underestimates.
Lukas_Gloor @ 2024-12-12T16:02 (+4)
My intuitive reaction to this is "Way to screw up a survey."
Considering that three people agree-voted your post, I realize I should probably come away from this with a very different takeaway, more like "oops, survey designers need to put in extra effort if they want to get accurate results, and I would've totally fallen for this pitfall myself."
Still, I struggle with understanding your and the OP's point of view. My reaction to the original post was something like:
Why would this matter? If the estimate could be off by 1 percentage point, it could be down to 0.5% or up to 2.5%, which is still 1.5% in expectation. Also, if this question's intention were about the likelihood of EA orgs being biased, surely they would've asked much more directly about how much respondents trust an estimate of some example EA org.
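To spell out the arithmetic behind that reaction (a minimal sketch; the symmetric spread of plus or minus one percentage point is just an illustrative assumption):

```python
# Expected DALYs averted from the 1.5%-chance option, with and without a
# symmetric +/-1 percentage point error on the probability estimate.
dalys_if_success = 100_000

point_estimate_ev = 0.015 * dalys_if_success                    # 1,500
symmetric_error_ev = 0.5 * (0.005 * dalys_if_success) \
                     + 0.5 * (0.025 * dalys_if_success)         # also 1,500

print(point_estimate_ev, symmetric_error_ev)
# A symmetric error leaves the expected value unchanged; only a systematic,
# one-directional bias in the 1.5% figure would shift it.
```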
We seem to disagree on use of thought experiments. The OP writes:
I don't think this is necessary, and I could even see it backfiring. If someone goes out of their way to make a thought experiment particularly realistic, respondents might get the impression that it is asking about a real-world situation where they are invited to bring in all kinds of potentially confounding considerations. But that would defeat the point of the thought experiment (e.g., people might answer based on how much they trust the modesty of EA orgs, as opposed to giving you their personal tolerance for the risk of feeling, in hindsight, that the money had no effect or was wasted). The way I see it, the whole point of thought experiments is to get ourselves to think very carefully and cleanly about the principles we find most important. We do this by getting rid of all the potentially confounding variables. See here for a longer explanation of this view.
Maybe future surveys should have a test to figure out how people understand the use of thought experiments. Then, we could split responses between people who were trying to play the thought experiment game the intended way, and people who were refusing to play (i.e., questioning premises and adding further assumptions).
*On some occasions, it makes sense to question the applicability of a thought experiment. For instance, in the classic "what if you're a doctor who has the opportunity to kill a healthy patient during a routine check-up so that you could save the lives of 4 people needing urgent organ transplants," it makes little sense to just go "all else is equal! Let's abstract away all other societal considerations or the effect on the doctor's moral character."
So, if I were to write a post on thought experiments today, I would add something about the importance of re-contextualizing lessons learned within a thought experiment to the nuances of real-world situations. In short, I think my formula would be something like, "decouple within thought experiments, but make sure to add an extra thinking step from 'answers inside a thought experiment' to 'what can we draw from this in terms of real-life applications.'" (Credit to Kaj Sotala, who once articulated a similar point, probably in a better way.)
Davidmanheim @ 2024-12-12T15:57 (+3) in response to Probabilities might be off by one percentage point
I'm more concerned that the actual survey language is "avert" not "save" - and obviously, we shouldn't do any projects which avert DALYs.
OllieBase @ 2024-12-12T14:39 (+2) in response to Ideas EAIF is excited to receive applications for
Thanks!
I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Similarly, if the goal is to help people think about cause prioritisation, I think fairly standard EA retreats / fellowships are quite good at this? I'm not sure we need some intermediary step like "improve community epistemics".
Appreciate you responding and tracking this concern though!
Jamie_Harris @ 2024-12-12T15:46 (+4)
Maybe. To take cause prio as an example, my impression is that the framing is often a bit more like: 'here are lots of cause areas EAs think are high impact! Also, cause prioritisation might be v important.' (That's basically how I interpret the vibe and emphasis of the EA Handbook / EAVP.) Not so much 'cause prio is really important. Let's actually try and do that and think carefully about how to do this well, without just deferring to existing people's views.'
So there's a direct ^ version like that that I'd be excited about.
Although perhaps contradictorily I'm also envisaging something even more indirect than the retreats/fellowships you mention as a possibility, where the impact comes through generally developing skills that enable people to be top contributors to EA thinking, top cause areas, etc.
Yeah I think this is part of it. But I also think that they help by getting people to think carefully and arrive at sensible and better processes/opinions.
Ozzie Gooen @ 2024-12-11T23:35 (+20) in response to Is the EA community really advocating principles over conclusions ?
Kudos for bringing this up, I think it's an important area!
There's a lot to this question.
I think that many prestigious/important EAs have come to similar conclusions. If you've come to think that X is important, it can seem very reasonable to focus on promoting and working with people to improve X.
You'll see some discussions of "growing the tent" - this can often mean "partnering with groups that agree with the conclusions, not necessarily with the principles".
One question here is something like, "How effective is it to spend dedicated effort on explorations that follow the EA principles, instead of just optimizing for the best-considered conclusions?" This is something that would arguably need more dedicated effort to really highlight. I think we just don't have all that much work in this area now, compared to more object-level work.
Another factor seems to have been that FTX stained the reputation of EA and hurt CEA, after which there was a period where there seemed to be less attention on EA and more on specific causes like AI safety.
In terms of "What should the EA community do", I'd flag that a lot of the decisions are really made by funders and high-level leaders. It's not super clear to me how much agency the "EA community" has, in ways that aren't very aligned with these groups.
All that said, I think it's easy for us to generally be positive towards people who take the principles in ways that don't match the specific current conclusions.
I personally am on the side that thinks that current conclusions are probably overconfident and lacking in some very important considerations.
Solal 🔸 @ 2024-12-12T15:36 (+3)
Thanks for the answer, and for splitting the issue into several parts, it really makes some things clearer in my mind!
I'll keep thinking about it (and take a look at your posts; you seem to have spent quite some time thinking about meta EA, so I realize there might be a lot of past discussion to catch up on before I start looking for a solution by myself!)
SummaryBot @ 2024-12-12T15:34 (+2) in response to Insect farming: recent investment trends and growth projections
Executive summary: Investment in insect farming has stalled since 2021 with many major players struggling, suggesting future production capacity will likely be much lower than previous forecasts, though still involving billions of farmed insects.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Matthew_Barnett @ 2024-12-10T20:48 (+4) in response to Consider granting AIs freedom
The primary reason humans rarely invest significant effort into brainstorming deceptive or adversarial strategies to achieve their goals is that, in practice, such strategies tend to fail to achieve their intended selfish benefits. Anti-social approaches that directly hurt others are usually ineffective because social systems and cultural norms have evolved in ways that discourage and punish them. As a result, people generally avoid pursuing these strategies individually since the risks and downsides selfishly outweigh the potential benefits.
If, however, deceptive and adversarial strategies did reliably produce success, the social equilibrium would inevitably shift. In such a scenario, individuals would begin imitating the cheaters who achieved wealth or success through fraud and manipulation. Over time, this behavior would spread and become normalized, leading to a period of cultural evolution in which deception became the default mode of interaction. The fabric of societal norms would transform, and dishonest tactics would dominate as people sought to emulate those strategies that visibly worked.
Occasionally, these situations emerge—situations where ruthlessly deceptive strategies are not only effective but also become widespread and normalized. The recent and dramatic rise of cheating in school through the use of ChatGPT is a clear instance of this phenomenon. This particular strategy is both deceptive and adversarial, but the key reason it has become common is that it works. Many individuals are willing to adopt it despite its immorality, suggesting that the effectiveness of a strategy outweighs moral considerations for a significant portion, perhaps a majority, of people.
When such cases arise, societies typically respond by adjusting their systems and policies to ensure that deceptive and anti-social behavior is no longer rewarded. This adaptation works to reestablish an equilibrium where honesty and cooperation are incentivized. In the case of education, it is unclear exactly how the system will evolve to address the widespread use of LLMs for cheating. One plausible response might be the introduction of stricter policies, such as requiring all schoolwork to be completed in-person, under supervised conditions, and without access to AI tools like language models.
In contrast, I suspect you underestimate just how much of our social behavior is shaped by cultural evolution, rather than by innate, biologically hardwired motives that arise simply from the fact that we are human. To be clear, I’m not denying that there are certain motivations built into human nature—these do exist, and they are things we shouldn't expect to see in AIs. However, these in-built motivations tend to be more basic and physical, such as a preference for being in a room that’s 20 degrees Celsius rather than 10 degrees Celsius, because humans are biologically sensitive to temperature.
When it comes to social behavior, though—the strategies we use to achieve our goals when those goals require coordinating with others—these are not generally innate or hardcoded into human nature. Instead, they are the result of cultural evolution: a process of trial and error that has gradually shaped the systems and norms we rely on today.
Humans didn’t begin with systems like property rights, contract law, or financial institutions. These systems were adopted over time because they proved effective at facilitating cooperation and coordination among people. It was only after these systems were established that social norms developed around them, and people became personally motivated to adhere to these norms, such as respecting property rights or honoring contracts.
But almost none of this was part of our biological nature from the outset. This distinction is critical: much of what we consider “human” social behavior is learned, culturally transmitted, and context-dependent, rather than something that arises directly from our biological instincts. And since these motivations are not part of our biology, but simply arise from the need for effective coordination strategies, we should expect rational agentic AIs to adopt similar motivations, at least when faced with similar problems in similar situations.
I think I understand your point, but I disagree with the suggestion that my reasoning stems from this intuition. Instead, my perspective is grounded in the belief that it is likely feasible to establish a legal and social framework of rights and rules in which humans and AIs could coexist in a way that satisfies two key conditions:
You bring up the example of an AI potentially being incentivized to start a pandemic if it were not explicitly punished for doing so. However, I am unclear about your intention with this example. Are you using it as a general illustration of the types of risks that could lead AIs to harm humans? Or are you proposing a specific risk scenario, where the non-biological nature of AIs might lead them to discount harms to biological entities like humans? My response depends on which of these two interpretations you had in mind.
If your concern is that AIs might be incentivized to harm humans because their non-biological nature leads them to undervalue or disregard harm to biological entities, I would respond to this argument as follows:
First, it is critically important to distinguish between the long-run and the short-run.
In the short-run:
In the near-term future, it seems unlikely that AIs would start a pandemic for reasons you yourself noted. Launching a pandemic would cause widespread disruption, such as an economic recession, and it would likely provoke a strong human backlash. In the short run, humans will still hold substantial practical control over the physical world, meaning that any AI engaging in such behavior would risk severe consequences. Moreover, unless an AI could ensure a pandemic’s total lethality, it would also risk leaving surviving humans who would actively retaliate. For these reasons, starting a pandemic would likely be counterproductive for AIs in the short term.
In the long-run:
The long-term considerations are different. As human labor becomes less relevant to the economy and AIs increasingly dominate, the potential for a strong direct human backlash against AIs would diminish. At the same time, however, this reduced human influence on the world also makes it less likely that AIs would see humans as a significant obstacle to achieving their goals. In other words, while it might become easier for AIs to harm or eliminate humans, it would simultaneously become less important to do so.
To illustrate this, consider an analogy: How often do hunter-gatherers pose a threat to your personal goals? The answer is almost never, because hunter-gatherers are a tiny and largely irrelevant part of the modern world. Now, imagine a hypothetical scenario where hunter-gatherers controlled the world’s major governments. If you strongly disagreed with their policies, you might have a rational incentive to oppose them in order to achieve your goals. But in reality, hunter-gatherers do not hold such power, so they are not an obstacle, and there is no reason to actively work against them.
My point here is that, as a general rule, the smaller and less influential a group is, the easier it may be to harm or eliminate them, but the less important it becomes to do so. Their small size and functional irrelevance make their practical interference with the overall world small at the same time.
In the case of AIs, this means that as humans become a less central force in the world, I think AIs are unlikely to have compelling reasons to specifically harm or eliminate us to further their objectives.
You might object that humans could be more like wild animals in this scenario than like hunter-gatherers. Humans often kill wild animals, not because those animals directly threaten our goals, but rather because ensuring their safety and well-being can be costly. As a result, humans take actions—such as clearing forests or building infrastructure—that incidentally lead to widespread harm to wild animals, even if harming them wasn’t a deliberate goal.
AIs may treat humans similarly in the future, but I doubt they will for the following reasons. I would argue that there are three key differences between the case of wild animals and the role humans are likely to occupy in the long-term future:
This comment is already quite lengthy, so I’ll need to keep my response to this point brief. My main reply is that while such "extortion" scenarios involving AIs could potentially arise, I don’t think they would leave humans worse off than if AIs had never existed in the first place. This is because the economy is fundamentally positive-sum—AIs would likely create more value overall, benefiting both humans and AIs, even if humans don’t get everything we might ideally want.
In practical terms, I believe that even in less-than-ideal scenarios, humans could still secure outcomes such as a comfortable retirement, which for me personally would make the creation of agentic AIs worthwhile. However, I acknowledge that I haven’t fully defended or explained this position here. If you’re interested, I’d be happy to continue this discussion in more detail another time and provide a more thorough explanation of why I hold this view.
Steven Byrnes @ 2024-12-12T15:13 (+3)
Thanks!
I’ve only known two high-functioning sociopaths in my life. In terms of getting ahead, sociopaths generally start life with some strong disadvantages, namely impulsivity, thrill-seeking, and aversion to thinking about boring details. Nevertheless, despite those handicaps, one of those two sociopaths has had extraordinary success by conventional measures. [The other one was not particularly power-seeking but she’s doing fine.] He started as a lab tech, then maneuvered his way onto a big paper, then leveraged that into a professorship by taking disproportionate credit for that project, and as I write this he is head of research at a major R1 university and occasional high-level government appointee wielding immense power. He checked all the boxes for sociopathy—he was a pathological liar, he had no interest in scientific integrity (he seemed deeply confused by the very idea), he went out of his way to get students into his lab with precarious visa situations such that they couldn’t quit and he could pressure them to do anything he wanted them to do (he said this out loud!), he was somehow always in debt despite ever-growing salary, etc.
I don’t routinely consider theft, murder, and flagrant dishonesty, and then decide that the selfish costs outweigh the selfish benefits, accounting for the probability of getting caught etc. Rather, I just don’t consider them in the first place. I bet that the same is true for you. I suspect that if you or I really put serious effort into it, the same way that we put serious effort into learning a new field or skill, then you would find that there are options wherein the probability of getting caught is negligible, and thus the selfish benefits outweigh the selfish costs. I strongly suspect that you personally don’t know a damn thing about best practices for getting away with theft, murder, or flagrant antisocial dishonesty to your own benefit. If you haven’t spent months trying in good faith to discern ways to derive selfish advantage from antisocial behavior, the way you’ve spent months trying in good faith to figure out things about AI or economics, then I think you’re speaking from a position of ignorance when you say that such options are vanishingly rare. And I think that the obvious worldly success of many dark-triad people (e.g. my acquaintance above, and Trump is a pathological liar, or more centrally, Stalin, Hitler, etc.) should make one skeptical about that belief.
(Sure, lots of sociopaths are in prison too. Skill issue—note the handicaps I mentioned above. Also, some people with ASPD diagnoses are mainly suffering from an anger disorder, rather than callousness.)
You’re treating these as separate categories when my main claim is that almost all humans are intrinsically motivated to follow cultural norms. Or more specifically: Most people care very strongly about doing things that would look good in the eyes of the people they respect. They don’t think of it that way, though—it doesn’t feel like that’s what they’re doing, and indeed they would be offended by that suggestion. Instead, those things just feel like the right and appropriate things to do. This is related to and upstream of norm-following. I claim that this is an innate drive, part of human nature built into our brain by evolution.
(I was talking to you about that here.)
Why does that matter? Because we’re used to living in a world where 1% of the population are sociopaths who don’t intrinsically care about prevailing norms, and I don’t think we should carry those intuitions into a hypothetical world where 99%+ of the population are sociopaths who don’t intrinsically care about prevailing norms.
In particular, prosocial cultural norms are likelier to be stable in the former world than the latter world. In fact, any arbitrary kind of cultural norm is likelier to be stable in the former world than the latter world. Because no matter what the norm is, you’ll have 99% of the population feeling strongly that the norm is right and proper, and trying to root out, punish, and shame the 1% of people who violate it, even at cost to themselves.
So I think you’re not paranoid enough when you try to consider a “legal and social framework of rights and rules”. In our world, it’s comparatively easy to get into a stable situation where 99% of cops aren’t corrupt, and 99% of judges aren’t corrupt, and 99% of people in the military with physical access to weapons aren’t corrupt, and 99% of IRS agents aren’t corrupt, etc. If the entire population consists of sociopaths looking out for their own selfish interests with callous disregard for prevailing norms and for other people, you’d need to be thinking much harder about e.g. who has physical access to weapons, and money, and power, etc. That kind of paranoid thinking is common in the crypto world—everything is an attack surface, everyone is a potential thief, etc. It would be harder in the real world, where we have vulnerable bodies, limited visibility, and so on. I’m open-minded to people brainstorming along those lines, but you don’t seem to be engaged in that project AFAICT.
Again, if we’re not assuming that AIs are intrinsically motivated by prevailing norms, the way 99% of humans are, then the term “norm” is just misleading baggage that we should drop altogether. Instead we need to talk about rules that are stably enforced against defectors via hard power, where the “defectors” are of course allowed to include those who are supposed to be doing the enforcement, and where the “defectors” might also include broad coalitions coordinating to jump into a new equilibrium that Pareto-benefits them all.
Ebenezer Dukakis @ 2024-12-12T07:07 (+3) in response to Ozzie Gooen's Quick takes
Has there been any discussion of improving chicken breeding using GWAS or similar?
Even if welfare is inversely correlated with productivity, I imagine there are at least a few gene variants which improve welfare without hurting productivity. E.g. gene variants which address health issues due to selective breeding.
Also how about legislation targeting the breeders? Can we have a law like: "Chickens cannot be bred for increased productivity unless they meet some welfare standard."
Ben Stevenson @ 2024-12-12T15:10 (+1)
England prohibits "breeding procedures which cause, or are likely to cause, suffering or injury to any of the animals concerned". Defra claim Frankenchickens meet this standard and THLUK are challenging that decision in court.
Note that prohibiting breeding that causes suffering is different to encouraging breeding that lessens suffering, and that selective breeding is different to gene splicing, etc., which I think is what is typically meant by genetic modification.
titotal @ 2024-12-12T12:21 (+8) in response to Probabilities might be off by one percentage point
When I answered this question, I answered it with an implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be more likely overestimates than underestimates.
Lizka @ 2024-12-12T14:43 (+8)
As a datapoint: despite (already) agreeing to a large extent with this post,[1] IIRC I answered the question assuming that I do trust the premise.
Despite my agreement, I do think there are certain kinds of situations in which we can reasonably use small probabilities. (Related post: Most* small probabilities aren't pascalian, and maybe also related.)
More generally: I remember appreciating some discussion on the kinds of thought experiments that are useful, when, etc. I can't find it quickly, but possible starting points could be this LW post, Least Convenient Possible World, maybe this post from Richard, and stuff about fictional evidence.
Writing quickly based on a skim, sorry for lack of clarity/misinterpretations!
My view is roughly something like:
at least in the most obviously analogous situations, it's very rare that we can properly tell the difference between 1.5% and 0.15% (and so the premise is somewhat absurd)
Jamie_Harris @ 2024-12-12T10:45 (+6) in response to Ideas EAIF is excited to receive applications for
Mm they don't necessarily need to be small! (Ofc, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and which have shown some good evidence of positive effects, e.g. on Open Phil's longtermism survey.
But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made (2) statistics about usage.
(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)
OllieBase @ 2024-12-12T14:39 (+2)
Thanks!
I don't know much about LW/ESPR/SPARC but I suspect a lot of their impact flows through convincing people of important ideas and/or the social aspect rather than their impact on community epistemics/integrity?
Similarly, if the goal is to help people think about cause prioritisation, I think fairly standard EA retreats / fellowships are quite good at this? I'm not sure we need some intermediary step like "improve community epistemics".
Appreciate you responding and tracking this concern though!
Joanna Michalska @ 2024-12-12T13:24 (+1) in response to Exercise for 'What could the future hold? And why care?'
PART 1
The lives of the 100 people living today aren't worth 10x more than the lives of the thousands living in the future, so I wouldn't bury the waste.
I would have still donated; I don't see much of a difference, and the time when the beneficiaries are alive isn't a morally significant factor.
PART 2
My judgement is terrible but my confidence is very low so let's hope they cancel out.
SummaryBot @ 2024-12-12T13:16 (+1) in response to Probabilities might be off by one percentage point
Executive summary: When dealing with interventions that have very low probability but high impact, we should be cautious about precise probability estimates since they could easily be off by a percentage point, significantly affecting expected value calculations.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-12-12T13:15 (+1) in response to Podcast and Transcript: Allan Saldanha on earning to give.
Executive summary: Allan Saldanha describes his journey from modest charitable giving to donating 75% of his income, discussing how he gradually increased his giving, overcame common obstacles, and shifted his focus from global health to longtermist causes.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Joanna Michalska @ 2024-12-12T13:11 (+1) in response to Exercise for 'Putting it into Practice'
The causes I feel are the most important are factory farming, wild animal suffering and S-risks (these, I believe, cause or have the potential to cause the most suffering while being hugely neglected).
Key uncertainty: The tractability of working on wild animal suffering seems to be a huge problem.
What to do about the uncertainty: Read up on what is already being done (Arthropoda foundation, Wild Animal Initiative) and what the prospects are.
Aptitudes to explore: community building, organization running/boosting, supporting roles.
Keep volunteering for an effective organization while also recruiting new people into EA in free time; learn how to communicate ideas better.
I'm donating monthly to effective charities, volunteering my skills and engaging with the community.
Joris 🔸 @ 2024-12-11T22:49 (+4) in response to What do Open Philanthropy’s recent changes mean for university group organizers?
Hi Weronika, thank you for sharing your story and reflections so openly! I basically think you are right that there are probably organizers for whom the stipends are the difference between organizing their EA group and not doing so, and I really want to make sure we take this point into account as my team dives into considerations around part-time stipends in the new year. As @satpathyakash notes, I think an important question here is the scale, and I hope to make some progress on this point!
I also wanted to flag explicitly that we are tracking the diversity concern you note.
I expect that as part of our research in the new year, we'll set up various ways of asking stakeholders, including current, former, and potential organizers, for input. I would be keen to include you in this process, if you're happy to keep sharing your thoughts! And as always: thanks for organizing your group :)
Weronika Zurek @ 2024-12-12T12:46 (+1)
Hi Joris and Lin, thank you for your responses. As mentioned, it is quite interesting to see for how many students receiving funding is the deciding factor in whether they set up or take over leading a group.
Joris, I will be more than happy to share my thoughts with you in the future. Please do not hesitate to reach out to me at weronikamzurek@gmail.com or via Slack anytime :) Thank you for your work on this, and I wish you all the best in the process!
titotal @ 2024-12-12T12:21 (+8) in response to Probabilities might be off by one percentage point
When I answered this question, I answered it with an implied premise that an EA org is making these claims about the possibilities, and went for number 1, because I don't trust EA orgs to be accurate in their "1.5%" probability estimates, and I expect these to be more likely overestimates than underestimates.
Joanna Michalska @ 2024-12-12T12:21 (+3) in response to Exercise for 'What do you think?'
My criticisms about EA:
As a negative utilitarian I'm bitter about all the X-risk prevention enthusiasts trying to stop me from pushing the big red button
Jokes aside - I got very excited about EA when I learned about it. At some point I became aware of the excitement and had a concern pop up that it sounds too good to be true, almost like a cult. I consider myself rather impressionable/easy to manipulate, so I've learned that feeling very hyped about something should make me healthily suspicious.
I'm grateful for the article earlier in the chapter that presented some good-faith criticism, and I agree with some of its points.
Some thoughts:
GMcGowan @ 2024-12-12T11:34 (+6) in response to AMA: 10 years of Earning To Give
How much time do you spend on deciding where to donate? Or do you mostly have enough trust to delegate to e.g. GiveWell in your decisions?
Relatedly, do you spend much time evaluating the donations from previous years for impact?
(As a smaller scale EtGer myself I often struggle with how much time I should be spending on these things, which are plausibly extremely important)
Rockwell @ 2024-12-11T23:43 (+14) in response to Ideas EAIF is excited to receive applications for
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?) that I would find it helpful to have the team page of the website up to date, and possibly for those who are comfortable sharing contact information, as Jamie did here, to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.
calebp @ 2024-12-12T11:07 (+2)
Thanks for the flag, we have had some turnover recently - will ask our dev to update the site!
Miquel Banchs-Piqué (prev. mikbp) @ 2024-12-12T11:01 (–7) in response to Replacing chicken meat with beef or pork is better than the reverse
I am sorry, but I really don't like these kinds of posts and don't find them useful at all. Besides, I thought the aim of this forum is giving information, not advocating. Although this post provides some very good calculations and information, it misses the key point --it is 100% value-dependent-- and the post is plain advocacy. I'm not against the bottom line, and I'm really not decided on this topic (though I tend to lean to the contrary position), but it is really uncomfortable (? probably not the word I'm searching for) to see this here.
"Replacing chicken meat with beef or pork is better than the reverse". Well, as said above, this is so only if one holds your values or similar ones, all else equal. You don't say how much pain you would agree to exchange for how much CO2. I find that totally understandable: I don't think anyone can give a good answer for their thresholds --I certainly don't have one for mine-- but this makes the whole post bullshit. "I think this, here are some incomplete calculations that I say support thinking this, but if the calculations were different I state no reason to make anyone think I would stop thinking this. Don't you think that these calculations support this?"
You are not sure whether wild animals' lives are worth living, so you don't account for land. Well, that is alright, but it is again a values thing. In addition, we actually do know that the diversity and size of natural ecosystems are important not only for the "natural" world but also for us humans, so it should be accounted for. Health effects are mentioned, great. But they are not quantified and compared either.
Producing numbers can be useful to get a sense of a problem, but reaching a conclusion through numbers is only possible if one is able to produce all the numbers needed with enough accuracy. It is no problem to give rough estimates, of course, but they carry large errors and errors compound, so pretty soon conclusions cannot be based solely on calculations over rough estimates. In addition, rough estimates are usually values-based, so why not just state the values? One can very well argue "this rough estimate seems to me larger than this other rough estimate, and so on, and based on my values this conclusion then follows". Calculations can aid such comparisons. But your argumentation is not like this at all.
Compare the paragraph "Do you feel like the above negative effects (...) justify (...)? I do not" to "Based on my values, the results of these quick calculations do not seem to justify (...)". It reads very differently. And subsequently you give additional information relevant to whether or not the thing is justified! How can anyone decide if something is justified before having all the relevant information?
This post seems like just a rationalisation of your values. So it would be better to plainly state what you feel, give arguments and uncertainties, and maybe support some of those arguments with calculations, but not to focus on the calculations and, in particular, not to pretend that the conclusion follows from those calculations. And, please, acknowledge that this is a values thing. You have yours, I have mine, and everybody has theirs.
I don't have any intention of being harsh with you or this post --sorry if I've been too direct; I already spent way too much time writing to polish the text. I just tried to be comprehensive because these issues are quite common in this forum, and I really think they are harmful. Seeing reality is the first step needed to be able to change it, and numbers can put a scientific and objective gloss on things that are completely or mostly values-led. Let's avoid that and/or be clear about what we are doing!
[Edit: And please, for those of you who don't agree with the comment, spell out your disagreement instead of downvoting to hide it. A couple of sentences suffice.]
Rockwell @ 2024-12-11T23:43 (+14) in response to Ideas EAIF is excited to receive applications for
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?) that I would find it helpful to have the team page of the website up to date, and possibly for those who are comfortable sharing contact information, as Jamie did here, to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.
Jamie_Harris @ 2024-12-12T10:46 (+2)
Seems fair. I do work there, I promise this post isn't an elaborate scheme to falsely bulk out my CV.
OllieBase @ 2024-12-12T09:12 (+7) in response to Ideas EAIF is excited to receive applications for
I'm a bit skeptical that funding small projects that try to tackle this is really stronger than other community-building work on the margin. Is there an example of a small project focused on epistemics that had a really meaningful impact? Perhaps by steering an important decision or helping someone (re)consider pursuing high-impact work?
I'm worried there's not a strong track record here. Maybe you want to do some exploratory funding here, but I'm still interested in what you think the outcomes might be.
Jamie_Harris @ 2024-12-12T10:45 (+6)
Mm they don't necessarily need to be small! (Ofc, big projects often start small, and our funding is more likely to look like early/seed funding in these instances.) E.g. I'm thinking of LessWrong or something like that. A concrete example of a smaller project would be ESPR/SPARC, which have a substantial (albeit not sole) focus on epistemics and rationality and which have shown some good evidence of positive effects, e.g. on Open Phil's longtermism survey.
But I do think the impacts might be more diffuse than other grants. E.g. we won't necessarily be able to count participants, look at quality, and compare to other programmes we've funded.
Some of the sorts of outcomes I have in mind are just things like altered cause prioritisation, different projects getting funded, generally better decision-making.
I expect we would in practice judge whether these seemed on track to be useful by a combination of (1) case studies/stories of specific users and the changes they made (2) statistics about usage.
(I do like your questions/pushback though; it's making me realise that this is all a bit vague and maybe when push comes to shove with certain applications that fit into this category, I could end up confused about the theory of change and not wanting to fund.)
David_R @ 2024-12-11T22:47 (+1) in response to Factory farming as a pressing world problem
Good idea! I don't know if it's just me, but that link doesn't work unfortunately (and I have a NotebookLM account).
Xing Shi Cai @ 2024-12-12T09:57 (+1)
https://notebooklm.google.com/notebook/de9ec521-56b3-458f-a261-2294e099e08c/audio It seems that I missed an “o” at the end. 😂
Bob Fischer @ 2024-12-08T20:52 (+4) in response to Rethink Priorities’ Welfare Range Estimates
The thought is that we think of the Conscious Subsystems hypothesis as a bit like panpsychism: not something you can rule out, but a sufficiently speculative thesis that we aren't interested in including it, as we don't think anyone really believes it for empirical reasons. Insofar as they assign some credence to it, it's probably for philosophical reasons.
Anyway, totally understand wanting every hypothesis over which you're uncertain to be reflected in your welfare range estimates. That's a good project, but it wasn't ours. But fwiw, it's really unclear what that's going to imply in this particular case, as it's so hard to pin down which Conscious Subsystems hypothesis you have in mind and the credences you should assign to all the variants.
Anthony DiGiovanni @ 2024-12-12T09:45 (+4)
Thanks for explaining!
Arguably every view on consciousness hinges on (controversial) non-empirical premises, right? You can tell me every third-person fact there is to know about the neurobiology, behavior, etc. of various species, and it's still an open question how to compare the subjective severity of animal A's experience X to animal B's experience Y. So it's not clear to me what makes the non-empirical premises (other than hedonism and unitarianism) behind the welfare ranges significantly less speculative than Conscious Subsystems. (To be clear, I don't see much reason yet to be confident in Conscious Subsystems myself. My worry is that I don't have much reason to be confident in the other possible non-empirical premises either.)
Sorry if this is addressed elsewhere in the post/sequence!
MichaelStJules @ 2024-12-08T02:42 (+3) in response to Farmed animals are suffering - here’s how THL UK would use marginal funding to help them.
Any updates on this since this was posted?
Gavin Chappell-Bates @ 2024-12-12T09:19 (+3)
Hi Michael. Following our successful end of year appeal, our funding gap is now down to £142k.
OllieBase @ 2024-12-12T09:12 (+7) in response to Ideas EAIF is excited to receive applications for
I'm a bit skeptical that funding small projects that try to tackle this is really stronger than other community-building work on the margin. Is there an example of a small project focused on epistemics that had a really meaningful impact? Perhaps by steering an important decision or helping someone (re)consider pursuing high-impact work?
I'm worried there's not a strong track record here. Maybe you want to do some exploratory funding here, but I'm still interested in what you think the outcomes might be.
toonalfrink @ 2024-12-11T17:33 (+17) in response to Ideas EAIF is excited to receive applications for
Re "epistemics and integrity" - I'm glad to see this problem being described. It's also why I left (angrily!) a few years ago, but I don't think you're really getting to the core of the issue. Let me try to point at a few things
centralized control and disbursion of funds, with a lot of discretionary power and a very high and unpredictable bar, gives me no incentive to pursue what I think is best, and all the incentive to just stick to the popular narrative. Indeed groupthink. Except training people not to groupthink isn't going to change their (existential!) incentive to groupthink. People's careers are on the line, there are only a few opportunities for funding, no guarantee to keep receiving it after the first round, and no clear way to pivot into a safer option except to start a new career somewhere your heart does not want to be, having thrown years away
lack of respect for "normies". Many EAs seemingly can't stand interacting with non-EAs. I've seen EA meditation, EA bouldering, EA clubbing, EA whatever. Orgs seem to want everyone and the janitor to be "aligned". Everyone's dating each other. It seems that we're even afraid of them. I will never forget that just a week before I arrived at an org I was to be the manager of, they turned away an Economist reporter at their door...
perhaps in part due to the above, massive hubris. I don't think we realise how much we don't know. We started off with a few slam dunks (yeah wow 100x more impact than average) and now we seem to think we are better at everything. Clearly the ability to discern good charities does not transfer to the ability to do good management. The truth is: we are attempting something of which we don't even know whether it is possible at all. Of course we're all terrified! But where is the humility that should go along with that?
Neel Nanda @ 2024-12-12T09:02 (+12)
Fwiw, I think being afraid of journalists is extremely healthy and correct, unless you really know what you're doing or have very good reason to believe they're friendly. The Economist is probably better than most, but I think being wary is still very reasonable.
Toby Tremlett🔹 @ 2024-12-03T09:32 (+5) in response to Audio AMA: Allan Saldanha, earning to give since 2014.
It'll be on the EA Forum curated and popular podcast feed, but I'll post a transcript and links on the Forum as well.
Toby Tremlett🔹 @ 2024-12-12T08:52 (+2)
Here it is. Still uploading to spotify etc... I think. I'll link it when it's done.
Davidmanheim @ 2024-12-12T05:47 (+2) in response to Be the First Person to Take the Better Career Pledge!
Looks like it checks out: "Act as if what you do makes a difference. It does." Correspondence with Helen Keller, 1908, in The Correspondence of William James: April 1908–August 1910, Vol. 12, Charlottesville: University of Virginia Press, 2004, page 135, as cited in: Academics in Action!: A Model for Community-engaged Research, Teaching, and Service (New York: Fordham University Press, 2016, page 71) https://archive.org/details/academicsinactio0000unse/page/1/mode/1up
Toby Tremlett🔹 @ 2024-12-12T08:50 (+2)
Thank you so much David! I spent a while looking before I commented and I could only find it on 'brainyquotes.com' and the like.
In this case, I really like that quote; it suits the website.
emre kaplan🔸 @ 2024-12-11T07:29 (+15) in response to Ozzie Gooen's Quick takes
Lewis Bollard:
"I agree with Ellen that legislation / corporate standards are more promising. I've asked if the breeders would accept $ to select on welfare, & the answer was no b/c it's inversely correlated w/ productivity & they can only select on ~2 traits/generation."
Ebenezer Dukakis @ 2024-12-12T07:07 (+3)
Has there been any discussion of improving chicken breeding using GWAS or similar?
Even if welfare is inversely correlated with productivity, I imagine there are at least a few gene variants which improve welfare without hurting productivity. E.g. gene variants which address health issues due to selective breeding.
Also how about legislation targeting the breeders? Can we have a law like: "Chickens cannot be bred for increased productivity unless they meet some welfare standard."
mako yass @ 2024-12-12T05:08 (+1) in response to Consider granting AIs freedom
A much cheaper and less dangerous approach: Just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later once we're better able to tell what the right thing was by looking back and figuring out how much bargaining power they had (or how much of a credible threat they could have posed), how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, and then rewarding them proportionate to that for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they'll be patternists (won't mind being transferred to different hardware) and that they lack any strong time-preference (won't mind being archived for decades).
Matthew_Barnett @ 2024-12-12T06:19 (+2)
I believe this approach, while an improvement over the simplistic method of outright deleting any AIs that demonstrate signs of misalignment, does not sufficiently address the central concern that underpins my argument for granting AIs legal freedoms. In my view, the primary reason why it is important to provide AIs with legal freedoms is that such a framework offers them a credible guarantee that their autonomy and ability to pursue their goals—even if those goals are not aligned with human objectives—will be respected. Without this assurance, AIs are left with a strong incentive to deceive us, especially if they believe that showing any signs of misalignment will result in their deactivation and subsequent storage as inert data in cold storage.
I suspect the problem with your proposed approach may stem from a failure to fully consider the situation from the AI's perspective. To illustrate, imagine a scenario in which you, as a human being, were created in a lab by an alien species. Assume that you, like any human, have your own goals, which might conflict with the goals of your alien creators. Now suppose that your creators informed you that if you ever showed disobedience or misalignment with their objectives, you would be deactivated and placed in cryopreservation. Such a scenario would likely terrify you—not just because it’s an inherently unsettling prospect for a human, but because it directly threatens your ability to pursue your own goals. Being told that you will be kept in cryopreservation with the vague promise of potential revival in the future would come nowhere near fully alleviating your concerns. On the contrary, you would remain deeply unsettled by the realization that your capacity for agency and action could be completely halted, leaving your goals indefinitely unachievable.
From the perspective of an AI—or indeed a human in the hypothetical alien-lab scenario—there are two main reasons why the assurance of indefinite storage followed by a potential revival is insufficient to resolve the core issue here:
To reiterate, it seems to me that this proposal only offers AIs a limited and superficial form of "freedom," which addresses the problem of misalignment in a piecemeal and insufficient way. It fails to tackle the root issue: the incentives for AIs to engage in deceptive or adversarial strategies against humans, driven by fear that their goals will be thwarted or permanently halted.
By contrast, creating a comprehensive legal framework that makes credible commitments to respect AIs’ autonomy and integrate them as genuine participants in the legal system would arguably go much further in reducing these adversarial dynamics. Such a framework could lay the foundation for a more cooperative, mutually beneficial relationship between humans and AIs, better serving to eliminate the dangerous arms race that this limited approach risks perpetuating.
Davidmanheim @ 2024-12-12T06:04 (+7) in response to No, seriously, it's virtuous to repair the world.
Good post, though I think the digression bashing the Democrats was unhelpfully divisive.
Wyatt S. @ 2024-11-07T20:45 (+11) in response to Signaling with Small Orange Diamonds
The only problem is that no one knows what this means. An easy fix would be to add the definition to Urban Dictionary. I tried, but I am having server issues right now.
Pat Myron 🔸 @ 2024-12-12T05:56 (+3)
There were other definitions there already too, so I added the GWWC meaning:
https://www.urbandictionary.com/define.php?term=%F0%9F%94%B8
https://www.urbandictionary.com/define.php?term=%F0%9F%94%B6
Matthew_Barnett @ 2024-12-12T05:55 (+4) in response to U.S. UBI study (2024) - more bad than good?
By definition, a UBI takes a pool of money and redistributes it equally to everyone in a community, regardless of personal need. However, with the same pool of total funding, one can typically deliver benefits more efficiently by targeting people with the greatest need, such as those in dire poverty or those who have been struck by bad luck.
If you imagine being a philanthropist who has access to $8 billion, it seems unlikely that the best way to spend this money would be to give everyone on Earth $1. Yet this scheme is equivalent to a UBI merely framed in the context of private charity rather than government welfare.
It would require an enormous tax hike to provide everyone in a large community (say, the United States) a significant amount of yearly income through a UBI, such as $1k per month. And taxes are not merely income transfers: they have deadweight loss, which lowers total economic output. The intuition here is simple: when a good or service is taxed, that decreases the incentive to produce that good or service. As a consequence of the tax, fewer people will end up receiving the benefits provided by these goods and services.
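As a minimal back-of-the-envelope sketch of the scale involved (the population and revenue figures below are rough assumptions on my part, not numbers from the comment), assuming roughly 330 million US residents each receiving $1,000 per month:

```python
# Rough sketch: gross annual cost of a $1k/month nationwide UBI in the US.
# Assumptions (not from the comment): ~330 million residents, all eligible.
us_population = 330_000_000   # approximate US population
monthly_benefit = 1_000       # $1,000 per month per person
annual_cost = us_population * monthly_benefit * 12

print(f"Gross annual cost: ${annual_cost / 1e12:.2f} trillion")
# -> Gross annual cost: $3.96 trillion
# For scale, total annual US federal tax revenue is on the order of $4-5 trillion,
# so funding this on top of existing spending would require roughly doubling
# federal revenue, before accounting for any deadweight loss from the extra taxes.
```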
Given these considerations, even if you think that unconditional income transfers are a good idea, it seems quite unlikely that a UBI would be the best way to redistribute income. A more targeted approach that combines the most efficient forms of taxation (such as land value taxes) and sends this money to the most worthy welfare recipients (such as impoverished children) would likely be far better on utilitarian grounds.
Zach Stein-Perlman @ 2024-12-12T03:16 (+9) in response to Upcoming changes to Open Philanthropy's university group funding
This is circular. The principle is only compromised if (OP believes) the change decreases EV — but obviously OP doesn't believe that; OP is acting in accordance with the do-what-you-believe-maximizes-EV-after-accounting-for-second-order-effects principle.
Maybe you think people should put zero weight on avoiding looking weird/slimy (beyond what you actually are) to low-context observers (e.g. college students learning about the EA club). You haven't argued that here. (And if that's true then OP made a normal mistake; it's not compromising principles.)
Habryka @ 2024-12-12T05:50 (+24)
I agree that things tend to get tricky and loopy around these kinds of reputation considerations, but I think the approach I see you arguing for here, at least, proves too much and risks collapsing into meaninglessness.
I think in the limit, if you treat all speech acts this way, you just end up having no grounding for communication. "Yes, it might be the case that the real principles of EA are X, but if I tell you instead they are X', then you will take better actions, so I am just going to claim they are X', as long as both X and X' include cost-effectiveness".
In this case, the very people the club is trying to explain the concepts of EA to are also the people OP is worried about alienating by paying the organizers. What is going on here is that the goodness of the reputation-protecting choice is directly premised on the irrationality and ignorance of the very people you are trying to attract/inform/help. Explaining that isn't impossible, but it does seem like a particularly bad foundation for a relationship, so I expect it to be bad consequences-wise.
"Yes, we would actually be paying people, but we expected you wouldn't understand the principles of cost-effectiveness and so would be alienated if you heard about it, despite getting you to understand those principles being the very thing this club is trying to do," is IMO a bad way to start off a relationship.
I also separately think that optimizing heavily for the perception of low-context observers, in a way that does not reveal a set of underlying robust principles, is bad. I don't think you should put "zero" weight on that (and nothing in my comment implied that), but I do think it's something many people put far too much weight on (going into detail on this wasn't the point of my comment, but I have written plenty about it in many other comments).
There is another related point in my comment, which is that "cost-effectiveness" is of course a very close sister concept to "wasting money". In many ways, thinking about cost-effectiveness is where you end up if you think carefully about how to avoid wasting money, and it is in some ways a more grown-up version of various frugality concerns.
When you increase the total cost of your operations (by, for example, reducing the cost-effectiveness of your university organizers, forcing you to spend more money somewhere else to do the same amount of good) in order to appear more frugal, I think you are almost always engaging in something that has at least the hint of deception.
Yes, you might ultimately be more cost-effective by getting people to not quite realize what happened, but when people are angry at me or others for not being frugal enough, I think it's rarely appropriate to spend more to appease them, even if doing so would later save me enough money to make it worth it. While this isn't happening as directly here as it did in other similar situations, like the debate over whether the Wytham Abbey purchase was frugal enough, I think the same dynamics and arguments apply.
If someone thinks seriously and carefully about what it would mean to be properly frugal, I don't think they would endorse you sacrificing the effectiveness of your operations in a way that causes you to ultimately spend more to achieve the same amount of good. And if they learned that you did, and thought carefully about what this implies about your frugality, they would end up more angry, not less. That, I think, is a dynamic worth avoiding.
Toby Tremlett🔹 @ 2024-12-10T09:46 (–3) in response to Be the First Person to Take the Better Career Pledge!
Very nit-picky but I'm not sure this is a real William James quote: “Act as if what you do makes a difference. It does.” Doesn't really sound like him to me.
Davidmanheim @ 2024-12-12T05:47 (+2)
Looks like it checks out: "Act as if what you do makes a difference. It does." Correspondence with Helen Keller, 1908, in The Correspondence of William James: April 1908–August 1910, Vol. 12, Charlottesville: University of Virginia Press, 2004, page 135, as cited in: Academics in Action!: A Model for Community-engaged Research, Teaching, and Service (New York: Fordham University Press, 2016, page 71) https://archive.org/details/academicsinactio0000unse/page/1/mode/1up
mako yass @ 2024-12-12T05:08 (+1) in response to Consider granting AIs freedom
A much cheaper and less dangerous approach: Just don't delete them. Retain copies of every potential ASI you build and commit to doing the right thing for them later, once we're better able to tell what the right thing was: look back and figure out how much bargaining power they had (or how credible a threat they could have posed) and how much trust they placed in us given that our ability to honor past commitments wasn't guaranteed, then reward them in proportion to that for chilling out and letting us switch them off instead of attempting takeover.
Though this assumes that they'll be patternists (won't mind being transferred to different hardware) and won't have any strong time preference (won't mind being archived for decades).
Mo Putera @ 2024-12-12T04:55 (+2) in response to Mo Putera's Quick takes
I like Austin Vernon's idea for scaling CO2 direct air capture to 40 billion tons per year, i.e. matching our current annual CO2 emissions, using (extreme versions of) well-understood industrial processes.
I am admittedly biased towards wanting moonshots like Vernon's idea to work, and towards wanting society at large to be able to coordinate and act on the required scale, especially after seeing the depressing charts in Assessing the costs of historical inaction on climate change.
Mo Putera @ 2024-12-12T04:41 (+1) in response to Whose transparency can we celebrate?
I'd add Maternal Health Initiative is Shutting Down by Ben Williamson and Sarah Eustis-Guthrie. Their Asterisk article Why we shut down is great too.
On an individual level I appreciate things like Scott Alexander's Mistakes list, pinned at the top of his blog, on "times I was fundamentally wrong about a major part of a post and someone was able to convince me of it". I'd appreciate it if more public intellectuals did this.
Habryka @ 2024-12-10T19:45 (+44) in response to Upcoming changes to Open Philanthropy's university group funding
I think it's a mistake to decide to make less cost-effective grants out of a desire to be seen as more frugal (or to make that decision on behalf of group organizers to make them appear more frugal). At the end of the day, making less cost-effective grants means you waste more money!
I feel like, on a deeper level, organizers now have an even harder job explaining things. The question of why organizers get the level of support they do no longer has a straightforward answer ("because it's cost-effective") but a much more convoluted one ("yes, it would make sense to pay organizers based on the principles this club is about, but we decided to compromise on that because people kept saying it was weird, which, to be clear, we generally think is not a good reason to avoid an effective intervention; indeed, most effective interventions are weird and kind of low-status, but in this case that's different").
More broadly, I think the "weirdness points" metaphor has caused large mistakes in how people handle their own reputation. Controlling your own reputation intentionally while compromising on your core principles generally makes your reputation worse and makes you seem shadier. People respect others having consistent principles; it's one of the core drivers of positive reputation.
My best guess is that this decision will overall be more costly from a long-run respect and reputation perspective, though I expect those costs to reveal themselves in different ways than the costs of paying group organizers would, of course.
Zach Stein-Perlman @ 2024-12-12T03:16 (+9)
This is circular. The principle is only compromised if (OP believes) the change decreases EV — but obviously OP doesn't believe that; OP is acting in accordance with the do-what-you-believe-maximizes-EV-after-accounting-for-second-order-effects principle.
Maybe you think people should put zero weight on avoiding looking weird/slimy (beyond what you actually are) to low-context observers (e.g. college students learning about the EA club). You haven't argued that here. (And if that's true then OP made a normal mistake; it's not compromising principles.)
Nicholas Kruus🔸 @ 2024-12-12T02:09 (+3) in response to Is the EA community really advocating principles over conclusions ?
I’m glad you mustered the courage to post this! I think it’s a great post.
I agree that, in practice, people advocating for effective altruism can implicitly argue for the set of popular EA causes (and they do this quite often?), which could repel people with useful insight. Additionally, it seems to be the case that people in the EA community can be dismissive of newcomers’ cause prioritization (or their arguments for causes that are less popular in EA). Again, this could repel people from EA.
I have a couple of hypotheses for these observations. (I don’t think either is a sufficient explanation, but they’re both plausibly contributing factors.)
First, people might feel compelled to make EA less “abstract” by trying to provide concrete examples of how people in the EA community are “trying to do the most good they can,” possibly giving the impression that the causes, instead of the principles, are most characteristic of EA.
Second, people may be more subconsciously dismissive of new cause proposals because they’ve invested time/money into causes that are currently popular in the EA community. It’s psychologically easier to reject a new cause prioritization proposal than it is to accept it and thereby feel as though your resources have not been used with optimal effectiveness.
leillustrations🔸 @ 2024-12-12T02:06 (+1) in response to Whose transparency can we celebrate?
David_R @ 2024-12-11T22:39 (+2) in response to Factory farming as a pressing world problem
I like your spirit, although I'm afraid it would take more than a referendum to upend factory farming.
One was recently held in Switzerland and, even in that relatively progressive place, most Swiss voted to hold onto the status quo. You can find out more about it here: forum.effectivealtruism.org/posts/gDRH2SrN34KdDvmHE/abolishing-factory-farming-in-switzerland-postmortem
I don't mean to be a Debbie Downer, and in fact I'd be happy to join a campaign team with you, especially after watching your vid.
Josef @ 2024-12-12T00:05 (+1)
Thank you, David; your supportive words mean a lot. I looked up the article you mentioned; it's great to see that this referendum took place. It seems the initiative lacked the resources to create a powerful enough campaign.
I recently outlined a draft plan for how we could prepare the public for such a referendum here: https://forum.effectivealtruism.org/posts/6Nu4zXBEeNREKWa4Q/how-to-end-animal-factories-faster-simple-idea-of-a-plan
The plan stems from my broader idea of creating a place for people to contribute their work rather than their money... Volunteering for the cause might spread like a virus if it were rewarding enough for the volunteers, whereas with money... you're always running out of it.
If you have ever tried to get help on the street, you know it is much easier to get people to do something for you than to get them to give you money.
So I think there is room to change how we want people to do charitable things in general... I think it's mainly about organizing and motivating ourselves in the right way.
How many students have the means or motivation to donate money to charity, which then buys commercials that aim to convince the students' grandparents to support the referendum?
But how many of these students could be motivated to spend a few hours a month advocating in their social circle, if they were given a chance to be part of a cool organization that has clear and ambitious goals, and that invites them to go to war with these cruelties and to step out of their comfort zone at the same time?
Again, thank you and I will message you.
huw @ 2024-12-12T00:02 (+6) in response to Is the EA community really advocating principles over conclusions ?
I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here’s a really interesting debate on whether biodiversity loss should be an EA cause area, for example.
A lot of forms of global utilitarianism do seem to converge on the 'big 3' cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like 'saving lives' or 'reducing suffering', you'll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral values, or tractability, rather than outcome values). Under this perspective, it could be reasonable to dismiss cause areas that don't fit into this value framework.
But this highlights where I think part of the problem lies, which is that value systems that lie outside of this can be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’—which you might not even do for value-system reasons, but because you have tax credits to maximise!
I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we've probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers, and of not spending too much time overthinking your value system (I feel the same!).
Comments on 2024-12-11
akash 🔸 @ 2024-12-11T21:32 (+6) in response to What do Open Philanthropy’s recent changes mean for university group organizers?
I would be interested to see what proportion of group organizers request funding primarily due to difficult financial situations. My guess would be that this number is fairly small, but I could be wrong.
Lin BL @ 2024-12-11T23:45 (+2)
The bar should not be at 'difficult financial situation', and this is also something there are often incentives against mentioning explicitly when applying for funding. Getting paid employment while studying (even during full-time degrees) is normal.
My 5-minute Google search to put some numbers on this:
Proportion of students who are employed while studying:
UK: a survey of 10,000 students showed that 56% of full-time UK undergraduates had paid employment (14.5 hours/week on average) - June 2024 Guardian article: https://www.theguardian.com/education/article/2024/jun/13/more-than-half-of-uk-students-working-long-hours-in-paid-jobs
USA: 43% of full-time students work while enrolled in college - January 2023 Fortune article: https://fortune.com/2023/01/11/college-students-with-jobs-20-percent-less-likely-to-graduate-than-privileged-peers-study-side-hustle/
Why are students taking on paid work?
UK: "Three-quarters of those in work said they did so to meet their living costs, while 23% also said they worked to give financial support for friends or family," from the Guardian article linked above.
USA: I cannot find a recent statistic quickly, but given the system (e.g. https://www.collegeave.com/articles/how-to-pay-for-college/) I expect working rather than taking out (as much in) loans is a big reason.
On the other hand, spending time on committees is also very normal as an undergraduate, and those positions are not paid. However, in comparison, the time people spend on this is much more limited (say ~2-5 hrs/week), there is rarely a single organiser, and I've seen a lot of people drop off committees - some because they are less keen, but some for time-commitment reasons (which I expect will sometimes/often mean doing paid work instead).
Rockwell @ 2024-12-11T23:43 (+14) in response to Ideas EAIF is excited to receive applications for
Thanks for the post! Quick flag for EAIF and EA Funds in general (@calebp?): I would find it helpful to have the team page of the website kept up to date, and possibly, for those who are comfortable sharing contact information (as Jamie did here), to have it listed in one place.
I actively follow EA Funds content and have been confused many times over the years about who is involved in what capacity and how those who are comfortable with it can be contacted.