Two of your reasons to go vegan involve getting to tell others you are vegan. I find this pretty dishonest because I assume you aren't telling them this.
It's not about telling others I'm vegan. It's about telling them that I think non-human animals are worthy of moral consideration. I also tell people that I donate to animal welfare charities, and even which ones.
Happy to see this! I continue to think that smart EA funding expansion is an important area and wish it got more attention.
Minor notes:
If I'm counting right, this comes to a total of approximately $362,000. The Funding Circle website states that "Our members give above $100,000 to this cause area each year, and this is the expected minimum annual giving to join." So it seems like the funding circle is basically 2-3 people, I presume? Or is there money I'm missing?
Links to the nonprofits would be useful in the post. As a simple example, I tried searching "Bedrock" and got many miscellaneous results.
I really hope this work can help us identify great founders in this area, and then we can scale up the work from those individuals.
I'm surprised to see the focus on fundraising charities in international countries. Looking now, it seems like the giant majority of charitable funding is given by the top few countries. (Maybe this is where Ark and Bedrock are focused; that wasn't clear.)
We at Effective Giving Ireland are thrilled to be supported by Meta-Charity Funders. It's really going to be a game-changer for us. For tax reasons, we'd strongly encourage everyone to donate effectively in their home country; many countries now have an effective giving option, which is often tax-deductible.
I notice that one of the UK grants for alternative proteins which you cite says, "Cultured meat, insect-based proteins and proteins made by fermentation" (my emphasis). I find this quite concerning.
I didn't previously realise the term "alternative proteins" includes insects. Has this always been the case? Is the definition contested or is a different term needed?
From the NAPIC website, they include Entocycle, "a world-leading provider of insect farming technology", as one of their partners (though this may not be representative). Interestingly, Entocycle do have two pages on insect welfare.
I am in favor of people considering unconventional approaches to charity.
At the same time, I find it pretty easy to argue against this. Some immediate things that come to mind:
1. My impression is that gambling is typically net-negative for participants, often highly so. I generally don't like seeing work go towards projects that are net-negative for their main customers (among others).
2. Out of all the "do business X, but it goes to charity" options, why not pick something that is itself beneficial? There are many business areas to choose from. Insurance can be pretty great; I think Lemonade Insurance did something clever with charity.
3. I think it's easy to start out altruistic with something like this, then become a worse person as you respond to incentives. In the casino business, the corporation is highly incentivized to use increasingly sleazy tactics to find, bait, and often bankrupt whales. If you don't do this, your competitors will, and they'll have more money to advertise.
4. I don't like making this the main thing, but I'd expect the PR to be really bad for anything this touches. "EAs don't really care about helping people, they just use that as an excuse to open sleazy casinos." There are few worse things to be associated with. A lot of charities are highly protective of their brands (and often with good reason).
5. It's very easy for me to imagine something like this creating worse epistemics. In order to grow revenue, it will be very "convenient" to downplay the harms caused by the casino. If such a thing does catch on in a certain charitable cluster, very soon that cluster will be encouraged to lie and self-deceive. We saw some of this with the FTX incident.
6. The casino industry attracts and feeds off clients with poor epistemics. I'd imagine they (as in, the people the casino actually makes money from) wouldn't be the type who would care much about reasonable effective charities.
When I personally imagine a world where "a significant part of the effective giving community is tied to high-rolling casinos", it's hard for me to imagine this not being highly dystopian.
By all this, I hope the author doesn't treat this as an attack on them specifically. But I would consider it an attack on specific future project proposals that suggest advancing manipulative and harmful industries and tying such work to the topics of effective giving or effective philanthropy. I very much do not want to see more work done here. I'm spending some time on this comment mainly to use it as an opportunity to hopefully dissuade others considering this sort of thing in the future.
On this note, I'd flag that I think a lot of the crypto industry has been full of scams and other manipulative and harmful behavior. Some of this got very close to EA (e.g. with FTX), and I'm sure there's a long tail of much smaller projects. I consider much of this (the bad parts) a black mark on all connected and responsible participants, and very much do not want to see more of it.
I agree that casinos are an evil business, and I would be extremely wary of making people worse off in the hope of "making it up" via charitable contributions.
@Brad West🔸 has already answered point by point, so I'll just add that I believe it's better to think of my proposal as a charity that also provides games to its customers, rather than a casino that donates its profits.
I'd argue that regular casinos are net positive for people without a gambling addiction, who treat them as evening entertainment with an almost guaranteed loss. The industry preys on people who have lost more than they could afford and are trying to get even, which isn't possible in my proposal.
I struggle to imagine someone who would donate more to their DAF than they feel comfortable with because they felt devastated that the money went to a charity that wasn't their choice.
A year ago, I wrote "It's OK to Have Unhappy Holidays" during a time when I wasn’t feeling great about the season myself. That post inspired someone to host an impromptu Christmas Eve dinner, inviting others on short notice. Over vegan food and wine, six people came together to share their feelings about the holidays, reflect on the past year with gratitude, and enjoy a truly magical evening. It’s a moment I’m deeply thankful for. Perhaps this could inspire you this year—to host a gathering or spontaneously reach out to those nearby for a walk, a drink, or a shared meal.
Probably depends on how you describe it and frame it. How do you explain why you are telling them this?
If you’re willing, you might do a trial on this. Do something like divide your clients into two random groups and send this message to half. See if you observe any difference (try to keep track of the numbers as well as more qualitative outcomes, like how they respond to the card).
I think I’m at the EAACX meeting point in Amalia Rodriguez Park. I’m near the statue of two women kissing. (O segredo). Has the event moved or am I just the first one here? I think I’ll go to the café now and get something to eat and wait to hear from anyone.
How sure are you that you're right and the other EA (who has also likely thought carefully about their donations) is wrong, though? I'm much more confident that I will increase the impact of someone's donation / spending if they are not in EA; otherwise I risk being too convinced of my own opinion and causing harm (through negative side effects, opportunity costs, or lowering the value of their donation).
Personally speaking, if I say I think something is 10x as effective, I mean that as an all-things-considered statement, which includes deferring however much I think it is appropriate to the views of others.
Well done, it's super cool to see everything you guys have achieved this year. One thing I was surprised by is that EAGxs cost almost a third as much per attendee as EAGs while having a slightly higher likelihood to recommend. I assume part of this is because EAGs are typically held in more expensive areas, but I'd be surprised if that explained all of it. Are there any other factors that explain the cost difference?
Good question! Yes, TL;DR large venues in major US/UK cities are more expensive per-attendee than smaller venues in other cities.
Eli covered this a bit in our last post about costs. There aren't that many venues big enough for EA Globals, and the venues that are big enough force you to use their in-house catering company, generally have a minimum mandatory spend, and significantly mark up the costs of their services. Our best guesses at why (from Eli's post):
Big venues are just generally quite expensive to run (big properties, lots of staff, etc.).
These venues are often empty, forcing them to charge more when they actually do host events.
Catering costs are marked up in order to mark venue costs down. Many customers will anchor on an initial venue cost; by the time they hear the exorbitant catering fees later, they may feel it’s too late to switch. (We always ask to see both venue and catering costs up front.)
I suspect straightforward lack of competition also plays a role. As an extreme example, if there's only one venue in a city large enough for conferences and you want to run a conference there, they can basically charge what they want to.
Meanwhile, venues that can host 200–600 people (EAGx events) are easier to come by. EAGx organizers often secure university venues which are cheap but often more difficult to work with. Location does play a role, of course. You may not be surprised to learn that Mexico City, Bangalore and Berlin are cheaper than Oakland, London and Boston. But we also hosted events in Sydney and Copenhagen this year, so I think the above cost vs. size factor / availability of space plays a bigger role.
I do want to add that we are consistently impressed by EAGx and EA Summit organizers when it comes to resourcefulness and the LTR (likelihood to recommend) scores they generate given the lower cost per attendee. The EA Brazil Summit team, for example, had food donated by the Brazilian Vegetarian Society. The bar for hustling in service of impact is continuously being raised, and we hustle on.
(Other team members or EAGx organizers should feel free to jump in here and push back / add more details.)
I guess it is ok to mention it, particularly in a holiday gift. Specifically I would feel it is ok to mention what it achieved without being preachy. Some companies use smaller amounts (1%) to signal social impact.
How often do grantees pivot to more modest goals or different tactics after receiving a grant for the ambitious goals and specific plans in their application, once they realise that their initial goals are very hard to reach or that their initial idea does not deliver results? How do you balance holding grantees accountable vs. providing them flexibility?
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full Google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏
Honourable Mentions:
Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time for me, as I was also highly concerned that the AI Safety field was having a "legitimacy problem".[1] As such, I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the 'vibe shift' in Silicon Valley as a chaser.
On Owning Our EA Affiliation by @Alix Pham: One of the most wholesome EA posts this year on the Forum? The post is a bit bittersweet to me now, as I was moved by it at the time but now affiliate and identify less with EA than I have for a long time. The vibes around EA have not been great this year, and while many people are explicitly or implicitly abandoning the movement, Alix actually took the radical approach of doing the opposite. She's careful to try to draw a distinction between affiliation and identity, and really engages in the comments, leading to very good discussion.
Policy advocacy for eradicating screwworm looks remarkably cost-effective by @MathiasKB🔸: EA Megaprojects are BACK baby! More seriously, this post had the most 'blow my mind' effect on me this year. Who knew that the US Gov already engages in a campaign of strategic sterile-fly bombing, dropping millions of them on Central America every week? I feel like Mathias did great work finding a signal here, and I'm sure other organisations (maybe an AIM-incubated one) are well placed to pick up the baton.
Forum Posters of the Year:
@Vasco Grilo🔸 - I presume that the Forum has a bat-signal of sorts for when long discussions happen without anyone trying to do an EV calculation. And in such dire times, Vasco appears, always with amazing sincerity and thoroughness. Probably the Forum's current poster child of 'calculate all the things' EA. I think this year he's been an awesome presence on the Forum, and long may it continue.
@Matthew_Barnett - Matthew is somewhat of an enigma to me ideologically; there have been many cases where I've read a position of his and gone "no, that can't be right". Nevertheless, I think the consistently high-quality nature of his contributions on the Forum, often presenting an unorthodox view compared to the rest of EA, is worth celebrating regardless of whether I personally agree. Furthermore, one of my major updates this year has been towards viewing the Alignment Problem as one of political participation and incentives, and this can probably be traced back significantly to his posts this year.
Non-Forum Poasters of the Year:
Matt Reardon (mjreard on X) - X is not a nice place to be an Effective Altruist at the moment. EA seems to be attacked from all directions, which means it's not fun at all to push back on people and defend the EA point of view. Yet Matt has just consistently pushed back on some of the most egregious cases of this,[2] and has had good discussions on EA Twitter as well.
Jacques Thibodeau (JacquesThibs on X) - I think Jacques is great. He does interesting, cool work on Alignment, and you should consider working with him if you're also in that space. One of the most positive things Jacques does on X is build bridges across the wider 'AGI Twitter', including with many who are sceptical of or even hostile to AI Safety work, like teortaxesTex or jd_pressman. I think this is to his great credit, and I've never (or rarely) seen him get that angry on the platform, which might even deserve another award!
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year, and to tell me what your favourite posts/comments/contributors were!
I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy.
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!
I've previously found Francois Chollet's arguments that LLMs are unlikely to scale to AGI pretty convincing. Mainly because he had created an until-now unbeaten benchmark to back those arguments up.
But reading his linked write-up, he describes this as "not merely an incremental improvement, but a genuine breakthrough". He does not admit he was wrong, but instead paints o3 as something fundamentally different from previous LLM-based AIs, which, for the purpose of assessing the significance of o3, amounts to the same thing!
As a vegan I agree with Marcus and Jeff's takes, but I also think that at least carnitarianism (not eating fish) is justifiable on pure utilitarian grounds. The 5 cent offset estimate is miles off (by a factor of 50-100) for fish and shrimp, and this is where your argument falls down.
I made a rough model that suggests a 100g cooked serving of farmed carp corresponds to ~1.1 years in a factory farm, and a serving of farmed shrimp to ~6 years. I modelled salmon and it came out much lower than this, but I expect it to grow when I factor in that salmon are carnivorous and farmed fish are used in their feed.
This is a lot of time, and it's more expensive to pay for offsets that cover a longer time period. We have two main EA-aligned options for aquaculture 'offsets', one is the Fish Welfare Initiative, which (iirc) improves the life of a single fish across its lifetime for a marginal dollar, and the other is the Shrimp Welfare Project, which improves the death (a process lasting 3-5 minutes) of 1000 shrimp per year for a marginal dollar (we don't know how good their corporate campaigns will be yet).
I'm really not sure how good it is for a carp to have a lower stocking density and higher water quality, which is FWI's intervention in India, and essentially the best case for FWI's effectiveness. If we assume it's a 30% reduction in lifetime pain we can offset a fish meal for roughly $3.33.
I don't think it's good to prevent 1 year of shrimp suffocation and then go off and cause shrimp to spend 100 years in farmed conditions (which are really bad, to be clear). Biting the bullet on that and assuming a stunner lasts 20 years and no discount rate, to offset a single shrimp meal you'd have to pay $4.60 (nearly 100 times more than the estimate you used).
Maybe you could offset using a different species (chicken, through corporate commitments). Vasco Grilo thinks a marginal dollar gets you 2 years of chicken life in factory farms averted. Naively I'd think that chicken lives are better than shrimp lives, but shrimp matter slightly less morally. This time you probably have to pay $3 to offset a shrimp meal using the easiest species to influence.
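To make the arithmetic above explicit, here is a minimal sketch (the years-per-meal and years-averted-per-dollar figures are this comment's own estimates; the two "assumed" parameters are guesses chosen to reproduce the quoted $3.33 and $3 figures, not numbers from the underlying models):

```python
# Rough reconstruction of the offset arithmetic in this comment.

years_per_meal = {"carp": 1.1, "shrimp": 6.0}  # factory-farm years per 100 g serving

# FWI route: ~$1 improves one fish's whole farmed life. Assume a 30% reduction
# in lifetime pain, and that one meal corresponds to roughly one fish-lifetime.
pain_reduction = 0.30
cost_per_carp_meal = 1 / pain_reduction
print(f"carp meal via FWI: ${cost_per_carp_meal:.2f}")  # -> $3.33

# Chicken route: ~2 chicken-years in factory farms averted per dollar (Vasco
# Grilo's estimate). Assume "chicken lives are better" and "shrimp matter
# slightly less morally" roughly cancel, i.e. a net exchange rate of ~1.
chicken_years_per_dollar = 2.0
shrimp_year_in_chicken_years = 1.0  # assumed net moral exchange rate
cost_per_shrimp_meal = (years_per_meal["shrimp"] * shrimp_year_in_chicken_years
                        / chicken_years_per_dollar)
print(f"shrimp meal via chicken campaigns: ${cost_per_shrimp_meal:.2f}")  # -> $3.00
```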
Additionally, the lead time on offsets is long (I would think at least five years from a donation to a corporate commitment being implemented). It's not good to have an offset that realises most of its value 20 years from now when, by then, there is a much higher chance of lab-grown meat being cheaper or animal welfare regulations being better.
I think that you should at least be carnitarian because this is incredibly easy and based on my modelling (second sheet) it's the vast majority (90-95%) of the (morally adjusted) time saved in factory farms associated with vegetarianism. I doubt that any person gets $4 of utility from eating a different kind of meat, and this just adds up over time.
As an omnivore who wants to eat lots of protein for fitness, I would love to agree with this and just keep on piling up chicken breasts on my plate. However, I think there are some factors ignored here. Most of them have already been addressed, but I'd like to add another that I did not find so far:
Not eating meat not only reduces demand for meat; it also increases demand for alternatives. This should, in my opinion, not be underestimated, as it also makes the diet change much easier.
For example: in Germany, we have a company called Rügenwalder Mühle. The origins of this company go back to a butcher shop in 1834, and consequently they always sold meat-based products. However, in 2014 they introduced vegetarian and vegan alternatives that were so great in terms of taste, quality and nutritional value that demand was incredibly high. By now, these products bring in more revenue for them than the meat products. Obviously, this company will now focus more and more on the alternatives, and they keep expanding their catalogue, often with very high protein content. This makes it much easier for a person like me to consider alternatives, and leads people to consume less meat even if they don't have any moral motivation to go vegan.
I doubt that any realistic amount of donations can top this. Sure, e.g. The Good Food Institute is basically trying to go in this direction, but in the end the demand needs to be there for it to work out long-term. Similar to voting in democracies, I think the "small effect" of our decisions can have quite an impact here that is hard to replace with donations.
I know you present this as a factor that hadn't been addressed yet, so it's probably not your main argument. But if you are using it as a main reason for going vegan, I feel like it misses the point. Maybe going vegan yourself makes it 20% easier for the next person to go vegan. That is still nowhere near the cost-effectiveness/effort-effectiveness of donating to animal welfare, since the one estimate I listed was $1000 to offset a lifetime of veganism.
I think @Richard Y Chappell🔸 is right. I'd add that lots of my non-EA peers care about hypocrisy (i.e., they would be unwilling to entertain arguments in favour of veganism or donating to animal welfare coming from a non-vegan).
I care a lot about spreading the cause of veganism (and effective altruism more generally), and I think that by eating vegan I hold a certain amount of moral legitimacy in the eyes of others that I don't want to give up because it might help me convince them about animal welfare or EA one day. (Being vegan also provides some reflective moral legitimacy or satisfaction to the irrational part of me that also cares about hypocrisy.)
My question for you is: why do you promote AW donations AND veganism? Do you think you could increase your EU by only advocating for AW donations? Do you care that others abide by deontological side-constraints?
Re #1 - the customers in OP's contemplation would have already committed the funds to be donated, and prospective wins would inure to the benefit of charities. So it isn't clear to me that the same typical harm applies (if you buy the premise that gamblers are net harmed by gambling). There wouldn't be the circumstance where the gambler feels they need to win it back, because they've already lost the money when they committed it to the DAF.
Re #2 - this could produce a good experience for customers: donating money to charities while playing games. And with how OP set it up, they know what they are losing (unlike with a typical casino, where there's that hope of winning it big).
Re #3 - for the reasons discussed above, the predatory and deceptive implications are less significant here. Unlike when someone takes money to a slot machine in a typical casino, when they put the money in the DAF they no longer have a chance of "getting it back".
Re #4 - yeah, there might be some bad PR. But if people liked this and substituted it for normal gambling, it probably would be less morally problematic for the reasons discussed above.
Re #5 - I'm not really sure that this business is as morally corrosive as you suggest... It's potentially disadvantaging the gambler's preferred charity to the casino's, but not by much, and not without the gambler's knowledge.
Re #6 - the gamblers could choose the charities that are the beneficiaries of their DAF. And I don't know that enjoying gambling means that you wouldn't like to see kids saved from malaria and such.
I think your criticisms would better apply to a straight Profit for Good casino (a normal casino with charities as shareholders). The concerns you bring up are some of the reasons I think a PFG casino, though an interesting idea, would not be something I'd look to launch as an early, strategic PFG (there are also big capital requirements).
OP's proposal is much more wholesome and actually addresses a lot more of the ethical concerns. I just think people may not be interested in gambling as much if there was not the prospect of winning money for themselves.
A useful test when moral theorizing about animals is to swap "animals" with "humans" and see if your answer changes substantially. In this example, if the answer changes, the relevant difference for you isn't about pure expected value consequentialism; it's about some salient difference between the rights or moral status of animals vs. humans. Vegans tend to give significant, even equivalent, moral status to some animals used for food. If you give near-equal moral status to animals, "offsetting meat eating by donating to animal welfare orgs" is similar to "donating to global health charities to offset hiring a hitman to target a group of humans".
There are a series of rebuttals, counter-rebuttals, etc. to this line of reasoning, which I won't get into here. But suffice it to say that in the animal welfare space, an animal welfarist carnivore is hesitantly trusted: it signals either a lack of commitment or discipline, a diet/health struggle, a discordant belief that animals deserve far fewer rights and less moral status than humans, or (much rarer) a fanatic consequentialist ideology that holds that offsetting human killing is morally coherent and acceptable. An earnest carnivore who cares a lot about animal welfare is incredibly rare.
Are people here against killing one to save two in a vacuum? I thought EA was very utilitarian. Intuitively, causing harm is repulsive, but ultimately our goal should be creating a better world.
To your "animal" to "human" swap, it's hard to give "would you kill/eat humans if you could offset" as an double standard since most self-proclaimed utilitarians are still intuitively repulsed to immoral behavior like causing harm to humans, cannibalism, etc. On the other hand, we are biologically programmed to not care when eating animal flesh, even if we deem animal suffering immoral. What this means is that I would be way to horrified to offset killing or eating a human even if I deem it moral. On the other hand, I can offset eating an animal because I don't intuitively care about the harm I caused. I am too disconnected, biologically preprogrammed, and cognitively dissonant. Therefore, offsetting animal suffering is not repulsive nor immoral to me.
I listed them in descending order of importance. I might be confused for one of those "hyper rationalist" types in many instances. I think rationalists undervalue the cognitive dissonance. In my experience, a lot of rationalists just don't value non-human animals. Even rationalists behave in a much more "vibes"-based way than they'd have you believe. It really is hard to hold in your head both "it's okay to eat animals" and "we can avert tremendous amounts of suffering to hundreds of animals per dollar and have a moral compulsion to do so".
I also wouldn't call what I do virtue signaling. I never tell people outright, and I live in a very conservative part of the world.
I don't think that virtue signaling by telling most people you donate 10 percent would work well with weak- or non-vegans. Most of my friends would consider me a hypocrite for doing that, and longer explanations wouldn't work for many.
Utilitarianism can be explained, but even after that explanation many would consider eating meat and offsetting hypocritical, even if it might be virtuous.
The point of the virtue signaling is the signaling, not the virtue, and the cleanest and easiest way to do that in many circles might be going vegan.
So if they ask you, "why are you vegan?", your honest answer would be "because I need you to accept me as a non-hypocrite"? I don't think vegans would give you any extra consideration if they knew this was your reasoning. Any other reason you give would be dishonest and misleading.
If transformative AI is defined by its societal impact rather than its technical capabilities (i.e. TAI as process not a technology), we already have what is needed. The real question isn't about waiting for GPT-X or capability Y - it's about imagining what happens when current AI is deployed 1000x more widely in just a few years. This presents EXTREMELY different problems to solve from a governance and advocacy perspective.
E.g. 1: Compute governance might no longer be a good intervention.
E.g. 2: "Pause" can't just be about pausing model development. It should also be about pausing implementation across use cases.
Does someone have a rough Fermi estimate of the tradeoffs here? On priors it seems like chickens bred to be bigger would overall cause less suffering because they replace more than one chicken that isn't bred to be as big, but I would expect those chickens to suffer more. I can imagine it going either way, but I guess my prior is that it was broadly good for each individual chicken to weigh more.
I am a bit worried the advocacy here is based more on a purity/environmentalist perspective where genetically modifying animals is bad, but I don't give that perspective much weight. It could also be great from a more cost-effectiveness/suffering-minimization oriented perspective, though, and I would be curious about people's takes.
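To make the shape of that Fermi concrete, here's a toy sketch (my illustration only; the numbers are made-up placeholders chosen to show the structure, not field estimates of welfare or production):

```python
# Toy Fermi sketch of the breeding tradeoff: does a slower-growing breed
# raise or lower total suffering per kg of chicken meat? All numbers are
# illustrative placeholders, not real welfare or production figures.

breeds = {
    # name: (slaughter weight in kg, days alive, assumed suffering per day)
    "fast-growing": (2.5, 42, 2.0),    # heavier, shorter life, worse welfare
    "slower-growing": (2.0, 56, 1.0),  # lighter, longer life, better welfare
}

for name, (weight_kg, days, suffering_per_day) in breeds.items():
    suffering_per_kg = days * suffering_per_day / weight_kg
    print(f"{name}: {suffering_per_kg:.1f} suffering-units per kg of meat")

# With these particular numbers the slower breed comes out better
# (28.0 vs 33.6 units/kg), but the sign flips whenever the per-day welfare
# gap is smaller than the days-and-weight (growth-rate) gap.
```

The whole question reduces to whether the welfare improvement per day outweighs the extra days (and fewer kg) per bird.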
(Molly was asked this question in a previous post two months ago, but as far as I can tell responded mostly with orthogonal claims that don't really engage with the core ethical question, so I am curious about other people's takes.)
(Obvious flag that I know very little about this specific industry)
Agreed that this seems like an important issue. Some quick takes:
Less immediately-obvious pluses/minuses to this sort of campaign:
- Plus #1: I assume that anything the animal industry doesn't like would increase costs for raising chickens. I'd correspondingly assume that we should want costs to be high (though it would be much better if it could be the government getting these funds, rather than just decreases in efficiency).
- Plus #2: It seems possible that companies have been selecting for growth instead of for well-being. Maybe, if they just can't select for growth, then selecting more for not-feeling-pain would be cheaper.
- Minus #1: Focusing on the term "Frankenchicken" could discourage other selective breeding or similar, which could be otherwise useful for very globally beneficial attributes, like pain mitigation.
- Ambiguous #1: This could help stop further development here. I assume that it's possible to later use selective breeding and similar to continue making larger / faster-growing chickens.
I think I naively feel like the pluses outweigh the negatives. Maybe I'd give this an 80% chance, without doing much investigation. That said, I'd also imagine there might well be more effective measures with a much clearer trade-off. The question of "is this a net-positive thing" is arguably not nearly as important as "are there fairly-clearly better things to do."
Lastly, for all of that, I do want to thank those helping animals like this. It's easy for me to argue things one way or the other, but I generally have serious respect for those working to change things, even if I'm not sure their methods are optimal. I think it's easy to seem combative on this, but we're all on a similar team here.
In terms of a "rough fermi analysis", as I work in the field, I think the numeric part of this is less important at this stage than just laying out a bunch of the key considerations and statistics. What I first want is a careful list of costs and benefits - that seems mature, fairly creative, and unbiased.
The Humane League (THL) filed a lawsuit against the UK Secretary of State for Environment, Food and Rural Affairs (the Defra Secretary) alleging that the Defra Secretary’s policy of permitting farmers to farm fast-growing chickens unlawfully violated paragraph 29 of Schedule 1 to the Welfare of Farmed Animals (England) Regulations 2007.
Paragraph 29 of Schedule 1 to the Welfare of Farmed Animals (England) Regulations 2007 states the following:
“Animals may only be kept for farming purposes if it can reasonably be expected, on the basis of their genotype or phenotype, that they can be kept without any detrimental effect on their health or welfare.” [1]
THL’s case was dismissed.
THL appealed the dismissal, and again THL’s case was dismissed (this most recent dismissal is what THL’s post is about).
In this most recent dismissal, the Court clarified the meaning of Paragraph 29 as follows:
“Paragraph 29 was not concerned with the environmental conditions in which animals were kept; it was concerned with the characteristics of the breed, and with detriment which could not be mitigated by improving the animal’s environment”
“Accordingly, paragraph 29 was a prohibition on the keeping of farmed animals whose genotype and phenotype meant that, regardless of the conditions in which they were kept, they could not be kept without detriment to their health or welfare” [2]
Essentially, the Court ruled that Paragraph 29 is only violated if an animal is bred such that it cannot avoid genetically caused health/welfare problems even under perfect environmental conditions (i.e. giving the animal the best possible food/diet, a perfect living environment, and world class medical treatment). This allows farmers to continue to farm animals so long as their genetic issues can theoretically be mitigated by improving conditions, even if those conditions are unlikely to be implemented in practice.
For example, let’s say there is a genetically selected breed of chicken that under normal factory farming conditions grows so fast that their legs snap under their weight by the time they are a month old. Under the Court’s ruling, this would not violate Paragraph 29, so long as this problem (and other genetically caused problems) could theoretically be mitigated with better environmental conditions (i.e. giving the chicken the best possible food/diet, a perfect living environment, and world class medical treatment).
Since the Court offered this interpretation of Paragraph 29, all trial courts in the UK (except for those in Northern Ireland and Scotland) are now required to use this interpretation of Paragraph 29 when making rulings.
From our understanding, this is not a favorable interpretation of Paragraph 29, as it makes it extremely difficult to prove that a violation of Paragraph 29 has occurred. Under this ruling, the only way to prove that a Paragraph 29 violation has occurred is by proving the health/welfare problems encountered by an animal are completely unavoidable, even with absolutely perfect environmental conditions/treatment.
Because of this ruling, anyone who ever tries to claim a Paragraph 29 violation has occurred will have to meet this extremely high standard of evidence that the Court has laid out.
Whilst I salute the effort and progress here, this post does seem rather full of spin, given that from what I can tell the court ruling was against the animal advocates. I'd rather see posts that present the facts more clearly.
You could have a great night in which you win hundreds or thousands of dollars, but even if you lose, you know that your losses are helping to dramatically better the world.
A cynic reads this as "you could have a great night in which you deprive a few hundred people of malaria nets, but at least in the long run they, and also random unrelated and typically obnoxious corporations, might stand to benefit from the gambling addiction this has instilled in you...". Possibly the first part of the proposition is slightly less icky if the house is simply taking a rake from competitors in a game of skill, but still.
Maybe I just know too many people broken by gambling.
I think the same amount of healthy and problem gambling would take place in aggregate regardless of whether there was a PFG casino among a set of casinos. But maybe some people would choose to migrate that activity toward the PFG casino, so that more good could happen (they're offering the same odds as competitors).
It comes down to whether you're OK with getting involved in something icky if the net harm you cause to gamblers is zero and you can produce significant good in doing so. For me, this doesn't really pose a problem.
No worries! Relatedly, I’m hoping to get out a post explaining (part of) the case for indeterminacy in the not-too-distant future, so to some extent I’ll punt to that for more details.
without having such an account it's sort of hard to assess how much of our caring for non-hedonist goods is grounded in themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds
Cool, that makes sense. I’m all for debunking explanations in principle. Extremely briefly, here's why I think there’s something qualitative that determinate credences fail to capture: If evidence, trustworthy intuitions, and appealing norms like the principle of indifference or Occam's razor don’t uniquely pin down an answer to “how likely should I consider outcome X?”, then I think I shouldn’t pin down an answer. Instead I should suspend judgment, and say that there aren’t enough constraints to give an answer that isn’t arbitrary. (This runs deeper than “wait to learn / think more”! Because I find suspending judgment appropriate even in cases where my uncertainty is resilient. Contra Greg Lewis here.)
Is it some analogue of betting odds? Or what?
No, I see credences as representing the degree to which I anticipate some (hypothetical) experiences, or the weight I put on a hypothesis / how reasonable I find it. IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
how you'd be inclined to think about indeterminate credences in an example like the digits of pi case
Ah, I should have made clear, I wouldn’t say indeterminate credences are necessary in the pi case, as written. Because I think it’s plausible I should apply the principle of indifference here: I know nothing about digits of pi beyond the first 10, except that pi is irrational and I know irrational numbers’ digits are wacky. I have no particular reason to think one digit is more or less likely than another, so, since there’s a unique way of splitting my credence impartially across the possibilities, I end up with 50:50.[1]
Instead, here’s a really contrived variant of the pi case I had too much fun writing, analogous to a situation of complex cluelessness, where I’d think indeterminate credences are appropriate:
Suppose that Sally historically has an uncanny ability to guess the parity of digits of (conjectured-to-be) normal numbers with an accuracy of 70%. Somehow, it’s verifiable that she’s not cheating. No one quite knows how her guesses are so good.
Her accuracy varies with how happy she is at the time, though. She has an accuracy of ~95% when really ecstatic, ~50% when neutral, and only ~10% when really sad. Also, she’s never guessed parities of Nth digits for any N < 1 million.
Now, Sally also hasn’t seen the digits of pi beyond the first 10, and she guesses the 20th is odd. I don’t know how happy she is at the time, though I know she’s both gotten a well-earned promotion at her job and had an important flight canceled.
What should my credence in “the 20th digit is odd” be? Seems like there are various considerations floating around:
The principle of indifference seems like a fair baseline.
But there’s also Sally’s really impressive average track record on N ≥ 1 million.
But also I know nothing about what mechanism drives her intuition, so it’s pretty unclear if her intuition generalizes to such a small N.
And even setting that aside, since I don’t know how happy she is, should I just go with the base rate of 70%? Or should I apply the principle of indifference to the “happiness level” parameter, and assume she’s neutral (so 50%)?
But presumably the evidence about the promotion and canceled flight tell me something about her mood. I guess slightly less than neutral overall (but I have little clue how she personally would react to these two things)? How much less?
I really don’t know a privileged way to weigh all this up, especially since I’ve never thought about how much to defer to a digit-guessing magician before. It seems pretty defensible to have a range of credences between, say, 40% and 75%. These endpoints themselves are kinda arbitrary, but at least seem considerably less arbitrary than pinning down to one number.
I could try modeling all this and computing explicit priors and likelihood ratios, but it seems extremely doubtful there's gonna be one privileged model and distribution over its parameters.
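As a minimal sketch of what that non-privileged modelling looks like (my illustration; the parameter values are hypothetical, chosen only to show how an interval rather than a point falls out):

```python
# Enumerate defensible-seeming modelling choices and report the spread of
# resulting credences, instead of committing to one point estimate.
from itertools import product

BASELINE = 0.5  # principle of indifference over odd/even

sally_accuracy = [0.5, 0.7, 0.95]  # neutral mood / base rate / ecstatic (hypothetical)
weight_on_sally = [0.0, 0.5, 1.0]  # how far her track record generalizes to small N

credences = [
    (1 - w) * BASELINE + w * acc
    for acc, w in product(sally_accuracy, weight_on_sally)
]
print(f"defensible credences span {min(credences):.2f} to {max(credences):.2f}")
# -> 0.50 to 0.95 with these choices; ruling some choices out (e.g. "ecstatic",
#    or full deference at small N) tightens the interval.
```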
(I think forming beliefs about the long-term future is analogous in many ways to the above.)
Not sure how much that answers your question? Basically I ask myself what constraints the considerations ought to put on my degree of belief, and try not to needlessly get more precise than those constraints warrant.
I don’t think this is clearly the appropriate response. I think it’s kinda defensible to say, “This doesn’t seem like qualitatively the same kind of epistemic situation as guessing a coin flip. I have at least a rough mechanistic picture of how coin flips work physically, which seems symmetric in a way that warrants a determinate prediction of 50:50. But with digits of pi, there’s not so much a ‘symmetry’ as an absence of a determinate asymmetry.” But I don’t think you need to die on that hill to think indeterminacy is warranted in realistic cause prio situations.
IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
Not sure what I overall think of the betting odds framing, but to speak in its defence: I think there's a sense in which decisions are more real than beliefs. (I originally wrote "decisions are real and beliefs are not", but they're both ultimately abstractions about what's going on with a bunch of matter organized into an agent-like system.) I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then "X has beliefs" is kind of a useful model for predicting their behaviour in decision situations. Or it could be used (as you imply) to analyse the rationality of their decisions.
I like your contrived variant of the pi case. But to play on it a bit:
Maybe when I first find out the information on Sally, I quickly eyeball and think that defensible credences probably lie within the range 30% to 90%
Then later when I sit down and think about it more carefully, I think that actually the defensible credences are more like in the range 40% to 75%
If I thought about it even longer, maybe I'd tighten my range a bit further again (45% to 55%? 50% to 70%? I don't know!)
In this picture, no realistic amount of thinking I'm going to do will bring it down to just a point estimate being defensible, and perhaps even the limit with infinite thinking time would have me maintain an interval of what seems defensible, so some fundamental indeterminacy may well remain.
But to my mind, this kind of behaviour where you can tighten your understanding by thinking more happens all of the time, and is a really important phenomenon to be able to track and think clearly about. So I really want language or formal frameworks which make it easy to track this kind of thing.
Moreover, after you grant this kind of behaviour [do you grant this kind of behaviour?], you may notice that from our epistemic position we can't even distinguish between:
Cases where we'd collapse our estimated range of defensible credences down to a very small range or even a single point with arbitrary thinking time, but where in practice progress is so slow that it's not viable
Cases where even in the limit with infinite thinking time, we would maintain a significant range of defensible credences
Because of this, from my perspective the question of whether credences are ultimately indeterminate is ... not so interesting? It's enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, but in many other cases it won't be.
Another idea would just be a normal casino that was owned by a charitable foundation or trust: a "Profit for Good" casino. People could get the exact same value proposition they get from other normal casinos, but by patronizing the Profit for Good casino, they would (in expectation) be helping save lives or otherwise better the world.
You could have a great night in which you win hundreds or thousands of dollars, but even if you lose, you know that your losses are helping to dramatically better the world.
Thanks for your proposal. I have actually thought a Profit for Good casino would be a good idea (high capital requirements, but I think it could provide a competitive edge on the Vegas Strip, for instance). I find your take on it pretty interesting.
I think a casino that did not limit the funds that could be gambled to charitable accounts of some sort would have a much larger market than one that did. There is a lot of friction in requiring the setup of charitable accounts, even for people who are interested in charitable giving and enjoy gambling, and you would be targeting a narrower subset of prospective clients who have these overlapping qualities. In the meantime, there are millions of people who consistently demonstrate demand for gambling at casinos.
I think a lot of people would feel fine about playing at the casino and winning, because they know that there are winners and losers in casinos, but the house (in the end) always wins. Winners and losers would both be participating in a process that would be helping dramatically better the world.
Could you explain the legal advantage of your proposal vis-a-vis a normal casino either owned by a charitable foundation or being a nonprofit itself (Humanitix, for instance is a ticketing company that is structured as a nonprofit itself)? Is it that people's chips would essentially be tax-deductible (because contributing to their DAF is tax-deductible)?
Short question: why do you say that one who adheres to determinism considers individuals to be genetic blank slates? (Disclaimer: I know very little about genetics.) It seems like if certain things will "inevitably" make us react in a certain way, there must be a genetic component to these rules.
Honestly, determinism doesn't really have anything to say about nature vs. nurture; that's just my personal opinion. Basically the only things that influence a person are their environment and genetics, both of which are out of a person's control.
I am a political theorist at Uppsala University, Sweden. Similarly to how I am interested in niche ethical ideas like EA, my research is focused on rather neglected (or weird) political ideas. In particular, I am interested in ‘geoism’ or ‘Georgism’, which combines the economic idea that unequal landownership is a root cause of many social problems with the normative idea that such landownership is unjustified since land was not created by anyone. Hence, geoists argue that taxes should be shifted to land and other naturally occurring resources. Earlier this year I defended my Ph.D. thesis on the relationship between geoism and anarchism. I recently received a postdoc grant to keep on researching geoist political theory in the coming years, being partly based in Oslo and Blacksburg, VA.
In terms of cause area, I really appreciate the wide diversity within EA. But perhaps due to my interest in political theory, I have an extra soft spot for questions concerning institutional and systemic change. This is presumably where my own comparative advantage is, but I also think that it matters massively in terms of ripple effects and global capacity growth. At some point, I want to write up an exploration of land reform as a potential high-impact cause area, and the use of community land value trusts as a way to implement these ideals. The final chapter of my thesis explores some related ideas.
I was first introduced to EA ideas in a university philosophy course in 2018. My New Year's resolution for 2022-23 was to try donating 10% of my income to effective causes for at least a year. I had previously found that smaller trials, like Veganuary, are much more doable than any permanent commitment. During this time I also thought a lot about whether to take any public pledge or just to keep on donating anonymously. I eventually became convinced that the potential social contagion effects provide a really important reason to be public with pledges. I wrote some of these considerations down in this essay, which was published at GWWC last month. I also used this occasion to sign the 🔸 10% Pledge.
Please feel free to reach out if you have any questions, and thank you all for the good that you do!
@Brad West🔸 , thanks for sharing your thoughts! This is what I thought of initially, but then "pivoted to" the complete non-profit framing, mainly because winning in the actual casino would mean that you are in effect taking money from charities. Probably even more important is the legal advantage of my proposal.
Thanks for all your hard work on the audio narrations and making EA Forum content accessible!
Question: Do you intend to license the audio under a Creative Commons license? Since EA Forum text since 2022 is licensed under CC-BY 4.0, all that's legally required is any attribution info provided by the source material and a link to the license; derived works don't have to be also licensed under CC-BY. However, to the extent that AI-generated narrations can be protected by copyright at all, it seems appropriate to use CC-BY, or maybe CC-BY-SA to enforce modifications being under the same terms.
My views have not changed directionally, but I do feel happier with them than I did at the time for a couple of reasons:
I thought and continue to think that the best argument is some version of 'clever arguments aside, from a layperson perspective what you're doing looks awfully similar to what caused the GFC, and the GFC was a huge disaster which society has not learned the lessons from'.
If you talk to people inside finance, they will usually reject the second claim and say a huge amount has changed since the GFC.
In particular, regulatory pressure shifted many 'interesting' risks from too-big-to-fail banks to hedge funds and firms like Jane Street (JS), where I used to work. JS arguably has much better incentives to keep its house in order than the big banks did, and it shouldn't have any call on public funds if it fails to do so.
But of course there was a reasonable question of whether JS and its ilk would actually succeed in doing this. And if they failed, would society somehow pick up the tab? As more time passes, the GFC looks more like the outlier event here.
On the positive side of the ledger, most of my work at JS was improving the pricing of equity ETFs. When I started there I felt like almost nobody I spoke to outside JS knew what an ETF was and when I explained it they couldn't really see the point. Now I feel like virtually all UK personal financial advice I see will mention ETFs as a solid option; a cheap and simple way to invest in a diversified fashion. I'm fine with having been a very small part of what made that happen.
With my more recent work it seems much too soon to say anything definitive about social impact, so I always try to acknowledge some chance that I'll feel bad when I look back on this.
ETFs do sound like a big win. I suppose someone could look at them as "finance solving a problem that finance created" (if the "problem" is e.g. expensive mutual funds). But even the mutual funds may be better than the "state of nature" (people buying individual stocks based on personal preference?). And expensive funds being outpaced by cheaper, better products sounds like finance working the way any competitive market should.
Thanks for the suggestion! Currently, GiveCalc handles the charitable deduction value whether you donate cash or appreciated assets—you'd enter the fair market value of the assets as your donation amount. (One limitation is that we assume all donations are cash, which can be deducted up to 60% of AGI, while appreciated assets are limited to 30% of AGI.)
We could add functionality to compare scenarios, like donating an appreciated asset vs selling it and donating the after-tax proceeds. I've opened an issue to explore this: https://github.com/PolicyEngine/givecalc/issues/41
Could you help us understand your use case? When considering donating appreciated assets, would you want to:
See the tax implications of donating at current market value, accounting for the 30% AGI limit?
Compare with the scenario of selling the asset and paying capital gains tax?
Something else?
Your thoughts on which calculations would be most helpful would be great to hear.
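For concreteness, here's a minimal sketch of the sell-vs-donate comparison above. Everything in it is an illustrative assumption (flat federal-only rates, no deduction carryforward, made-up numbers), not GiveCalc's actual logic:

```python
# Minimal sketch: donate an appreciated asset directly vs. sell it and
# donate the after-tax cash. All rates/limits here are simplified
# assumptions (federal only, flat rates, no carryforward, no state tax).

def donate_asset(fmv, agi, marginal_rate, agi_limit=0.30):
    """Deduct fair market value, capped at 30% of AGI; the embedded
    capital gain is never realized, so it is never taxed."""
    deduction = min(fmv, agi_limit * agi)
    return deduction * marginal_rate  # tax saved this year

def sell_then_donate(fmv, basis, agi, marginal_rate, cap_gains_rate,
                     agi_limit=0.60):
    """Sell, pay capital gains tax, donate the after-tax cash (60% AGI cap)."""
    gains_tax = (fmv - basis) * cap_gains_rate
    cash_donated = fmv - gains_tax
    deduction = min(cash_donated, agi_limit * agi)
    return deduction * marginal_rate - gains_tax  # net tax effect

agi, fmv, basis = 200_000, 50_000, 10_000
print(donate_asset(fmv, agi, marginal_rate=0.32))     # 16000.0; charity gets $50k
print(sell_then_donate(fmv, basis, agi, 0.32, 0.15))  # 8080.0; charity gets $44k
```

Note the two scenarios also differ in what the charity receives, not just in your tax bill.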
Hypothetically yes. The actual counterfactual would not be selling assets, but it's informative to know how much capital gains tax has hypothetically been avoided.
> only deduct 30% of AGI, rather than 60% if cash
Can 30% of AGI be deducted for donated assets and the rest of the cash deduction limit deducted for donated cash? Or is it either/or?
most helpful
Interested in calculating the highest tax savings (assuming ownership of appreciated assets with unrealized capital gains). As mentioned elsewhere, it's worth researching that point and bunching donations towards it.
Another idea would just be a normal casino that was owned by a charitable foundation or trust - a "Profit for Good" casino. People could get the exact same value proposition they get from other normal casinos, but by patronizing the Profit for Good casino, they would (in expectation) be helping save lives or otherwise better the world.
You could have a great night in which you win hundreds or thousands of dollars, but even if you lose, you know that your losses are helping to dramatically better the world.
What a wonderful piece! I've always wondered why some people choose not to share their donations. Being perceived as a "bragger" in exchange for potentially influencing people around you to donate, always sounded like a good trade-off. Your points clarified a bunch of things here. Thank you!
Well done, it's super cool to see everything you guys have achieved this year. One thing I was surprised by is that EAGxs cost almost a third as much as EAGs while having a slightly higher likelihood to recommend. I assume part of this is because EAGs are typically held in more expensive areas, but I'd be surprised if that explained all of it. Are there any other factors that explain the cost difference?
Personal reasons why I wished I delayed donations: I started donating 10% of my income about 6 years back when I was making Software Engineer money. Then I delayed my donations when I moved into a direct work path, intending to make up the difference later in life. I don't have any regrets about 'donating right away' back then. But if I could do it all over again with the benefit of hindsight, I would have delayed most of my earlier donations too.
First, I've been surprised by 'necessary expenses'. Most of my health care needs have been in therapy and dental care, neither of which is covered much by insurance. On top of that, friend visits cost more over time as people scatter to different cities, meaning I'm paying a lot more for travel. And family obligations always manage to catch me off-guard.
Second, career transitions are expensive. I was counting on my programming skills and volunteer organizing to mean a lot more in public policy and research. But there are few substitutes for working inside your target field. And while everyone complains about Master's degrees, they're still heavily rewarded on the job market, so I ultimately caved in and paid for one.
Finally, I'm getting a lot more from 'money right away' these days. Thanks to some mental health improvements, fancy things are less stressful and more enjoyable than before. The extra vacation, concert, or restaurant is now worth it, and so my optimal spending level has increased. That's not just for enjoyment. My productivity also improves after that extra splurging, whereas before there wasn't much difference in the relaxation benefit I got from a concert and a series of YouTube comedy skits.
If I had to find a lesson here, it's that I thought too much about my altruistic desires changing and not enough about everything else changing. I opted to 'donate right away' to protect against future me rebelling against effective charity, worrying about value drift and stories of lost motivation. In practice, my preference for giving 10% has been incredibly robust. My other preferences have been a lot more dynamic.
Whoop - great work! Anec-data: I've been going to these conferences for years now; to my mind the quality/usefulness of them has in no way diminished, even as you've been able to trim costs. Well done. They are sooo value-adding in terms of motivation, connections, inspiration, etc; you are providing a massive public good for the EA community. Thanks!
I want to point out that besides the informational value, I find it personally encouraging and heartwarming to read the part where you expressed your appreciation to donors and advocates in the space, and your vision. I think I might learn from you and try doing more of this in some of my writings. Thank you for doing that.
Thanks Fai! Yes I'm trying to express more often the deep appreciation that I feel for the incredible donors and advocates in our space. I'm glad to hear you find it encouraging :)
I appreciated a bunch of things about this comment. Sorry, I'll just reply (for now) to a couple of parts.
The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I'm not actually arguing that it's confused to intrinsically care about the non-hedonist stuff, but that it would be really great to have an account of how the non-hedonist stuff is or isn't helpful on hedonist grounds, both because this may just be helpful to input into our thinking to whatever extent we endorse hedonist goods (even if we may also care about other things), and because without having such an account it's sort of hard to assess how much of our caring for non-hedonist goods is grounded in themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.
I think the piece I feel most inclined to double-click on is the digits of pi piece. Reading your reply, I realise I'm not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than "where do the numbers come from?"). Is it some analogue of betting odds? Or what?
And then, you said:
I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
To some extent, maybe fighting the hypothetical is a general move I'm inclined to make? This gets at "what does your range of indeterminate credences represent?". I think if you could step me through how you'd be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.
(Not sure this is super important, but note that I don't need to compute a determinate credence here -- it may be enough have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)
No worries! Relatedly, I’m hoping to get out a post explaining (part of) the case for indeterminacy in the not-too-distant future, so to some extent I’ll punt to that for more details.
without having such an account it's sort of hard to assess how much of our caring for non-hedonist goods is grounded in themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds
Cool, that makes sense. I’m all for debunking explanations in principle. Extremely briefly, here's why I think there’s something qualitative that determinate credences fail to capture: If evidence, trustworthy intuitions, and appealing norms like the principle of indifference or Occam's razor don’t uniquely pin down an answer to “how likely should I consider outcome X?”, then I think I shouldn’t pin down an answer. Instead I should suspend judgment, and say that there aren’t enough constraints to give an answer that isn’t arbitrary. (This runs deeper than “wait to learn / think more”! Because I find suspending judgment appropriate even in cases where my uncertainty is resilient. Contra Greg Lewis here.)
Is it some analogue of betting odds? Or what?
No, I see credences as representing the degree to which I anticipate some (hypothetical) experiences, or the weight I put on a hypothesis / how reasonable I find it. IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
how you'd be inclined to think about indeterminate credences in an example like the digits of pi case
Ah, I should have made clear, I wouldn’t say indeterminate credences are necessary in the pi case, as written. Because I think it’s plausible I should apply the principle of indifference here: I know nothing about digits of pi beyond the first 10, except that pi is irrational and I know irrational numbers’ digits are wacky. I have no particular reason to think one digit is more or less likely than another, so, since there’s a unique way of splitting my credence impartially across the possibilities, I end up with 50:50.[1]
Instead, here’s a really contrived variant of the pi case I had too much fun writing, analogous to a situation of complex cluelessness, where I’d think indeterminate credences are appropriate:
Suppose that Sally historically has an uncanny ability to guess the parity of digits of (conjectured-to-be) normal numbers with an accuracy of 70%. Somehow, it’s verifiable that she’s not cheating. No one quite knows how her guesses are so good.
Her accuracy varies with how happy she is at the time, though. She has an accuracy of ~95% when really ecstatic, ~50% when neutral, and only ~10% when really sad. Also, she’s never guessed parities of Nth digits for any N < 1 million.
Now, Sally also hasn’t seen the digits of pi beyond the first 10, and she guesses the 20th is odd. I don’t know how happy she is at the time, though I know she’s both gotten a well-earned promotion at her job and had an important flight canceled.
What should my credence in “the 20th digit is odd” be? Seems like there are various considerations floating around:
The principle of indifference seems like a fair baseline.
But there’s also Sally’s really impressive average track record on N ≥ 1 million.
But also I know nothing about what mechanism drives her intuition, so it’s pretty unclear if her intuition generalizes to such a small N.
And even setting that aside, since I don’t know how happy she is, should I just go with the base rate of 70%? Or should I apply the principle of indifference to the “happiness level” parameter, and assume she’s neutral (so 50%)?
But presumably the evidence about the promotion and canceled flight tell me something about her mood. I guess slightly less than neutral overall (but I have little clue how she personally would react to these two things)? How much less?
I really don’t know a privileged way to weigh all this up, especially since I’ve never thought about how much to defer to a digit-guessing magician before. It seems pretty defensible to have a range of credences between, say, 40% and 75%. These endpoints themselves are kinda arbitrary, but at least seem considerably less arbitrary than pinning down to one number.
I could try modeling all this and computing explicit priors and likelihood ratios, but it seems extremely doubtful there's gonna be one privileged model and distribution over its parameters.
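(To make that concrete, here's a toy sketch - every weight in it is a made-up assumption - enumerating a few of the defensible modeling choices and the credences they produce:)

```python
# Toy illustration: defensible-but-different ways to combine the above
# considerations into P(20th digit is odd | Sally guessed odd).
# Every number here is a made-up modeling assumption.

def credence(p_sally_correct, trust):
    """Blend Sally's assumed accuracy with the 50:50 indifference baseline.
    trust = how much her track record (N >= 1e6) generalizes to N = 20."""
    return trust * p_sally_correct + (1 - trust) * 0.5

models = {
    "indifference only (no generalization)": credence(0.70, 0.0),
    "overall base rate, full trust":         credence(0.70, 1.0),
    "assume neutral mood, full trust":       credence(0.50, 1.0),
    "slightly sub-neutral mood, half trust": credence(0.45, 0.5),
    "overall base rate, half trust":         credence(0.70, 0.5),
}
for name, p in models.items():
    print(f"{name}: {p:.3f}")
# Outputs span roughly 0.475 to 0.70, and none of these models is
# privileged - which is the sense in which only a range seems warranted.
```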
(I think forming beliefs about the long-term future is analogous in many ways to the above.)
Not sure how much that answers your question? Basically I ask myself what constraints the considerations ought to put on my degree of belief, and try not to needlessly get more precise than those constraints warrant.
I don’t think this is clearly the appropriate response. I think it’s kinda defensible to say, “This doesn’t seem like qualitatively the same kind of epistemic situation as guessing a coin flip. I have at least a rough mechanistic picture of how coin flips work physically, which seems symmetric in a way that warrants a determinate prediction of 50:50. But with digits of pi, there’s not so much a ‘symmetry’ as an absence of a determinate asymmetry.” But I don’t think you need to die on that hill to think indeterminacy is warranted in realistic cause prio situations.
I don't work in commercial aviation any more, but can offer a few pointers:
Eurocontrol are exactly the people you want taking this seriously - they regulate European airspace. So whilst I think it probably is neglected relative to other climate proposals in terms of funding vs estimated impact, it may not be neglected by the right people.
For related reasons, I think it's way more tractable than most interventions: changing altitude under certain conditions is a lot easier than dissuading people from flying or consuming. And there is an established track record of regulators enforcing environmental rules and costs like noise restrictions and NOx emissions charges (along with sticks governments haven't beaten them with yet, like carbon-taxing jet fuel).
On the other hand, it does seem true that scientific consensus hasn't yet resolved the important question of when and where to divert (see the variability factors in your infographic), that diversion usually does result in increased fuel burn (and some contrails are even cooling!), and that flight routing is a complex multidimensional problem.
Airspace controllers will need to be involved, because airlines are unlikely to do anything voluntarily that impacts their profit margins (which are on average small anyway), regardless of how settled the science is. In general, being "greener" through lower fuel consumption actually saves them money; this is an obvious exception.
An indirect "stick" approach like levying fines or additional charges on airlines causing contrails whilst passing through particular airspace sounds neat, but whilst theoretically contrails observed from the ground or orbit can be matched to ADS-B readings of aircraft that recently passed through that space, systematically validating that in a legally-valid way in congested airspace seems tricky...
I can't see it being practical to achieve via consumer pressure, and wider public awareness campaigns run the risk of getting mixed up with "chemtrails" conspiracy theories.
If you want a possible exception to the airlines' lack of sympathy, the UK startup airline Zeroavia is owned by eco-activist billionaire Dale Vince. They claim their hydrogen-powered fleet will capture water emissions to release at lower altitude [1], for the stated purpose of avoiding contrails. Zeroavia are a very atypical airline, currently with zero flights, and I'm not sure how much aviation industry executives actually respect Dale, but if you wanted to reach out to an airline that might actually be sympathetic and see the PR benefits of shouting about contrails, they'd be a starting point.
So I think there's definitely something to be worked on here, but it's going to take industry experts more than grassroots campaigning. I think there are probably some really interesting algorithm development projects there for people with the right skillsets too...
(For anyone interested in space, an analogous situation is the aluminium oxide deposited in the mesosphere by deorbiting spacecraft. This used to be negligible. It isn't now that constellations of 10s of 1000s of satellites with short design lives in LEO are a thing. The climate impact is uncertain and not necessarily large but probably negative; the impact on ozone depletion could be much more concerning. Changing mindsets on that one will be harder)
Thanks for the post, Mathias! Do you know whether the increase in welfare of the infected wild animals would be larger than the decrease in welfare of the eradicated screwworms assuming these have positive lives?
I haven't looked into this at all, but the effect of eradication efforts (whether through gene drive or the traditional sterile insect technique) is that screwworm stop reproducing and cease to exist, not that they die anguishing deaths.
I don't know man, virtue signaling to non-vegans and vegans that you care about animals can be done simply by telling people you donate 10% of your money to animal welfare. It doesn't take much more than that. Utilitarianism can be explained.
As for lowering cognitive dissonance, this is an extremely person-to-person thing. I would never prescribe veganism to an EA with this reasoning. And if this were a common reason, why haven't I also been told to get a pet/animal companion to increase how much moral worth I give animals?
And reducing daily suffering that you cause can also be done better with an extra 10 cents or so. Wouldn't this be more in accordance with your values? Surely 10 cents is also cheaper than veganism.
I don't think that virtue signaling by telling most people you donate 10 percent would work well with non-vegans. Most of my friends would consider me a hypocrite for doing that, and longer explanations wouldn't work for many.
Utilitarianism can be explained, but even after that explanation many would consider eating meat and offsetting hypocritical, even if it might be virtuous.
The point of the virtue signaling is the signaling, not the virtue and the cleanest and easiest way to do that in many circles might be going vegan.
A useful test when moral theorizing about animals is to swap "animals" with "humans" and see if your answer changes substantially. In this example, if the answer changes, the relevant difference for you isn't about pure expected value consequentialism; it's about some salient difference between the rights or moral status of animals vs. humans. Vegans tend to give significant, even equivalent, moral status to some animals used for food. If you give near-equal moral status to animals, "offsetting meat eating by donating to animal welfare orgs" is similar to "donating to global health charities to offset hiring a hitman to target a group of humans". There are a series of rebuttals, counter-rebuttals, etc. to this line of reasoning, and I'm not going to get into all of them. But suffice to say that in the animal welfare space, an animal welfarist carnivore is hesitantly trusted - it signals either a lack of commitment or discipline, a diet/health struggle, a discordant belief that animals deserve far fewer rights and less moral status than humans, or (much rarer) a fanatic consequentialist ideology that thinks offsetting human killing is morally coherent and acceptable. An earnest carnivore who cares a lot about animal welfare is incredibly rare.
This comment is extremely good. I wish I could incorporate some of it into my comment since it hits the cognitive dissonance aspect far better than I did. It's near impossible to give significant moral weight to animals and still think it is okay to eat them.
I am seeing here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI. So this does not seem like a neglected avenue.
Yea, I have no idea if they actually need money, but if they still want to hire more people for the AI team, wouldn't it be better to give the money to RAND to hire those policymakers rather than to, say, Americans for Responsible Innovation - which Open Phil currently recommends, but which is much less prestigious, and I'm not sure if they are working side by side with legislators. The fact that Open Phil gave grants but doesn't currently recommend RAND for individual donors makes me think you are right that they don't need money at the moment, but it would be nice to be sure.
This seems likely to be incorrect to me, at least sometimes. In particular I disagree with the suggestion that the improvement on the margin is likely to be only on the order of 5%.
Let's take someone who moves from donating to global health causes to donating to help animals. It's very plausible that they may think the difference in effectiveness there is by a factor of 10, or even more.
They may also think that non-EA dollars are more easily persuaded to donate to global health initiatives than animal welfare ones. In this case, if a non-EA dollar is 80% likely to go to global health, and 20% to animal welfare, then by their own lights the change in use of their dollar was more than 3x as important as the introduction of the extra non-EA dollar.
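To spell out the arithmetic: by their lights, moving their own dollar from global health (value 1) to animal welfare (value 10) gains 9 units, while an extra non-EA dollar is worth 0.8 × 1 + 0.2 × 10 = 2.8 units in expectation; 9 / 2.8 ≈ 3.2.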
How sure are you that you are right and the other EA (who has also likely thought carefully about their donations) is wrong, though? I'm much more confident that I will increase the impact of someone's donation / spending if they are not in EA, rather than being too convinced of my own opinion and causing harm (through negative side effects, opportunity costs, or lowering the value of their donation).
Thanks Luke! It makes sense what you mention. It is true that it would become significantly more messy to track, even when the spirit of the 10% pledge would suggest accounting for it. Just a random idea: perhaps you could offer the option of “pausing” the pledge temporarily so it does not become a blocker for people aiming to do direct work that they deem to be particularly impactful.
Edit: upon reflection I think this idea may not be that useful. Since the 10% pledge is for the entire career, not each year, that flexibility is already incorporated. And a pause could produce some attrition.
It's important to note that few people will share their negative experiences with the Community Health Team because the CHT blacklists people from funding, EAG attendance, job opportunities, etc.
Also, if they cause people to leave the community, you're unlikely to hear about it because they've left the community.
This leads to a large information asymmetry.
I know many people whose lives and impact have been deeply damaged by the CHT, but they won't share their experiences because they are afraid of retaliation or have given up on the EA community because of them.
I'm definitely sympathetic to this point, yep. I think it would be very difficult to write a post of this nature if you felt that your participation in EA was being wrongly affected by CH.
At the same time, I think both the negative and positive experiences are difficult to talk about, due to their sensitive nature. I felt comfortable writing this because the incident is now four years old and I'm lucky to be in an incredibly supportive environment; many who have had positive experiences will not want to write about them. Thus, I am not confident there is a "large information asymmetry" in either direction; there are deterrents to information sharing on both sides.
I think the unfortunate reality is: Community Health is not infallible, and I would be very keen to hear about mistakes they've made or genuine concerns, as would the team, I'm certain. I'm also acutely aware that a lot of people who exhibit poor behaviour, and are then prevented from taking certain actions within the community, will claim to have been slighted. People who cross clear boundaries and then face consequences do not often say, "this seems right and fair to me, thank you for taking these measures against me to protect others." This is certainly not to say that no one who claims to have been blacklisted or slighted can be correct. It is to say that I am not sure how to update on claims that CH has damaged people's lives without more information.
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
It was a tough choice this year, but I think this deep, deep dive into the different cost-effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏
Honourable Mentions:
Towards more cooperative AI safety strategies by @richard_ngo: This was a post that I read at exactly the right time for me, as it came at a point when I was also highly concerned that the AI Safety field was having a "legitimacy problem".[1] As such, I think Richard's call to action to focus on legitimacy and competence is well made, and I would urge those working explicitly in the field to read it (as well as the comments and discussion on the LessWrong version), and perhaps consider my quick take on the 'vibe shift' in Silicon Valley as a chaser.
On Owning Our EA Affiliation by @Alix Pham: One of the most wholesome EA posts this year on the Forum? The post is a bit bittersweet to me now, as I was moved by it at the time, but I now affiliate and identify less with EA than I have for a long time. The vibes around EA have not been great this year, and while many people are explicitly or implicitly abandoning the movement, Alix actually took the radical approach of doing the opposite. She's careful to try to draw a distinction between affiliation and identity, and really engages in the comments, leading to very good discussion.
Policy advocacy for eradicating screwworm looks remarkably cost-effective by @MathiasKB🔸: EA Megaprojects are BACK, baby! More seriously, this post had the most 'blow my mind' effect on me this year. Who knew that the US Gov already engages in a campaign of strategic sterile-fly bombing, dropping millions of them on Central America every week? I feel like Mathias did great work finding a signal here, and I'm sure other organisations (maybe an AIM-incubated kind of one) are well placed to pick up the baton.
Forum Posters of the Year:
@Vasco Grilo🔸 - I presume that the Forum has a bat-signal of sorts that goes up whenever a long discussion happens without anyone trying to do an EV calculation. And in such dire times, Vasco appears, always with amazing sincerity and thoroughness. Probably the Forum's current poster child of 'calculate all the things' EA. I think this year he's been an awesome presence on the Forum, and long may it continue.
@Matthew_Barnett - Matthew is somewhat of an enigma to me ideologically; there have been many cases where I've read a position of his and gone "no, that can't be right". Nevertheless, I think the consistently high-quality nature of his contributions on the Forum, often presenting an unorthodox view compared to the rest of EA, is worth celebrating regardless of whether I personally agree. Furthermore, one of my major updates this year has been towards viewing the Alignment Problem as one of political participation and incentives, and this can probably be traced back significantly to his posts this year.
Non-Forum Poasters of the Year:
Matt Reardon (mjreard on X) - X is not a nice place to be an Effective Altruist at the moment. It seems to be attacked from all directions, which means it's not fun at all to push back on people and defend the EA point of view. Yet Matt has just consistently pushed back on some of the most egregious cases of this,[2] and has also had good discussions on EA Twitter too.
Jacques Thibodeau (JacquesThibs on X) - I think Jacques is great. He does interesting, cool work on Alignment, and you should consider working with him if you're also in that space. One of the most positive things Jacques does on X is build bridges across the wider 'AGI Twitter', including with many who are sceptical of or even hostile to AI Safety work, like teortaxesTex or jd_pressman. I think this is to his great credit, and I've never (or rarely) seen him get that angry on the platform, which might even deserve another award!
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors were this year!
I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy
This is great JWS, thanks for writing it! After Forum Wrapped is out in Jan, we should have a list of underrated posts (unsure on exact wording); we'll see how it compares.
Highly agree with this! In fact, I hope that if a significant number of shelters is produced, the primary effect would be to help make the case for stopping development of dangerous mirror bio research. It just happens that my expertise and experience lend themselves more naturally to this rather grim work. I would be very happy to work on something more uplifting next - I am very open to suggestions for the next problem I can help tackle (having been a small part of bringing down the cost of wind energy dramatically).
I should probably emphasize more that the ideal outcome here is of course that we don't pursue dangerous mirror bio research at all. Failing that, the "next-in-line" ideal outcome would be for gov'ts to create such shelters and distribute them the way Nordic countries have distributed nuclear shelters - not just for "the elites".
This is super helpful; I have tried to reflect this better in an updated title. I am fairly certain the shelters can be built for this material cost (not including labor, as in a pinch I think these could be made by a wide range of people, perhaps even by the inhabitants themselves). But you are right that cost-effectiveness is much harder than simply summing up material costs - one would have to cost the total solution and also have some grasp of the reduction in x-risk, which is far beyond the scope of what I have done. I simply found a physical structure that seems quite robust.
I believe the consequences of eating vegan are more plausibly characterized as falling under the domain of procreation ethics, rather than that of the ethics of killing. When you eat meat, the only difference you can reasonably expect to make is affecting how many farmed animals are born in the near future, since the fate of the ones that already exist in the farms is sealed (i.e. they'll be killed no matter what) and can't be affected by our dietary choices.
So I think, rather than factory farm offsets being similar to murdering someone and then saving others, they're akin to causing someone's birth in miserable conditions (who later dies prematurely), and then 'offsetting' that harm by preventing the suffering of hundreds of other human beings.
I submit that offsetting still feels morally questionable in this scenario, but at least my intuitions are less clear here.
I didn’t say they fell under the ethics of killing, I was using killing as an example of a generic rights violation under a plausible patient-centered deontological theory to illustrate the difference between “a rights violation happening to one person and help coming for a separate person as an offset” and “one’s harm being directly offset.”
(I agree that it seems a bit more unclear if potential people can have rights, even if they can have moral consideration, and in particular rights to not be brought into existence, but I think it’s very plausible.)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
There are large downsides from this intervention - it could be seen by another nation state as preparation for biowarfare and thus contribute to a bioweapons arms race.
The answer to this is probably that in most cases, the welfare realized by that counterfactual life and/or their children might not be huge, but is on average still positive. This can be exemplified by the commonly used life satisfaction scale, which ranges from 0-10 and aims to capture all possible wellbeing states. It follows that the lowest scores represent states worse than not existing at all. There have been several attempts to determine this "neutral point", above which life is better lived than not.
A survey conducted in Ghana and Kenya estimated the neutral point at 0.56 (IDinsight, 2019). Results from Brazil, China and the US suggest a neutral point of 25 on a 100-point scale (Referenced by HLI, 2022). Another in the UK suggested 2/10 (referenced in Krekel & Frijters, 2021). While people disagree where exactly to locate this neutral point, there is some agreement that it could be between 1.5-2.
When looking at country-level average happiness levels (happiness is closely related to life satisfaction), the only country falling below 2 is Afghanistan. So while there might be a case for that argument in Afghanistan, in most other countries there will be no problem in expectation. That said, there is of course variance in life satisfaction within countries, so there is still the possibility of edge cases where the intervention benefits a person whose life satisfaction is far below the country average and falls below the neutral point. Some within-region life satisfaction averages are provided here.
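(A quick normalization of those three estimates onto a common 0-10 scale - a rough sketch that assumes the scales are linearly comparable, which is itself contestable:)

```python
# Put the cited neutral-point estimates on a common 0-10 scale.
estimates = {
    "Ghana/Kenya (IDinsight, 2019)":   (0.56, 10),  # (value, scale max)
    "Brazil/China/US (via HLI, 2022)": (25, 100),
    "UK (Krekel & Frijters, 2021)":    (2, 10),
}
for source, (value, scale_max) in estimates.items():
    print(f"{source}: {value / scale_max * 10:.2f} / 10")
# -> 0.56, 2.50, and 2.00: the latter two sit around the 1.5-2 consensus
#    range, while the Ghana/Kenya estimate is notably lower.
```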
I personally think people overrate people's stated reasons for extreme behaviour and underrate the material circumstances of their life. In particular, loneliness.
As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.
(Otoh, if situations make one more susceptible to adopting some principles, is either really the "true cause"? Like plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn't seem coherent to say that means the principles are overstated as an explanation for my behavior.
I dunno why loneliness would be different; my first thought is that loneliness means one has less of a community to appeal to, so there are fewer conformity pressures preventing such a person from developing divergent or (relatively) extreme views. The fact that they can find some community around said views, and then face conformity pressures towards them, is also a factor of course; and that actually would be an 'unprincipled' reason to adopt a view, so I guess for that case it does make sense to say "it's more situation(-activated biases) than genuine (less-biasedly arrived at) principles".
An implication in my view is that this isn't particularly about extreme behavior; less biased behavior is just rare across the spectrum. (Also, if we narrow in on people who are trying to be less biased, their behavior might be extreme; e.g., Rationalists trying to prevent existential risk from AI seems deeply weird from the outside))
Haven't seen anyone mention RAND as a possible best charity for AI stuff and I guess I'd like to throw their hat in the ring or at least invite people to tell me why I'm wrong. My core claims are approximately:
Influencing the US (federal) government is probably one of the most scalable cost-effective routes for AI safety.
Think tanks are one of the most cost-effective ways to influence the US government.
The prestige of the think tank matters for getting into the room/influencing change.
RAND is among the most prestigious think tanks doing AI safety work.
It's also probably the most value-aligned, given Jason Matheny is in charge.
You can earmark donations to the catastrophic risks/emerging risks departments.
I'll add I have no idea if they need/have asked for marginal funding.
I feel confusion about "where does the range come from? what's it supposed to represent?"
Honestly this echoes some of my unease about precise credences in the first place!
Indeed. :) If “where do these numbers come from?” is your objection, this is a problem for determinate credences too. We could get into the positive motivations for having indeterminate credences, if you’d like, but I’m confused as to why your questions are an indictment of indeterminacy in particular.
Some less pithy answers to your question:
They might come from the same sort of process people go through when generating determinate credences — i.e. thinking through various considerations and trying to quantify them. But, at the step where you find yourself thinking, “Hm, it could be 0.2, but it could also be 0.3 I guess, idk…”, you don’t force yourself to pick just one number.
More formally, interval-valued credences fall out of Bradley’s (2017, sec 11.5.2) representation theorem. Even if your beliefs are just comparative judgments like “is A more/less/equally/[none-of-the-above] likely than B?” — which are realistic for bounded agents like us — if they satisfy all the usual axioms of probabilism except for completeness,[1] they have the structure of a set of probability distributions.
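Roughly - and this is my informal gloss, not Bradley's exact statement - the shape of the result is: if your comparative judgments satisfy the usual coherence axioms minus completeness, then there is a non-empty set $\mathcal{P}$ of probability functions such that

$$A \succeq B \iff p(A) \ge p(B) \ \text{ for all } p \in \mathcal{P},$$

so your "credence" in an event $A$ is the set $\{p(A) : p \in \mathcal{P}\}$, typically an interval.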
I don't see probabilities as magic absolutes, rather than a tool
I’m confused about this “tool” framing, because it seems that in order to evaluate some numerical representation of your epistemic state as “helpful,” you still need to make reference to your beliefs per se. There’s no belief-independent stance from which you can evaluate beliefs as useful (see this post).[2]
The epistemic question here is whether your beliefs per se should have the structure of (in)determinacy, e.g., do you think you should always be able to say “intervention XYZ is net-good, net-bad, or net-neutral for the long-term future”. That’s what I’m talking about when talking about “rational obligation” to have (in)determinate credences in some situation. It's independent of the kind of mere practical limitations on the precision of numbers in our heads you’re talking about.
Analogy: Your view here is like that of a hedonist saying, "Oh yeah, if I tried always directly maximizing my own pleasure, I'd feel worse. So pursuing non-pleasure things is sometimes helpful for bounded agents, by a hedonist axiology. But sometimes it actually is better to just maximize pleasure." Whereas I'm the non-hedonist saying, "Okay but I'm endorsing the non-pleasure stuff as intrinsically valuable, and I'm not sure you've explained why intrinsically valuing non-pleasure stuff is confused." (The hedonism thing is just illustrative, to be clear. I don't think epistemology is totally analogous to axiology.)
for the normal vNM kind of reasons
The VNM theorem only tells you you’re representable as a precise EV maximizer if your preferences satisfy completeness. But completeness is exactly what defenders of indeterminate beliefs call into question. Rationality doesn’t seem to demand completeness — you can avoid money pumps / Dutch books with incomplete preferences.
For a toy example, suppose that I could take action X, which will lose me $1 if the 20th digit of Pi is odd, and gain me $2 if the 20th digit of Pi is even. Without doing any calculations or looking it up, my range of credences is [0,1] -- if I think about it long enough (at least with computational aids), I'll resolve it to 0 or 1. But right now I can still make guesses about my expectation of where I'd end up
I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
I don't think I'd agree with that.
I’d like to understand why, then. As I said, if indeterminate beliefs are on the table, it seems like the straightforward response to unknown unknowns is to say, “By nature, my access to these considerations is murky, so why should I think this particular determinate ‘simplicity prior’ is privileged as a good model?”
I appreciated a bunch of things about this comment. Sorry, I'll just reply (for now) to a couple of parts.
The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I'm not actually arguing that it's confused to intrinsically care about the non-hedonist stuff. Rather, it would be really great to have an account of how the non-hedonist stuff is or isn't helpful on hedonist grounds: both because this may be a useful input into our thinking to whatever extent we endorse hedonist goods (even if we may also care about other things), and because without such an account it's hard to assess how much of our caring for non-hedonist goods is grounded in those goods themselves, vs. in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.
I think the piece I feel most inclined to double-click on is the digits of pi piece. Reading your reply, I realise I'm not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than "where do the numbers come from?"). Is it some analogue of betting odds? Or what?
And then, you said:
I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
To some extent, maybe fighting the hypothetical is a general move I'm inclined to make? This gets at "what does your range of indeterminate credences represent?". I think if you could step me through how you'd be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.
(Not sure this is super important, but note that I don't need to compute a determinate credence here -- it may be enough to have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)
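To make that last parenthetical concrete, here is a minimal sketch of how a whole interval of credences can settle a decision (the [0.4, 0.6] interval is a made-up assumption for illustration, not a number from the thread):

```python
# Toy version of the digits-of-pi bet: action X loses $1 if the 20th digit
# of pi is odd and gains $2 if it is even. "p" is the credence that it's even.

def expected_value(p: float) -> float:
    """EV of taking action X given credence p that the digit is even."""
    return p * 2 + (1 - p) * (-1)

# Hypothetical: deliberation has narrowed the credence interval to [0.4, 0.6].
low, high = 0.4, 0.6
print(round(expected_value(low), 2), round(expected_value(high), 2))  # 0.2 0.8

# EV is linear in p, so it is positive across the whole interval: every
# admissible credence recommends taking X, and no single number is ever chosen.
# With the full interval [0, 1], the endpoint EVs are -1 and 2, and the
# interval leaves the choice unsettled.
```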
Haven't seen anyone mention RAND as a possible best charity for AI stuff and I guess I'd like to throw their hat in the ring or at least invite people to tell me why I'm wrong. My core claims are approximately:
Influencing the US (federal) government is probably one of the most scalable cost-effective routes for AI safety.
Think tanks are one of the most cost-effective ways to influence the US government.
The prestige of the think tank matters for getting into the room/influencing change.
RAND is among the most prestigious think tanks doing AI safety work.
It's also probably the most value-aligned, given Jason Matheny is in charge.
You can earmark donations to the catastrophic risks/emerging risks departments.
I'll add I have no idea if they need/have asked for marginal funding.
Thanks for sharing your experience! We at EA Netherlands have been thinking about final events lately. Could you share more specific details about what yours looked like? Agenda, vibes, etc.
We think both programs are relatively valuable, but are less aligned with our current vision (of providing value through helping EA university group organizers run better groups) than some of our alternatives.
We have made this (difficult!) decision so that we can instead focus on:
Increasing capacity on the team (currently under 3 FTE), including by recruiting a strategy lead for our pilot university work (and building out that work more generally)
This decision does not rule out running UGOR or our internship in the future. In fact, we are exploring whether we should run UGOR over (northern hemisphere) summer break, allowing more groups to better prepare for their academic year. We piloted such a retreat as part of our pilot university programming this summer, and that worked well.
We aim to continue to transparently share updates such as this one! We are also always open to feedback (including anonymously), especially if you have specific suggestions on what things we should deprioritize to create space for UGOR or the summer internship.
Thank you for sharing this update! I’m interested in learning more about how you arrived at this decision, as we at EA Netherlands often encounter similar choices. Your insights could be really valuable for us.
Would you mind sharing a bit about your reasoning process?
I think offsetting emissions and offsetting meat consumption are comparable under utilitarianism, but much less comparable under most deontological moral theories, if you think animals have rights. For instance, if you killed someone and donated $5,000 to the Malaria Consortium, that seems worse – from a deontological perspective – than if you just did nothing at all, because the person you kill and the person you save are different people, and many deontological theories are built on the “separateness of persons.” In contrast, if you offset your CO2 emissions, you’re offsetting your effect on warming, so you don’t kill anyone to begin with (because it’s not like your CO2 emissions cause warming that hurts agent A, and then your offset reduces temperatures to benefit agent B). It might be similarly problematic to offset your contribution to air pollution, though, because the effects of air pollution happen near the place where the pollution actually happened.
I believe the consequences of eating vegan are more plausibly characterized as falling under the domain of procreation ethics, rather than that of the ethics of killing. When you eat meat, the only difference you can reasonably expect to make is affecting how many farmed animals are born in the near future, since the fate of the ones that already exist in the farms is sealed (i.e. they'll be killed no matter what) and can't be affected by our dietary choices.
So I think, rather than factory farm offsets being similar to murdering someone and then saving others, they're akin to causing someone's birth in miserable conditions (who later dies prematurely), and then 'offsetting' that harm by preventing the suffering of hundreds of other human beings.
I submit that offsetting still feels morally questionable in this scenario, but at least my intuitions are less clear here.
Comments on 2024-12-21
sammyboiz @ 2024-12-21T05:58 (–3) in response to My Problem with Veganism and Deontology
Two of your reasons to go vegan involve getting to tell others you are vegan. I find this pretty dishonest because I assume you aren't telling them this.
MarcusAbramovitch @ 2024-12-21T16:07 (+2)
It's not about telling others I'm vegan. It's about telling them that I think non human animals are worthy of moral consideration. I also tell people that I donate to animal welfare charities and even which ones.
Ozzie Gooen @ 2024-12-19T00:22 (+6) in response to Meta Charity Funders: Third round retrospective
Happy to see this! I continue to think that smart EA funding expansion is an important area and wish it got more attention.
Minor notes:
I'm surprised to see the focus on fundraising charities focused on international countries. I'm looking now, and it seems like the giant majority of charitable funding is given by the top few countries. (Maybe this is where Ark and Bedrock are focused, that wasn't clear).
Denis @ 2024-12-21T15:57 (+5)
"Links to the nonprofits would be useful."
Here are a few which may not be easy to find since they're quite new:
Effective Giving Ireland
Benefficienza
Mieux Donner
We at Effective Giving Ireland are thrilled to be supported by Meta-Charity Funders. It's really going to be a game-changer for us. For tax reasons, we'd strongly encourage everyone to donate effectively in their home countries, many of which will now have an effective giving option, which is often tax deductible.
Hugh P @ 2024-12-21T15:53 (+3) in response to Ten big wins in 2024 for farmed animals
I notice that one of the UK grants for alternative proteins which you cite says, "Cultured meat, insect-based proteins and proteins made by fermentation" (my emphasis). I find this quite concerning.
I didn't previously realise the term "alternative proteins" includes insects. Has this always been the case? Is the definition contested or is a different term needed?
From the NAPIC website, they include Entocycle, "a world-leading provider of insect farming technology", as one of their partners (though this may not be representative). Interestingly Entocycle do have two pages on insect welfare.
Ozzie Gooen @ 2024-12-21T01:40 (+4) in response to Non-Profit Casino
I am in favor of people considering unconventional approaches to charity.
At the same time, I find it pretty easy to argue against this. Some immediate things that come to mind:
1. My impression is that gambling is typically net-negative to participants, often highly so. I generally don't like seeing work go towards projects that are net-negative to their main customers (among others).
2. Out of all the "do business X, but it goes to charity", why not pick something itself beneficial? There are many business areas to choose from. Insurance can be pretty great - I think Lemonade Insurance did something clever with charity.
3. I think it's easy to start out altruistic with something like this, then become a worse person as you respond to incentives. In the casino business, the corporation is highly incentivized to do increasingly sleazy tactics to find, bait, and often bankrupt whales. If you don't do this, your competitors will, and they'll have more money to advertise.
4. I don't like making this the main thing, but I'd expect the PR to be really bad for anything this touches. "EAs don't really care about helping people, they just use that as an excuse to open sleazy casinos." There are few worse things to be associated with. A lot of charities are highly protective of their brands (and often with good reason).
5. It's very easy for me to imagine something like this creating worse epistemics. In order to grow revenue, it will be very "convenient" to downplay the harms caused by the casino. If such a thing does catch on in a certain charitable cluster, very soon that charitable cluster will be encouraged to lie and self-deceive. We saw some of this with the FTX incident.
6. The casino industry attracts and feeds off clients with poor epistemics. I'd imagine they (as in, the people the casino actually makes money from) wouldn't be the type who would care much about reasonable effective charities.
When I personally imagine a world where "A significant part of the effective giving community is tied to high-rolling casinos", it's hard for me to imagine this not being highly dystopic.
By all this, I hope the author doesn't treat this at all as an attack on them specifically. But I would consider it an attack on specific future project proposals that suggest advancing manipulative and harmful industries and tying such work to the topics of effective giving or effective philanthropy. I very much do not want to see more work done here. I'm spending some time on this comment, mainly to use this as an opportunity to hopefully dissuade others considering this sort of thing in the future.
On this note, I'd flag that I think a lot of the crypto industry has been full of scams and other manipulative and harmful behavior. Some of this got very close to EA (i.e. with FTX), and I'm sure with a long tail of much smaller projects. I consider much of this (the bad parts) a black mark on all connected+responsible participants and very much do not want to see more of it.
Mckiev 🔸 @ 2024-12-21T14:51 (+4)
I appreciate your take @Ozzie Gooen.
I agree that casinos are an evil business, and I would be extremely wary of making people worse off in the hope of "making it up" with charitable contributions.
@Brad West🔸 has already answered point by point, so I'll just add that I believe it's better to think of my proposal as a charity that also provides games to its customers, rather than a casino that donates its profits.
I'd argue that regular casinos are net positive for people without a gambling addiction, who treat it as an evening's entertainment with an almost guaranteed loss. The industry preys on people who have lost more than they could afford and are trying to get even, which isn't possible in this case.
I struggle to imagine someone who would donate more to their DAF than they feel comfortable with because they felt devastated that the money went to a charity that wasn't their choice.
david_reinstein @ 2024-12-21T11:58 (+2) in response to ACX/EA Lisbon December 2024 Meetup
Okay, it was a time zone mixup. I guess I’ll see you here at 3 pm?
Patrick Gruban 🔸 @ 2024-12-21T11:29 (+6) in response to Patrick Gruban's Quick takes
A year ago, I wrote "It's OK to Have Unhappy Holidays" during a time when I wasn’t feeling great about the season myself. That post inspired someone to host an impromptu Christmas Eve dinner, inviting others on short notice. Over vegan food and wine, six people came together to share their feelings about the holidays, reflect on the past year with gratitude, and enjoy a truly magical evening. It’s a moment I’m deeply thankful for. Perhaps this could inspire you this year—to host a gathering or spontaneously reach out to those nearby for a walk, a drink, or a shared meal.
Charles Dillon 🔸 @ 2024-12-21T10:44 (+1) in response to The Most Valuable Dollars Aren’t Owned By Us
Personally speaking, if I say I think something is 10x as effective, I mean that as an all-things-considered statement, which includes deferring however much I think it is appropriate to the views of others.
Milli🔸 @ 2024-12-21T11:12 (+3)
That's not what I asked: in percentage terms, how likely do you think it is that you are right (and that people who value e.g. GHWB over Animal Welfare are wrong)?
david_reinstein @ 2024-12-21T11:03 (+5) in response to Should I tell my clients I donated 10% of what they paid me?
Probably depends on how you describe it and frame it. How do you explain why you are telling them this? If you're willing, you might do a trial on this. Do something like divide your clients into two random groups and send this message to half. See if you observe any difference (try to keep track of the numbers as well as the more qualitative outcomes, like how they respond to the card).
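A minimal sketch of the suggested split, assuming a simple client list (the names and the 'card'/'control' labels are hypothetical, not from the comment):

```python
import random

def assign_groups(clients: list[str], seed: int = 42) -> dict[str, list[str]]:
    """Randomly split clients in half: 'card' gets the donation note, 'control' doesn't."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = clients[:]
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return {"card": shuffled[:mid], "control": shuffled[mid:]}

groups = assign_groups(["client_a", "client_b", "client_c", "client_d"])
print(groups)
# Then compare outcomes (retention, referrals, how they respond to the card)
# between the two groups over time.
```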
david_reinstein @ 2024-12-21T10:48 (+2) in response to ACX/EA Lisbon December 2024 Meetup
I think I’m at the EAACX meeting point in Amalia Rodriguez Park. I’m near the statue of two women kissing. (O segredo). Has the event moved or am I just the first one here? I think I’ll go to the café now and get something to eat and wait to hear from anyone.
Milli🔸 @ 2024-12-20T15:43 (+3) in response to The Most Valuable Dollars Aren’t Owned By Us
How sure are you that you are right and the other EA (who has also likely thought carefully about their donations) is wrong, though?
I'm much more confident that I will increase the impact of someone's donation / spending if they are not in EA, rather than being too convinced of my own opinion and causing harm (by negative side effects, opportunity costs or lowering the value of their donation).
Charles Dillon 🔸 @ 2024-12-21T10:44 (+1)
Personally speaking, if I say I think something is 10x as effective, I mean that as an all-things-considered statement, which includes deferring however much I think it is appropriate to the views of others.
david_reinstein @ 2024-12-21T10:33 (+2) in response to ACX/EA Lisbon December 2024 Meetup
I'm at the linha dagua cafe. Is everyone on the hill?
david_reinstein @ 2024-12-21T10:41 (+2)
Wait, it’s hard to know which hill this refers to
david_reinstein @ 2024-12-21T10:33 (+2) in response to ACX/EA Lisbon December 2024 Meetup
I'm at the linha dagua cafe. Is everyone on the hill?
Oscar Sykes @ 2024-12-20T21:11 (+13) in response to Update on EA Global costs and 2024 CEA Events Team activities
Well done, it's super cool to see everything you guys have achieved this year. One thing I was surprised by is that EAGxs are almost three times cheaper than EAGs while having a slightly higher likelihood to recommend. I assume part of this is because EAGs are typically held in more expensive areas, but I'd be surprised if that explained all of it. Are there any other factors that explain the cost difference?
OllieBase @ 2024-12-21T10:25 (+5)
Good question! Yes, TL;DR large venues in major US/UK cities are more expensive per-attendee than smaller venues in other cities.
Eli covered this a bit in our last post about costs. There aren't that many venues big enough for EA Globals, and the venues that are big enough force you to use their in-house catering company, generally have a minimum mandatory spend, and significantly mark up the costs of their services. Our best guesses at why (from Eli's post):
I suspect straightforward lack of competition also plays a role. As an extreme example, if there's only one venue in a city large enough for conferences and you want to run a conference there, they can basically charge what they want to.
Meanwhile, venues that can host 200–600 people (EAGx events) are easier to come by. EAGx organizers often secure university venues which are cheap but often more difficult to work with. Location does play a role, of course. You may not be surprised to learn that Mexico City, Bangalore and Berlin are cheaper than Oakland, London and Boston. But we also hosted events in Sydney and Copenhagen this year, so I think the above cost vs. size factor / availability of space plays a bigger role.
I do want to add that we are consistently impressed by EAGx and EA Summit organizers when it comes to resourcefulness and the LTR scores they generate given the lower CPA. The EA Brazil Summit team, for example, had food donated by the Brazilian Vegetarian Society. The bar for hustling in service of impact is continuously being raised, and we hustle on.
(Other team members or EAGx organizers should feel free to jump in here and push back / add more details.)
PabloAMC 🔸 @ 2024-12-21T10:16 (+3) in response to Should I tell my clients I donated 10% of what they paid me?
I guess it is ok to mention it, particularly in a holiday gift. Specifically I would feel it is ok to mention what it achieved without being preachy. Some companies use smaller amounts (1%) to signal social impact.
Engin Arıkan @ 2024-12-21T10:11 (+1) in response to Ask Us Anything: EA Animal Welfare Fund
How often do grantees pivot to more modest goals or different tactics after realising that their initial goals are very hard to reach or that their initial idea does not deliver results, given that they received their grants for certain high goals and specific plans in their application? How do you balance holding grantees accountable vs. providing them flexibility?
JWS 🔸 @ 2024-12-20T13:26 (+51) in response to JWS's Quick takes
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏
Honourable Mentions:
Forum Posters of the Year:
Non-Forum Poasters of the Year:
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors were this year!
I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy
e.g. here and here
EffectiveAdvocate🔸 @ 2024-12-21T08:23 (+3)
As a bit of a lurker, let me echo all of this, particularly the appreciation of @Vasco Grilo🔸. I don't always agree with him, but adding some numbers makes every discussion better!
tobycrisford 🔸 @ 2024-12-21T07:00 (+8) in response to o3
The ARC performance is a huge update for me.
I've previously found Francois Chollet's arguments that LLMs are unlikely to scale to AGI pretty convincing. Mainly because he had created an until-now unbeaten benchmark to back those arguments up.
But reading his linked write-up, he describes this as "not merely an incremental improvement, but a genuine breakthrough". He does not admit he was wrong, but instead paints o3 as something fundamentally different to previous LLM-based AIs, which, for the purpose of assessing the significance of o3, amounts to the same thing!
adnhw🔸 @ 2024-12-17T10:15 (+3) in response to My Problem with Veganism and Deontology
As a vegan I agree with Marcus and Jeff's takes, but also think at least carnitarianism (not eating fish) is justifiable on pure utilitarian grounds. The 5 cent offset estimate is miles off (by a factor of 50-100) for fish and shrimp here, and this is where your argument falls down.
I made a rough model that suggests a 100g cooked serving of farmed carp is ~1.1 years in a factory farm, and that of farmed shrimp is ~6 years in a factory farm. I modelled salmon and it came out much lower than this, but I expect this to grow when I factor in the fact that salmon are carnivorous and farmed fish are used in salmon feed.
This is a lot of time, and it's more expensive to pay for offsets that cover a longer time period. We have two main EA-aligned options for aquaculture 'offsets', one is the Fish Welfare Initiative, which (iirc) improves the life of a single fish across its lifetime for a marginal dollar, and the other is the Shrimp Welfare Project, which improves the death (a process lasting 3-5 minutes) of 1000 shrimp per year for a marginal dollar (we don't know how good their corporate campaigns will be yet).
I'm really not sure how good it is for a carp to have a lower stocking density and higher water quality, which is FWI's intervention in India, and essentially the best case for FWI's effectiveness. If we assume it's a 30% reduction in lifetime pain, we can offset a fish meal for roughly $3.33.
I don't think it's good to prevent 1 year of shrimp suffocation and then go off and cause shrimp to spend 100 years in farmed conditions (which are really bad, to be clear). Biting the bullet on that and assuming a stunner lasts 20 years and no discount rate, to offset a single shrimp meal you'd have to pay $4.6 (nearly 100 times more than the estimate you used).
Maybe you could offset using a different species (chicken, through corporate commitments). Vasco Grilo thinks a marginal dollar gets you 2 years of chicken life in factory farms averted. Naively I'd think that chicken lives are better than shrimp lives, but shrimp matter slightly less morally. This time you probably have to pay $3 to offset a shrimp meal using the easiest species to influence.
Additionally, the lead time on offsets is long (I would think at least five years from a donation to a corporate commitment being implemented). It's not good to have an offset that realises most of its value 20 years from now when, by then, there is a much higher chance of lab grown meat being cheaper or animal welfare regulations being better.
I think that you should at least be carnitarian because this is incredibly easy and based on my modelling (second sheet) it's the vast majority (90-95%) of the (morally adjusted) time saved in factory farms associated with vegetarianism. I doubt that any person gets $4 of utility from eating a different kind of meat, and this just adds up over time.
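A hedged reconstruction of two of the offset figures above (my own back-of-envelope reading of the comment, not the author's spreadsheet; each parameter is a number stated or implied in the thread):

```python
# Carp offset via Fish Welfare Initiative (FWI), as I reconstruct it:
farm_years_per_carp_meal = 1.0       # the comment says ~1.1 years per 100g serving;
                                     # using 1.0 recovers the stated $3.33
fwi_fish_lifetimes_per_dollar = 1.0  # FWI improves one fish's whole lifetime per $1
assumed_improvement = 0.3            # the assumed 30% reduction in lifetime pain
offset_years_per_dollar = fwi_fish_lifetimes_per_dollar * assumed_improvement
print(round(farm_years_per_carp_meal / offset_years_per_dollar, 2))  # ~3.33

# Shrimp offset via chicken corporate campaigns (a cross-species offset):
farm_years_per_shrimp_meal = 6.0        # ~6 shrimp-years per serving, per the comment
chicken_years_averted_per_dollar = 2.0  # Vasco Grilo's estimate cited above
moral_exchange_rate = 1.0               # assume "chickens have better lives" and
                                        # "shrimp matter slightly less" roughly cancel
print(round(farm_years_per_shrimp_meal * moral_exchange_rate
            / chicken_years_averted_per_dollar, 2))  # 3.0
```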
sammyboiz @ 2024-12-21T06:29 (+1)
love it
Christoph Eggert🔸 @ 2024-12-17T10:32 (+9) in response to My Problem with Veganism and Deontology
As an omnivore who wants to eat lots of protein for fitness, I would love to agree with this and just keep on piling up chicken breasts on my plate. However, I think there are some factors ignored here. Most of them have already been addressed, but I'd like to add another that I did not find so far:
Not eating meat not only reduces demand for meat; it also increases demand for alternatives. This should, in my opinion, not be underestimated, as it also makes the diet change much easier.
For example: In Germany, we have a company called Rügenwalder Mühle. The origins of this company go back to a butcher shop in 1834 and consequently, they always sold meat-based products. However, in 2014 they introduced vegetarian and vegan alternatives that were so great in terms of taste, quality and nutritional value that the demand was incredibly high. By now, these products bring in more revenue for them than the meat products. Obviously, this company will now focus more and more on the alternatives and they keep expanding their catalogue, often with very high protein content. This makes it much easier for a person like me to consider alternatives, and leads people to consume less meat even if they don't have any moral motivation to go vegan.
I doubt that any realistic amount of donations can top this. Sure, e.g. The Good Food Institute is basically trying to go in this direction, but in the end the demand needs to be there for it to work out long-term. Similar to voting in democracies, I think the "small effect" of our decisions can have quite an impact here that is hard to replace with donations.
sammyboiz @ 2024-12-21T06:22 (+1)
I know that you state this as a reason that has not been addressed, so it's probably not your main argument. But if you are using this as a main reason for going vegan, I feel like it misses the point. Maybe going vegan yourself makes it 20% easier for the next person to go vegan. That is still nowhere near the cost-effectiveness/effort-effectiveness of donating to animal welfare, since the one estimate I listed was $1000 to offset a lifetime of veganism.
Theodore Yohalem Shouse 🔸 @ 2024-12-18T05:54 (+3) in response to My Problem with Veganism and Deontology
I think @Richard Y Chappell🔸 is right. I'd add that lots of my non-EA peers care about hypocrisy (i.e., they would be unwilling to entertain arguments in favour of veganism or donating to animal welfare coming from a non-vegan).
I care a lot about spreading the cause of veganism (and effective altruism more generally), and I think that by eating vegan I hold a certain amount of moral legitimacy in the eyes of others that I don't want to give up because it might help me convince them about animal welfare or EA one day. (Being vegan also provides some reflective moral legitimacy or satisfaction to the irrational part of me that also cares about hypocrisy.)
sammyboiz @ 2024-12-21T06:17 (+1)
My question for you is: why do you promote AW donations AND veganism? Do you think you can increase your EU by only advocating for AW donations? Do you care whether others abide by deontological side-constraints?
Ozzie Gooen @ 2024-12-21T01:40 (+4) in response to Non-Profit Casino
I am in favor of people considering unconventional approaches to charity.
At the same time, I find it pretty easy to argue against this. Some immediate things that come to mind:
1. My impression is that gambling is typically net-negative to participants, often highly so. I generally don't like seeing work go towards projects that are net-negative to their main customers (among others).
2. Out of all the "do business X, but it goes to charity", why not pick something itself beneficial? There are many business areas to choose from. Insurance can be pretty great - I think Lemonade Insurance did something clever with charity.
3. I think it's easy to start out altruistic with something like this, then become a worse person as you respond to incentives. In the casino business, the corporation is highly incentivized to do increasingly sleazy tactics to find, bait, and often bankrupt whales. If you don't do this, your competitors will, and they'll have more money to advertise.
4. I don't like making this the main thing, but I'd expect the PR to be really bad for anything this touches. "EAs don't really care about helping people, they just use that as an excuse to open sleazy casinos." There are few worse things to be associated with. A lot of charities are highly protective of their brands (and often with good reason).
5. It's very easy for me to imagine something like this creating worse epistemics. In order to grow revenue, it will be very "convenient" to downplay the harms caused by the casino. If such a thing does catch on in a certain charitable cluster, very soon that charitable cluster will be encouraged to lie and self-deceive. We saw some of this with the FTX incident.
6. The casino industry attracts and feeds off clients with poor epistemics. I'd imagine they (as in, the people the casino actually makes money from) wouldn't be the type who would care much about reasonable effective charities.
When I personally imagine a world where "A significant part of the effective giving community is tied to high-rolling casinos", it's hard for me to imagine this not being highly dystopic.
By all this, I hope the author doesn't treat this at all as an attack on them specifically. But I would consider it an attack on specific future project proposals that suggest advancing manipulative and harmful industries and tying such work to the topics of effective giving or effective philanthropy. I very much do not want to see more work done here. I'm spending some time on this comment, mainly to use this as an opportunity to hopefully dissuade others considering this sort of thing in the future.
On this note, I'd flag that I think a lot of the crypto industry has been full of scams and other manipulative and harmful behavior. Some of this got very close to EA (i.e. with FTX), and I'm sure with a long tail of much smaller projects. I consider much of this (the bad parts) a black mark on all connected+responsible participants and very much do not want to see more of it.
Brad West🔸 @ 2024-12-21T06:17 (+6)
Re #1 - the customers in OP's contemplation would have already committed the funds to be donated, and prospective wins would inure to the benefit of charities. So it isn't clear to me that the same typical harm applies (if you buy the premise that gamblers are net harmed by gambling). There wouldn't be the circumstance where the gambler feels they need to win it back - because they've already lost the money when they committed it to the DAF.
Re #2 - this could produce a good experience for customers - donating money to charities while playing games. And with how OP set it up, they know what they are losing (unlike with a typical casino, where there's that hope of winning it big).
Re #3 - for the reasons discussed above, the predatory and deceptive implications are less significant here. Unlike when someone takes money to a slot machine in a typical casino, when they put the money in the DAF they no longer have a chance of "getting it back".
Re #4 - yeah, there might be some bad PR. But if people liked this and substituted it for normal gambling, it probably would be less morally problematic for the reasons discussed above.
Re #5 - I'm not really sure that this business is as morally corrosive as you suggest... It's potentially disadvantaging the gambler's preferred charity to the casino's, but not by much, and not without the gambler's knowledge.
Re #6 - the gamblers could choose the charities that are the beneficiaries of their DAF. And I don't know that enjoying gambling means that you wouldn't like to see kids saved from malaria and such.
I think your criticisms would better apply to a straight Profit for Good casino (a normal casino with charities as shareholders). The concerns you bring up are some reasons I think a PFG casino, though an interesting idea, would not be something I'd look to do as an early, strategic PFG (also: big capital requirements).
OP's proposal is much more wholesome and actually addresses a lot more of the ethical concerns. I just think people may not be interested in gambling as much if there was not the prospect of winning money for themselves.
yanni kyriacos @ 2024-12-20T10:35 (+6) in response to My Problem with Veganism and Deontology
Interesting post! Would you keep a human slave if there was an effective anti slavery charity? Or is this speciesism?
sammyboiz @ 2024-12-21T06:15 (+1)
I would respond to this the same way I did to this other comment.
MatthewDahlhausen @ 2024-12-19T19:44 (+17) in response to My Problem with Veganism and Deontology
A useful test when moral theorizing about animals is to swap "animals" with "humans" and see if your answer changes substantially. In this example, if the answer changes, the relevant difference for you isn't about pure expected value consequentialism, it's about some salient difference between the rights or moral status of animals vs. humans. Vegans tend to give significant, even equivalent, moral status to some animals used for food. If you give near-equal moral status to animals, "offsetting meat eating by donating to animal welfare orgs" is similar to "donating to global health charities to offset hiring a hitman to target a group of humans". There are a series of rebuttals, counter-rebuttals, etc. to this line of reasoning. Not going to get into all of them. But suffice to say that in the animal welfare space, an animal welfarist carnivore is hesitantly trusted - it signals either a lack of commitment or discipline, a diet/health struggle, a discordant belief that animals deserve far less rights and moral status than humans, or (much rarer) a fanatic consequentialist ideology that thinks offsetting human killing is morally coherent and acceptable. An earnest carnivore that cares a lot about animal welfare is incredibly rare.
sammyboiz @ 2024-12-21T06:12 (+4)
Are people here against killing one to save two in a vacuum? I thought EA was very utilitarian. I think intuitively, causing harm is repulsive but ultimately, our goal should be creating a better world.
To your "animal" to "human" swap, it's hard to give "would you kill/eat humans if you could offset" as an double standard since most self-proclaimed utilitarians are still intuitively repulsed to immoral behavior like causing harm to humans, cannibalism, etc. On the other hand, we are biologically programmed to not care when eating animal flesh, even if we deem animal suffering immoral. What this means is that I would be way to horrified to offset killing or eating a human even if I deem it moral. On the other hand, I can offset eating an animal because I don't intuitively care about the harm I caused. I am too disconnected, biologically preprogrammed, and cognitively dissonant. Therefore, offsetting animal suffering is not repulsive nor immoral to me.
MarcusAbramovitch @ 2024-12-16T23:06 (+14) in response to My Problem with Veganism and Deontology
I listed in descending order of importance. I might be confused for one of those "hyper rationalist" types in many instances. I think rationalists undervalue the cognitive dissonance. In my experience, a lot of rationalists just don't value non-human animals. Even rationalists behave in a much more "vibes" based way than they'd have you believe. It really is hard to hold in your head both "it's okay to eat animals" and "we can avert tremendous amounts of suffering to hundreds of animals per dollar and have a moral compulsion to do so".
I also wouldn't call what I do virtue signaling. I never outright tell people, and I live in a very conservative part of the world.
sammyboiz @ 2024-12-21T05:58 (–3)
Two of your reasons to go vegan involve getting to tell others you are vegan. I find this pretty dishonest because I assume you aren't telling them this.
NickLaing @ 2024-12-20T17:00 (+3) in response to My Problem with Veganism and Deontology
I don't think that virtue signaling by telling most people you donate 10 percent would work well with weak or non-vegans. Most of my friends would consider me a hypocrite for doing that, and longer explanations wouldn't work for many.
Utilitarianism can be explained, but even after that explanation many would consider eating meat and offsetting hypocritical, even if it might be virtuous.
The point of the virtue signaling is the signaling, not the virtue, and the cleanest and easiest way to do that in many circles might be going vegan.
sammyboiz @ 2024-12-21T05:53 (+1)
So if they ask you, "why are you vegan?", your honest answer would be "because I need you to accept me as a non-hypocrite."????? I don't think vegans would give you any extra consideration if they knew this was your reasoning. Any other reason you give would be dishonest and misleading.
yanni kyriacos @ 2024-12-21T05:10 (+2) in response to Yanni Kyriacos's Quick takes
If transformative AI is defined by its societal impact rather than its technical capabilities (i.e. TAI as a process, not a technology), we already have what is needed. The real question isn't about waiting for GPT-X or capability Y - it's about imagining what happens when current AI is deployed 1000x more widely in just a few years. This presents EXTREMELY different problems to solve from a governance and advocacy perspective.
E.g. 1: compute governance might no longer be a good intervention
E.g. 2: "Pause" can't just be about pausing model development. It should also be about pausing implementation across use cases
Xing Shi Cai @ 2024-12-12T09:57 (+1) in response to Factory farming as a pressing world problem
https://notebooklm.google.com/notebook/de9ec521-56b3-458f-a261-2294e099e08c/audio It seems that I missed an “o” at the end. 😂
David_R @ 2024-12-21T03:26 (+3)
I was listening to the audio and noticed a mistake at 1:15. It says:
That's very incorrect but I don't mean to nitpick because this is still super interesting technology in its infancy.
Habryka @ 2024-12-21T00:34 (+4) in response to News from THL UK: Judge rules on our historic Frankenchicken case
Does someone have a rough fermi on the tradeoffs here? On priors it seems like chickens bred to be bigger would overall cause less suffering because they replace more than one chicken that isn't bred to be as big, but I would expect those chickens to suffer more. I can imagine it going either way, but I guess my prior is that it was broadly good for each individual chicken to weigh more.
I am a bit worried the advocacy here is based more on a purity/environmentalist perspective where genetically modifying animals is bad, but I don't give that perspective much weight. But it could also be great from a more cost-effectiveness/suffering-minimization oriented perspective, and I would be curious about people's takes.
(Molly was asked this question in a previous post two months ago, but as far as I can tell responded mostly with orthogonal claims that don't really engage with the core ethical question, so I am curious about other people's takes)
Ozzie Gooen @ 2024-12-21T02:05 (+2)
(Obvious flag that I know very little about this specific industry)
Agreed that this seems like an important issue. Some quick takes:
Less immediately obvious pluses/minuses to this sort of campaign:
- Plus #1: I assume that anything the animal industry doesn't like would increase costs for raising chickens. I'd correspondingly assume that we should want costs to be high (though it would be much better if it could be the government getting these funds, rather than just decreases in efficiency).
- Plus #2: It seems possible that companies have been selecting for growth instead of for well-being. Maybe, if they just can't select for growth, then selecting more for not-feeling-pain would be cheaper.
- Minus #1: Focusing on the term "Frankenchicken" could discourage other selective breeding or similar, which could be otherwise useful for very globally beneficial attributes, like pain mitigation.
- Ambiguous #1: This could help stop further development here. I assume that it's possible to later use selective breeding and similar to continue making larger / faster growing chickens.
I think I naively feel like the pluses outweigh the negatives. Maybe I'd give this a 80% chance, without doing much investigation. That said, I'd also imagine there might well be more effective measures with a much clearer trade-off. The question of "is this a net-positive thing" is arguably not nearly as important as "are there fairly-clearly better things to do."
Lastly, for all of that, I do want to just thank those helping animals like this. It's easy for me to argue things one way or the other, but I generally have serious respect for those working to change things, even if I'm not sure if their methods are optimal. I think it's easy to seem combative on this, but we're all on a similar team here.
In terms of a "rough fermi analysis", as I work in the field, I think the numeric part of this is less important at this stage than just laying out a bunch of the key considerations and statistics. What I first want is a careful list of costs and benefits - that seems mature, fairly creative, and unbiased.
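On the "rough fermi" request, here is a toy sketch of the structure such a comparison could take (every number is a placeholder invented for illustration, not an estimate from anyone in the thread):

```python
def total_suffering(kg_demand: float, kg_per_bird: float,
                    days_per_bird: float, suffering_per_day: float) -> float:
    """Suffering = birds needed to meet demand x days lived x intensity per day."""
    birds = kg_demand / kg_per_bird
    return birds * days_per_bird * suffering_per_day

KG_DEMAND = 1_000_000  # hold meat demand fixed across the two scenarios

# Fast-growing breed: heavier birds, shorter lives, worse welfare per day.
fast = total_suffering(KG_DEMAND, kg_per_bird=2.5, days_per_bird=42,
                       suffering_per_day=2.0)

# Slower-growing breed: lighter birds, longer lives, better welfare per day.
slow = total_suffering(KG_DEMAND, kg_per_bird=2.0, days_per_bird=56,
                       suffering_per_day=1.0)

print(round(fast / slow, 2))  # 1.2 with these placeholders: fast growth is worse
# here, but nudge suffering_per_day toward parity and the ranking flips -- which
# is why the per-bird welfare premium, not the bird count, is the crux.
```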
Habryka @ 2024-12-21T00:35 (+4) in response to News from THL UK: Judge rules on our historic Frankenchicken case
Wow, yeah, I was quite misled by the lead. Can anyone give a more independent assessment of what this actually means legally?
VettedCauses @ 2024-12-21T02:02 (+14)
The Humane League (THL) filed a lawsuit against the UK Secretary of State for Environment, Food and Rural Affairs (the Defra Secretary) alleging that the Defra Secretary’s policy of permitting farmers to farm fast-growing chickens unlawfully violated paragraph 29 of Schedule 1 to the Welfare of Farmed Animals (England) Regulations 2007.
Paragraph 29 of Schedule 1 to the Welfare of Farmed Animals (England) Regulations 2007 states the following:
THL’s case was dismissed.
THL appealed the dismissal, and again THL’s case was dismissed (this most recent dismissal is what THL’s post is about).
In this most recent dismissal, the Court clarified the meaning of Paragraph 29 as follows:
Essentially, the Court ruled that Paragraph 29 is only violated if an animal is bred such that it cannot avoid genetically caused health/welfare problems even under perfect environmental conditions (i.e. giving the animal the best possible food/diet, a perfect living environment, and world class medical treatment). This allows farmers to continue to farm animals so long as their genetic issues can theoretically be mitigated by improving conditions, even if those conditions are unlikely to be implemented in practice.
For example, let’s say there is a genetically selected breed of chicken that under normal factory farming conditions grows so fast that their legs snap under their weight by the time they are a month old. Under the Court’s ruling, this would not violate Paragraph 29, so long as this problem (and other genetically caused problems) could theoretically be mitigated with better environmental conditions (i.e. giving the chicken the best possible food/diet, a perfect living environment, and world class medical treatment).
Since the Court offered this interpretation of Paragraph 29, all trial courts in the UK (except for those in Northern Ireland and Scotland) are now required to use this interpretation of Paragraph 29 when making rulings.
From our understanding, this is not a favorable interpretation of Paragraph 29, as it makes it extremely difficult to prove that a violation of Paragraph 29 has occurred. Under this ruling, the only way to prove that a Paragraph 29 violation has occurred is by proving the health/welfare problems encountered by an animal are completely unavoidable, even with absolutely perfect environmental conditions/treatment.
Because of this ruling, anyone who ever tries to claim a Paragraph 29 violation has occurred will have to meet this extremely high standard of evidence that the Court has laid out.
Ozzie Gooen @ 2024-12-21T01:40 (+4) in response to Non-Profit Casino
I am in favor of people considering unconventional approaches to charity.
At the same time, I find it pretty easy to argue against this. Some immediate things that come to mind:
1. My impression is that gambling is typically net-negative to participants, often highly so. I generally don't like seeing work go towards projects that are net-negative to their main customers (among others).
2. Out of all the "do business X, but it goes to charity", why not pick something itself beneficial? There are many business areas to choose from. Insurance can be pretty great - I think Lemonade Insurance did something clever with charity.
3. I think it's easy to start out altruistic with something like this, then become a worse person as you respond to incentives. In the casino business, the corporation is highly incentivized to do increasingly sleazy tactics to find, bait, and often bankrupt whales. If you don't do this, your competitors will, and they'll have more money to advertise.
4. I don't like making this the main thing, but I'd expect the PR to be really bad for anything this touches. "EAs don't really care about helping people, they just use that as an excuse to open sleazy casinos." There are few worse things to be associated with. A lot of charities are highly protective of their brands (and often with good reason).
5. It's very easy for me to imagine something like this creating worse epistemics. In order to grow revenue, it will be very "convenient" to downplay the harms caused by the casino. If such a thing does catch on in a certain charitable cluster, very soon that charitable cluster will be encouraged to lie and self-deceive. We saw some of this with the FTX incident.
6. The casino industry attracts and feeds off clients with poor epistemics. I'd imagine they (as in, the people the casino actually makes money from) wouldn't be the type who would care much about reasonable effective charities.
When I personally imagine a world where "A significant part of the effective giving community is tied to high-rolling casinos", it's hard for me to imagine this not being highly dystopic.
By all this, I hope the author doesn't treat this at all as an attack on them specifically. But I would consider it an attack on specific future project proposals that suggest advancing manipulative and harmful industries and tying such work to the topics of effective giving or effective philanthropy. I very much do not want to see more work done here. I'm spending some time on this comment, mainly to use this as an opportunity to hopefully dissuade others considering this sort of thing in the future.
On this note, I'd flag that I think a lot of the crypto industry has been full of scams and other manipulative and harmful behavior. Some of this got very close to EA (i.e. with FTX), and I'm sure with a long tail of much smaller projects. I consider much of this (the bad parts) a black mark on all connected+responsible participants and very much do not want to see more of it.
ClimateDoc @ 2024-12-20T23:15 (+23) in response to News from THL UK: Judge rules on our historic Frankenchicken case
Whilst I salute the effort and progress here, this post does seem rather full of spin, given that from what I can tell the court ruling was against the animal advocates. I'd rather see posts that present the facts more clearly.
Habryka @ 2024-12-21T00:35 (+4)
Wow, yeah, I was quite misled by the lead. Can anyone give a more independent assessment of what this actually means legally?
Habryka @ 2024-12-21T00:34 (+4) in response to News from THL UK: Judge rules on our historic Frankenchicken case
Does someone have a rough fermi on the tradeoffs here? On priors it seems like chickens bred to be bigger would overall cause less suffering because they replace more than one chicken that isn't bred to be as big, but I would expect those chickens to suffer more. I can imagine it going either way, but I guess my prior is that it was broadly good for each individual chicken to weigh more.
I am a bit worried the advocacy here is based more on a purity/environmentalist perspective where genetically modifying animals is bad, but I don't give that perspective much weight. But it could also be great from a more cost-effectiveness/suffering-minimization oriented perspective, and I would be curious about people's takes.
(Molly was asked this question in a previous post two months ago, but as far as I can tell responded mostly with orthogonal claims that don't really engage with the core ethical question, so I am curious about other people's takes)
David T @ 2024-12-20T23:25 (+3) in response to Non-Profit Casino
A cynic reads this as "you could have a great night in which you deprive a few hundred people of malaria nets, but at least in the long run they and also random unrelated and typically obnoxious corporations might stand to benefit from the gambling addiction this has instilled in you....". Possibly the first part of the proposition is slightly less icky if the house is simply taking a rake from competitors in a game of skill, but still.
Maybe I just know too many people broken by gambling.
Brad West🔸 @ 2024-12-21T00:05 (+2)
I think the same amount of healthy and problem gambling would take place in aggregate regardless of whether there was a PFG casino among a set of casinos. But maybe some people would choose to migrate that activity toward the PFG casino, so that more good could happen (they're offering the same odds as competitors).
It comes down to whether you're OK with getting involved in something icky if the net harm you cause to gamblers is zero and you can produce significant good in doing so. For me, this doesn't really pose a problem.
Comments on 2024-12-20
Anthony DiGiovanni @ 2024-12-20T17:31 (+4) in response to The ‘Dog vs Cat’ cluelessness dilemma (and whether it makes sense)
No worries! Relatedly, I’m hoping to get out a post explaining (part of) the case for indeterminacy in the not-too-distant future, so to some extent I’ll punt to that for more details.
Cool, that makes sense. I’m all for debunking explanations in principle. Extremely briefly, here's why I think there’s something qualitative that determinate credences fail to capture: If evidence, trustworthy intuitions, and appealing norms like the principle of indifference or Occam's razor don’t uniquely pin down an answer to “how likely should I consider outcome X?”, then I think I shouldn’t pin down an answer. Instead I should suspend judgment, and say that there aren’t enough constraints to give an answer that isn’t arbitrary. (This runs deeper than “wait to learn / think more”! Because I find suspending judgment appropriate even in cases where my uncertainty is resilient. Contra Greg Lewis here.)
No, I see credences as representing the degree to which I anticipate some (hypothetical) experiences, or the weight I put on a hypothesis / how reasonable I find it. IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
Ah, I should have made clear, I wouldn’t say indeterminate credences are necessary in the pi case, as written. Because I think it’s plausible I should apply the principle of indifference here: I know nothing about digits of pi beyond the first 10, except that pi is irrational and I know irrational numbers’ digits are wacky. I have no particular reason to think one digit is more or less likely than another, so, since there’s a unique way of splitting my credence impartially across the possibilities, I end up with 50:50.[1]
Instead, here’s a really contrived variant of the pi case I had too much fun writing, analogous to a situation of complex cluelessness, where I’d think indeterminate credences are appropriate:
(I think forming beliefs about the long-term future is analogous in many ways to the above.)
Not sure how much that answers your question? Basically I ask myself what constraints the considerations ought to put on my degree of belief, and try not to needlessly get more precise than those constraints warrant.
I don’t think this is clearly the appropriate response. I think it’s kinda defensible to say, “This doesn’t seem like qualitatively the same kind of epistemic situation as guessing a coin flip. I have at least a rough mechanistic picture of how coin flips work physically, which seems symmetric in a way that warrants a determinate prediction of 50:50. But with digits of pi, there’s not so much a ‘symmetry’ as an absence of a determinate asymmetry.” But I don’t think you need to die on that hill to think indeterminacy is warranted in realistic cause prio situations.
Owen Cotton-Barratt @ 2024-12-20T23:36 (+2)
Not sure what I overall think of the betting odds framing, but to speak in its defence: I think there's a sense in which decisions are more real than beliefs. (I originally wrote "decisions are real and beliefs are not", but they're both ultimately abstractions about what's going on with a bunch of matter organized into an agent-like system.) I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then "X has beliefs" is kind of a useful model for predicting their behaviour in the decision situations. Or could be used (as you imply) to analyse the rationality of their decisions.
I like your contrived variant of the pi case. But to play on it a bit:
In this picture, no realistic amount of thinking I'm going to do will bring it down to just a point estimate being defensible, and perhaps even the limit with infinite thinking time would have me maintain an interval of what seems defensible, so some fundamental indeterminacy may well remain.
But to my mind, this kind of behaviour where you can tighten your understanding by thinking more happens all of the time, and is a really important phenomenon to be able to track and think clearly about. So I really want language or formal frameworks which make it easy to track this kind of thing.
Moreover, after you grant this kind of behaviour [do you grant this kind of behaviour?], you may notice that from our epistemic position we can't even distinguish between:
Because of this, from my perspective the question of whether credences are ultimately indeterminate is ... not so interesting? It's enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, but in many other cases it won't be.
Brad West🔸 @ 2024-12-20T21:48 (+7) in response to Non-Profit Casino
Another idea would just be a normal casino that was owned by a charitable foundation or trust - a "Profit for Good" casino. People could get the exact same value proposition they get from other normal casinos, but by patronizing the Profit for Good casino, they (in expectation) would be helping save lives or otherwise better the world.
You could have a great night in which you win hundreds or thousands of dollars, and even if you lose, you know that your losses are helping to dramatically better the world.
David T @ 2024-12-20T23:25 (+3)
A cynic reads this as "you could have a great night in which you deprive a few hundred people of malaria nets, but at least in the long run they and also random unrelated and typically obnoxious corporations might stand to benefit from the gambling addiction this has instilled in you...". Possibly the first part of the proposition is slightly less icky if the house is simply taking a rake from competitors in a game of skill, but still.
Maybe I just know too many people broken by gambling.
ClimateDoc @ 2024-12-20T23:15 (+23) in response to News from THL UK: Judge rules on our historic Frankenchicken case
Whilst I salute the effort and progress here, this post does seem rather full of spin, given that from what I can tell the court ruling was against the animal advocates. I'd rather see posts that present the facts more clearly.
Brad West🔸 @ 2024-12-20T22:24 (+2) in response to Non-Profit Casino
Thanks for your proposal. I have actually thought a Profit for Good casino would be a good idea (high capital requirements, but I think it could provide a competitive edge on the Vegas Strip, for instance). I find your take on it pretty interesting.
I think a casino that did not limit the funds that could be gambled to charitable accounts of some sort would have a much larger market than one that did. There is a lot of friction in requiring the setup of charitable accounts, even for people who are interested in charitable giving and enjoy gambling. You would also be targeting a narrower subset of prospective clients who have these overlapping qualities. In the meantime, there are millions of people who consistently demonstrate demand for gambling at casinos.
I think a lot of people would feel fine about playing at the casino and winning, because they know that there are winners and losers in casinos, and that the house (in the end) always wins. Winners and losers would both be participating in a process that would be helping dramatically better the world.
Could you explain the legal advantage of your proposal vis-a-vis a normal casino either owned by a charitable foundation or being a nonprofit itself (Humanitix, for instance, is a ticketing company that is structured as a nonprofit itself)? Is it that people's chips would essentially be tax-deductible (because contributing to their DAF is tax-deductible)?
Mckiev 🔸 @ 2024-12-20T23:15 (+1)
I meant that my version of the casino could operate legally in all states (vs. 8 states for regular casinos).
Also: have you used Daffy? It's really easy to set up (to your point about friction of setting up accounts)
JoA @ 2024-12-20T19:49 (+1) in response to Determinism and EA
Short question: why do you say that one who adheres to determinism considers individuals to be genetic blank slates? (Disclaimer: I know very little about genetics.) It seems like if certain things will "inevitably" make us react in a certain way, there must be a genetic component to these rules.
ASiva @ 2024-12-20T23:06 (+1)
Honestly, determinism doesn't really have anything to say about the nature vs. nurture debate; that's just my personal opinion. Basically, the only things that influence a person are their environment and genetics, both of which are out of a person's control.
Martin Jacobson 🔸 @ 2024-12-20T22:48 (+3) in response to Open thread: October - December 2024
Hello everyone!
I am a political theorist at Uppsala University, Sweden. Similarly to how I am interested in niche ethical ideas like EA, my research is focused on rather neglected (or weird) political ideas. In particular, I am interested in ‘geoism’ or ‘Georgism’, which combines the economic idea that unequal landownership is a root cause of many social problems with the normative idea that such landownership is unjustified since land was not created by anyone. Hence, geoists argue that taxes should be shifted to land and other naturally occurring resources. Earlier this year I defended my Ph.D. thesis on the relationship between geoism and anarchism. I recently received a postdoc grant to keep on researching geoist political theory in the coming years, being partly based in Oslo and Blacksburg, VA.
In terms of cause area, I really appreciate the wide diversity within EA. But perhaps due to my interest in political theory, I have an extra soft spot for questions concerning institutional and systemic change. This is presumably where my own comparative advantage is, but I also think that it matters massively in terms of ripple effects and global capacity growth. At some point, I want to write up an exploration of land reform as a potential high-impact cause area, and the use of community land value trusts as a way to implement these ideals. The final chapter of my thesis explores some related ideas.
I was first introduced to EA ideas in a university philosophy course in 2018. My New Year's resolution for 2022-23 was to try donating 10% of my income to effective causes for at least a year. I had previously found that smaller trials, like Veganuary, are much more doable than any permanent commitment. During this time I also thought a lot about whether to take any public pledge or just to keep on donating anonymously. I eventually became convinced that the potential social contagion effects provide a really important reason to be public with pledges. I wrote some of these considerations down in this essay, which was published at GWWC last month. I also used this occasion to sign the 🔸 10% Pledge.
Please feel free to reach out if you have any questions, and thank you all for the good that you do!
Mckiev 🔸 @ 2024-12-20T22:09 (+3) in response to Non-Profit Casino
@Brad West🔸 , thanks for sharing your thoughts! This is what I thought of initially, but then "pivoted to" the complete non-profit framing, mainly because winning in the actual casino would mean that you are in effect taking money from charities. Probably even more important is the legal advantage of my proposal.
Brad West🔸 @ 2024-12-20T22:24 (+2)
Thanks for your proposal. I have actually thought a Profit for Good casino would be a good idea (high capital requirements, but I think it could provide a competitive edge in the Vegas strip, for instance). I find your take on it pretty interesting
I think a casino that did not limit the funds that could be gambled to charitable accounts of some sort would have a much larger market than one that did. There is a lot of friction in requiring the set up of charitable accounts even for people who were interested in charitable giving and enjoyed gambling. I also think that you are going into a narrower subset of prospective clients that have these overlapping qualities. In the meantime, there are millions of people who consistently demonstrate demand for gambling at casinos.
I think a lot of people would feel fine about playing at the casino and winning, because they know that there are winners and losers in casinos, but the house (in the end) always wins. Winners and losers would both be participating in a process that would be helping dramatically better the world.
Could you explain the legal advantage of your proposal vis-a-vis a normal casino either owned by a charitable foundation or being a nonprofit itself (Humanitix, for instance is a ticketing company that is structured as a nonprofit itself)? Is it that people's chips would essentially be tax-deductible (because contributing to their DAF is tax-deductible)?
Eevee🔹 @ 2024-12-20T22:22 (+2) in response to EA Forum audio: help us choose the new voice
Thanks for all your hard work on the audio narrations and making EA Forum content accessible!
Question: Do you intend to license the audio under a Creative Commons license? Since EA Forum text since 2022 is licensed under CC-BY 4.0, all that's legally required is any attribution info provided by the source material and a link to the license; derived works don't have to also be licensed under CC-BY. However, to the extent that AI-generated narrations can be protected by copyright at all, it seems appropriate to use CC-BY, or maybe CC-BY-SA to enforce modifications being under the same terms.
BrianTan @ 2024-12-20T09:12 (+12) in response to Announcing the Q1 2025 Long-Term Future Fund grant round
Thanks for the update! I appreciate that decisions will be communicated within 1.5 months of the application deadline.
Btw, this URL (https://funds.effectivealtruism.org/funds/far-future/apply) you link to leads to "Page not found".
Linch @ 2024-12-20T22:20 (+4)
Appreciate it! @BrianTan and others, feel free to use this thread as a way to report other issues and bugs with the website/grant round announcement.
Mckiev 🔸 @ 2024-12-20T22:09 (+3) in response to Non-Profit Casino
@Brad West🔸 , thanks for sharing your thoughts! This is what I thought of initially, but then "pivoted to" the complete non-profit framing, mainly because winning in the actual casino would mean that you are in effect taking money from charities. Probably even more important is the legal advantage of my proposal.
AGB 🔸 @ 2024-12-17T20:05 (+14) in response to AMA: 10 years of Earning To Give
My views have not changed directionally, but I do feel happier with them than I did at the time for a couple of reasons:
With my more recent work it seems much too soon to say anything definitive about social impact, so I always try to acknowledge some chance that I'll feel bad when I look back on this.
Aaron Gertler 🔸 @ 2024-12-20T22:01 (+2)
Thanks!
ETFs do sound like a big win. I suppose someone could look at them as "finance solving a problem that finance created" (if the "problem" is e.g. expensive mutual funds). But even the mutual funds may be better than the "state of nature" (people buying individual stocks based on personal preference?). And expensive funds being outpaced by cheaper, better products sounds like finance working the way any competitive market should.
Max Ghenis @ 2024-12-07T21:03 (+1) in response to GiveCalc: A new tool to calculate the true cost of US charitable giving
Thanks for the suggestion! Currently, GiveCalc handles the charitable deduction value whether you donate cash or appreciated assets—you'd enter the fair market value of the assets as your donation amount. (One limitation is that we assume all donations are cash, which can be deducted up to 60% of AGI, while appreciated assets are limited to 30% of AGI.)
We could add functionality to compare scenarios, like donating an appreciated asset vs selling it and donating the after-tax proceeds. I've opened an issue to explore this: https://github.com/PolicyEngine/givecalc/issues/41
Could you help us understand your use case? When considering donating appreciated assets, would you want to:
Your thoughts on which calculations would be most helpful would be great to hear.
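A rough sketch of what such a scenario comparison could look like (the tax rates here are illustrative assumptions, and the 30%/60%-of-AGI caps discussed above are deliberately ignored for simplicity; this is not GiveCalc's actual model):

```python
def donate_vs_sell(fmv, basis, marginal_rate=0.35, cap_gains_rate=0.15):
    """Compare donating an appreciated asset directly vs. selling it first.

    Illustrative only: assumes the donor itemizes, ignores the AGI caps
    mentioned above, and uses made-up federal rates.
    """
    # Donate directly: deduct the full fair market value, never realize the gain.
    direct = {"charity_gets": fmv, "tax_saved": fmv * marginal_rate}

    # Sell first: pay capital-gains tax, then donate and deduct the remainder.
    proceeds = fmv - (fmv - basis) * cap_gains_rate
    sell_first = {"charity_gets": proceeds, "tax_saved": proceeds * marginal_rate}

    return direct, sell_first

print(donate_vs_sell(fmv=10_000, basis=2_000))
# direct:     charity gets $10,000, tax saved $3,500
# sell first: charity gets  $8,800, tax saved $3,080
```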
Pat Myron 🔸 @ 2024-12-20T22:01 (+1)
Can 30% of AGI be deducted for donated assets and the rest of the cash deduction limit deducted for donated cash? Or is it either/or?
Interested in calculating the highest tax savings (assuming ownership of appreciated assets with unrealized capital gains). As mentioned elsewhere, it's worth researching that point and bunching donations towards it.
Brad West🔸 @ 2024-12-20T21:48 (+7) in response to Non-Profit Casino
Another idea would just be a normal casino that was owned by a charitable foundation or trust - a "Profit for Good" casino. People could get the exact same value proposition they get from other normal casinos, but by patronizing the Profit for Good Casino, they (in expectation) would be helping save lives or otherwise better the world.
You could have a great night in which you win hundreds or thousands of dollars, and even if you lose, you know that your losses are helping to dramatically better the world.
Igor Scaldini @ 2024-12-20T21:46 (+10) in response to The virtues of virtue signalling
What a wonderful piece! I've always wondered why some people choose not to share their donations. Being perceived as a "bragger" in exchange for potentially influencing people around you to donate has always sounded like a good trade-off to me. Your points clarified a bunch of things here. Thank you!
Oscar Sykes @ 2024-12-20T21:11 (+13) in response to Update on EA Global costs and 2024 CEA Events Team activities
Well done, it's super cool to see everything you guys have achieved this year. One thing I was surprised by is that EAGxs cost roughly a third as much as EAGs while having a slightly higher likelihood to recommend. I assume part of this is because EAGs are typically held in more expensive areas, but I'd be surprised if that explained all of it. Are there any other factors that explain the cost difference?
Vasco Grilo🔸 @ 2024-12-20T21:06 (+2) in response to I’m grateful for you
Thanks for the kind words, Sarah!
geoffrey @ 2024-12-20T19:55 (+12) in response to geoffrey's Quick takes
Personal reasons why I wish I'd delayed donations: I started donating 10% of my income about 6 years back when I was making Software Engineer money. Then I delayed my donations when I moved into a direct work path, intending to make up the difference later in life. I don't have any regrets about 'donating right away' back then. But if I could do it all over again with the benefit of hindsight, I would have delayed most of my earlier donations too.
First, I've been surprised by 'necessary expenses'. Most of my health care needs have been in therapy and dental care, neither of which is covered much by insurance. On top of that, visiting friends costs more over time as people scatter to different cities, meaning I'm paying a lot more for travel. And family obligations always manage to catch me off-guard.
Second, career transitions are expensive. I was counting on my programming skills and volunteer organizing to mean a lot more in public policy and research. But there are few substitutes for working inside your target field. And while everyone complains about Master's degrees, they're still heavily rewarded on the job market, so I ultimately caved in and paid for one.
Finally, I'm getting a lot more from 'money right away' these days. Thanks to some mental health improvements, fancy things are less stressful and more enjoyable than before. The extra vacation, concert, or restaurant is now worth it, and so my optimal spending level has increased. That's not just for enjoyment. My productivity also improves after that extra splurging, whereas before there wasn't much difference between the relaxation benefit I got from a concert and from a series of YouTube comedy skits.
If I had to find a lesson here, it's that I thought too much about my altruistic desires changing and not enough on everything else changing. I opted to 'donate right away' to protect against future me rebelling against effective charity, worrying about value drift and stories of lost motivation. In practice, my preference for giving 10% has been incredibly robust. My other preferences have been a lot more dynamic.
JoA @ 2024-12-20T19:49 (+1) in response to Determinism and EA
Short question: why do you say that one who adheres to determinism considers individuals to be genetic blank slates? (Disclaimer: I know very little about genetics.) It seems like if certain things will "inevitably" make us react in a certain way, there must be a genetic component to these rules.
Forumite @ 2024-12-20T18:47 (+9) in response to Update on EA Global costs and 2024 CEA Events Team activities
Whoop - great work! Anec-data: I've been going to these conferences for years now; to my mind the quality/usefulness of them has in no way diminished, even as you've been able to trim costs. Well done. They are sooo value-adding in terms of motivation, connections, inspiration, etc; you are providing a massive public good for the EA community. Thanks!
Fai @ 2024-12-20T09:08 (+10) in response to Ten big wins in 2024 for farmed animals
Thank you for writing this!
I want to point out that besides the informational value, I find it personally encouraging and heartwarming to read the part where you expressed your appreciation to donors and advocates in the space, and your vision. I think I might learn from you and try doing more of this in some of my writings. Thank you for doing that.
LewisBollard @ 2024-12-20T17:32 (+8)
Thanks Fai! Yes I'm trying to express more often the deep appreciation that I feel for the incredible donors and advocates in our space. I'm glad to hear you find it encouraging :)
Owen Cotton-Barratt @ 2024-12-20T11:10 (+4) in response to The ‘Dog vs Cat’ cluelessness dilemma (and whether it makes sense)
I appreciated a bunch of things about this comment. Sorry, I'll just reply (for now) to a couple of parts.
The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I'm not actually arguing that it's confused to intrinsically care about the non-hedonist stuff. Rather, it would be really great to have an account of how the non-hedonist stuff is or isn't helpful on hedonist grounds. This is both because such an account may be a helpful input into our thinking to whatever extent we endorse hedonist goods (even if we may also care about other things), and because without one it's hard to assess how much of our caring for non-hedonist goods is grounded in the goods themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.
I think the piece I feel most inclined to double-click on is the digits of pi piece. Reading your reply, I realise I'm not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than "where do the numbers come from?"). Is it some analogue of betting odds? Or what?
And then, you said:
To some extent, maybe fighting the hypothetical is a general move I'm inclined to make? This gets at "what does your range of indeterminate credences represent?". I think if you could step me through how you'd be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.
(Not sure this is super important, but note that I don't need to compute a determinate credence here -- it may be enough have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)
Anthony DiGiovanni @ 2024-12-20T17:31 (+4)
No worries! Relatedly, I’m hoping to get out a post explaining (part of) the case for indeterminacy in the not-too-distant future, so to some extent I’ll punt to that for more details.
Cool, that makes sense. I’m all for debunking explanations in principle. Extremely briefly, here's why I think there’s something qualitative that determinate credences fail to capture: If evidence, trustworthy intuitions, and appealing norms like the principle of indifference or Occam's razor don’t uniquely pin down an answer to “how likely should I consider outcome X?”, then I think I shouldn’t pin down an answer. Instead I should suspend judgment, and say that there aren’t enough constraints to give an answer that isn’t arbitrary. (This runs deeper than “wait to learn / think more”! Because I find suspending judgment appropriate even in cases where my uncertainty is resilient. Contra Greg Lewis here.)
No, I see credences as representing the degree to which I anticipate some (hypothetical) experiences, or the weight I put on a hypothesis / how reasonable I find it. IMO the betting odds framing gets things backwards. Bets are decisions, which are made rational by whether the beliefs they’re justified by are rational. I’m not sure what would justify the betting odds otherwise.
Ah, I should have made clear, I wouldn’t say indeterminate credences are necessary in the pi case, as written. Because I think it’s plausible I should apply the principle of indifference here: I know nothing about digits of pi beyond the first 10, except that pi is irrational and I know irrational numbers’ digits are wacky. I have no particular reason to think one digit is more or less likely than another, so, since there’s a unique way of splitting my credence impartially across the possibilities, I end up with 50:50.[1]
Instead, here’s a really contrived variant of the pi case I had too much fun writing, analogous to a situation of complex cluelessness, where I’d think indeterminate credences are appropriate:
(I think forming beliefs about the long-term future is analogous in many ways to the above.)
Not sure how much that answers your question? Basically I ask myself what constraints the considerations ought to put on my degree of belief, and try not to needlessly get more precise than those constraints warrant.
I don’t think this is clearly the appropriate response. I think it’s kinda defensible to say, “This doesn’t seem like qualitatively the same kind of epistemic situation as guessing a coin flip. I have at least a rough mechanistic picture of how coin flips work physically, which seems symmetric in a way that warrants a determinate prediction of 50:50. But with digits of pi, there’s not so much a ‘symmetry’ as an absence of a determinate asymmetry.” But I don’t think you need to die on that hill to think indeterminacy is warranted in realistic cause prio situations.
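An aside on the indifference intuition in the pi case: the digits of pi are, as far as anyone can tell, statistically uniform, so a 50:50 credence on a binary question like "is some unseen digit even?" lines up with observed frequencies. A minimal sketch, assuming that binary reading of the question (the mpmath dependency and the 1,000-digit sample are illustrative choices, not anything specified in the thread):

```python
from collections import Counter
from mpmath import mp

mp.dps = 1005                                    # working precision: ~1000 usable digits
digits = str(+mp.pi).replace("3.", "")[:1000]    # fractional digits of pi as a string

tail = digits[10:]                               # digits beyond the first 10, per the hypothetical
freq = Counter(tail)
p_even = sum(freq[d] for d in "02468") / len(tail)
print(f"Empirical share of even digits in positions 11-1000: {p_even:.3f}")  # close to 0.5
```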
David T @ 2024-12-20T17:14 (+3) in response to Contrails: an underrated climate mitigation opportunity?
Thanks for the very interesting post.
I don't work in commercial aviation any more, but can offer a few pointers
So I think there's definitely something to be worked on here, but it's going to take industry experts more than grassroots campaigning. I think there are probably some really interesting algorithm development projects there for people with the right skillsets too...
(For anyone interested in space, an analogous situation is the aluminium oxide deposited in the mesosphere by deorbiting spacecraft. This used to be negligible. It isn't now that constellations of 10s of 1000s of satellites with short design lives in LEO are a thing. The climate impact is uncertain and not necessarily large but probably negative; the impact on ozone depletion could be much more concerning. Changing mindsets on that one will be harder)
which sounds seriously expensive to me....
MathiasKB🔸 @ 2024-12-20T17:01 (+4) in response to Policy advocacy for eradicating screwworm looks remarkably cost-effective
I haven't looked into this at all, but the effect of eradication efforts (whether through gene drive or the traditional sterile insect technique) is that screwworms stop reproducing and cease to exist, not that they die anguishing deaths.
Vasco Grilo🔸 @ 2024-12-20T17:11 (+2)
Thanks, Mathias. Just to clarify, my "decrease in welfare" was referring to screwworms with positive lives ceasing to exist, not to their deaths.
Vasco Grilo🔸 @ 2024-12-17T15:04 (+4) in response to Policy advocacy for eradicating screwworm looks remarkably cost-effective
Thanks for the post, Mathias! Do you know whether the increase in welfare of the infected wild animals would be larger than the decrease in welfare of the eradicated screwworms assuming these have positive lives?
MathiasKB🔸 @ 2024-12-20T17:01 (+4)
I haven't looked into this at all, but the effect of eradication efforts (whether through gene drive or the traditional sterile insect technique) is that screwworms stop reproducing and cease to exist, not that they die anguishing deaths.
sammyboiz @ 2024-12-16T21:10 (+15) in response to My Problem with Veganism and Deontology
I don't know man, virtue signaling to non-vegans and vegans that you care about animals can be done simply by telling people you donate 10% of your money to animal welfare. It doesn't take much more than that. Utilitarianism can be explained.
As for lowering cognitive dissonance, this is an extremely person-to-person thing. I would never prescribe veganism to an EA with this reasoning. And if this were a common reason, why haven't I also been told to get a pet/animal companion to increase how much moral worth I give animals?
And reducing daily suffering that you cause can also be done better with an extra 10 cents or so. Wouldn't this be more in accordance with your values? Surely 10 cents is also cheaper than veganism.
Sorry if I sound attacking.
NickLaing @ 2024-12-20T17:00 (+3)
I don't think that virtue signaling by telling people you donate 10 percent would work well with weak-to-non vegans. Most of my friends would consider me a hypocrite for doing that, and longer explanations wouldn't work for many.
Utilitarianism can be explained, but even after that explanation many would consider eating meat and offsetting hypocritical, even if it might be virtuous.
The point of the virtue signaling is the signaling, not the virtue, and the cleanest and easiest way to do that in many circles might be going vegan.
MatthewDahlhausen @ 2024-12-19T19:44 (+17) in response to My Problem with Veganism and Deontology
A useful test when moral theorizing about animals is to swap "animals" with "humans" and see if your answer changes substantially. In this example, if the answer changes, the relevant difference for you isn't about pure expected value consequentialism, it's about some salient difference between the rights or moral status of animals vs. humans. Vegans tend to give significant, even equivalent, moral status to some animals used for food. If you give near-equal moral status to animals, "offsetting meat eating by donating to animal welfare orgs" is similar to "donating to global health charities to offset hiring a hitman to target a group of humans". There are a series of rebuttals, counter-rebuttals, etc. to this line of reasoning. Not going to get into all of them. But suffice it to say that in the animal welfare space, an animal welfarist carnivore is hesitantly trusted - it signals either a lack of commitment or discipline, a diet/health struggle, a discordant belief that animals deserve far fewer rights and less moral status than humans, or (much rarer) a fanatic consequentialist ideology that thinks offsetting human killing is morally coherent and acceptable. An earnest carnivore that cares a lot about animal welfare is incredibly rare.
MarcusAbramovitch @ 2024-12-20T16:30 (+4)
This comment is extremely good. I wish I could incorporate some of it into my comment since it hits the cognitive dissonance aspect far better than I did. It's near impossible to give significant moral weight to animals and still think it is okay to eat them.
jessica_mccurdy🔸 @ 2024-12-20T16:11 (+2) in response to What apps or methods have helped you budget so you know how much you can give?
I have found Rocket Money to be quite helpful!
huw @ 2024-12-20T11:19 (+2) in response to Charlie_Guthmann's Quick takes
I am seeing here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI. So this does not seem like a neglected avenue.
Charlie_Guthmann @ 2024-12-20T16:03 (+1)
Yeah, I have no idea if they actually need money. But if they still want to hire more people for the AI team, wouldn't it be better to give the money to RAND to hire those policymakers than to, say, Americans for Responsible Innovation - which Open Phil currently recommends, but which is much less prestigious, and I'm not sure they work side by side with legislators. The fact that Open Phil gave grants but doesn't currently recommend RAND for individual donors makes me think you are right that they don't need money at the moment, but it would be nice to be sure.
Ben Millwood🔸 @ 2024-12-17T19:32 (+21) in response to The Most Valuable Dollars Aren’t Owned By Us
Similarly if you think animal charities are 10x global health charities in effectiveness, then you think these options are equally good:
To me, the first of these sounds way easier.
Milli🔸 @ 2024-12-20T15:44 (+1)
IIRC studies show it's easier to motivate people to give more than to shift existing donations.
Charles Dillon 🔸 @ 2024-12-17T14:15 (+29) in response to The Most Valuable Dollars Aren’t Owned By Us
This seems likely to be incorrect to me, at least sometimes. In particular I disagree with the suggestion that the improvement on the margin is likely to be only on the order of 5%.
Let's take someone who moves from donating to global health causes to donating to help animals. It's very plausible that they may think the difference in effectiveness there is by a factor of 10, or even more.
They may also think that non-EA dollars are more easily persuaded to donate to global health initiatives than animal welfare ones. In this case, if a non-EA dollar is 80% likely to go to global health, and 20% to animal welfare, then by their own lights the change in use of their dollar was more than 3x as important as the introduction of the extra non-EA dollar.
Milli🔸 @ 2024-12-20T15:43 (+3)
How sure are you that you're right and the other EA (who has also likely thought carefully about their donations) is wrong, though?
I'm much more confident that I will increase the impact of someone's donations/spending if they are not in EA; with another EA, I risk being too convinced of my own opinion and causing harm (through negative side effects, opportunity costs, or lowering the value of their donation).
PabloAMC 🔸 @ 2024-12-19T13:10 (+2) in response to Ask Giving What We Can anything, all week
Thanks Luke! It makes sense what you mention. It is true that it would become significantly more messy to track, even when the spirit of the 10% pledge would suggest accounting for it. Just a random idea: perhaps you could offer the option of “pausing” the pledge temporarily so it does not become a blocker for people aiming to do direct work that they deem to be particularly impactful.
PabloAMC 🔸 @ 2024-12-20T15:01 (+2)
Edit: upon reflection I think this idea may not be that useful. Since the 10% pledge is for the entire career, not each year, that flexibility is already incorporated. And a pause could produce some attrition.
TheAthenians @ 2024-12-20T13:57 (+7) in response to My experience with the Community Health team at CEA
It's important to note that few people will share their negative experiences with the Community Health Team because the CHT blacklists people from funding, EAG attendance, job opportunities, etc.
Also, if they cause people to leave the community, you're unlikely to hear about it because they've left the community.
This leads to a large information asymmetry.
I know many people whose lives and impact have been deeply damaged by the CHT, but they won't share their experiences because they are afraid of retaliation or have given up on the EA community because of them.
frances_lorenz @ 2024-12-20T15:01 (+7)
I'm definitely sympathetic to this point, yep. I think it would be very difficult to write a post of this nature if you felt that your participation in EA was being wrongly affected by CH.
At the same time, I think both the negative and positive experiences are difficult to talk about, due to their sensitive nature. I felt comfortable writing this because the incident is now four years old and I'm lucky to be in an incredibly supportive environment; many who have had positive experiences will not want to write about them. Thus, I am not confident there is a "large information asymmetry" in either direction; there are deterrents to information sharing on both sides.
I think the unfortunate reality is: Community Health is not infallible; I would be very keen to hear about mistakes they've made or genuine concerns, as would the team, I'm certain. I'm also acutely aware that a lot of people who exhibit poor behaviour, and are then prevented from taking certain actions within the community, will claim to have been slighted. People who cross clear boundaries and then face consequences do not often go, "this seems right and fair to me, thank you for taking these measures against me to protect others." This is certainly not to say, "no one who says they've been blacklisted or slighted can be correct." This is to say that I am not sure how to update on claims that CH has damaged people's lives without more information.
TheAthenians @ 2024-12-20T13:57 (+7) in response to My experience with the Community Health team at CEA
It's important to note that few people will share their negative experiences with the Community Health Team because the CHT blacklists people from funding, EAG attendance, job opportunities, etc.
Also, if they cause people to leave the community, you're unlikely to hear about it because they've left the community.
This leads to a large information asymmetry.
I know many people whose lives and impact have been deeply damaged by the CHT, but they won't share their experiences because they are afraid of retaliation or have given up on the EA community because of them.
JWS 🔸 @ 2024-12-20T13:26 (+51) in response to JWS's Quick takes
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏
Honourable Mentions:
Forum Posters of the Year:
Non-Forum Poasters of the Year:
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors were this year!
I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy
e.g. here and here
Toby Tremlett🔹 @ 2024-12-20T13:48 (+4)
I wish it could be EV-mas every day...
This is great JWS, thanks for writing it! After Forum Wrapped is out in Jan, we should have a list of underrated posts (unsure on exact wording); we'll see how it compares.
JWS 🔸 @ 2024-12-20T13:26 (+51) in response to JWS's Quick takes
Ho-ho-ho, Merry-EV-mas everyone. It is once more the season of festive cheer and especially effective charitable donations, which also means that it's time for the long-awaited-by-nobody return of the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-Forum-Awards 🏆✨🎄, updated for 2024! Spreading Forum cheer and good vibes instead of nitpicky criticism!!
Best Forum Post I read this year:
Explaining the discrepancies in cost effectiveness ratings: A replication and breakdown of RP's animal welfare cost effectiveness calculations by @titotal
It was a tough choice this year, but I think this deep, deep dive into the different cost effectiveness calculations that were being used to anchor discussion in the GH v AW Debate Week was thorough, well-presented, and timely. Anyone could have done this instead of just taking the Saulius/Rethink estimates at face value, but titotal actually put in the effort. It was the culmination of a lot of work across multiple threads and comments, especially this one, and the full google doc they worked through is here.
This was, I think, an excellent example of good epistemic practices on the EA Forum. It was a replication which involved people on the original post, drilling down into models to find the differences, and also surfacing where the disagreements are based on moral beliefs rather than empirical data. Really fantastic work. 👏
Honourable Mentions:
Forum Posters of the Year:
Non-Forum Poasters of the Year:
Congratulations to all of the winners! I also know that there were many people who made excellent posts and contributions that I couldn't shout out, but I want you to know that I appreciate all of you for sharing things on the Forum or elsewhere.
My final ask is, once again, for you all to share your appreciation for others on the Forum this year and tell me what your best posts/comments/contributors were this year!
I think that the fractured and mixed response to the latest Apollo reports (both for OpenAI and Anthropic) is partially downstream of this loss of trust and legitimacy
e.g. here and here
Benevolent_Rain @ 2024-12-20T12:03 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
I think we should stop a bio catastrophe from happening, not prepare for doom.
Benevolent_Rain @ 2024-12-20T13:22 (+2)
Highly agree with this! In fact, I hope that if a significant number of shelters is produced, the primary effect would be to help make the case for stopping development of dangerous mirror bio research. It just happens to be that my expertise and experience lend themselves more naturally to this rather grim work. I would be very happy to work on something more uplifting next - I am very open to suggestions for the next problem I can help tackle (having been a small part of bringing down the cost of wind energy dramatically).
Benevolent_Rain @ 2024-12-20T12:16 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
This post could potentially be bad PR for EA (e.g. "altruists are preparing for doom")
Benevolent_Rain @ 2024-12-20T13:19 (+2)
I should probably emphasize more that the ideal outcome here is of course that we don't pursue dangerous mirror bio research at all. Failing that, the "next-in-line" ideal outcome would be for gov'ts to create such shelters and distribute them more like Nordic countries have distributed nuclear shelters - not just for "the elites".
Benevolent_Rain @ 2024-12-20T12:02 (+10) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
The cost effectiveness claim is misleading or worse.
Benevolent_Rain @ 2024-12-20T13:16 (+2)
This is super helpful; I have tried to reflect this better in an updated title. I am fairly certain the shelters can be built for this material cost (not including labor, as in a pinch I think these could be made by a wide range of people, perhaps even by the inhabitants themselves). But it is right that cost effectiveness is much harder than simply summing up material costs - one would have to cost the total solution and also have some grasp of the reduction in x-risk, which is far beyond the scope of what I have done. I simply found a physical structure that seems quite robust.
Sujan Roy @ 2024-12-20T10:18 (+3) in response to My Problem with Veganism and Deontology
I believe the consequences of eating vegan are more plausibly characterized as falling under the domain of procreation ethics, rather than that of the ethics of killing. When you eat meat, the only difference you can reasonably expect to make is affecting how many farmed animals are born in the near future, since the fate of the ones that already exist in the farms is sealed (i.e. they'll be killed no matter what) and can't be affected by our dietary choices.
So I think, rather than factory farm offsets being similar to murdering someone and then saving others, they're akin to causing the birth of someone in miserable conditions (who later dies prematurely), and then 'offsetting' that harm by preventing the suffering of hundreds of other human beings.
I submit that offsetting still feels morally questionable in this scenario, but at least my intuitions are less clear here.
Tejas Subramaniam @ 2024-12-20T13:10 (+1)
I didn’t say they fell under the ethics of killing, I was using killing as an example of a generic rights violation under a plausible patient-centered deontological theory to illustrate the difference between “a rights violation happening to one person and help coming for a separate person as an offset” and “one’s harm being directly offset.”
(I agree that it seems a bit more unclear if potential people can have rights, even if they can have moral consideration, and in particular rights to not be brought into existence, but I think it’s very plausible.)
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:16 (+2)
This post could potentially be bad PR for EA (e.g. "altruists are preparing for doom")
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:16 (+2)
Something else about downsides of this intervention
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:16 (+2)
Something else about the account this is posted from
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:15 (+2)
Something else about how this is presented on the EAF
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:15 (+2)
Something else technical (including cost effectiveness)
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:07 (+2)
The author does not have sufficient background in the required fields to make assertions about environmental concentrations etc.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:05 (+2)
The post seems to be trying to sell readers these shelters.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:05 (+2)
I have reservations about only rich people being able to afford these shelters while the rest of us would be left exposed.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:03 (+2)
There are large downsides from this intervention - it could be seen by another nation state as preparation for biowarfare and thus contribute to a bioweapons arms race.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:03 (+2)
I think we should stop a bio catastrophe from happening, not prepare for doom.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:02 (+10)
The cost effectiveness claim is misleading or worse.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:02 (+2)
The lack of evidence for positive pressure makes this intervention premature.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:01 (+2)
There are problems with using serial filters.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T12:01 (+2)
The account posting this has previously caused damage to the EA community by the way it has engaged with the topic of DEI.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
Benevolent_Rain @ 2024-12-20T11:59 (+2)
I have reservations about the account this is being posted from being anonymous.
Benevolent_Rain @ 2024-12-20T11:59 (+2) in response to A mirror bio shelter might cost as little as ~$10,000/person (material cost only)
As there are downvotes but without any comments, below is a thread I really encourage people to agree vote on in order to help assess this intervention's effectiveness (please only vote for what you think - not what you think other people are downvoting because of):
conradical @ 2024-12-20T11:56 (+2) in response to Children in Low-Income Countries
The answer to this is probably that in most cases, the welfare realized by that counterfactual life and/or their children might not be huge, but is on average still positive. This can be exemplified by the commonly used life satisfaction scale, which ranges from 0 to 10 and aims to capture all possible wellbeing states. It follows that the extreme lowest scores represent states worse than not existing at all. There have been several attempts to determine this "neutral point", above which life is better lived than not.
A survey conducted in Ghana and Kenya estimated the neutral point at 0.56 (IDinsight, 2019). Results from Brazil, China and the US suggest a neutral point of 25 on a 100-point scale, i.e. 2.5 on a 10-point scale (referenced by HLI, 2022). Another study in the UK suggested 2/10 (referenced in Krekel & Frijters, 2021). While people disagree about where exactly to locate this neutral point, there is some agreement that it could be between 1.5 and 2.
When looking at country-level average happiness levels (happiness is closely related to life satisfaction), the only country falling below 2 is Afghanistan. So while there might be a case for that argument in Afghanistan, in most other countries there will be no problem in expectation. That said, there is of course variance in life satisfaction within countries, so there is still the possibility of edge cases where the intervention benefits a person whose life satisfaction is far below the country average and falls below the neutral point. Some within-region life satisfaction averages are provided here.
Gemma 🔸 @ 2024-12-15T13:56 (+19) in response to titotal's Quick takes
I read this more like the guy was lonely and wanted community so was looking for some kind of secular religion to provide grounding to his life.
I personally think people overrate people's stated reasons for extreme behaviour and underrate the material circumstances of their life. In particular, loneliness: https://time.com/6223229/loneliness-vulnerable-extremist-views/
(would genuinely be interested to hear counter arguments to this! I'm not a researcher so honestly no idea how to go about testing that hypothesis)
quila @ 2024-12-20T11:50 (+1)
As one counterexample, EA is really rare in humans, but does seem more fueled by principles than situations.
(Otoh, if situations make one more susceptible to adopting some principles, is any really the "true cause"? Like plausibly me being abused as a child made me want to reduce suffering more, like this post describes. But it doesn't seem coherent to say that means the principles are overstated as an explanation for my behavior.
I dunno why loneliness would be different; first thought is that loneliness means one has less of a community to appeal to, so there's less conformity biases preventing such a person from developing divergent or (relatively) extreme views; the fact that they can find some community around said views and have conformity pressures towards them is also a factor of course; and that actually would be an 'unprincipled' reason to adopt a view so i guess for that case it does make sense to say, "it's more situation(-activated biases) than genuine (less-biasedly arrived at) principles".
An implication in my view is that this isn't particularly about extreme behavior; less biased behavior is just rare across the spectrum. (Also, if we narrow in on people who are trying to be less biased, their behavior might be extreme; e.g., Rationalists trying to prevent existential risk from AI seems deeply weird from the outside))
Charlie_Guthmann @ 2024-12-19T23:56 (+16) in response to Charlie_Guthmann's Quick takes
Haven't seen anyone mention RAND as a possible best charity for AI stuff and I guess I'd like to throw their hat in the ring or at least invite people to tell me why I'm wrong. My core claims are approximately:
I'll add I have no idea if they need/have asked for marginal funding.
huw @ 2024-12-20T11:19 (+2)
I am seeing here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI. So this does not seem like a neglected avenue.
Anthony DiGiovanni @ 2024-12-20T04:22 (+2) in response to The ‘Dog vs Cat’ cluelessness dilemma (and whether it makes sense)
Thanks for explaining!
Indeed. :) If “where do these numbers come from?” is your objection, this is a problem for determinate credences too. We could get into the positive motivations for having indeterminate credences, if you’d like, but I’m confused as to why your questions are an indictment of indeterminacy in particular.
Some less pithy answers to your question:
I’m confused about this “tool” framing, because it seems that in order to evaluate some numerical representation of your epistemic state as “helpful,” you still need to make reference to your beliefs per se. There’s no belief-independent stance from which you can evaluate beliefs as useful (see this post).[2]
The epistemic question here is whether your beliefs per se should have the structure of (in)determinacy, e.g., do you think you should always be able to say “intervention XYZ is net-good, net-bad, or net-neutral for the long-term future”. That’s what I’m talking about when talking about “rational obligation” to have (in)determinate credences in some situation. It's independent of the kind of mere practical limitations on the precision of numbers in our heads you’re talking about.
Analogy: Your view here is like that of a hedonist saying, "Oh yeah, if I tried always directly maximizing my own pleasure, I'd feel worse. So pursuing non-pleasure things is sometimes helpful for bounded agents, by a hedonist axiology. But sometimes it actually is better to just maximize pleasure." Whereas I'm the non-hedonist saying, "Okay but I'm endorsing the non-pleasure stuff as intrinsically valuable, and I'm not sure you've explained why intrinsically valuing non-pleasure stuff is confused." (The hedonism thing is just illustrative, to be clear. I don't think epistemology is totally analogous to axiology.)
The VNM theorem only tells you you’re representable as a precise EV maximizer if your preferences satisfy completeness. But completeness is exactly what defenders of indeterminate beliefs call into question. Rationality doesn’t seem to demand completeness — you can avoid money pumps / Dutch books with incomplete preferences.
I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
I’d like to understand why, then. As I said, if indeterminate beliefs are on the table, it seems like the straightforward response to unknown unknowns is to say, “By nature, my access to these considerations is murky, so why should I think this particular determinate ‘simplicity prior’ is privileged as a good model?”
(plus another condition that doesn’t seem controversial)
Technically, there are Dutch book and money pump arguments, but those put very few constraints on beliefs, as argued in the linked post.
Owen Cotton-Barratt @ 2024-12-20T11:10 (+4)
I appreciated a bunch of things about this comment. Sorry, I'll just reply (for now) to a couple of parts.
The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I'm not actually arguing that it's confused to intrinsically care about the non-hedonist stuff. Rather, it would be really great to have an account of how the non-hedonist stuff is or isn't helpful on hedonist grounds. This is both because such an account may be a helpful input into our thinking to whatever extent we endorse hedonist goods (even if we may also care about other things), and because without one it's hard to assess how much of our caring for non-hedonist goods is grounded in the goods themselves, vs in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.
I think the piece I feel most inclined to double-click on is the digits of pi piece. Reading your reply, I realise I'm not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than "where do the numbers come from?"). Are they some analogue of betting odds? Or what?
And then, you said:
To some extent, maybe fighting the hypothetical is a general move I'm inclined to make? This gets at "what does your range of indeterminate credences represent?". I think if you could step me through how you'd be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.
(Not sure this is super important, but note that I don't need to compute a determinate credence here; it may be enough to have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)
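A minimal sketch of that parenthetical, with made-up numbers (the payoffs and the interval are my own assumptions, purely illustrative):

```python
# Made-up numbers: an intervention yields +10 utility if hypothesis H is true
# and -1 if H is false; my credence in H is the interval [0.2, 0.5].

def expected_value(p, win=10.0, lose=-1.0):
    return p * win + (1 - p) * lose

lo, hi = 0.2, 0.5

# EV is linear in p, so the endpoints bound it over the whole interval.
print(expected_value(lo), expected_value(hi))  # ~1.2 and ~4.5

# Every admissible credence gives the EV the same sign, so the decision is
# settled without ever committing to one determinate credence.
print(expected_value(lo) > 0 and expected_value(hi) > 0)  # True
```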
Charlie_Guthmann @ 2024-12-19T23:56 (+16) in response to Charlie_Guthmann's Quick takes
Haven't seen anyone mention RAND as a possible best charity for AI stuff, and I guess I'd like to throw their hat in the ring, or at least invite people to tell me why I'm wrong. My core claims are approximately:
I'll add I have no idea if they need/have asked for marginal funding.
Nick K. @ 2024-12-20T11:09 (+1)
What have they done or are planning to do that seems worth supporting?
yanni kyriacos @ 2024-12-20T10:35 (+6) in response to My Problem with Veganism and Deontology
Interesting post! Would you keep a human slave if there were an effective anti-slavery charity? Or is this speciesism?
James Herbert @ 2024-12-20T10:27 (+2) in response to Running a Project-Based Intro Fellowship
Thanks for sharing your experience! We at EA Netherlands have been thinking about final events lately. Could you share more specific details about what yours looked like? Agenda, vibes, etc.
Uni Groups Team @ 2024-12-16T16:09 (+26) in response to Uni Groups Team's Quick takes
We just wanted to transparently share that CEA’s University Groups Team is not running two of our historical programs over the next few months: UGOR and our summer internship.
We think both programs are relatively valuable, but are less aligned with our current vision (of providing value through helping EA university group organizers run better groups) than some of our alternatives.
We have made this (difficult!) decision so that we can instead focus on:
This decision does not rule out running UGOR or our internship in the future. In fact, we are exploring whether we should run UGOR over the (northern hemisphere) summer break, allowing more groups to better prepare for their academic year. We ran such a retreat as part of our pilot university programming this summer, and it worked well.
We aim to continue to transparently share updates such as this one! We are also always open to feedback (including anonymously), especially if you have specific suggestions on what things we should deprioritize to create space for UGOR or the summer internship.
James Herbert @ 2024-12-20T10:23 (+2)
Thank you for sharing this update! I’m interested in learning more about how you arrived at this decision, as we at EA Netherlands often encounter similar choices. Your insights could be really valuable for us.
Would you mind sharing a bit about your reasoning process?
Thanks again for keeping us informed!
Tejas Subramaniam @ 2024-12-17T11:20 (+3) in response to My Problem with Veganism and Deontology
I think offsetting emissions and offsetting meat consumption are comparable under utilitarianism, but much less comparable under most deontological moral theories, if you think animals have rights. For instance, if you killed someone and donated $5,000 to the Malaria Consortium, that seems worse, from a deontological perspective, than if you had just done nothing at all, because the person you kill and the person you save are different people, and many deontological theories are built on the "separateness of persons." In contrast, if you offset your CO2 emissions, you offset your effect on warming, so you don't kill anyone to begin with (it's not as though your CO2 emissions cause warming that hurts agent A, and then your offset reduces temperatures to benefit agent B). Offsetting your contribution to air pollution might be similarly problematic, though, because the effects of air pollution occur near where the pollution actually happened.
Sujan Roy @ 2024-12-20T10:18 (+3)
I believe the consequences of eating vegan are more plausibly characterized as falling under the domain of procreation ethics, rather than that of the ethics of killing. When you eat meat, the only difference you can reasonably expect to make is affecting how many farmed animals are born in the near future, since the fate of the ones that already exist in the farms is sealed (i.e. they'll be killed no matter what) and can't be affected by our dietary choices.
So I think, rather than factory farm offsets being similar to murdering someone and then saving others, they're akin to causing someone to be born into miserable conditions (and to die prematurely), and then 'offsetting' that harm by preventing the suffering of hundreds of other human beings.
I submit that offsetting still feels morally questionable in this scenario, but at least my intuitions are less clear here.
SofiaBalderson @ 2024-12-20T10:16 (+2) in response to Our first half year at Farmed Animal Protection Hungary - wins, fails and lessons learned
Thanks a lot for sharing your progress and what you've learned. Very inspiring to read about your updates, team, and congrats on finding volunteers!
BrianTan @ 2024-12-20T09:12 (+12) in response to Announcing the Q1 2025 Long-Term Future Fund grant round
Thanks for the update! I appreciate that decisions will be communicated within 1.5 months of the application deadline.
Btw, this URL (https://funds.effectivealtruism.org/funds/far-future/apply) you link to leads to "Page not found".
calebp @ 2024-12-20T09:45 (+4)
Thanks. Should now be fixed!
Rasool @ 2024-12-20T09:35 (+1) in response to 3 Days to Tell the UK to Focus on Lead
I got an email from the IDC yesterday saying that they received "over 130 submissions", which is far fewer than I expected.
People who made a submission based on this post are a meaningful portion of all those who engaged with this process!