it seems boggling at first glance that this would work, but in summary, it would work like this:
Sometimes, in an argument, one or more sides doesnât care about reaching the RIGHT conclusion, they just care about it reaching a conclusion they approve of. This is often the difficulty with arguments.
However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
This project would basically bring world leaders to the table, where they would look for the RIGHT conclusion to major problems, which should lead to the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
There is sort of precedent for this: science used to be much more argumentative, and now, most of science is done in very intelligent ways, aimed at getting to the RIGHT answer, and not âtheir answerâ. This led to many, if not most or all, scientific problems being solved*.
In addition, if you aim to be a powerful scientist, fighting for âyour answerâ makes it much harder than it is if you were fighting for the RIGHT answer.
Similarly, if this project worked well, it would be much harder to gain power if you fought for âyour valuesâ than if you fought for the RIGHT values!
I think it's a great idea to do a fundraising campaign as part of your university groups! Fundraisers can be a great way to raise awareness as well as money!
I think that fundraising for a cause tied to a run/walk or some type of other event that's happening near you could be a great way to gather momentum!
I would generally favour charities that 1) you think are highly effective 2) have a clear story that you can explain to potential donors about how it works and why they're worth supporting. I think GiveWell's top charities are great examples. Climate change charities or animal welfare charities could also resonate with people at universities!
it seems boggling at first glance that this would work, but in summary, it would work like this:
Sometimes, in an argument, one or more sides doesnât care about reaching the RIGHT conclusion, they just care about it reaching a conclusion they approve of. This is often the difficulty with arguments.
However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
This project would basically bring world leaders to the table, where they would look for the RIGHT conclusion to major problems, which should lead to the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
Nice! This is a different question, but I'd be curious if you have any thoughts on how to evaluate risks from BDTs. There's a new NIST RFI on bio/chem models asking about this, and while I've seen someanswers to the question, most of them say they have a ton of uncertainty and no great solutions. Maybe reliable evaluations aren't possible today, but what would we need to build them?
Thanks for sharing. Do you think children born from unwanted pregnancies have positive lives? If so, would the family planning intervention still be beneficial accounting for the welfare loss of the children who would have been born from the prevented unwanted pregnancies? This seems like a crucial consideration.
I remember the Collinsâ being emphatically pro abortion and contraception to increase the cultural prestige and frequency of having children - so the poster couple of population=good seems to think contraception and abortion access does not reduce the population, all things considered. Iâm not sure if the lives of unwanted children are worth starting, but I should flag that Iâm generally pessimistic about which lives are worth starting.
Edit: Iâm not familiar with the culture of Nigeria. My intuitions about this developed in a western context and maybe there are relevant differences in Nigeria.
I think this is the closest that I currently have (in-general, "sharing all of my OP related critiques" would easily be a 2-3 book sized project, so I don't think it's feasible, but I try to share what I think whenever it seems particularly pertinent):
I also have some old memos I wrote for the 2023 Coordination Forum I would still be happy to share with people if they DM me that I referenced a few times in past discussions.
Yeah I've seen that. I think costly-signalling is very real, and the effort to create something formal, polished and thoughtful would go a long way. But obviously i have no idea what else you've got on your plate so YMMV
Congrats! One way I've been thinking about this recently -- if we expect most people will permanently die now (usually without desiring to do so), but at some point in the future, humanity will "cure death," then interventions to allow people to join the cohort of people who don't have to involuntarily die could be remarkably effective from a QALY perspective. As I've argued before, I think that key questions for this analysis are how many QALYs individuals can experience, whether humans are simply replaceable, and what is the probability that brain preservation will help people get there. Another consideration is that if it could be performed cheaply enough -- perhaps with robotic automation of the procedure -- it could also be used for non-human animals, with a similar justification.
Yeah, as I see it, the motivations to pursue this differ in strength dramatically depending on whether one's flavour of utilitarianism is more inclined to a person-affecting view or a total hedonic view.
If you're inclined towards the person-affecting view, then preserving people for revival is a no-brainer (pun intended, sorry, I'm a terrible person).
If you hold more of a total hedonic view, then you're more likely to be indifferent to whether one person is replaced for any other. In that case, abolishing death only has value in so far as it reduces the suffering or increases the joy of people who'd prefer to hold onto their existing loved ones rather than have them changed out for new people over time. From this perspective, it'd be equally efficacious to just ensure no-one cared about dying or attachments to particular people, and a world in which everyone was replaced with new people of slightly higher utility would be a net improvement to the universe.
Back in the real world though, outside of philosophical thought experiments, I suspect most people aren't indifferent to whether they or their loved ones die and are replaced, so for humans at least I think the argument for preservation is strong. That may well hold for great ape cousins too, but it's perhaps a weaker argument when considering something like fish?
Congrats! One way I've been thinking about this recently -- if we expect most people will permanently die now (usually without desiring to do so), but at some point in the future, humanity will "cure death," then interventions to allow people to join the cohort of people who don't have to involuntarily die could be remarkably effective from a QALY perspective. As I've argued before, I think that key questions for this analysis are how many QALYs individuals can experience, whether humans are simply replaceable, and what is the probability that brain preservation will help people get there. Another consideration is that if it could be performed cheaply enough -- perhaps with robotic automation of the procedure -- it could also be used for non-human animals, with a similar justification.
I think focussing on pledges of future income (if you are targeting students) seems great, most students don't have much money and are also used to living on a much lower amount than they will in a few years after graduating (particularly people in engineering, CS, and math).
I completely agree that focusing on pledges for students over direct fundraising is a good idea! In our latest internal impact evaluation (2022) at GWWC we found that each new 10% Pledge results in roughly $100,000 USD donated to high impact funding opportunities over the lifetime of the pledge (and we conservatively estimate that ~1/5 of that is counterfactual). Because of this, in my view focusing on promoting pledges is the more impactful path as one single 10% Pledge would raise more in the longrun as the most successful student fundraising campaign imaginable. It also has the added benefit of making a clear case for effective and significant giving which I think helps to promote positive values in the world and demonstrates the kind of principles that we care about in the EA community.
OTOH I think that often people feel like students might not feel able to make such a big commitment. However, I think that this is a little overcautious. I took the 10% Pledge as a student and found giving incredibly manageable. The 10% Pledge encouraged students to aim for about 1% of their spending money, which for me amounted to roughly ÂŁ100 a yearâless than the cost of a couple of pints each month. It was easy and, honestly, it felt really rewarding. Getting into the habit of giving early on has been very helpful as well. It became a core part of my identity, something I felt really proud of. Once I started working full-time, giving 10% of my income was easy. I simply was able to set it aside each month and hardly noticed it was gone. Since I had never been accustomed to that extra 10%, I've never felt like I was sacrificing anything.
Hey! Glad you want to bring more effective giving into your uni group. I myself took the 10% Pledge as student and still think it was amongst the best decisions I've ever made :)
I now work at Giving What We Can and we've developed a guide for how we can support / collaborate with EA groups to further our shared mission of spreading the ideas of effective giving, and effective altruism more broadly. I've DM'd you a link
Hey, Iâm Joris and I currently run CEAâs University Groups Team. I just wanted to share some more personal thoughts on this topic. My thoughts do not represent CEAâs official position and are also a bit messy, but I wanted to share them to be transparent and maybe provide some insight into how I am thinking about things.
First, I just wanted to flag that the team and I are aware of themanydiscussions on the topic of âtopâ or âfocusâ universities that have happened on the Forum. Many of our internal conversations touch upon many of the arguments raised there
I think itâs important to keep in mind that most talented people that weâd want to contribute to the EA mission are just not at these few institutions
Zooming out, I believe that most talented, altruistic and driven people in the world might just never even get a chance to act on those motivations due to large systemic issues like poverty, lack of education, or just a sheer dearth of opportunities to do good on a large scale more generally.
I think a lot of EA initiatives try to address these problems, but I donât think itâs where our team should try to make a dent in the world (at least not directly).
On a more actionable note, however, it does feel like there is a subsection of the worldâs population that we can reach with EA ideas, and we should make sure to prioritize well between all of those possible audiences. I think itâs now the right call to spend some of our marginal efforts on improving EA groups at these âtopâ universities:
Now that our scalable support feels firmly in place and reaches hundreds of organizers around the world, I feel better about focusing more of our efforts on the pilot universities. This, together with our focus on âwhat things from our pilot program can we scale to other groups?â, makes things feel different from past work with âtopâ universities
For context: the University Groups Team currently consists of Alex, Jemima, Joris, ~0.5 of Sam (heâs still studying), and our assistants Anto and Igna. Iâd estimate that from May till now, about 20% of our combined time has been spent on mentorship and other support for the pilot uni program, while at least 30% of our time is spent on broad support programming
Personally, Iâm excited to be focusing more of my time on our scalable support once we have onboarded someone for the pilot university role!
In general, I often notice myself thinking that I feel sad to live in a world where a small fraction of the population has such outsized opportunities to shape the world. But people in certain positions do have outsized resources to impact the world, and if we get a chance to inspire these people with EA ideas and motivate them to act on them, we should. Iâm excited to find someone who can work with us on making the most of this large opportunity for impact, and hope people apply for the role!
On a basic level, I agree that we should take artificial sentience extremely seriously, and think carefully about the right type of laws to put in place to ensure that artificial life is able to happily flourish, rather than suffer. This includes enacting appropriate legal protections to ensure that sentient AIs are treated in ways that promote well-being rather than suffering. Relying solely on voluntary codes of conduct to govern the treatment of potentially sentient AIs seems deeply inadequate, much like it would be for protecting children against abuse. Instead, I believe that establishing clear, enforceable laws is essential for ethically managing artificial sentience.
That said, I'm skeptical that a moratorium is the best policy.
From a classical utilitarian perspective, the imposition of a lengthy moratorium on the development of sentient AI seems like it would help to foster a more conservative global cultureâone that is averse towards not only creating sentient AI, but also potentially towards other forms of life-expanding ventures, such as space colonization. Classical utilitarianism is typically seen as aiming to maximize the number of conscious beings in existence, advocating for actions that enable the flourishing and expansion of life, happiness, and fulfillment on as broad a scale as possible. However, implementing and sustaining a lengthy ban on AI would likely require substantial cultural and institutional shifts away from these permissive, exploratory values.
To enforce a moratorium of this nature, societies would likely adopt a framework centered around caution, restriction, and a deep-seated aversion to riskâvalues that would contrast sharply with those that encourage creating sentient life and proliferating this life on as large of a scale as possible. Maintaining a strict stance on AI development might lead governments, educational institutions, and media to promote narratives emphasizing the potential dangers of sentience and AI experimentation, instilling an atmosphere of risk-aversion rather than curiosity, openness, and progress. Over time, these narratives could lead to a culture less inclined to support or value efforts to expand sentient life.
Even if the ban is at some point lifted, there's no guarantee that the conservative attitudes generated under the ban would entirely disappear, or that all relevant restrictions on artificial life would completely go away. Instead, it seems more likely that many of these risk-averse attitudes would remain even after the ban is formally lifted, given the initially long duration of the ban, and the type of culture the ban would inculcate.
In my view, this type of cultural conservatism seems likely to, in the long run, undermine the core aims of classical utilitarianism. A shift toward a society that is fearful or resistant to creating new forms of life may restrict humanityâs potential to realize a future that is not only technologically advanced but also rich in conscious, joyful beings. If we accept the idea of 'value lock-in'âthe notion that the values and institutions we establish now may set a trajectory that lasts for billions of yearsâthen cultivating a culture that emphasizes restriction and caution may have long-term effects that are difficult to reverse. Such a locked-in value system could close off paths to outcomes that are aligned with maximizing the proliferation of happy, meaningful lives.
Thus, if a moratorium on sentient AI were to shape society's cultural values in a way that leans toward caution and restriction, I think the enduring impact would likely contradict classical utilitarianism's ultimate goal: the maximal promotion and flourishing of sentient life. Rather than advancing a world with greater life, joy, and meaningful experiences, these shifts might result in a more closed-off, limited society, actively impeding efforts to create a future rich with diverse and conscious life forms.
(Note that I have talked here mainly about these concerns from a classical utilitarian point of view. However, I concede that a negative utilitarian or antinatalist would find it much easier to rationally justify a long moratorium on AI.
It is also important to note that my conclusion holds even if one does not accept the idea of a 'value lock-in'. In that case, longtermists should likely focus on the near-term impacts of their decisions, as the long-term impacts of their actions may be impossible to predict. And I'd argue that a moratorium would likely have a variety of harmful near-term effects.)
Thanks for sharing. Do you think children born from unwanted pregnancies have positive lives? If so, would the family planning intervention still be beneficial accounting for the welfare loss of the children who would have been born from the prevented unwanted pregnancies? This seems like a crucial consideration.
Hello Habryka! I occasionally see you post something OP critical and am now wondering âis there a single post where Habryka shares all of his OP related critiques in one spot?â
If that does exist I think it could be very valuable to do.
I think this is the closest that I currently have (in-general, "sharing all of my OP related critiques" would easily be a 2-3 book sized project, so I don't think it's feasible, but I try to share what I think whenever it seems particularly pertinent):
I also have some old memos I wrote for the 2023 Coordination Forum I would still be happy to share with people if they DM me that I referenced a few times in past discussions.
Many animal advocates frame the goal of the movement as "ending factory farming".
I see why itâs a tempting message, both to hold onto internally, and when pitching to people new to the movement.
Yet, I think the reality is that we might never get there.
I think the framing therefore leads to the following problems:
Unrealistic hope leads to disillusionment and burnout.
You should count counterfactual wins, not the absolute numbers.
A lack of strategic clarity when developing a theory of change.
Leads to a poor allocation of resources.
There is another point which makes me especially in favour of focussing on reducing suffering, and also increasing happiness. Ending factory-farming only increases animal welfare if factory-farmed animals continue to have negative lives forever, whereas I would say they may become positive in the next few decades at least in some animal-friendly countries.
I think focussing on pledges of future income (if you are targeting students) seems great, most students don't have much money and are also used to living on a much lower amount than they will in a few years after graduating (particularly people in engineering, CS, and math).
Open Phil has seemingly movedaway from funding âfrontier of weirdnessâ-type projects and cause areas; I therefore think a hole has opened up that EAIF is well-placed to fill. In particular, I think an FHI 2.0 of some sort (perhaps starting small and scaling up if itâs going well) could be hugely valuable, and that finding a leader for this new org could fit in with your ârunning specific application rounds to fund people to work on [particularly valuable projects].â
My sense is that an FHI 2.0 grant would align well with EAIFâs scope. Quoting from your announcement post for your new scope:
Examples of projects that I (Caleb) would be excited for this fund [EAIF] to support
A program that puts particularly thoughtful researchers who want to investigate speculative but potentially important considerations (like acausal trade and ethics of digital minds) in the same physical space and gives them stipends - ideally with mentorship and potentially an emphasis on collaboration.
âŚ
Foundational research into big, if true, areas that arenât currently receiving much attention (e.g. post-AGI governance, ECL, wild animal suffering, suffering of current AI systems).
Having said this, I imagine that you saw Habrykaâs âFHI of the Westâ proposal from six months ago. The fact that that has not already been funded, and that talk around it has died down, makes me wonder if you have already ruled out funding such a project. (If so, Iâd be curious as to why, though of course no obligation on you to explain yourself.)
One possible concern with this idea is that the project would probably take a lot of funding to launch. With Open Phil's financial distancing from EA Funds, my guess is that EAIF may often not be in the ideal position to be an early funder of a seven-figure-a-year project, by which I mean one that comes on board earlier than individual major funders.
I can envision some cases in which EAIF might be a better fit for seed funding, such as cases where funding would allow further development or preliminary testing of a big-project proposal to the point it could be better evaluated by funders who can consistently offer mid-six figures plus a year. It's unclear how well that would describe something like the FHI/West proposal, though.
I could easily be wrong (or there could already be enough major funder interest to alleviate the first paragraph concern), and a broader discussion about EAIF's comparative advantages / disadvantages for various project characteristics might be helpful in any event.
I like the thought process and the sentiment, but I think big goals are a critical guiding light for the future. "Reducing suffering as much as possible" is neither inspirational enough nor concrete enough to work as as public waiting for
"End factory farming" is a clearer a inspiring rallying point, the same way we in global development do talk about ending poverty, and yes eradicating Malaria. The millennium and sustainable development goals use those kind of terms and I believe help light the way.
Call me naive, but I think distant hope is more likely to keep people going than lead to burnout, as long as we are realistic about out short term goals. I don't think ending factory farming is unrealistic long term.
Any updates on this? Have other projects and tools superceded it?
We're looking to do something similar with content from unjournal.org. We're exploring the alternatives, and considering hiring a specialist for this project.
Can you elaborate on why you think we will never eradicate factory farming? You point to near-term trends that suggest it will get worse over the coming decades. What about on a century long time scale or longer? Factory farming has only been around for a few generations, and food habits have changed tremendously over that time.
I think it's important to consider how some strategies may make future work difficult. For example, Martha Nussbaum highlights how much of the legal theory in the animal rights movement has relied on showing similarities between human and animal intelligence. Such a "like us" comparison limits consideration to a small subset of vertebrates. They are impotent at helping animals like chickens, were much legal work is happening now. Other legal theories are much more robust to expansion and consideration of other animals as the science improves to understand their needs and behavior.
Using your line of argument applied to the analogy you provided would suggest that efforts like developing a malaria vaccine are misguided, because malaria will always be with us, and we should just focus on reducing infection rates and treatment.
I also wanted to add that as someone who leans more towards valuing happiness and suffering equally, that I still find the moratorium to be a good idea and don't feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally importance choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we're doing doesn't build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.
I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don't think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting.
Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (moratorium on creating beings that suffer). c.f. we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and disallowing it would be disastrous. At the moment, we aren't in a good position to understand the balances of suffering and joy in artificial beings and I'd be inclined to say that a moratorium on creating artificial suffering is a good thing, but when we do understand how to measure this and to tip the scales heavily in favour of positive experience, then a continued ban may be terrible. (That said, we may work also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)
Given your statement that "a 50-year delay in order to make this monumentally importance choice properly would seem to be a wise and patient decision by humanity", I'm curious if you have any thoughts on the comment I just wrote, particularly the part arguing against a long moratorium on creating sentient AI, and how this can be perceived from a classical utilitarian perspective.
On a basic level, I agree that we should take artificial sentience extremely seriously, and think carefully about the right type of laws to put in place to ensure that artificial life is able to happily flourish, rather than suffer. This includes enacting appropriate legal protections to ensure that sentient AIs are treated in ways that promote well-being rather than suffering. Relying solely on voluntary codes of conduct to govern the treatment of potentially sentient AIs seems deeply inadequate, much like it would be for protecting children against abuse. Instead, I believe that establishing clear, enforceable laws is essential for ethically managing artificial sentience.
However, it currently seems likely to me that sufficiently advanced AIs will be sentient by default. And if advanced AIs are sentient by default, then instituting a temporary ban on sentient AI development, say for 50 years, would likely be functionally equivalent to pausing the entire field of advanced AI for that period.
Therefore, despite my strong views on AI sentience, I am skeptical about the idea of imposing a moratorium on creating sentient AIs, especially in light of my general support for advancing AI capabilities.
Why I think sufficiently advanced AIs will likely be sentient by default
The idea that sufficiently advanced AIs will likely be sentient by default can be justified by three basic arguments:
Sentience appears to have evolved across a wide spectrum of the animal kingdom, from mammals to cephalopods, indicating it likely serves a critical functional purpose. In general, it is rare for a complex trait like sentience to evolve independently in numerous separate species unless it provides a strong adaptive advantage. This suggests that sentience likely plays a fundamental role in an organismâs behavior and survival, meaning it could similarly arise in artificial systems that develop comparable complexity and behavioral flexibility.
Many theories of consciousness imply that consciousness doesnât arise from a specific, rare set of factors but rather could emerge from a wide variety of psychological states and structural arrangements. This means that a variety of complex, sufficiently advanced AIs might meet the conditions for consciousness, making sentience a plausible outcome of advanced AI development.
At least some AIs will be trained in environments that closely parallel human developmental environments. Current AIs are trained extensively on human cultural data, and future AIs, particularly those with embodied forms like robots, will likely acquire skills in real-world settings similar to those in which humans develop. As these training environments mirror the kinds of experiences that foster human consciousness, it stands to reason that sentience could emerge in AIs trained under these conditions, particularly as their learning processes and interactions with the world grow in sophistication.
Why I'm skeptical of a general AI moratorium
My skepticism of a general AI moratorium contrasts with those of (perhaps) most EAs, who appear to favor such a ban, for both AI safety reasons and to protect AIs themselves (as you argue here). I'm instead inclined to highlight the enormous costs of such a ban, compared to a variety of cheaper alternatives, such as targeted regulation that merely ensures AIs are strongly protected against abuse. These costs appear to include:
The opportunity cost of delaying 50 years of AI-directed technological progress. Since advanced AI can likely greatly accelerate technological progress, delaying advanced AI delays an enormous amount of technology that can be used to help people. This action would likely cause the premature deaths of billions of people, who could have had long, healthy and rich lives, but will instead die of aging-related diseases.
Enforcing a ban on advanced AI for such an extended period would require unprecedented levels of global surveillance, centralized control, and possibly a global police state. The economic incentives for developing AI are immense, and preventing individuals or organizations from circumventing the ban would necessitate sweeping surveillance and policing powers, fundamentally reshaping global governance in a restrictive and intrusive manner. This seems plainly negative on its face.
Moreover, from a classical utilitarian perspective, the imposition of a 50-year moratorium on the development of sentient AI seems like it would help to foster a more conservative global cultureâone that is averse towards not only creating sentient AI, but also potentially towards other forms of life-expanding ventures, such as space colonization. Classical utilitarianism is typically seen as aiming to maximize the number of conscious beings in existence, advocating for actions that enable the flourishing and expansion of life, happiness, and fulfillment on as broad a scale as possible. However, implementing and sustaining a lengthy ban on AI would likely require substantial cultural and institutional shifts away from these permissive, exploratory values.
To enforce a moratorium of this nature, societies would likely adopt a framework centered around caution, restriction, and a deep-seated aversion to riskâvalues that would contrast sharply with those that encourage creating sentient life and proliferating this life on as large of a scale as possible. Maintaining a strict stance on AI development might lead governments, educational institutions, and media to promote narratives emphasizing the potential dangers of sentience and AI experimentation, instilling an atmosphere of risk-aversion rather than curiosity, openness, and progress. Over time, these narratives could lead to a culture less inclined to support or value efforts to expand sentient life.
Even if the ban is at some point lifted, there's no guarantee that the conservative attitudes generated under the ban would entirely disappear, or that all relevant restrictions on artificial life would completely go away. Instead, it seems more likely that many of these risk-averse attitudes would remain even after the ban is formally lifted, given the initially long duration of the ban, and the type of culture the ban would inculcate.
In my view, this type of cultural conservatism seems likely to, in the long run, undermine the core aims of classical utilitarianism. A shift toward a society that is fearful or resistant to creating new forms of life may restrict humanityâs potential to realize a future that is not only technologically advanced but also rich in conscious, joyful beings. If we accept the idea of 'value lock-in'âthe notion that the values and institutions we establish now may set a trajectory that lasts for billions of yearsâthen cultivating a culture that emphasizes restriction and caution may have long-term effects that are difficult to reverse. Such a locked-in value system could close off paths to outcomes that are aligned with maximizing the proliferation of happy, meaningful lives.
Thus, if a moratorium on sentient AI were to shape society's cultural values in a way that leans toward caution and restriction, I think the enduring impact would likely contradict classical utilitarianism's ultimate goal: the maximal promotion and flourishing of sentient life. Rather than advancing a world with greater life, joy, and meaningful experiences, these shifts might result in a more closed-off, limited society, actively impeding efforts to create a future rich with diverse and conscious life forms.
(Note that I have talked here mainly about these concerns from a classical utilitarian point of view, and a person-affecting point of view. However, I concede that a negative utilitarian or antinatalist would find it much easier to rationally justify a long moratorium on AI.
It is also important to note that my conclusion holds even if one does not accept the idea of a 'value lock-in'. In that case, longtermists should likely focus on the near-term impacts of their decisions, as the long-term impacts of their actions may be impossible to predict. And my main argument here is that the near term impacts of such a moratorium are likely to be harmful in a variety of ways.)
I thought this was a well-written, thoughtful and highly intelligent piece, about a really important topic, where getting as close as possible to the truth is super-important and high-stakes. Kudos! I gave it a strong upvote. :)
I am starting from the point of being fairly attached to the âletâs try to end factory farming!â framing, but this post has given me a lot to think about.
I wanted to share a bunch of thoughts that sprung to my mind as I read the post:
One potential advantage of the âletâs try to end factory farming!â framing is that it encourages us to think long-term and systematically, rather than short-term and narrowly. I take long-termism to be true: future suffering matters as much as present-day suffering. I worry that a framing of âletâs accept that factory farming will endure; how can we reduce the most sufferingâ quickly becomes âhow can we reduce the most suffering *right now*, in a readily countable and observable wayâ. This might make us miss opportunities and theories of change which will take longer to work up a head of steam, but which over the long term, may lead to more suffering reduction. It may also push us towards interventions which are easily countable, numerically, at the expense of interventions which may actually, over time, lead to more suffering-reduction, but in more uncertain, unpredictable, indirect and harder-to-measure ways. It may push us towards very technocratic and limited types of intervention, missing things like politics, institutions, ideas, etc. It may discourage creativity and innovation. (To be clear: this is not meant to be a âwoo-wooâ point; Iâm suggesting that these tendencies may fail in their own terms to maximize expected suffering reduction over time).
Aiming to end factory farming encourages us toaim high. Imagine we have a choice between two options, as a movement: try to eradicate 100pc of the suffering caused by factory farming, by abolishing it (perhaps via bold, risky, ambitious theories-of-change). Or, try to eradicate 1pc of the suffering caused by factory farming, through present-day welfare improvements. The high potential payoff of eradicating factory farming seems to look good here, even if we think thereâs only (say) a 10pc chance of it working. I.e, perhaps the best way to maximise expected suffering reduction is, in fact, to âgambleâ a bit and take a shot at eradicating factory farming.
A potentially important counterpoint here, I think, is if it turns out that some welfare reforms deliver huge suffering reduction. I think that the Welfare Footprint folks claim somewhere that moving laying hens (?) out of the worst cage systems basically immediately *halves* their suffering (?) If true, this is huge, and is a point in favour of prioritising such welfare measures.
If we give up on even trying to end factory farming, doesnât this become a self-fulfilling prophecy? If we do this, we guarantee that we end up in a world where factory framing endures. Given uncertainty, shouldnât (at least some of) the movement try to aim high and eradicate it?
Iâm not sure that the analogy with malaria/poverty/health/development is perfect:
Actually, we do seek to end some diseases, not just control them. E.g. we eradicated smallpox, and are nearly there for polio. Some people are also trying to eradicate malaria (I think). (Though eradicating a disease is in many ways easier than eradicating factory farming, so this analogy maybe doesnât work so well.)
Arguably, the focus within EA global health discourse on immediate, countable, tangible interventions (like distributing bednets) has distracted us from more systemic, messy - but also deep and important - questions, such as: Why are some countries rich and others poor? What actually drives development, and how can we help boost it? How can we boost growth? Why do some countries have such bad health systems and outcomes? How can we build strong health systems in developing countries, rather than focus âverticallyâ on specific diseases? *Arguably*, making progress on these questions could, over the long term, actually deliver more suffering-reduction than jumping straight to technocratic, direct âinterventionsâ.
Some of global development discourse *is* framed in terms of *ending* poverty, at least sometimes. For example, the Sustainable Development Goals say we should seek to âend povertyâ, end hungerâ, etc.
Iâm very unsure about this, but I *guess* that a framing of âfactory faming is a gigantic moral evil, letâs eradicate itâ is, on balance, more motivating/attracting than a framing of âfactory farming is a gigantic moral evil, weâll never defeat it, but we can help a tonne of animals, letâs do itâ (?)
*If* we knew the future for sure, and knew it would be impossible ever to eradicate factory farming, then I do agree that we should face facts and adjust our strategy accordingly, rather than live in hope. My gut instinct though is that we canât be sure of this, and there are arguments in favor of aiming for big, bold, systemic changes and wins for animals.
These are just some thoughts that sprang to mind, I don't think that in and of themselves they fully repudiate the case you thoughtfully made. I think more discussion and thought on this topic is important; kudos for kicking this off with your post!
(For those interested, the Sentience Institute have done some fascinating work on the analogies and dis-analogies of factory farming vs other moral crimes such as slavery - eg here and here.)
The answer for a long time has been that it's very hard to drive any change without buy-in from Open Philanthropy. Most organizations in the space are directly dependent on their funding, and even beyond that, they have staff on the boards of CEA and other EA leadership organizations, giving them hard power beyond just funding. Lincoln might be on the EV board, but ultimately what EV and CEA do is directly contingent on OP approval.
OP however has been very uninterested in any kind of reform or structural changes, does not currently have any staff participate in discussion with stakeholders in the EA community beyond a very small group of people, and is majorly limited in what it can say publicly due to managing tricky PR and reputation issues with their primary funder Dustin and their involvement in AI policy.
It is not surprising to me that Lincoln would also feel unclear on how to drive leadership, given this really quite deep gridlock that things have ended up in, with OP having practically filled the complete power vacuum of leadership in EA, but without any interest in actually leading.
Hello Habryka! I occasionally see you post something OP critical and am now wondering âis there a single post where Habryka shares all of his OP related critiques in one spot?â
If that does exist I think it could be very valuable to do.
Could you develop this part please? The "why this problem is much harder and disanalogous" part.
A lack of strategic clarity when developing a theory of change. For advocates who buy that we will end factory farming, this might mean that they are more likely to pursue interventions and theories of change that will do just that: end factory farming. This leads to conversations about how do we mimic previous social movements that have âwonâ like the emancipation and gay marriage movements. While I think this work can be valuable, I often see it discussed in ways I think are insufficiently clear-eyed about why this problem is much harder and disanalogous.
Good question, I wasn't sure how much to err on the side of brevity vs thoroughness.
To phrase it differently I think sometimes advocates start their strategy with the final line 'and then we end factory farming', and then try to develop a strategy about how do we get there. I don't think it is reasonable to assume this is going to happen, and I think this leads to overly optimistic theories of change. From time to time I see a claim about how meat consumption will be drastically reduced in the next few decades based on a theory that is far too optimistic and/or speculative.
For example, I've seen work claim that when plant-based meat reaches taste and price parity, people will choose plant-based over conventional meat, so if we raise the price of meat via regulation, and lower the cost of plant-based, there will be high adoption of plant-based, and meat reduction will be 30% lower by 2040 (those numbers are made up, but ball-park correct). I think these claims just aren't super well founded and some research showed that when a university cafeteria offered impossible and regular burgers, adoption was still quite low (anyone know the citation?).
Could you develop this part please? The "why this problem is much harder and disanalogous" part.
A lack of strategic clarity when developing a theory of change. For advocates who buy that we will end factory farming, this might mean that they are more likely to pursue interventions and theories of change that will do just that: end factory farming. This leads to conversations about how do we mimic previous social movements that have âwonâ like the emancipation and gay marriage movements. While I think this work can be valuable, I often see it discussed in ways I think are insufficiently clear-eyed about why this problem is much harder and disanalogous.
On your question: I chose organic because I had initially planned to take the EU Organic one because itâs so wide spread here and has some animal welfare standards. In the end I chose Naturland though because it seems to be stronger on animal welfare, and I wanted to make a strong case.
I am not aware of any reported malpractices as the one you cited for that label but of course there is always a chance to have these outliers.
Oh, got it! I am so sorry. I'm American and have a very American-centric worldview. I was thinking of organic as referring to the United States Department of Agriculture (USDA) Organic certification. I therefore feel like I pretty much totally missed what you actually meant by your post. I'm sorry! đŞđş
I think the traditional settings are better for animal welfare, though there are huge differences and I've come to realise that traditional vs. intensive is a bit of a false dichotomy (but it's useful for communication purposes). To lay out my perspective in a bit more detail (I am not an animal scientist or anything and more of a generalist researcher who has read some of the work done by Welfare Footprint Project an others, attended some webinars, etc.):
I assume the worst settings to be the highly intensive settings without any proper regulations (e.g., factory farms in Europe have at least some welfare standards that they need to adhere to, while in many African countries this does not exist which can lead to really bad outcomes). The growth of factory farming in regions without proper regulation worries me a lot.
Second worst are probably intensive settings with better regulations (e.g., factory farms in the U.S. with enriched cages).
I also think that traditional/smallholder settings can be quite bad for animals, if their owners do not have the resources to provide proper care for them (e.g., adequate feed, housing, etc.). The upside here is that there usually aren't that many animals farmed in those settings, but the quality of life can be quite bad as well, I think.
Semi-intensive or somewhat more financially stable forms of smallholder farming seem better. Not sure where you live, but I am thinking about smaller farmers as they still exist in Europe for example, where they are able to provide proper housing, feed, etc. and have not intensified their production as much.
The best are probably the kind of settings you envision, where farmers have the required resources and intentionally give animals more space and care about their welfare (organic, pasture-raised, etc.). But I imagine this to be more of a Global North phenomenon.
All of these categories are of course still heavy simplifications (e.g., enriched battery cages and deep littre systems for hens could both fall into the better-regulated factory farming settings category). And of course none of this tells us much about which (if any) of these lives are net positive/negative, but we already discussed that :)
Sorry for the long answer, but hope it's relevant/interesting. I think our top priority should be to avoid the worst outcome on this list (the first bullet point), which is what we are trying to do at AAA. Also because the numbers in that category could grow massively (also think about largely unregulated industries such as shrimp or insect farming).
Final point: I think people strongly underestimate the extent to which animal agriculture is already industrialised in parts of Africa (I did so too before digging deeper into this). This 2022 source cites 60% of hens in Africa being kept in cages. There tend to be a lot of smallholder farmers, but they keep quite a small number of animals per capita, so their animal numbers are outweighed by bigger industrial producers.
Thanks! Indeed thinking along the same lines although I have a much stronger intuition that most human and wild animal lives are lives worth living. From the comment section I liked
The link to the talk on wild animal welfare - while it makes the point that evolution is complicated and not guranteed to increase welfare welfare for all animals, I think I share their assumption that pain is an evolutionary tool for animals to stop doing things that will harm them (which would stop working if it would be overly abundant). In a similar way pleasure can be argued to encourage taking actions to achieve fitness-improving goals, so you'd expect them to always somewhat balance out unless in niche cases where pain/pleasure don't have an effect on reproductive success (like an especially painful death, or a factory farm where activities from the animals don't have an effect on their fitness).
The link to the neutral point discussion although most considerations seem more relevant for human lives (e.g., the ethical issues around legalising assisted suicide) I found it interesting that the estimated neutral point moves "down" as people have lower average life satisfaction (e.g., 2 for UK and 0.5 for Ghana and Kenya).
Somewhat unrelated to this but I read your work for Animal Advocacy Africa. How do you look at the welfare of animals farmed in more traditional settings there? E.g., chickens in a village or small cattle herds by roaming tribes like the Kenyan Maasai? Just from looking at them I always guessed that they have a "good life" but curious what you think! From some conversations I understood that factory farming also becomes more prominent in Kenya but the majority still seems to be farmed in more traditional settings.
I think the traditional settings are better for animal welfare, though there are huge differences and I've come to realise that traditional vs. intensive is a bit of a false dichotomy (but it's useful for communication purposes). To lay out my perspective in a bit more detail (I am not an animal scientist or anything and more of a generalist researcher who has read some of the work done by Welfare Footprint Project an others, attended some webinars, etc.):
I assume the worst settings to be the highly intensive settings without any proper regulations (e.g., factory farms in Europe have at least some welfare standards that they need to adhere to, while in many African countries this does not exist which can lead to really bad outcomes). The growth of factory farming in regions without proper regulation worries me a lot.
Second worst are probably intensive settings with better regulations (e.g., factory farms in the U.S. with enriched cages).
I also think that traditional/smallholder settings can be quite bad for animals, if their owners do not have the resources to provide proper care for them (e.g., adequate feed, housing, etc.). The upside here is that there usually aren't that many animals farmed in those settings, but the quality of life can be quite bad as well, I think.
Semi-intensive or somewhat more financially stable forms of smallholder farming seem better. Not sure where you live, but I am thinking about smaller farmers as they still exist in Europe for example, where they are able to provide proper housing, feed, etc. and have not intensified their production as much.
The best are probably the kind of settings you envision, where farmers have the required resources and intentionally give animals more space and care about their welfare (organic, pasture-raised, etc.). But I imagine this to be more of a Global North phenomenon.
All of these categories are of course still heavy simplifications (e.g., enriched battery cages and deep littre systems for hens could both fall into the better-regulated factory farming settings category). And of course none of this tells us much about which (if any) of these lives are net positive/negative, but we already discussed that :)
Sorry for the long answer, but hope it's relevant/interesting. I think our top priority should be to avoid the worst outcome on this list (the first bullet point), which is what we are trying to do at AAA. Also because the numbers in that category could grow massively (also think about largely unregulated industries such as shrimp or insect farming).
Final point: I think people strongly underestimate the extent to which animal agriculture is already industrialised in parts of Africa (I did so too before digging deeper into this). This 2022 source cites 60% of hens in Africa being kept in cages. There tend to be a lot of smallholder farmers, but they keep quite a small number of animals per capita, so their animal numbers are outweighed by bigger industrial producers.
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who - unlike me - are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to - what I tend to see as the higher priority - drawing attention to apt EA criticisms of ordinary moral thought and behavior and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
I'd be worried that -- even assuming the funding did not actually influence the content of the speech -- the author being perceived as on the EA payroll would seriously diminish the effectiveness of this work. Maybe that is less true in the context of a professional journal where the author's reputation is well-known to the reader than it would be somewhere like Wired, though?
Thanks so much Vasco for your work on this! As with MHR in the past, we really appreciate folks doing in-depth analyses like this, and are very appreciative of the interest in our work :)
In the spirit of this weekâs Forum theme, I wanted to provide some more context regarding SWPâs room for more funding.
Our overheads (i.e. salaries, travel/conferences) and program costs for the India sludge removal work, are currently covered by grants until the end of 2026. Meaning that any additional funds are put towards HSI. (For context, our secured grants do also cover the cost of some stunners, but HSI as a program is still able to absorb more funding).
Each stunner costs us $55k and we ask the producers we work with to commit to stunning a minimum of 120 million shrimps per annum. This results in a cost-effectiveness of ~2,000+ shrimps helped / $ / year (i.e. our marginal impact of additional dollars is higher than our historical cost-effectiveness).
Weâre having our annual team retreat (which we call âShrimposiumâ) next week, during which we hope to map out how we can deploy stunners in such a way as to catalyse a tipping point so that pre-slaughter stunning becomes the industry standard.
Weâve had some good indications recently that HSI does contribute to âlocking-inâ industry adoption, with Tesco and Sainsburyâs recently publishing welfare policies, building on similar wins in the past (such as M&S and Albert Heijn).
This has always been the Theory of Change for the HSI project. Although weâre very excited by how cost-effective it is in its own right, ultimately we want to catalyse industry-wide adoption - deploying stunners to the early adopters in order to build towards a tipping point that achieves critical mass. In other words, over the next few years we want to take the HSI program from Growth to Scale.
I would be surprised if post-Shrimposium our targets regarding HSI required less funding than our current projections. In other words, though I donât currently have an exact sense of our room for more funding, Iâm confident SWP is in a position to absorb significantly more funding to support our HSI work.
Is there any chance HSI may increase the number of shrimp? I guess it would tend to increase costs, and therefore decrease the number of shrimp. I ask because I estimate that moving from ice slurry to electrical stunning only increases welfare by 4.34 % (= 1 - 4.85/5.07). In this case, since I think farmed shrimp have negative lives (for any slaughter method), an increase of more than 4.34 % in the number of shrimp would make HSI harmful.
In your piece you focus on artificial sentience. But similar arguments would apply to somewhat broader categories.
Wellbeing
For example, you could expand it to creating entities that can have wellbeing (or negative elements of wellbeing) even if that wellbeing can be determined by things other than conscious experience. If there were ways of creating millions of beings with negative wellbeing, I'd be very disturbed by that regardless of whether it happened by suffering or some other means. I'm sympathetic to views where suffering is the only form of wellbeing, but am by no means sure they are the correct account of wellbeing, so maybe what I really care about is avoiding creating beings that can have (negative) wellbeing.
Interests
One could also go a step further. Wellbeing is a broad category for all kinds of things that count towards how well your life goes. But on many people's understandings, it might not capture everything about ill treatment. In particular, it might not capture everything to do with deontological wrongs and/or rights violations, which may involve wronging someone in a way that can't be made up for by improvements in wellbeing and can't be cashed out purely in terms of its negative effects on wellbeing. So it may be that creating beings with interests or morally relevant interests is the relevant category.
That said, note that these are both steps towards greater abstraction, so even if they better capture what we really care about, they might still lose out on the grounds of being less compelling, more open to interpretation, and harder to operationalise.
Thanks for being so generous in offering up your time for EA initiatives, I hope you find a good fit that is full of both impact and meaning :) You might be interested in helping out in the Nordic effective giving landscape, where I am Chairman for Ge Effektivt.
At Gi Effektivt and Ge Effektivt in Norway/Sweden respectively, we work to fundraise for charities recommended by GiveWell, Animal Charity Evaluators, and Giving Green. You might be familar either with us or counterparts from other countries like Effektiv Spenden or Ayuda Efectiva. The total money raised is a few million dollars per year, with significant increase in the past year.
Norway and Sweden share backend and most aspects of frontend, and we have a full-time CTO who is significantly capacity constrained. It sounds like you have skills that could come in handy, and I think we could have the right size where you'd get the right kind of support/counterpart to get you leverage on your time â and still an org small enough that you could make an impressive difference in a few months.
Happy to answer any questions at an initial stage, but in case interested I think it would be even more useful would be for you to speak to our CTO to get an idea of whether you could be helpful and if the projects excite you. You can either DM me here, email henri[at]geeffektivt.se, or reach out to the technical team directly.
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who - unlike me - are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to - what I tend to see as the higher priority - drawing attention to apt EA criticisms of ordinary moral thought and behavior and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
Totalitarian regimes have caused enormous suffering in the past, committing some of the largest and most horrifying crimes against humanity ever experienced.
How do totalitarian regimes compare to non-totalitarian regimes in this regard?
Totalitarianism is a particular kind of autocracy, a form of government in which power is highly concentrated. What makes totalitarian regimes distinct is the complete, enforced subservience of the entire populace to the state.
Notice that this definition may not apply to a hypothetical state that gives some freedoms to millions of people while mistreating 95% of humans on earth (e.g. enslaving and torturing people, using weapons of mass destruction against civilians, carrying out covert operations that cause horrible wars, enabling genocide, unjustly incarcerating people in for-profit prisons).
I want to emphasize that this just sets a lower bound on the importance.
E.g. there's a theory that fungal infections are the primary cause of cancer.
How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can't tell whether it's due in part to a fungal infection. He's got elevated mycotoxins in his urine, but that might be due to past exposure to a moldy environment. He's trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he has tried.
It feels like we need something more novel than slightly better versions of existing approaches to fungal infections. Maybe something as radical as nanomedicine, but that's not very tractable yet.
I also wanted to add that as someone who leans more towards valuing happiness and suffering equally, that I still find the moratorium to be a good idea and don't feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally importance choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we're doing doesn't build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.
I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don't think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting.
Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (moratorium on creating beings that suffer). c.f. we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and disallowing it would be disastrous. At the moment, we aren't in a good position to understand the balances of suffering and joy in artificial beings and I'd be inclined to say that a moratorium on creating artificial suffering is a good thing, but when we do understand how to measure this and to tip the scales heavily in favour of positive experience, then a continued ban may be terrible. (That said, we may work also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)
This is an excellent exploration of these issues. One of my favourite things about it is that it shows it is possible to write about these issues in a measured, sensible, warm, and wise way â i.e. it provides a model for others wanting to advance this conversation at this nascent stage to follow.
Re the 5 options, I think there is one that is notably missing, and that would probably be the leading option for many of your opponents. It is the wait-and-see approach â leave the space unregulated until a material (but not excessive) amount of harm has occurred and if/when that happens, regulate from this situation where much more information is available. This is the kind of strategy that the anti-SB 1047 coalition seems to have converged on. And it is the usual way that society proceeds with regulating unprecedented kinds of harm.
As it happens, I think your options 4 and 5 (ban creation of artificial sentience/suffering) are superior to the wait-and-see approach, but it is a harder case to argue. Some key points of the comparison are:
in the case of artificial suffering a very large amount of harm may occur very quickly. Many new harms scale up fairly slowly, such that even if it takes a few years to regulate from the time the harms are first clear, the damage done isn't too profound (e.g. it is smaller than or equal to the gains of allowing that early period to be unregulated). But it seems like this could be a case where, say, millions of beings are suffering before the harms are recognised, and billions by the time the regulation is passed.
this is such a profound issue for humanity (whether to bring into existence for the first time in the history of the Earth entirely new kinds of entity that can experience suffering or joy) that it is natural to consider a global conversation about whether to proceed before doing it. Human germline genetic engineering is a similarly grand choice and the scientific and political community indeed chose to have a moritorium on that. Most regulation of new technologies is not like this, so this is an answer to the question of why should we treat this differently to everything else.
NEW event today: How To Get The Media Interested In Your Animal Story!
This one-hour workshop will cover how to reframe animal issues to get mainstream press attention. Today at 5pm CST. Organized in conjunction with the Hive team.
I want to emphasize that this just sets a lower bound on the importance.
E.g. there's a theory that fungal infections are the primary cause of cancer.
How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can't tell whether it's due in part to a fungal infection. He's got elevated mycotoxins in his urine, but that might be due to past exposure to a moldy environment. He's trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he has tried.
It feels like we need something more novel than slightly better versions of existing approaches to fungal infections. Maybe something as radical as nanomedicine, but that's not very tractable yet.
I have a bit of a nitpicky question on the use of the phrase 'confidence intervals' throughout the report. Are these really supposed to be interpreted as confidence intervals? Rather than the Bayesian alternative, 'credible intervals'..?
My understanding was that the phrase 'confidence interval' has a very particular and subtle definition, coming from frequentist statistics:
80% Confidence Interval: For any possible value of the unknown parameter, there is an 80% chance that your data-collection and estimation process would produce an interval which contained that value.
80% Credible interval: Given the data you actually have, there is an 80% chance that the unknown parameter is contained in the interval.
From my reading of the estimation procedure, it sounds a lot more like these CIs are supposed to be interpreted as the latter rather than the former? Or is that wrong?
Appreciate this is a bit of a pedantic question, that the same terms can have different definitions in different fields, and that discussions about the definitions of terms aren't the most interesting discussions to have anyway. But the term jumped out at me when reading and so thought I would ask the question!
Yes, indeed, what we call 'confidence interval' in our report is better described by the term 'credible interval'.
We chose to use with the term 'confidence interval' because my impression is that this is the more commonly used and understood terminology within EA specifically, but also global health in general - even though it is not technically entirely accurate.
I think it's possible our views are compatible here. I want expertise to be valued more on the margin because I found EV and many other EA orgs to tilt towards an extreme of prioritizing value alignment, but I certainly believe there are cases where value alignment and general intelligence matter most and also that there are cases where expertise matters more.
I think the key lies in trying to figure out which situations are which in advance.
I guess the main thing to be aware of is how hiring non-value aligned people can lead to drift which isn't significant at first, but becomes significant over time. That said, I also agree that a certain level of professionalism within organisation becomes more important as they scale.
And apologies for the late reply - I turned off the notifications after the debate week.
I think the main argument I tried to put forward was more about the dependency of many organisations to one single major donor and the risks associated with this (and how it would make sense to mitigate this via more donations). And to be clear, I wasnât criticising Open Philanthropy. I just think that given Open Philanthropy is a bit alone in the field, animal welfare is neglected in a unique way. If there were multiple donors that are equally major as Open Philanthropy, there wouldnât be such fragility. But as far as I am aware, EAAWF and ACE do not have such funds and do not provide such large grants as of now. They are much smaller than the OP farm animal welfare program.
I donât think it would be likely that OP, ACE and EAAWF simultaneously decide to downscale, since OP has multiple causes while ACE and EAAWF have sole focus on animal welfare. So I donât think having a bigger EAAWF or ACE would result in the same level of fragility, even if organisations still depend on major donors. The main difference would be that many organisations would rely on multiple major donors rather than one single major donor.
By the way, I am less concerned with which avenue (funds or individual organisations) one should choose to donate. But my initial concern with the âindividualâ approach was that more individual donors spread more funds to more organisations which at the end would not help to mitigate this fragility if OP withdraws from animal welfare or significantly downscales. In theory, individual donors can also coordinate to channel their donations to fill in the gaps if such an event occurs, but in practice, I think funds would be more able to do this more efficiently. This is more of a practical issue which I donât have super strong views about.
On the other hand, âfunds vs. individual donorsâ is another debate where I strongly agree with you that more oversight of individual donors is very needed. As you mentioned, this depends mostly on the level of knowledge of the donors, but I can also add (in favour of the individual approach) that this also depends on the level of engagement of the donors. I donât expect major funds with limited staff can engage with each of their grantees perfectly. I think individual donors can play a very important role in engaging with these organisations as âshareholdersâ (or grant managers) and hopefully improve their performance. Of course, donors can do that for funds to some degree as well.
To reply to the last paragraph: yes, I think this is a fair summary.
Hi Engin, thanks for your reply! I agree that it's better to have multiple major donors than one major donor (e.g. it's better to have four major donors who contribute to 20% of all funding each; than one major donor who gives 80% of all funding). I would assume that EAAWF and ACE rely on smaller donors who would have donated invidually otherwise. So in the case that - for example - there is one major donor (60%) and many small donors (summing up to 40%), I don't know if it's good to pool the money of the small donors by ACE or EAAWF (as long as they donate to equally effective charities) so that there are one major donor (60%), and e.g. ACE and EAAWF as further major donors (each 20%). On the one hand, it's easier for ACE and EAAWF to react to a cut of funding by the major donor. On the other hand, there will probably be many charities which depend on ACE or EAAWF instead of many small donors. Of course, if the total amount of donations increases by new major donors, it's a different thing.
My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I'd call myself EA.
Elizabeth's self-identification seems to me to be member-first, given that her self-identification seems more based upon community members acting with integrity towards each other than about whether or not EA is maximizing the impartial good.
This might explain the difference between my and Elizabeth's attitudes about the importance of some EAs claiming that veganism doesn't entail tradeoffs without being corrected. I think being honest about health tradeoffs is important, but I'm far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I could understand why from Elizabeth's perspective, this is so important. Do you think this is a fair characterization?
I'd love to understand more about the way Elizabeth reasons about the importance of raising awareness of veganism's health tradeoffs relative to vegan advocacy:
If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed animal effects, I struggle to understand why so much attention is being placed on veganism's health tradeoffs relative to vegan advocacy.
By analogy, this feels like sounding an alarm because EA's kidney donation advocates haven't sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn't kidney donation clearly the moral imperative?
I might say kidney donation is a moral imperative (or at least all-things-considered-good) if we consider only the effects on your welfare and the effects on the welfare of the beneficiaries. But when you consider indirect effects, things are less clear. There are effects on other people, nonhuman animals (farmed and wild), your productivity and time (which affects your EA work or income and donations), your motivation and your values. For an EA, productivity and time, motivation and values seem most important.
Thank you for writing this! I imagine this took a lot of time to put together, and I really appreciated being able to read it.
From the position of someone without a lot of connection or insight into the day-to-day functioning of EV (and its projects) this provided a lot of context, and gave me confidence that reforms at EV were seriously considered, and then instituted. Its one thing to read an announcement that an organization is working on, or investigation reforms â but being able to see the specifics of those reforms feels differently, and meaningfully, important to me. I feel glad to have read this post, and for some of the updates it allowed me to make!
Thanks for being so generous in offering up your time for EA initiatives, I hope you find a good fit that is full of both impact and meaning :) You might be interested in helping out in the Nordic effective giving landscape, where I am Chairman for Ge Effektivt.
At Gi Effektivt and Ge Effektivt in Norway/Sweden respectively, we work to fundraise for charities recommended by GiveWell, Animal Charity Evaluators, and Giving Green. You might be familar either with us or counterparts from other countries like Effektiv Spenden or Ayuda Efectiva. The total money raised is a few million dollars per year, with significant increase in the past year.
Norway and Sweden share backend and most aspects of frontend, and we have a full-time CTO who is significantly capacity constrained. It sounds like you have skills that could come in handy, and I think we could have the right size where you'd get the right kind of support/counterpart to get you leverage on your time â and still an org small enough that you could make an impressive difference in a few months.
Happy to answer any questions at an initial stage, but in case interested I think it would be even more useful would be for you to speak to our CTO to get an idea of whether you could be helpful and if the projects excite you. You can either DM me here, email henri[at]geeffektivt.se, or reach out to the technical team directly.
Appreciate that, I woke up this morning after finally quitting (the new "ChatGPT 5 is your boss" pilot was too much!), about to register "AI4Gud.com" and get me in the race, but have reconsidered based on this excellent advice.
I spent some time with Claude this morning trying to figure out why I find it cringe calling myself an EA (I never call myself an EA, even though many in EA would call me an EA).
The reason: calling myself "EA" feels cringe because it's inherently a movement/community label - it always carries that social identity baggage with it, even when I'm just trying to describe my personal philosophical views.
I am happy to describe myself as a Buddhist or Utilitarian because I don't think it does those things (at least, not within the broader community context I find myself in - Western, Online, Democratic, Australia, etc).
My self-identification as EA is cause-first: So long as the EA community puts resources broadly into causes which maximize the impartial good, I'd call myself EA.
Elizabeth's self-identification seems to me to be member-first, given that her self-identification seems more based upon community members acting with integrity towards each other than about whether or not EA is maximizing the impartial good.
This might explain the difference between my and Elizabeth's attitudes about the importance of some EAs claiming that veganism doesn't entail tradeoffs without being corrected. I think being honest about health tradeoffs is important, but I'm far more concerned with shutting up and multiplying by shipping resources towards the best interventions. However, putting on a member-first hat, I could understand why from Elizabeth's perspective, this is so important. Do you think this is a fair characterization?
I'd love to understand more about the way Elizabeth reasons about the importance of raising awareness of veganism's health tradeoffs relative to vegan advocacy:
If Elizabeth is trying to maximize the impartial good, she should probably be far more concerned about an anti-veganism advocate on Facebook than about a veganism advocate who (incorrectly) denies veganism's health tradeoffs. Of course everyone should be transparent about health tradeoffs. However, if Elizabeth is being scope-sensitive about the dominance of farmed animal effects, I struggle to understand why so much attention is being placed on veganism's health tradeoffs relative to vegan advocacy.
By analogy, this feels like sounding an alarm because EA's kidney donation advocates haven't sufficiently acknowledged its potential adverse health effects. Of course everyone should acknowledge that. But when also considering the person being helped, isn't kidney donation clearly the moral imperative?
Thanks for bringing this up. I was unsure what terminology would be best here.
I mainly have in mind fermi models and more complex but similar-in-theory estimations. But I believe this could extend gracefully for more complex models. I don't know of many great "ontologies of types of mathematical models," so am not sure how to best draw the line.
Here's a larger list that I think could work.
Fermi estimates
Cost-benefit models
Simple agent-based models
Bayesian models
Physical or social simulations
Risk assessment models
Portfolio optimization models
I think this framework is probably more relevant for models estimating an existing or future parameter, than models optimizing some process, if that helps at all.
Good luck finding a good volunteering project! Consider reaching out to 80K for advice? ( https://80000hours.org/speak-with-us/ ) I wonder if they might have a good sense of which orgs might be excited to use your support.
Hi Angelina, thanks for the link. Didn't think that was fitting for me, considering they already rejected me twice in the past with reasons being that they mostly work with students. đ But I can give it another shot.
I am aware that homosexuality is a scare of your time. Believe me, it is not nearly as bad as its made out to be. I understand that film often portrays them as selfish and villainous, but that's untrue. That's not even necessarily what film writers believe (though some surely do). There's actually specific codes in place that limit the way many characters like that are written--art under that isn't exactly a reflection of reality. Many of us have the same desires you do, of happiness and prosperity, a life of acceptance. They aren't deviants either. Statistics of my time show that they're no more or less likely than heterosexual people to be such. That's another damaging stereotype. The worrying reality is, a lot of such stereotypes come from people in power, and their own misguided fears. Though it's not necessarily easy for you, in such a politically rigid time, I hope that you and anyone else keeps a healthy questioning of power, using their own logic and knowledge to evaluate the soundness of their decisions. Recall, the government is for the needs of the many--and this includes people unlike yourself. And if you're worried of betraying religious teachings, the bible makes no mention of homosexuality (scholars believe that was a mistranslation). Furthermore, Jesus himself loved the outcasts--be like him some more. And this open attitude doesn't stop at sexuality. Some people are at odds with the gender they were assigned (their mind is truer than their body), but that doesn't get much attention until later. Still, keep an open mind to them and try to empathize with the struggle of being inside a body you don't believe to be yours. Act with compassion and consideration.
A good ancestor. One who has ancestors is a descendant. Descendants need not be a genetic lineage (not necessarily even human), but rather the constitution of the living world some time into the future. A good ancestor should act in the present, with a specific vision for the future. A good ancestor should ideally "add value" to the world, that is, reduce suffering one way or another. This of course could be through means of direct, focused work toward progression of a relevant and underserved cause, or by philanthropy of any means. A good ancestor is one who takes a look at every major action they take, every donation they make, beyond the numbers, and asks themselves, "how will this affect the world after I've gone?" They research this well and inform their opinions with such. A good ancestor provides.
My name is Hans Erickson, I am a 65 year old IT professional that is semi-retired. I still own a small IT support company and have an employee who backfills for me, which allows me to travel.
On a trip to Africa in 2022, I was on a safari and was taken through a remote Botswana village that was the home of our tour guide. He pointed out the school house as we passed through. I had been in Africa once before 15 years earlier participating in a technology conference in Lagos, Nigeria. In my research at the time, I discovered the appalling lack of internet connectivity to the majority of the continent. I asked our tour guide about this, and he confirmed the school had no internet.
I volunteered to set up Starlink internet for the school when the service became available. Just a month ago Starlink officially began service in Botswana. I reached out to my contact and the school administrator that he had connected me to. Because it is a government school, they required formal approval, so I have written letters and responded to questions, but still no approval. I am hopeful now that it is in the hands of their IT administrators that a final approval is coming.
There are approx. 150 students attending the school. My plan is to install the starlink dish, Ubiquiti AP's and remote monitoring equipment, connect everything, and supply some chromebooks for student and administration use. I will also configure a google school account, which provides robust tools for school administrators and students.
I have volunteered to support the starlink subscription for a three year period, after which I hope to convince local authorities or Starlink to continue the service.
I only read the 'Doing Goog Better' book after having made this agreement. In the interest of effective altruism, I was hoping to learn from someone the metrics that would be most beneficial to track for a project like this. I am aware that risks are involved with providing high speed internet in a rural setting, but I am not sure exactly what those risks might be.
Thanks so much for posting Hans and thanks for your efforts to help. I've lived in UgAnda for 10 years and worked in remote rural areas. This isn't my area of expertise but I might have something useful to share Have private messaged you and keen to have a chat if you are.
Country and/or domain specific career advising webcontent
80000 Hours and Probably Good are great but their advice can be off putting, irrelevant or not useful enough for many people who are not their main audience. Having content about potentially many impactful careers in medicine, academia, or engineering, in Japan, Germany, Brazil, or India can be much more useful and engaging for those people who are in these categories. This can also be done at a relatively low cost - one or two able and willing writers per country/domain.
2. âBudget hawkâ organisation/consultancy that aims to propose budget cuts to EA organisations without compromising cost-effectiveness.
There is a lot of attention towards effective giving like %10 pledges. Another way of achieving similar outcomes is to make organisations spend less (%10 again?). We tend to assume that EA organisations are cost effective (which is true overall) but this does not mean that every EA organisation spends each penny with %100 cost-effectiveness. It is probable that many EA organisations can make cuts to their ineffective programs or manage their operations/taxes more efficiently. A lot of EA organisations have very large budgets, more than millions of dollars annually. So even modest improvements can be equivalent to adding many GWWC pledgers.
3. Historical case studies about movement or community building
Open philanthropy had commissioned some reports. But most of them are about certain policy reforms. Only a few are about movement or community building. I think more case studies can provide interesting insights. Sentience Instituteâs case studies were very useful for animal advocacy in my opinion.
4. Grand strategy research
This might be already being carried out by major EA organisations. But I can imagine that most leadership and key staff members in EA organisations typically focus on specific and urgent problems and never have enough time and focus on taking a (lot of) step back and think about the grand strategy. Other people might also have better skills to do this too. By the way, I am also more in favour of âlearning by doingâ and âmake decisions as you progressâ type of approaches but nevertheless at least having âsomeâ grand strategy can reveal important insights about what are the real bottlenecks and how to overcome them.
5. Commissioning impact evaluations of major EA organisations and EA funds.
I think the reasons for this are obvious. There are of course some impact evaluations in EA- GWWCâs evaluating the evaluators project was a good example (But note that this was done only last year, once - and from my perspective it evaluated the structure and framework of the funds, not the impact of the grants themselves). I definitely think there is a lot of room for improvement - especially on publicly accessible impact reports. I think this is all the more important for EA, since ânot assuming impact but looking for evidenceâ is one of the distinguishing features of it.
I don't understand the core of your proposal. Like, to ban it you have to point at it. Do you have a pointer? Like, this post reads as "10 easy steps of how to ban X. What is X? Idk"
Is it a ban on use of loss functions or what? Like, if you say that pain is repulsive states and pleasure is attractive ones, the loss is always repulsive
I'm curious if you have any sense of how the average conditions/welfare levels of farmed animals are expected to change on this default trajectory, or how they've changed in the last few decades. I imagine this is difficult to quantify, but seems important.
In particular, assuming market pressures stay as they are, how should we expect technological improvements to affect farmed animal welfare?
My uneducated guess: optimizing hard for (meat production / cost) generally leads to lower animal welfare. This seems roughly true of technological improvements in the past. For example:
I assume factory farming is much worse than early human farming/hunting.
Antibiotics allow animals to be kept in worse sanitary conditions than would otherwise be livable.
Artificial selection and growth hormones has created broiler chickens that grow too fast for good health, though it's unclear to me whether slower growth would be net good (because it would probably lead to more total chicken-days spend living in factory farms, even after accounting for higher prices leading to fewer sales).
Thanks for sharing this! I was looking at https://www.givingwhatwecan.org/best-charities-to-donate-to-2024# to find some good related NGOs to donate to for a friend's birthday but didn't find a section in the front page (maybe in some subpages?). But I will donate to some of the orgs mentioned here!
Animal advocatestypically focus on sentience in order to base their moral claims. This similarity with humans can easily justify some form of similar (moral) treatment of animals and humans. And under the assumption that pain and pleasure are the sole things that have moral worth, extremely high numbers of sentient animals and their suffering has an overwhelming effect on moral considerations.
As a result, if one only considers the headcounts and the sufferings, the obvious conclusion is: total animal welfare >>> total human welfare.
But one needs to keep in mind that the immediate conclusion of the âunrestricted sentience approachâ is not only that animal welfare >>> human welfare. It is rather:
total invertebrate welfare >>>>>>>>>> total rest of the animal welfare >>> human welfare .
This is precisely because of the same reason: total invertebrate welfare is also overwhelming due to their astronomical numbers.
I donât think most people realise this or really adjust their actions/positions accordingly. If your main starting point is exclusively sentience and hedonism, your vote in the debate should not be only â%100 agree with 100 million dollars spent on animal welfareâ, it should also be â%99.99 agree with 100 million dollars spent on invertebrate welfareâ.
One might reasonably think that this moral weight framework should be wrong. âUnless you have confidence in the ruler's reliability, if you use a ruler to measure a table, you may also use the table to measure the rulerâ.
There might be other goods that cannot be adequately reduced to pain and pleasure: like friendship, knowledge, play, reason, etc. And these may have even more moral weight than mere pleasure and pain. Humans might have much higher capacity to actualize these different goods and therefore have higher status due to their nature. Some animals can also actualize these goods in their own limited capacities which can also justify some form of moral hierarchy between mammals, birds and invertebrates. This can then justify more (at least some) attention to non-invertabrate welfare as well. This can then also justify more (or at least some) attention to human welfare, despite overwhelming numbers of animals.
Thanks for your honesty Engin. This section truly reflects my doubts about animal welfare, which I guess has little to do with cost effectiveness or monitorability.. but more about the shadow of the the repugnant conclusion. The fear that we could end up prioritizing moths over humans simply because we keep insisting that the only thing that reflects value in the world is doing arithmetics with pain and pleasure.
I definately agree there is a lot of room of improvement in animal ethics. Most animal welfare people are cool with being unconventional but I think this kind of misses the point which is that we might not currently have the right moral framework.
I also think utilitarianism got "some" things right like extreme pain is really immoral, or one should be seeking efficiency (within reasonable limits) etc. But it remains weak and weird by itself, without any additional (and multiple) values and principles.
A lot of what I have seen regarding "EA Community teams" seems to be be about managing conflicts between different individuals.
Not sure I understand this part - curious if you could say more.
It would be interesting to see an organization or individual that was explicitly an expert in knowing different individuals and organizations and the projects that they are working on and could potentially connect people who might be able to add value to each other's projects.
I like this idea. A related idea/ framing that comes to mind.
There's often a lot of value for people having a strong professional network. Eg. for finding collaborators, getting feedback or input etc.
People's skills/ inclination for network building will vary a lot. And I suspect there's a significant fraction of people working on EA projects that have lower network building inclination/ skills, and would benefit from support in building their network.
eg. If I could sign up for a service that substantially increased my professional network/ helped me build more valuable professional relationships, I would and would be willing to pay for such a service.
Just when I have seen efforts to improve community relations it has typically been in the "Community Health" context relating to when people have had complaints about people in the community or other conflicts. I haven't seen as much concerted effort in connecting people working on different EA projects that might add value to each other.
I'm curious though if there has been any work done on the welfare math of this? Frankenchickens suffer more individually due to their size, but greater size also means less individual chickens are needed to satisfy demand. Furthermore, faster growth means less time spent alive and, presumably, suffering - or maybe more time, if slaughter makes up a large fraction of it?
It seems likely to me that Frankenchickens do entail more suffering and that banning them would mean less regardless, as increasing cost of production also lowers demand; plus the campaign is a good movement building endeavor. However, it would still be good to understand how much of priority this is relative to other policy changes.
In response to your question, this RSPCA report explores the question of fast-growing breeds of broiler chicken. They highlighted the intense suffering that these birds face and the inefficiencies of this system of farming. It is a 36-page report so here are a few key bits of the text:
An RSPCA commissioned trial revealed that, in general, compared to a commercially viable slower growing breed, these three conventional breeds had significantly higher mortality (including culls), poorer leg, hock and plummage health, and more birds affected by breast muscle disease (wooden breast and white striping)*. Further, they were less active â spending less time walking and standing, and more time feeding and sitting â and spent less time engaged in enrichment type behaviours: foraging, perching and dustbathing.
The genetics of these three conventional breeds fail to adequately safeguard their welfare* to such an extent that many birds of these breeds could be considered as having a life not worth living.
The severity of the welfare problems, the huge number of animals involved globally, and the fact that these welfare concerns have not been adequately addressed to date, means this long-standing issue requires urgent attention.
Moreover, it is apparent that the production of chicken meat using conventional breeds is a wasteful and ethically questionable business (e.g. higher mortality, higher culls, and poorer meat quality), bringing into question the sustainability of this enterprise.
There are commercially-viable breeds available that have improved welfare outcomes and these higher welfare breeds should replace the use of conventional breeds.
The Welfare Footprint Project used the Cumulative Pain Framework to investigate how the adoption of the Better Chicken Commitment (BCC) and similar welfare certification programs affect the welfare of broilers. Specifically, they examined concerns that the use of slower-growing breeds may increase suffering by extending the life of chickens for the production of the same amount of meat. From their main findings they stated:
'Our results strongly support the notion that adoption of BCC standards and slower-growing broiler strains have a net positive effect on the welfare of broiler chickens. Because most welfare offenses endured by broilers are strongly associated with fast growth, adoption of slower-growing breeds not only reduces the incidence of these offenses but also delays their onset. As a consequence, slower-growing birds are expected to experience a shorter, not longer, time in pain before being slaughtered.'
You can also read our own white paper on the welfare of broiler chickens.
EA organizations often underrate experience relative to âintelligenceâ and âvalue alignmentâ
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
Surely itâs not a case of either-or. EA exists because we all found that existing charity was not up to scratch, hence we do want EA to take different approaches. However, I think itâs important to also have people from outside EA (but with good value alignment) to provide diversity of thought and make sure there are no blindspots.
Impressive! A big "Weldone" to the TGov team. With your approach, dedication and achievement in just few months, I believe the TGov initiative will be a game-changer for Africa's emerging technologies governance.
However, I have two concerns running through my mind :
African Leaders are used to receiving policy recommendations but with little political will for implementation. How will you ensure your policy suggestions are really implemented?
Also, Are there any contingency plan perhaps for unforseen obstacles that can hinder TGovs work like Geopolitical crises in pilot countries, key personnel or team member departure? etc...
Thanks for your comments, Joseph. Regarding your raised points.
Our bio aspect focuses primarily on advisory policies, such as guidelines and protocols, rather than improving the local domestication of already established international standards. Our AI aspect is more about getting African stakeholders impactfully engaged in the global forum that defines redlines and boundaries, where their participation is currently lacking, resulting in significant gaps.
I think effective local domestication/implementation is a problem of its own.
To determine the countries we will engage with initially, we have considered various factors to minimize potential challenges in our prioritization exercise. These factors include indices related to democracy, the rule of law, peace, and ease of doing business, among others.
This is a great stride in the right direction. Looking forward to your exploration of the Africa Tech Space.
A quick question, how do you plan on bringing the seeming lack of government collaboration with this type of project especially in some parts of Africa?
We are actively building relationships with our identified stakeholders and aim to leverage these connections to further our mission. In some cases, we will utilize existing engagement and networks, such as those established by the APET, Africa CDCs, or agencies at the national level, to drive our mission.
I'd also like to really thank the judges for their feedback. It's a great luxury to be able to read many pages of thoughtful, probing questions about your work. I made several revisions & additions (and also split the entire thing into parts) in response to feedback, which I think improved the finished sequence a lot, and wish I had had the time to engage even more with the feedback.
Seconding Ben, I did a similar exercise and got similarly mixed (with stark examples in both directions) results (including in some instances you allude to in the post)
I think it's possible our views are compatible here. I want expertise to be valued more on the margin because I found EV and many other EA orgs to tilt towards an extreme of prioritizing value alignment, but I certainly believe there are cases where value alignment and general intelligence matter most and also that there are cases where expertise matters more.
I think the key lies in trying to figure out which situations are which in advance.
My name is Hans Erickson, I am a 65 year old IT professional that is semi-retired. I still own a small IT support company and have an employee who backfills for me, which allows me to travel.
On a trip to Africa in 2022, I was on a safari and was taken through a remote Botswana village that was the home of our tour guide. He pointed out the school house as we passed through. I had been in Africa once before 15 years earlier participating in a technology conference in Lagos, Nigeria. In my research at the time, I discovered the appalling lack of internet connectivity to the majority of the continent. I asked our tour guide about this, and he confirmed the school had no internet.
I volunteered to set up Starlink internet for the school when the service became available. Just a month ago Starlink officially began service in Botswana. I reached out to my contact and the school administrator that he had connected me to. Because it is a government school, they required formal approval, so I have written letters and responded to questions, but still no approval. I am hopeful now that it is in the hands of their IT administrators that a final approval is coming.
There are approx. 150 students attending the school. My plan is to install the starlink dish, Ubiquiti AP's and remote monitoring equipment, connect everything, and supply some chromebooks for student and administration use. I will also configure a google school account, which provides robust tools for school administrators and students.
I have volunteered to support the starlink subscription for a three year period, after which I hope to convince local authorities or Starlink to continue the service.
I only read the 'Doing Goog Better' book after having made this agreement. In the interest of effective altruism, I was hoping to learn from someone the metrics that would be most beneficial to track for a project like this. I am aware that risks are involved with providing high speed internet in a rural setting, but I am not sure exactly what those risks might be.
That's lovely Hans! Perhaps @NickLaing might have takes on your measurement question? Thanks for joining the EA Forum. I'm Toby, the Content Manager for the Forum (I run events, write newsletters, and talk with authors about their work). Let me know if you have any questions about EA, or using the Forum.
Impressive! A big "Weldone" to the TGov team. With your approach, dedication and achievement in just few months, I believe the TGov initiative will be a game-changer for Africa's emerging technologies governance.
However, I have two concerns running through my mind :
African Leaders are used to receiving policy recommendations but with little political will for implementation. How will you ensure your policy suggestions are really implemented?
Also, Are there any contingency plan perhaps for unforseen obstacles that can hinder TGovs work like Geopolitical crises in pilot countries, key personnel or team member departure? etc...
This is an awesome post, and it's a strong update in the direction of EV & CEA being much more transparent under your leadership. Very keen on hearing more from you in the future!
One other risk vector to EV stood out to me as concerning, but went somewhat unaddressed in this post. Consider:
EV was in a financial crisis; it had banked on receiving millions from FTX over the coming years
If a fraudulent or otherwise problematic individual hasn't been caught by the legal system, EV's donor due diligence tools may not catch them either.
I worry that the focus on legal risks is potentially missing a counterfactual here where a funding source is systematically upset. EV was not just banking on FTX to stay solvent / unfraudulent, but was also implicitly depending on cryptocurrency to remain frothy (the same can be said for EA, especially long-term risk cause areas, more broadly). Counterfactually, had FTX not been fraudulent, I still think that's it's likely that cryptocurrency would have collapsed over the following years. Assuming that the LTFF was receiving a proportion of FTX's funds, this still could've meant more than a 50% drop in funding from FTX (for example, Ethereum lost ~3/5ths of its market cap between November 2021 to November 2022).
You note:
Guardrails to prevent projects from running out of funding in a disorderly way and runway requirements to maintain resilience to possible future crises.
I would love to understand more about these financial controls. I can imagine that EV could probably withstand a sudden halving in funding from a major donor, by reallocating funding between projects, which is probably what's alluded to here.
(It's outside the scope of this post, but I'm not so sure that the broader long-term risk cause areas could have withstood this, and indeed, in the present scenario many organisations did not. I sort of worry about this kind of systematic risk with Anthropic, who could be hit quite hard if the current AI bubble starts winding down, even if they aren't directly responsible for it; I'm sure there are others.)
FTX as a funding source also had plenty of non-fraudulent failure modes. Having "banked on receiving millions from FTX over the coming years" to the extent that not receiving those funds created a crisis seems like a serious misjudgment. That being said, it isn't clear to me the extent to which FTX's donation amounts would have tied into short-term fluctuations in crypto values.
I can imagine that EV could probably withstand a sudden halving in funding from a major donor, by reallocating funding between projects, which is probably what's alluded to here.
The extent to which donations could be reallocated is unclear to me; it is possible for a donor to restrict donations to a specific purpose in a legally binding way. At least in some jurisdictions, those restrictions can often be binding even against the charity's creditors if the charity manages its finances correctly.
I read Zach to mean that projects need to have enough funding on hand to shut down in an orderly enough way -- which includes a way that does not create problems for sister projects -- in a near-worst case scenario. This could be a problem, for instance, if a project had financial commitments that bound EV but could not be satisfied out of resources allocated to the project.
There are, however, limits on what good financial controls can do for you if there's a massive funding shortfall and/or a massive unplanned liability. If (e.g.) a 50% revenue loss (not of a short-term nature) wouldn't seriously disrupt a charity's work, then that charity is probably too conservative on its spending or is raising excessive amounts of money that should go elsewhere.
A lot of what I have seen regarding "EA Community teams" seems to be be about managing conflicts between different individuals.
It would be interesting to see an organization or individual that was explicitly an expert in knowing different individuals and organizations and the projects that they are working on and could potentially connect people who might be able to add value to each other's projects. It strikes me that there are a lot of opportunities for collaboration but not as much organization around mapping out the EA space on a more granular level.
A lot of what I have seen regarding "EA Community teams" seems to be be about managing conflicts between different individuals.
Not sure I understand this part - curious if you could say more.
It would be interesting to see an organization or individual that was explicitly an expert in knowing different individuals and organizations and the projects that they are working on and could potentially connect people who might be able to add value to each other's projects.
I like this idea. A related idea/ framing that comes to mind.
There's often a lot of value for people having a strong professional network. Eg. for finding collaborators, getting feedback or input etc.
People's skills/ inclination for network building will vary a lot. And I suspect there's a significant fraction of people working on EA projects that have lower network building inclination/ skills, and would benefit from support in building their network.
eg. If I could sign up for a service that substantially increased my professional network/ helped me build more valuable professional relationships, I would and would be willing to pay for such a service.
EA organizations often underrate experience relative to âintelligenceâ and âvalue alignmentâ
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped.
Were these mostly situations in which EV had run into a major issue and then an outside expert was brought in? To the extent that the underlying developments that led to an issue came about from an EA / EV-insider way of thinking, I would expect significant performance costs associated with changing horses in midstream. So I wouldn't update much on the advisability of bringing in outside experts before a problem happens, or after a problem happens if the outside experts had played a role in setting up the underlying developments.
As a rough analogy, one can imagine a gridiron football offense that has been built (in terms of training, personnel, etc.) to align with a particular offensive strategy (e.g., the West Coast offense). If your team is set up that way, subbing in a key player whose skill set doesn't align to the previously chosen offensive strategy isn't usually going to work well in the short to medium run. This doesn't imply that the new player is bad -- just that your team has pre-committed to playing a particular offense. Ex ante, the new guy could have been the right player for your team contingent on your team having built a flexible enough system for him to work effectively in.
Thanks for another relevant question too! I do not think that alone would make dairy production net negative:
According to CIWF, "Dairy cows must give birth to one calf per year in order to continue producing milk".
From Animal Australia, "both mother and calf can often be heard calling out for each other for hours [after they are separated]".
Assuming disabling pain of 3 h/year for each the mother and child based on the above, one gets 6 h/year (= 2*3) of disabling pain. For my intensity of disabling pain, that corresponds to a loss of 0.00684 AQALY/year (= 6/24/365.25*10).
The above is quite small in comparison with the magnitude of the values I got for chickens and shrimp. So, in the absence of longer term effects from the separation, I do not think it would bring dairy production from positive to negative.
This is really interesting. Do you think that the fact cows are separated from their child, and arguably really don't like that, would change significantly the results?
Thanks for another relevant question too! I do not think that alone would make dairy production net negative:
According to CIWF, "Dairy cows must give birth to one calf per year in order to continue producing milk".
From Animal Australia, "both mother and calf can often be heard calling out for each other for hours [after they are separated]".
Assuming disabling pain of 3 h/year for each the mother and child based on the above, one gets 6 h/year (= 2*3) of disabling pain. For my intensity of disabling pain, that corresponds to a loss of 0.00684 AQALY/year (= 6/24/365.25*10).
The above is quite small in comparison with the magnitude of the values I got for chickens and shrimp. So, in the absence of longer term effects from the separation, I do not think it would bring dairy production from positive to negative.
This is an awesome post, and it's a strong update in the direction of EV & CEA being much more transparent under your leadership. Very keen on hearing more from you in the future!
One other risk vector to EV stood out to me as concerning, but went somewhat unaddressed in this post. Consider:
EV was in a financial crisis; it had banked on receiving millions from FTX over the coming years
If a fraudulent or otherwise problematic individual hasn't been caught by the legal system, EV's donor due diligence tools may not catch them either.
I worry that the focus on legal risks is potentially missing a counterfactual here where a funding source is systematically upset. EV was not just banking on FTX to stay solvent / unfraudulent, but was also implicitly depending on cryptocurrency to remain frothy (the same can be said for EA, especially long-term risk cause areas, more broadly). Counterfactually, had FTX not been fraudulent, I still think that's it's likely that cryptocurrency would have collapsed over the following years. Assuming that the LTFF was receiving a proportion of FTX's funds, this still could've meant more than a 50% drop in funding from FTX (for example, Ethereum lost ~3/5ths of its market cap between November 2021 to November 2022).
You note:
Guardrails to prevent projects from running out of funding in a disorderly way and runway requirements to maintain resilience to possible future crises.
I would love to understand more about these financial controls. I can imagine that EV could probably withstand a sudden halving in funding from a major donor, by reallocating funding between projects, which is probably what's alluded to here.
(It's outside the scope of this post, but I'm not so sure that the broader long-term risk cause areas could have withstood this, and indeed, in the present scenario many organisations did not. I sort of worry about this kind of systematic risk with Anthropic, who could be hit quite hard if the current AI bubble starts winding down, even if they aren't directly responsible for it; I'm sure there are others.)
I can think of a few scenarios where AGI doesn't kill us.
AGI does not act as a rational agent. The predicted doom scenarios rely on the AGI acting as a rational agent that maximises a utility function at all costs. This behaviour has not been seen in nature. Instead, all intelligences (natural or artificial) have some degree of laziness, which results in them being less destructive. Assuming the orthogonality thesis is true, this is unlikely to change.
The AGI sees humans as more useful alive than dead, probably because it's utility function involves humans somehow. This covers a lot of scenarios from horrible dystopias where AGI tortures us constantly to see how we react all the way to us actually somehow getting alignment right on the first try. It keeps us alive for the same reason as why we keep out pets alive.
The first A"G"I's are actually just a bunch of narrow AI's in a trenchcoat, and no one of them is able to overthrow humanity. A lot of recent advances in AI (including GPT4) have been propelled by a move away from generality and towards a "mixture of experts" model, where complex tasks are split into simpler ones. If this scales, one could expect more advanced systems to still not be general enough to act autonomously in a way that overpowers humanity.
AGI can't self improve because it runs face-first into the alignment problem! If we can think of how creating an intelligence greater than us results in the alignment problem, so can AGI. An AGI that fears creating something more powerful than itself will not do that, resulting in it remaining at around human level. Such an AGI would not be strong enough to beat all of humanity combined, so it will be smart enough not to try.
Species aren't lazy (those who are - or would be - are outcompeted by those who aren't).
The pets scenario is basically an existential catastrophe by other means (who wants to be a pet that is a caricature of a human like a pug is to a wolf?). And obviously so is the torture/dystopia one (i.e. not an "OK outcome"). What mechanism would allow us to get alignment right on the first try?
This seems like a very unstable equilibrium. All that is needed is for one of the experts to be as good as Ilya Sutskever at AI Engineering, to get past that bottleneck in short order (speed and millions of instances run at once) and foom to ASI.
It would also need to stop all other AGIs who are less cautious, and be ahead of them when self-improvement becomes possible. Seems unlikely given current race dynamics. And even if this does happen, unless it was very aligned to humanity it still spells doom for us due to the speed advantage of the AGI and it's different substrate needs (i.e. it's ideal operating environment isn't survivable for us).
Thank you for the insightful talk on scout mindset.My key take away is that good judgement from evidence based information helps to make better decision.Also,embracing growth mindset is key to an effective life.
This is a great stride in the right direction. Looking forward to your exploration of the Africa Tech Space.
A quick question, how do you plan on bringing the seeming lack of government collaboration with this type of project especially in some parts of Africa?
Thanks for writing this Zach! The broad strokes of the dynamics here are not news to me (I work at 80k which is a project of EV) but lots of the detail was novel and feels good to know.
EA organizations often underrate experience relative to âintelligenceâ and âvalue alignmentâ
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
I would find it valuable if you could share some public version of the spreadsheet, or if you quickly remember some specific examples. Hiring/contracting is very hard but almost always necessary.
I still think that EA Reform is pretty important. I believe that there's been very little work so far on any of the initiatives we discussed here.
My impression is that the vast majority of money that CEA gets is from OP. I think that in practice, this means that they represent OP's interests significantly more than I feel comfortable with. While I generally like OP a lot, I think OP's focuses are fairly distinct from those of the regular EA community.
Some things I'd be eager to see funded: - Work with CEA to find specific pockets of work that the EA community might prioritize, but OP wouldn't. Help fund these things. - Fund other parties to help represent / engage / oversee the EA community. - Audit/oversee key EA funders (OP, SFF, etc); as these often aren't reviewed by third parties. - Make sure that the management in key EA orgs are strong, including the boards. - Make sure that many key EA employees and small donors are properly taken care of and are provided with support. (I think that OP has reason to neglect this area, as it can be difficult to square with naive cost-effectiveness calculations) - Identify voices that want to tackle some of these issues head-on, and give them a space to do so. This could mean bloggers / key journalists / potential community leaders in the future. - Help encourage or set up new EA organizations to sit apart from CEA, but help oversee/manage the movement. - Help out the Community Health team at CEA. This seems like a very tough job that could arguably use more support, some of which might be best done outside of CEA.
Generally, I feel like there's a very significant vacuum of leadership and managerial visibility in the EA community. I think that this is a difficult area to make progress on, but also consider it much more important than other EA donation targets.
I really appreciate this post! I have a few spots of disagreement, but many more of agreement, and appreciate the huge amount of effort that went into summarizing a very complicated situation with lots of stakeholders over an extended period of time in a way that feels sincere and has many points of resonance with my own experience.
EA organizations often underrate experience relative to âintelligenceâ and âvalue alignmentâ
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
Seconding Ben, I did a similar exercise and got similarly mixed (with stark examples in both directions) results (including in some instances you allude to in the post)
Comments on 2024-10-31
Wes Reisen @ 2024-10-31T02:07 (+1) in response to đşđłWe can make world leaders agree on moral values THIS YEAR.
it seems boggling at first glance that this would work, but in summary, it would work like this: Sometimes, in an argument, one or more sides doesnât care about reaching the RIGHT conclusion, they just care about it reaching a conclusion they approve of. This is often the difficulty with arguments.
However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
This project would basically bring world leaders to the table, where they would look for the RIGHT conclusion to major problems, which should lead to the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
Wes Reisen @ 2024-10-31T02:35 (+1)
There is sort of precedent for this: science used to be much more argumentative, and now, most of science is done in very intelligent ways, aimed at getting to the RIGHT answer, and not âtheir answerâ. This led to many, if not most or all, scientific problems being solved*.
In addition, if you aim to be a powerful scientist, fighting for âyour answerâ makes it much harder than it is if you were fighting for the RIGHT answer. Similarly, if this project worked well, it would be much harder to gain power if you fought for âyour valuesâ than if you fought for the RIGHT values!
GraceAdamsđ¸ @ 2024-10-31T02:24 (+2) in response to Running a fundraiser as a University group?
I think it's a great idea to do a fundraising campaign as part of your university groups! Fundraisers can be a great way to raise awareness as well as money!
At Giving What We Can (I work there in Marketing), we have a some resources on running successful fundraisers which could be useful:
https://www.givingwhatwecan.org/get-involved/run-fundraisers
There have also been initiatives in the EA community like:
I think that fundraising for a cause tied to a run/walk or some type of other event that's happening near you could be a great way to gather momentum!
I would generally favour charities that 1) you think are highly effective 2) have a clear story that you can explain to potential donors about how it works and why they're worth supporting. I think GiveWell's top charities are great examples. Climate change charities or animal welfare charities could also resonate with people at universities!
Wes Reisen @ 2024-10-31T02:07 (+1) in response to đşđłWe can make world leaders agree on moral values THIS YEAR.
it seems boggling at first glance that this would work, but in summary, it would work like this: Sometimes, in an argument, one or more sides doesnât care about reaching the RIGHT conclusion, they just care about it reaching a conclusion they approve of. This is often the difficulty with arguments.
However, when everyone is brought to the table and wants to reach the RIGHT conclusion, you find that the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
This project would basically bring world leaders to the table, where they would look for the RIGHT conclusion to major problems, which should lead to the correct/RIGHT conclusion (seemingly) is arrived at much more often, is arrived at much faster, and as a bonus, the debate is much more respectful!
aogara @ 2024-10-31T01:51 (+2) in response to Trendlines in AIxBio evals
Nice! This is a different question, but I'd be curious if you have any thoughts on how to evaluate risks from BDTs. There's a new NIST RFI on bio/chem models asking about this, and while I've seen some answers to the question, most of them say they have a ton of uncertainty and no great solutions. Maybe reliable evaluations aren't possible today, but what would we need to build them?
Vasco Grilođ¸ @ 2024-10-30T22:50 (+2) in response to Cost-effectiveness analysis of Lafiya Nigeria intervention
Thanks for sharing. Do you think children born from unwanted pregnancies have positive lives? If so, would the family planning intervention still be beneficial accounting for the welfare loss of the children who would have been born from the prevented unwanted pregnancies? This seems like a crucial consideration.
jordanveđ¸ @ 2024-10-31T01:49 (+1)
I remember the Collinsâ being emphatically pro abortion and contraception to increase the cultural prestige and frequency of having children - so the poster couple of population=good seems to think contraception and abortion access does not reduce the population, all things considered. Iâm not sure if the lives of unwanted children are worth starting, but I should flag that Iâm generally pessimistic about which lives are worth starting.
Edit: Iâm not familiar with the culture of Nigeria. My intuitions about this developed in a western context and maybe there are relevant differences in Nigeria.
Eeveeđš @ 2024-10-30T21:56 (+3) in response to Eevee's Quick takes
Asking for a friend - there's no dress code for EAG, right?
Cillian_ @ 2024-10-31T01:39 (+3)
I reached out to the events team and they sent me this link :)
Habryka @ 2024-10-30T22:34 (+2) in response to JWS's Quick takes
I think this is the closest that I currently have (in-general, "sharing all of my OP related critiques" would easily be a 2-3 book sized project, so I don't think it's feasible, but I try to share what I think whenever it seems particularly pertinent):
https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st
I also have some old memos I wrote for the 2023 Coordination Forum I would still be happy to share with people if they DM me that I referenced a few times in past discussions.
yanni kyriacos @ 2024-10-31T01:35 (+2)
Yeah I've seen that. I think costly-signalling is very real, and the effort to create something formal, polished and thoughtful would go a long way. But obviously i have no idea what else you've got on your plate so YMMV
AndyMcKenzie @ 2024-10-31T00:55 (+3) in response to Announcing 'The Future Loves You: How and Why We Should Abolish Death'
Congrats! One way I've been thinking about this recently -- if we expect most people will permanently die now (usually without desiring to do so), but at some point in the future, humanity will "cure death," then interventions to allow people to join the cohort of people who don't have to involuntarily die could be remarkably effective from a QALY perspective. As I've argued before, I think that key questions for this analysis are how many QALYs individuals can experience, whether humans are simply replaceable, and what is the probability that brain preservation will help people get there. Another consideration is that if it could be performed cheaply enough -- perhaps with robotic automation of the procedure -- it could also be used for non-human animals, with a similar justification.
Ariel_ZJ @ 2024-10-31T01:03 (+4)
Yeah, as I see it, the motivations to pursue this differ in strength dramatically depending on whether one's flavour of utilitarianism is more inclined to a person-affecting view or a total hedonic view.
If you're inclined towards the person-affecting view, then preserving people for revival is a no-brainer (pun intended, sorry, I'm a terrible person).
If you hold more of a total hedonic view, then you're more likely to be indifferent to whether one person is replaced for any other. In that case, abolishing death only has value in so far as it reduces the suffering or increases the joy of people who'd prefer to hold onto their existing loved ones rather than have them changed out for new people over time. From this perspective, it'd be equally efficacious to just ensure no-one cared about dying or attachments to particular people, and a world in which everyone was replaced with new people of slightly higher utility would be a net improvement to the universe.
Back in the real world though, outside of philosophical thought experiments, I suspect most people aren't indifferent to whether they or their loved ones die and are replaced, so for humans at least I think the argument for preservation is strong. That may well hold for great ape cousins too, but it's perhaps a weaker argument when considering something like fish?
AndyMcKenzie @ 2024-10-31T00:55 (+3) in response to Announcing 'The Future Loves You: How and Why We Should Abolish Death'
Congrats! One way I've been thinking about this recently -- if we expect most people will permanently die now (usually without desiring to do so), but at some point in the future, humanity will "cure death," then interventions to allow people to join the cohort of people who don't have to involuntarily die could be remarkably effective from a QALY perspective. As I've argued before, I think that key questions for this analysis are how many QALYs individuals can experience, whether humans are simply replaceable, and what is the probability that brain preservation will help people get there. Another consideration is that if it could be performed cheaply enough -- perhaps with robotic automation of the procedure -- it could also be used for non-human animals, with a similar justification.
calebp @ 2024-10-30T22:18 (+10) in response to Running a fundraiser as a University group?
I think focussing on pledges of future income (if you are targeting students) seems great, most students don't have much money and are also used to living on a much lower amount than they will in a few years after graduating (particularly people in engineering, CS, and math).
Luke Moore đ¸ @ 2024-10-31T00:22 (+3)
I completely agree that focusing on pledges for students over direct fundraising is a good idea! In our latest internal impact evaluation (2022) at GWWC we found that each new 10% Pledge results in roughly $100,000 USD donated to high impact funding opportunities over the lifetime of the pledge (and we conservatively estimate that ~1/5 of that is counterfactual). Because of this, in my view focusing on promoting pledges is the more impactful path as one single 10% Pledge would raise more in the longrun as the most successful student fundraising campaign imaginable. It also has the added benefit of making a clear case for effective and significant giving which I think helps to promote positive values in the world and demonstrates the kind of principles that we care about in the EA community.
OTOH I think that often people feel like students might not feel able to make such a big commitment. However, I think that this is a little overcautious. I took the 10% Pledge as a student and found giving incredibly manageable. The 10% Pledge encouraged students to aim for about 1% of their spending money, which for me amounted to roughly ÂŁ100 a yearâless than the cost of a couple of pints each month. It was easy and, honestly, it felt really rewarding. Getting into the habit of giving early on has been very helpful as well. It became a core part of my identity, something I felt really proud of. Once I started working full-time, giving 10% of my income was easy. I simply was able to set it aside each month and hardly noticed it was gone. Since I had never been accustomed to that extra 10%, I've never felt like I was sacrificing anything.
Comments on 2024-10-30
Luke Moore đ¸ @ 2024-10-30T23:59 (+3) in response to Running a fundraiser as a University group?
Hey! Glad you want to bring more effective giving into your uni group. I myself took the 10% Pledge as student and still think it was amongst the best decisions I've ever made :)
I now work at Giving What We Can and we've developed a guide for how we can support / collaborate with EA groups to further our shared mission of spreading the ideas of effective giving, and effective altruism more broadly. I've DM'd you a link
Joris P @ 2024-10-30T23:22 (+14) in response to Updates on CEAâs Pilot University Program
Hey, Iâm Joris and I currently run CEAâs University Groups Team. I just wanted to share some more personal thoughts on this topic. My thoughts do not represent CEAâs official position and are also a bit messy, but I wanted to share them to be transparent and maybe provide some insight into how I am thinking about things.
In general, I often notice myself thinking that I feel sad to live in a world where a small fraction of the population has such outsized opportunities to shape the world. But people in certain positions do have outsized resources to impact the world, and if we get a chance to inspire these
people with EA ideas and motivate them to act on them, we should. Iâm excited to find someone who can work with us on making the most of this large opportunity for impact, and hope people apply for the role!
Matthew_Barnett @ 2024-10-30T23:09 (+2) in response to Pausing AI is the only safe approach to digital sentience
(I'm repeating something I said in another comment I wrote a few hours ago, but adapted to this post.)
On a basic level, I agree that we should take artificial sentience extremely seriously, and think carefully about the right type of laws to put in place to ensure that artificial life is able to happily flourish, rather than suffer. This includes enacting appropriate legal protections to ensure that sentient AIs are treated in ways that promote well-being rather than suffering. Relying solely on voluntary codes of conduct to govern the treatment of potentially sentient AIs seems deeply inadequate, much like it would be for protecting children against abuse. Instead, I believe that establishing clear, enforceable laws is essential for ethically managing artificial sentience.
That said, I'm skeptical that a moratorium is the best policy.
From a classical utilitarian perspective, the imposition of a lengthy moratorium on the development of sentient AI seems like it would help to foster a more conservative global cultureâone that is averse towards not only creating sentient AI, but also potentially towards other forms of life-expanding ventures, such as space colonization. Classical utilitarianism is typically seen as aiming to maximize the number of conscious beings in existence, advocating for actions that enable the flourishing and expansion of life, happiness, and fulfillment on as broad a scale as possible. However, implementing and sustaining a lengthy ban on AI would likely require substantial cultural and institutional shifts away from these permissive, exploratory values.
To enforce a moratorium of this nature, societies would likely adopt a framework centered around caution, restriction, and a deep-seated aversion to riskâvalues that would contrast sharply with those that encourage creating sentient life and proliferating this life on as large of a scale as possible. Maintaining a strict stance on AI development might lead governments, educational institutions, and media to promote narratives emphasizing the potential dangers of sentience and AI experimentation, instilling an atmosphere of risk-aversion rather than curiosity, openness, and progress. Over time, these narratives could lead to a culture less inclined to support or value efforts to expand sentient life.
Even if the ban is at some point lifted, there's no guarantee that the conservative attitudes generated under the ban would entirely disappear, or that all relevant restrictions on artificial life would completely go away. Instead, it seems more likely that many of these risk-averse attitudes would remain even after the ban is formally lifted, given the initially long duration of the ban, and the type of culture the ban would inculcate.
In my view, this type of cultural conservatism seems likely to, in the long run, undermine the core aims of classical utilitarianism. A shift toward a society that is fearful or resistant to creating new forms of life may restrict humanityâs potential to realize a future that is not only technologically advanced but also rich in conscious, joyful beings. If we accept the idea of 'value lock-in'âthe notion that the values and institutions we establish now may set a trajectory that lasts for billions of yearsâthen cultivating a culture that emphasizes restriction and caution may have long-term effects that are difficult to reverse. Such a locked-in value system could close off paths to outcomes that are aligned with maximizing the proliferation of happy, meaningful lives.
Thus, if a moratorium on sentient AI were to shape society's cultural values in a way that leans toward caution and restriction, I think the enduring impact would likely contradict classical utilitarianism's ultimate goal: the maximal promotion and flourishing of sentient life. Rather than advancing a world with greater life, joy, and meaningful experiences, these shifts might result in a more closed-off, limited society, actively impeding efforts to create a future rich with diverse and conscious life forms.
(Note that I have talked here mainly about these concerns from a classical utilitarian point of view. However, I concede that a negative utilitarian or antinatalist would find it much easier to rationally justify a long moratorium on AI.
It is also important to note that my conclusion holds even if one does not accept the idea of a 'value lock-in'. In that case, longtermists should likely focus on the near-term impacts of their decisions, as the long-term impacts of their actions may be impossible to predict. And I'd argue that a moratorium would likely have a variety of harmful near-term effects.)
Vasco Grilođ¸ @ 2024-10-30T22:50 (+2) in response to Cost-effectiveness analysis of Lafiya Nigeria intervention
Thanks for sharing. Do you think children born from unwanted pregnancies have positive lives? If so, would the family planning intervention still be beneficial accounting for the welfare loss of the children who would have been born from the prevented unwanted pregnancies? This seems like a crucial consideration.
Eeveeđš @ 2024-10-30T21:56 (+3) in response to Eevee's Quick takes
Asking for a friend - there's no dress code for EAG, right?
alex lawsen @ 2024-10-30T22:43 (+5)
I've seen people wear a very wide range of things at the EAGs I've been to.
yanni kyriacos @ 2024-10-30T19:30 (+2) in response to JWS's Quick takes
Hello Habryka! I occasionally see you post something OP critical and am now wondering âis there a single post where Habryka shares all of his OP related critiques in one spot?â
If that does exist I think it could be very valuable to do.
Habryka @ 2024-10-30T22:34 (+2)
I think this is the closest that I currently have (in-general, "sharing all of my OP related critiques" would easily be a 2-3 book sized project, so I don't think it's feasible, but I try to share what I think whenever it seems particularly pertinent):
https://www.lesswrong.com/posts/wn5jTrtKkhspshA4c/michaeldickens-s-shortform?commentId=zoBMvdMAwpjTEY4st
I also have some old memos I wrote for the 2023 Coordination Forum I would still be happy to share with people if they DM me that I referenced a few times in past discussions.
Vasco Grilođ¸ @ 2024-10-30T22:34 (+2) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
Thanks, Elliot.
There is another point which makes me especially in favour of focussing on reducing suffering, and also increasing happiness. Ending factory-farming only increases animal welfare if factory-farmed animals continue to have negative lives forever, whereas I would say they may become positive in the next few decades at least in some animal-friendly countries.
calebp @ 2024-10-30T22:18 (+10) in response to Running a fundraiser as a University group?
I think focussing on pledges of future income (if you are targeting students) seems great, most students don't have much money and are also used to living on a much lower amount than they will in a few years after graduating (particularly people in engineering, CS, and math).
Will Aldred @ 2024-10-26T21:56 (+45) in response to What should EAIF Fund?
Open Phil has seemingly moved away from funding âfrontier of weirdnessâ-type projects and cause areas; I therefore think a hole has opened up that EAIF is well-placed to fill. In particular, I think an FHI 2.0 of some sort (perhaps starting small and scaling up if itâs going well) could be hugely valuable, and that finding a leader for this new org could fit in with your ârunning specific application rounds to fund people to work on [particularly valuable projects].â
My sense is that an FHI 2.0 grant would align well with EAIFâs scope. Quoting from your announcement post for your new scope:
Having said this, I imagine that you saw Habrykaâs âFHI of the Westâ proposal from six months ago. The fact that that has not already been funded, and that talk around it has died down, makes me wonder if you have already ruled out funding such a project. (If so, Iâd be curious as to why, though of course no obligation on you to explain yourself.)
Jason @ 2024-10-30T22:11 (+2)
One possible concern with this idea is that the project would probably take a lot of funding to launch. With Open Phil's financial distancing from EA Funds, my guess is that EAIF may often not be in the ideal position to be an early funder of a seven-figure-a-year project, by which I mean one that comes on board earlier than individual major funders.
I can envision some cases in which EAIF might be a better fit for seed funding, such as cases where funding would allow further development or preliminary testing of a big-project proposal to the point it could be better evaluated by funders who can consistently offer mid-six figures plus a year. It's unclear how well that would describe something like the FHI/West proposal, though.
I could easily be wrong (or there could already be enough major funder interest to alleviate the first paragraph concern), and a broader discussion about EAIF's comparative advantages / disadvantages for various project characteristics might be helpful in any event.
Eeveeđš @ 2024-10-30T21:56 (+3) in response to Eevee's Quick takes
Asking for a friend - there's no dress code for EAG, right?
NickLaing @ 2024-10-30T21:52 (+6) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
I like the thought process and the sentiment, but I think big goals are a critical guiding light for the future. "Reducing suffering as much as possible" is neither inspirational enough nor concrete enough to work as as public waiting for
"End factory farming" is a clearer a inspiring rallying point, the same way we in global development do talk about ending poverty, and yes eradicating Malaria. The millennium and sustainable development goals use those kind of terms and I believe help light the way.
Call me naive, but I think distant hope is more likely to keep people going than lead to burnout, as long as we are realistic about out short term goals. I don't think ending factory farming is unrealistic long term.
david_reinstein @ 2024-10-30T21:41 (+2) in response to AI Safety Chatbot
Any updates on this? Have other projects and tools superceded it?
We're looking to do something similar with content from unjournal.org. We're exploring the alternatives, and considering hiring a specialist for this project.
See this attempt with NotebookLM.
MatthewDahlhausen @ 2024-10-30T21:24 (+1) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
Can you elaborate on why you think we will never eradicate factory farming? You point to near-term trends that suggest it will get worse over the coming decades. What about on a century long time scale or longer? Factory farming has only been around for a few generations, and food habits have changed tremendously over that time.
I think it's important to consider how some strategies may make future work difficult. For example, Martha Nussbaum highlights how much of the legal theory in the animal rights movement has relied on showing similarities between human and animal intelligence. Such a "like us" comparison limits consideration to a small subset of vertebrates. They are impotent at helping animals like chickens, were much legal work is happening now. Other legal theories are much more robust to expansion and consideration of other animals as the science improves to understand their needs and behavior.
Using your line of argument applied to the analogy you provided would suggest that efforts like developing a malaria vaccine are misguided, because malaria will always be with us, and we should just focus on reducing infection rates and treatment.
Toby_Ord @ 2024-10-30T12:07 (+9) in response to We should prevent the creation of artificial sentienceÂ
I also wanted to add that as someone who leans more towards valuing happiness and suffering equally, that I still find the moratorium to be a good idea and don't feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally importance choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we're doing doesn't build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.
I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don't think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting.
Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (moratorium on creating beings that suffer). c.f. we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and disallowing it would be disastrous. At the moment, we aren't in a good position to understand the balances of suffering and joy in artificial beings and I'd be inclined to say that a moratorium on creating artificial suffering is a good thing, but when we do understand how to measure this and to tip the scales heavily in favour of positive experience, then a continued ban may be terrible. (That said, we may work also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)
Matthew_Barnett @ 2024-10-30T21:22 (+2)
Given your statement that "a 50-year delay in order to make this monumentally importance choice properly would seem to be a wise and patient decision by humanity", I'm curious if you have any thoughts on the comment I just wrote, particularly the part arguing against a long moratorium on creating sentient AI, and how this can be perceived from a classical utilitarian perspective.
Matthew_Barnett @ 2024-10-30T21:08 (+2) in response to We should prevent the creation of artificial sentienceÂ
On a basic level, I agree that we should take artificial sentience extremely seriously, and think carefully about the right type of laws to put in place to ensure that artificial life is able to happily flourish, rather than suffer. This includes enacting appropriate legal protections to ensure that sentient AIs are treated in ways that promote well-being rather than suffering. Relying solely on voluntary codes of conduct to govern the treatment of potentially sentient AIs seems deeply inadequate, much like it would be for protecting children against abuse. Instead, I believe that establishing clear, enforceable laws is essential for ethically managing artificial sentience.
However, it currently seems likely to me that sufficiently advanced AIs will be sentient by default. And if advanced AIs are sentient by default, then instituting a temporary ban on sentient AI development, say for 50 years, would likely be functionally equivalent to pausing the entire field of advanced AI for that period.
Therefore, despite my strong views on AI sentience, I am skeptical about the idea of imposing a moratorium on creating sentient AIs, especially in light of my general support for advancing AI capabilities.
Why I think sufficiently advanced AIs will likely be sentient by default
The idea that sufficiently advanced AIs will likely be sentient by default can be justified by three basic arguments:
Why I'm skeptical of a general AI moratorium
My skepticism of a general AI moratorium contrasts with those of (perhaps) most EAs, who appear to favor such a ban, for both AI safety reasons and to protect AIs themselves (as you argue here). I'm instead inclined to highlight the enormous costs of such a ban, compared to a variety of cheaper alternatives, such as targeted regulation that merely ensures AIs are strongly protected against abuse. These costs appear to include:
Moreover, from a classical utilitarian perspective, the imposition of a 50-year moratorium on the development of sentient AI seems like it would help to foster a more conservative global cultureâone that is averse towards not only creating sentient AI, but also potentially towards other forms of life-expanding ventures, such as space colonization. Classical utilitarianism is typically seen as aiming to maximize the number of conscious beings in existence, advocating for actions that enable the flourishing and expansion of life, happiness, and fulfillment on as broad a scale as possible. However, implementing and sustaining a lengthy ban on AI would likely require substantial cultural and institutional shifts away from these permissive, exploratory values.
To enforce a moratorium of this nature, societies would likely adopt a framework centered around caution, restriction, and a deep-seated aversion to riskâvalues that would contrast sharply with those that encourage creating sentient life and proliferating this life on as large of a scale as possible. Maintaining a strict stance on AI development might lead governments, educational institutions, and media to promote narratives emphasizing the potential dangers of sentience and AI experimentation, instilling an atmosphere of risk-aversion rather than curiosity, openness, and progress. Over time, these narratives could lead to a culture less inclined to support or value efforts to expand sentient life.
Even if the ban is at some point lifted, there's no guarantee that the conservative attitudes generated under the ban would entirely disappear, or that all relevant restrictions on artificial life would completely go away. Instead, it seems more likely that many of these risk-averse attitudes would remain even after the ban is formally lifted, given the initially long duration of the ban, and the type of culture the ban would inculcate.
In my view, this type of cultural conservatism seems likely to, in the long run, undermine the core aims of classical utilitarianism. A shift toward a society that is fearful or resistant to creating new forms of life may restrict humanityâs potential to realize a future that is not only technologically advanced but also rich in conscious, joyful beings. If we accept the idea of 'value lock-in'âthe notion that the values and institutions we establish now may set a trajectory that lasts for billions of yearsâthen cultivating a culture that emphasizes restriction and caution may have long-term effects that are difficult to reverse. Such a locked-in value system could close off paths to outcomes that are aligned with maximizing the proliferation of happy, meaningful lives.
Thus, if a moratorium on sentient AI were to shape society's cultural values in a way that leans toward caution and restriction, I think the enduring impact would likely contradict classical utilitarianism's ultimate goal: the maximal promotion and flourishing of sentient life. Rather than advancing a world with greater life, joy, and meaningful experiences, these shifts might result in a more closed-off, limited society, actively impeding efforts to create a future rich with diverse and conscious life forms.
(Note that I have talked here mainly about these concerns from a classical utilitarian point of view, and a person-affecting point of view. However, I concede that a negative utilitarian or antinatalist would find it much easier to rationally justify a long moratorium on AI.
It is also important to note that my conclusion holds even if one does not accept the idea of a 'value lock-in'. In that case, longtermists should likely focus on the near-term impacts of their decisions, as the long-term impacts of their actions may be impossible to predict. And my main argument here is that the near term impacts of such a moratorium are likely to be harmful in a variety of ways.)
Forumite @ 2024-10-30T20:51 (+14) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
I thought this was a well-written, thoughtful and highly intelligent piece, about a really important topic, where getting as close as possible to the truth is super-important and high-stakes. Kudos! I gave it a strong upvote. :)
I am starting from the point of being fairly attached to the âletâs try to end factory farming!â framing, but this post has given me a lot to think about.
I wanted to share a bunch of thoughts that sprung to my mind as I read the post:
One potential advantage of the âletâs try to end factory farming!â framing is that it encourages us to think long-term and systematically, rather than short-term and narrowly. I take long-termism to be true: future suffering matters as much as present-day suffering. I worry that a framing of âletâs accept that factory farming will endure; how can we reduce the most sufferingâ quickly becomes âhow can we reduce the most suffering *right now*, in a readily countable and observable wayâ. This might make us miss opportunities and theories of change which will take longer to work up a head of steam, but which over the long term, may lead to more suffering reduction. It may also push us towards interventions which are easily countable, numerically, at the expense of interventions which may actually, over time, lead to more suffering-reduction, but in more uncertain, unpredictable, indirect and harder-to-measure ways. It may push us towards very technocratic and limited types of intervention, missing things like politics, institutions, ideas, etc. It may discourage creativity and innovation. (To be clear: this is not meant to be a âwoo-wooâ point; Iâm suggesting that these tendencies may fail in their own terms to maximize expected suffering reduction over time).
Aiming to end factory farming encourages us to aim high. Imagine we have a choice between two options, as a movement: try to eradicate 100pc of the suffering caused by factory farming, by abolishing it (perhaps via bold, risky, ambitious theories-of-change). Or, try to eradicate 1pc of the suffering caused by factory farming, through present-day welfare improvements. The high potential payoff of eradicating factory farming seems to look good here, even if we think thereâs only (say) a 10pc chance of it working. I.e, perhaps the best way to maximise expected suffering reduction is, in fact, to âgambleâ a bit and take a shot at eradicating factory farming.
If we give up on even trying to end factory farming, doesnât this become a self-fulfilling prophecy? If we do this, we guarantee that we end up in a world where factory framing endures. Given uncertainty, shouldnât (at least some of) the movement try to aim high and eradicate it?
Iâm not sure that the analogy with malaria/poverty/health/development is perfect:
Iâm very unsure about this, but I *guess* that a framing of âfactory faming is a gigantic moral evil, letâs eradicate itâ is, on balance, more motivating/attracting than a framing of âfactory farming is a gigantic moral evil, weâll never defeat it, but we can help a tonne of animals, letâs do itâ (?)
*If* we knew the future for sure, and knew it would be impossible ever to eradicate factory farming, then I do agree that we should face facts and adjust our strategy accordingly, rather than live in hope. My gut instinct though is that we canât be sure of this, and there are arguments in favor of aiming for big, bold, systemic changes and wins for animals.
These are just some thoughts that sprang to mind, I don't think that in and of themselves they fully repudiate the case you thoughtfully made. I think more discussion and thought on this topic is important; kudos for kicking this off with your post!
(For those interested, the Sentience Institute have done some fascinating work on the analogies and dis-analogies of factory farming vs other moral crimes such as slavery - eg here and here.)
nil @ 2024-10-30T20:23 (+1) in response to Tell me what to do in the next months
Hi Cassidy! Organisation for the Prevention of Intense Suffering may still be looking for volunteers, as far as I know. If this sounds interesting to you, consider contacting them. All the best!
Habryka @ 2024-10-26T18:12 (+14) in response to JWS's Quick takes
The answer for a long time has been that it's very hard to drive any change without buy-in from Open Philanthropy. Most organizations in the space are directly dependent on their funding, and even beyond that, they have staff on the boards of CEA and other EA leadership organizations, giving them hard power beyond just funding. Lincoln might be on the EV board, but ultimately what EV and CEA do is directly contingent on OP approval.
OP however has been very uninterested in any kind of reform or structural changes, does not currently have any staff participate in discussion with stakeholders in the EA community beyond a very small group of people, and is majorly limited in what it can say publicly due to managing tricky PR and reputation issues with their primary funder Dustin and their involvement in AI policy.
It is not surprising to me that Lincoln would also feel unclear on how to drive leadership, given this really quite deep gridlock that things have ended up in, with OP having practically filled the complete power vacuum of leadership in EA, but without any interest in actually leading.
yanni kyriacos @ 2024-10-30T19:30 (+2)
Hello Habryka! I occasionally see you post something OP critical and am now wondering âis there a single post where Habryka shares all of his OP related critiques in one spot?â
If that does exist I think it could be very valuable to do.
Keyvan Mostafavi @ 2024-10-30T18:06 (+1) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
Could you develop this part please? The "why this problem is much harder and disanalogous" part.
ElliotTep @ 2024-10-30T19:05 (+1)
Good question, I wasn't sure how much to err on the side of brevity vs thoroughness.
To phrase it differently I think sometimes advocates start their strategy with the final line 'and then we end factory farming', and then try to develop a strategy about how do we get there. I don't think it is reasonable to assume this is going to happen, and I think this leads to overly optimistic theories of change. From time to time I see a claim about how meat consumption will be drastically reduced in the next few decades based on a theory that is far too optimistic and/or speculative.
For example, I've seen work claim that when plant-based meat reaches taste and price parity, people will choose plant-based over conventional meat, so if we raise the price of meat via regulation, and lower the cost of plant-based, there will be high adoption of plant-based, and meat reduction will be 30% lower by 2040 (those numbers are made up, but ball-park correct). I think these claims just aren't super well founded and some research showed that when a university cafeteria offered impossible and regular burgers, adoption was still quite low (anyone know the citation?).
Keyvan Mostafavi @ 2024-10-30T18:06 (+1) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
Could you develop this part please? The "why this problem is much harder and disanalogous" part.
Keyvan Mostafavi @ 2024-10-30T18:03 (+3) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
I found your article very useful.
Similar thoughts to the ones you express here led me to write this post: Fighting animal suffering: beyond the number of animals killed
Christoph Hartmann đ¸ @ 2024-10-18T11:44 (+1) in response to Are Organically Farmed Animals Already Living a Net-Positive Life?
Thanks for your thoughts!
On your question: I chose organic because I had initially planned to take the EU Organic one because itâs so wide spread here and has some animal welfare standards. In the end I chose Naturland though because it seems to be stronger on animal welfare, and I wanted to make a strong case.
I am not aware of any reported malpractices as the one you cited for that label but of course there is always a chance to have these outliers.
alene @ 2024-10-30T17:45 (+4)
Oh, got it! I am so sorry. I'm American and have a very American-centric worldview. I was thinking of organic as referring to the United States Department of Agriculture (USDA) Organic certification. I therefore feel like I pretty much totally missed what you actually meant by your post. I'm sorry! đŞđş
Moritz Stumpe đ¸ @ 2024-10-30T16:05 (+11) in response to Are Organically Farmed Animals Already Living a Net-Positive Life?
Thanks for your interest in our work!
I think the traditional settings are better for animal welfare, though there are huge differences and I've come to realise that traditional vs. intensive is a bit of a false dichotomy (but it's useful for communication purposes). To lay out my perspective in a bit more detail (I am not an animal scientist or anything and more of a generalist researcher who has read some of the work done by Welfare Footprint Project an others, attended some webinars, etc.):
All of these categories are of course still heavy simplifications (e.g., enriched battery cages and deep littre systems for hens could both fall into the better-regulated factory farming settings category). And of course none of this tells us much about which (if any) of these lives are net positive/negative, but we already discussed that :)
You may find the concept of a "animal welfare Kuznetz curve" interesting. Though I'm not sure how strong the evidence behind this is.
Sorry for the long answer, but hope it's relevant/interesting. I think our top priority should be to avoid the worst outcome on this list (the first bullet point), which is what we are trying to do at AAA. Also because the numbers in that category could grow massively (also think about largely unregulated industries such as shrimp or insect farming).
Final point: I think people strongly underestimate the extent to which animal agriculture is already industrialised in parts of Africa (I did so too before digging deeper into this). This 2022 source cites 60% of hens in Africa being kept in cages. There tend to be a lot of smallholder farmers, but they keep quite a small number of animals per capita, so their animal numbers are outweighed by bigger industrial producers.
Christoph Hartmann đ¸ @ 2024-10-30T17:09 (+1)
Thanks so much for writing out all of this!
I am really surprised by the 60% number. Will update my internal model ;)
And fully agree that highly intensive farming with no regulation is the worst of both worlds and very worthwhile to work on. Thank you for that work!!
Christoph Hartmann đ¸ @ 2024-10-24T14:44 (+2) in response to Are Organically Farmed Animals Already Living a Net-Positive Life?
Thanks! Indeed thinking along the same lines although I have a much stronger intuition that most human and wild animal lives are lives worth living.
From the comment section I liked
Somewhat unrelated to this but I read your work for Animal Advocacy Africa. How do you look at the welfare of animals farmed in more traditional settings there? E.g., chickens in a village or small cattle herds by roaming tribes like the Kenyan Maasai? Just from looking at them I always guessed that they have a "good life" but curious what you think! From some conversations I understood that factory farming also becomes more prominent in Kenya but the majority still seems to be farmed in more traditional settings.
Moritz Stumpe đ¸ @ 2024-10-30T16:05 (+11)
Thanks for your interest in our work!
I think the traditional settings are better for animal welfare, though there are huge differences and I've come to realise that traditional vs. intensive is a bit of a false dichotomy (but it's useful for communication purposes). To lay out my perspective in a bit more detail (I am not an animal scientist or anything and more of a generalist researcher who has read some of the work done by Welfare Footprint Project an others, attended some webinars, etc.):
All of these categories are of course still heavy simplifications (e.g., enriched battery cages and deep littre systems for hens could both fall into the better-regulated factory farming settings category). And of course none of this tells us much about which (if any) of these lives are net positive/negative, but we already discussed that :)
You may find the concept of a "animal welfare Kuznetz curve" interesting. Though I'm not sure how strong the evidence behind this is.
Sorry for the long answer, but hope it's relevant/interesting. I think our top priority should be to avoid the worst outcome on this list (the first bullet point), which is what we are trying to do at AAA. Also because the numbers in that category could grow massively (also think about largely unregulated industries such as shrimp or insect farming).
Final point: I think people strongly underestimate the extent to which animal agriculture is already industrialised in parts of Africa (I did so too before digging deeper into this). This 2022 source cites 60% of hens in Africa being kept in cages. There tend to be a lot of smallholder farmers, but they keep quite a small number of animals per capita, so their animal numbers are outweighed by bigger industrial producers.
Richard Y Chappellđ¸ @ 2024-10-30T14:38 (+6) in response to What should EAIF Fund?
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who - unlike me - are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to - what I tend to see as the higher priority - drawing attention to apt EA criticisms of ordinary moral thought and behavior and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
Jason @ 2024-10-30T15:54 (+2)
I'd be worried that -- even assuming the funding did not actually influence the content of the speech -- the author being perceived as on the EA payroll would seriously diminish the effectiveness of this work. Maybe that is less true in the context of a professional journal where the author's reputation is well-known to the reader than it would be somewhere like Wired, though?
Aaron Boddyđ¸ @ 2024-10-07T09:30 (+38) in response to Cost-effectiveness of Shrimp Welfare Projectâs Humane Slaughter Initiative
Thanks so much Vasco for your work on this! As with MHR in the past, we really appreciate folks doing in-depth analyses like this, and are very appreciative of the interest in our work :)
In the spirit of this weekâs Forum theme, I wanted to provide some more context regarding SWPâs room for more funding.
Our overheads (i.e. salaries, travel/conferences) and program costs for the India sludge removal work, are currently covered by grants until the end of 2026. Meaning that any additional funds are put towards HSI. (For context, our secured grants do also cover the cost of some stunners, but HSI as a program is still able to absorb more funding).
Each stunner costs us $55k and we ask the producers we work with to commit to stunning a minimum of 120 million shrimps per annum. This results in a cost-effectiveness of ~2,000+ shrimps helped / $ / year (i.e. our marginal impact of additional dollars is higher than our historical cost-effectiveness).
Weâre having our annual team retreat (which we call âShrimposiumâ) next week, during which we hope to map out how we can deploy stunners in such a way as to catalyse a tipping point so that pre-slaughter stunning becomes the industry standard.
Weâve had some good indications recently that HSI does contribute to âlocking-inâ industry adoption, with Tesco and Sainsburyâs recently publishing welfare policies, building on similar wins in the past (such as M&S and Albert Heijn).
This has always been the Theory of Change for the HSI project. Although weâre very excited by how cost-effective it is in its own right, ultimately we want to catalyse industry-wide adoption - deploying stunners to the early adopters in order to build towards a tipping point that achieves critical mass. In other words, over the next few years we want to take the HSI program from Growth to Scale.
I would be surprised if post-Shrimposium our targets regarding HSI required less funding than our current projections. In other words, though I donât currently have an exact sense of our room for more funding, Iâm confident SWP is in a position to absorb significantly more funding to support our HSI work.
If anyone wants to reach out to me directly, you can contact me at aaron@shrimpwelfareproject.org. You can also donate to SWP through our website, or book a meeting with me via this link.
Vasco Grilođ¸ @ 2024-10-30T15:47 (+2)
Thanks for the great context, Aaron!
Is there any chance HSI may increase the number of shrimp? I guess it would tend to increase costs, and therefore decrease the number of shrimp. I ask because I estimate that moving from ice slurry to electrical stunning only increases welfare by 4.34 % (= 1 - 4.85/5.07). In this case, since I think farmed shrimp have negative lives (for any slaughter method), an increase of more than 4.34 % in the number of shrimp would make HSI harmful.
Toby_Ord @ 2024-10-30T15:03 (+3) in response to We should prevent the creation of artificial sentienceÂ
In your piece you focus on artificial sentience. But similar arguments would apply to somewhat broader categories.
Wellbeing
For example, you could expand it to creating entities that can have wellbeing (or negative elements of wellbeing) even if that wellbeing can be determined by things other than conscious experience. If there were ways of creating millions of beings with negative wellbeing, I'd be very disturbed by that regardless of whether it happened by suffering or some other means. I'm sympathetic to views where suffering is the only form of wellbeing, but am by no means sure they are the correct account of wellbeing, so maybe what I really care about is avoiding creating beings that can have (negative) wellbeing.
Interests
One could also go a step further. Wellbeing is a broad category for all kinds of things that count towards how well your life goes. But on many people's understandings, it might not capture everything about ill treatment. In particular, it might not capture everything to do with deontological wrongs and/or rights violations, which may involve wronging someone in a way that can't be made up for by improvements in wellbeing and can't be cashed out purely in terms of its negative effects on wellbeing. So it may be that creating beings with interests or morally relevant interests is the relevant category.
That said, note that these are both steps towards greater abstraction, so even if they better capture what we really care about, they might still lose out on the grounds of being less compelling, more open to interpretation, and harder to operationalise.
Henri Thunberg đ¸ @ 2024-10-30T03:17 (+2) in response to Tell me what to do in the next months
Hi Cassidy!
Thanks for being so generous in offering up your time for EA initiatives, I hope you find a good fit that is full of both impact and meaning :) You might be interested in helping out in the Nordic effective giving landscape, where I am Chairman for Ge Effektivt.
At Gi Effektivt and Ge Effektivt in Norway/Sweden respectively, we work to fundraise for charities recommended by GiveWell, Animal Charity Evaluators, and Giving Green. You might be familar either with us or counterparts from other countries like Effektiv Spenden or Ayuda Efectiva. The total money raised is a few million dollars per year, with significant increase in the past year.
Norway and Sweden share backend and most aspects of frontend, and we have a full-time CTO who is significantly capacity constrained. It sounds like you have skills that could come in handy, and I think we could have the right size where you'd get the right kind of support/counterpart to get you leverage on your time â and still an org small enough that you could make an impressive difference in a few months.
Happy to answer any questions at an initial stage, but in case interested I think it would be even more useful would be for you to speak to our CTO to get an idea of whether you could be helpful and if the projects excite you. You can either DM me here, email henri[at]geeffektivt.se, or reach out to the technical team directly.
Cassidy @ 2024-10-30T14:58 (+2)
Hi Henri, Thanks for your comment and suggestion. I have contacted Hakon, assuming that's the CTO you mean. :)
Richard Y Chappellđ¸ @ 2024-10-30T14:38 (+6) in response to What should EAIF Fund?
I'd like to see more basic public philosophy arguing for effective altruism and against its critics. (I obviously do this a bunch, and am puzzled that there isn't more of it, particularly from philosophers who - unlike me - are actually employed by EA orgs!)
One way that EAIF could help with this is by reaching out to promising candidates (well-respected philosophers who seem broadly sympathetic to EA principles) to see whether they could productively use a course buyout to provide time for EA-related public philosophy. (This could of course include constructively criticizing EA, or suggesting ways to improve, in addition to - what I tend to see as the higher priority - drawing attention to apt EA criticisms of ordinary moral thought and behavior and ways that everyone else could clearly improve by taking these lessons on board.)
A specific example that springs to mind is Richard Pettigrew. He independently wrote an excellent, measured criticism of Leif Wenar's nonsense, and also reviewed the Crary et al volume in a top academic journal (Mind, iirc). He's a very highly-regarded philosopher, and I'd love to see him engage more with EA ideas. Maybe a course buyout from EAIF could make that happen? Seems worth exploring, in any case.
Ofer @ 2024-10-30T13:54 (+1) in response to Stable totalitarianism: an overview
How do totalitarian regimes compare to non-totalitarian regimes in this regard?
Notice that this definition may not apply to a hypothetical state that gives some freedoms to millions of people while mistreating 95% of humans on earth (e.g. enslaving and torturing people, using weapons of mass destruction against civilians, carrying out covert operations that cause horrible wars, enabling genocide, unjustly incarcerating people in for-profit prisons).
PeterMcCluskey @ 2024-09-08T18:42 (+1) in response to Fungal diseases: Health burden, neglectedness, and potential interventions
I want to emphasize that this just sets a lower bound on the importance.
E.g. there's a theory that fungal infections are the primary cause of cancer.
How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can't tell whether it's due in part to a fungal infection. He's got elevated mycotoxins in his urine, but that might be due to past exposure to a moldy environment. He's trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he has tried.
It feels like we need something more novel than slightly better versions of existing approaches to fungal infections. Maybe something as radical as nanomedicine, but that's not very tractable yet.
Mo Putera @ 2024-10-30T13:41 (+1)
Agree with the lower bound on fungal burden. For the post you linked I'd signal-boost J Bostock's 7 criticisms too.
Remmelt @ 2024-10-30T13:21 (+2) in response to OpenAI defected, but we can take honest actions
Just found a podcast on OpenAIâs bad financial situation.
Itâs hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawzcuk).
https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/
Toby_Ord @ 2024-10-30T12:07 (+9) in response to We should prevent the creation of artificial sentienceÂ
I also wanted to add that as someone who leans more towards valuing happiness and suffering equally, that I still find the moratorium to be a good idea and don't feel pangs of frustration about the lost positive experience if we were to delay. I would be very concerned if humanity chose now to never produce any new kinds of beings who could feel, as that may forever rule out some of the best futures available to us. But as you say, a 50-year delay in order to make this monumentally importance choice properly would seem to be a wise and patient decision by humanity (given that we can see this is a crucial choice for which we are ill prepared). It is important that our rapid decision to avoid doing something before we know what we're doing doesn't build in a bias towards never doing it, so some care might need to be taken with the end condition for a moratorium. Having it simply expire after 20 to 50 years (but where a new one could be added if desired) seems pretty good in this regard.
I think that thoughtful people who lean towards classical utilitarianism should generally agree with this (i.e. I don't think this is based on my idiosyncrasies). To get it to turn out otherwise would require extreme moral certainty and/or a combination of the total view with impatience in the form of temporal discounting.
Note that I think it is important to avoid biasing a moratorium towards being permanent even for your 5th option (moratorium on creating beings that suffer). c.f. we have babies despite knowing that their lives will invariably include periods of suffering (because we believe that these will usually be outweighed by other periods of love and joy and comfort). And most people (including me) think that allowing this is a good thing and disallowing it would be disastrous. At the moment, we aren't in a good position to understand the balances of suffering and joy in artificial beings and I'd be inclined to say that a moratorium on creating artificial suffering is a good thing, but when we do understand how to measure this and to tip the scales heavily in favour of positive experience, then a continued ban may be terrible. (That said, we may work also work out how to ensure they have good experiences with zero suffering, in which case a permanent moratorium may well turn out to be a good thing.)
Toby_Ord @ 2024-10-30T11:44 (+6) in response to We should prevent the creation of artificial sentienceÂ
This is an excellent exploration of these issues. One of my favourite things about it is that it shows it is possible to write about these issues in a measured, sensible, warm, and wise way â i.e. it provides a model for others wanting to advance this conversation at this nascent stage to follow.
Re the 5 options, I think there is one that is notably missing, and that would probably be the leading option for many of your opponents. It is the wait-and-see approach â leave the space unregulated until a material (but not excessive) amount of harm has occurred and if/when that happens, regulate from this situation where much more information is available. This is the kind of strategy that the anti-SB 1047 coalition seems to have converged on. And it is the usual way that society proceeds with regulating unprecedented kinds of harm.
As it happens, I think your options 4 and 5 (ban creation of artificial sentience/suffering) are superior to the wait-and-see approach, but it is a harder case to argue. Some key points of the comparison are:
BjĂśrn Ălafsson @ 2024-10-30T11:21 (+14) in response to BjĂśrn Ălafsson's Quick takes
NEW event today: How To Get The Media Interested In Your Animal Story!
This one-hour workshop will cover how to reframe animal issues to get mainstream press attention. Today at 5pm CST. Organized in conjunction with the Hive team.
Register for free here: https://lu.ma/pxrx1axl
PeterMcCluskey @ 2024-09-08T18:42 (+1) in response to Fungal diseases: Health burden, neglectedness, and potential interventions
I want to emphasize that this just sets a lower bound on the importance.
E.g. there's a theory that fungal infections are the primary cause of cancer.
How much of chronic fatigue is due to undiagnosed fungal infections? Nobody knows. I know someone with chronic fatigue who can't tell whether it's due in part to a fungal infection. He's got elevated mycotoxins in his urine, but that might be due to past exposure to a moldy environment. He's trying antifungals, but so far the side effects have prevented him from taking more than a few doses of the two that he has tried.
It feels like we need something more novel than slightly better versions of existing approaches to fungal infections. Maybe something as radical as nanomedicine, but that's not very tractable yet.
jenny_kudymowa @ 2024-10-30T11:07 (+1)
I agree this is most likely a lower bound - we tried to emphasize this in the report.
I was not aware of the theory that fungal infections are the primary cause of cancer - many thanks for sharing!
tobycrisford đ¸ @ 2024-10-21T06:36 (+1) in response to Fungal diseases: Health burden, neglectedness, and potential interventions
This is a fascinating summary!
I have a bit of a nitpicky question on the use of the phrase 'confidence intervals' throughout the report. Are these really supposed to be interpreted as confidence intervals? Rather than the Bayesian alternative, 'credible intervals'..?
My understanding was that the phrase 'confidence interval' has a very particular and subtle definition, coming from frequentist statistics:
From my reading of the estimation procedure, it sounds a lot more like these CIs are supposed to be interpreted as the latter rather than the former? Or is that wrong?
Appreciate this is a bit of a pedantic question, that the same terms can have different definitions in different fields, and that discussions about the definitions of terms aren't the most interesting discussions to have anyway. But the term jumped out at me when reading and so thought I would ask the question!
jenny_kudymowa @ 2024-10-30T11:05 (+1)
Yes, indeed, what we call 'confidence interval' in our report is better described by the term 'credible interval'.
We chose to use with the term 'confidence interval' because my impression is that this is the more commonly used and understood terminology within EA specifically, but also global health in general - even though it is not technically entirely accurate.
Heramb Podar @ 2024-10-30T00:11 (+14) in response to Heramb Podar's Quick takes
At this point, we need an 80k page on "What to do after leaving Open AI"
Chris Leong @ 2024-10-30T10:44 (+7)
Did something happen?
Zachary Robinsonđ¸ @ 2024-10-29T13:33 (+3) in response to Reflections and lessons from Effective Ventures
I think it's possible our views are compatible here. I want expertise to be valued more on the margin because I found EV and many other EA orgs to tilt towards an extreme of prioritizing value alignment, but I certainly believe there are cases where value alignment and general intelligence matter most and also that there are cases where expertise matters more.
I think the key lies in trying to figure out which situations are which in advance.
Chris Leong @ 2024-10-30T10:41 (+2)
I guess the main thing to be aware of is how hiring non-value aligned people can lead to drift which isn't significant at first, but becomes significant over time. That said, I also agree that a certain level of professionalism within organisation becomes more important as they scale.
Engin ArÄąkan @ 2024-10-25T15:50 (+4) in response to Animal welfare is neglected in a particular way: it is fragile
Thanks for the comment!
And apologies for the late reply - I turned off the notifications after the debate week.
I think the main argument I tried to put forward was more about the dependency of many organisations to one single major donor and the risks associated with this (and how it would make sense to mitigate this via more donations). And to be clear, I wasnât criticising Open Philanthropy. I just think that given Open Philanthropy is a bit alone in the field, animal welfare is neglected in a unique way. If there were multiple donors that are equally major as Open Philanthropy, there wouldnât be such fragility. But as far as I am aware, EAAWF and ACE do not have such funds and do not provide such large grants as of now. They are much smaller than the OP farm animal welfare program.
I donât think it would be likely that OP, ACE and EAAWF simultaneously decide to downscale, since OP has multiple causes while ACE and EAAWF have sole focus on animal welfare. So I donât think having a bigger EAAWF or ACE would result in the same level of fragility, even if organisations still depend on major donors. The main difference would be that many organisations would rely on multiple major donors rather than one single major donor.
By the way, I am less concerned with which avenue (funds or individual organisations) one should choose to donate. But my initial concern with the âindividualâ approach was that more individual donors spread more funds to more organisations which at the end would not help to mitigate this fragility if OP withdraws from animal welfare or significantly downscales. In theory, individual donors can also coordinate to channel their donations to fill in the gaps if such an event occurs, but in practice, I think funds would be more able to do this more efficiently. This is more of a practical issue which I donât have super strong views about.
On the other hand, âfunds vs. individual donorsâ is another debate where I strongly agree with you that more oversight of individual donors is very needed. As you mentioned, this depends mostly on the level of knowledge of the donors, but I can also add (in favour of the individual approach) that this also depends on the level of engagement of the donors. I donât expect major funds with limited staff can engage with each of their grantees perfectly. I think individual donors can play a very important role in engaging with these organisations as âshareholdersâ (or grant managers) and hopefully improve their performance. Of course, donors can do that for funds to some degree as well.
To reply to the last paragraph: yes, I think this is a fair summary.
Felix_Werdermann đ¸ @ 2024-10-30T10:02 (+1)
Hi Engin, thanks for your reply!
I agree that it's better to have multiple major donors than one major donor (e.g. it's better to have four major donors who contribute to 20% of all funding each; than one major donor who gives 80% of all funding). I would assume that EAAWF and ACE rely on smaller donors who would have donated invidually otherwise. So in the case that - for example - there is one major donor (60%) and many small donors (summing up to 40%), I don't know if it's good to pool the money of the small donors by ACE or EAAWF (as long as they donate to equally effective charities) so that there are one major donor (60%), and e.g. ACE and EAAWF as further major donors (each 20%). On the one hand, it's easier for ACE and EAAWF to react to a cut of funding by the major donor. On the other hand, there will probably be many charities which depend on ACE or EAAWF instead of many small donors. Of course, if the total amount of donations increases by new major donors, it's a different thing.
Ariel Simnegar đ¸ @ 2024-10-29T23:57 (+17) in response to EA: Renaissance or Diaspora?
Thanks for the interesting conversation! Some scattered questions/observations:
MichaelStJules @ 2024-10-30T05:26 (+6)
I might say kidney donation is a moral imperative (or at least all-things-considered-good) if we consider only the effects on your welfare and the effects on the welfare of the beneficiaries. But when you consider indirect effects, things are less clear. There are effects on other people, nonhuman animals (farmed and wild), your productivity and time (which affects your EA work or income and donations), your motivation and your values. For an EA, productivity and time, motivation and values seem most important.
EDIT: And the same goes for veganism.
AbsurdlyMax @ 2024-10-30T04:51 (+5) in response to Reflections and lessons from Effective Ventures
Thank you for writing this! I imagine this took a lot of time to put together, and I really appreciated being able to read it.
From the position of someone without a lot of connection or insight into the day-to-day functioning of EV (and its projects) this provided a lot of context, and gave me confidence that reforms at EV were seriously considered, and then instituted. Its one thing to read an announcement that an organization is working on, or investigation reforms â but being able to see the specifics of those reforms feels differently, and meaningfully, important to me. I feel glad to have read this post, and for some of the updates it allowed me to make!
Henri Thunberg đ¸ @ 2024-10-30T03:17 (+2) in response to Tell me what to do in the next months
Hi Cassidy!
Thanks for being so generous in offering up your time for EA initiatives, I hope you find a good fit that is full of both impact and meaning :) You might be interested in helping out in the Nordic effective giving landscape, where I am Chairman for Ge Effektivt.
At Gi Effektivt and Ge Effektivt in Norway/Sweden respectively, we work to fundraise for charities recommended by GiveWell, Animal Charity Evaluators, and Giving Green. You might be familar either with us or counterparts from other countries like Effektiv Spenden or Ayuda Efectiva. The total money raised is a few million dollars per year, with significant increase in the past year.
Norway and Sweden share backend and most aspects of frontend, and we have a full-time CTO who is significantly capacity constrained. It sounds like you have skills that could come in handy, and I think we could have the right size where you'd get the right kind of support/counterpart to get you leverage on your time â and still an org small enough that you could make an impressive difference in a few months.
Happy to answer any questions at an initial stage, but in case interested I think it would be even more useful would be for you to speak to our CTO to get an idea of whether you could be helpful and if the projects excite you. You can either DM me here, email henri[at]geeffektivt.se, or reach out to the technical team directly.
Heramb Podar @ 2024-10-30T00:11 (+14) in response to Heramb Podar's Quick takes
At this point, we need an 80k page on "What to do after leaving Open AI"
NickLaing @ 2024-10-30T02:44 (+2)
Appreciate that, I woke up this morning after finally quitting (the new "ChatGPT 5 is your boss" pilot was too much!), about to register "AI4Gud.com" and get me in the race, but have reconsidered based on this excellent advice.
yanni kyriacos @ 2024-10-28T20:25 (0) in response to Yanni Kyriacos's Quick takes
I spent some time with Claude this morning trying to figure out why I find it cringe calling myself an EA (I never call myself an EA, even though many in EA would call me an EA).
The reason: calling myself "EA" feels cringe because it's inherently a movement/community label - it always carries that social identity baggage with it, even when I'm just trying to describe my personal philosophical views.
I am happy to describe myself as a Buddhist or Utilitarian because I don't think it does those things (at least, not within the broader community context I find myself in - Western, Online, Democratic, Australia, etc).
Heramb Podar @ 2024-10-30T00:11 (+2)
Reminds me of keeping your identity small by Paul Graham
Heramb Podar @ 2024-10-30T00:11 (+14) in response to Heramb Podar's Quick takes
At this point, we need an 80k page on "What to do after leaving Open AI"
Comments on 2024-10-29
Ariel Simnegar đ¸ @ 2024-10-29T23:57 (+17) in response to EA: Renaissance or Diaspora?
Thanks for the interesting conversation! Some scattered questions/observations:
Ozzie Gooen @ 2024-10-29T01:58 (+4) in response to Enhancing Mathematical Modeling with LLMs: Goals, Challenges, and Evaluations
Thanks for bringing this up. I was unsure what terminology would be best here.
I mainly have in mind fermi models and more complex but similar-in-theory estimations. But I believe this could extend gracefully for more complex models. I don't know of many great "ontologies of types of mathematical models," so am not sure how to best draw the line.
Here's a larger list that I think could work.
I think this framework is probably more relevant for models estimating an existing or future parameter, than models optimizing some process, if that helps at all.
david_reinstein @ 2024-10-29T23:22 (+5)
Maybe better to call these 'quantitative modeling techniques' or 'applied quantitative modeling'?
The term 'mathematical modeling' makes me think more about theoretical axiomatic modeling and work that doesn't actually use numbers or data.
Angelina Li @ 2024-10-28T14:57 (+2) in response to Tell me what to do in the next months
Good luck finding a good volunteering project! Consider reaching out to 80K for advice? ( https://80000hours.org/speak-with-us/ ) I wonder if they might have a good sense of which orgs might be excited to use your support.
Cassidy @ 2024-10-29T21:16 (+1)
Hi Angelina, thanks for the link. Didn't think that was fitting for me, considering they already rejected me twice in the past with reasons being that they mostly work with students. đ But I can give it another shot.
Michael D.M. @ 2024-10-29T20:44 (+1) in response to Exercise for 'Radical Empathy'
Hello my Cold-War friend,
I am aware that homosexuality is a scare of your time. Believe me, it is not nearly as bad as its made out to be. I understand that film often portrays them as selfish and villainous, but that's untrue. That's not even necessarily what film writers believe (though some surely do). There's actually specific codes in place that limit the way many characters like that are written--art under that isn't exactly a reflection of reality. Many of us have the same desires you do, of happiness and prosperity, a life of acceptance. They aren't deviants either. Statistics of my time show that they're no more or less likely than heterosexual people to be such. That's another damaging stereotype. The worrying reality is, a lot of such stereotypes come from people in power, and their own misguided fears. Though it's not necessarily easy for you, in such a politically rigid time, I hope that you and anyone else keeps a healthy questioning of power, using their own logic and knowledge to evaluate the soundness of their decisions. Recall, the government is for the needs of the many--and this includes people unlike yourself. And if you're worried of betraying religious teachings, the bible makes no mention of homosexuality (scholars believe that was a mistranslation). Furthermore, Jesus himself loved the outcasts--be like him some more. And this open attitude doesn't stop at sexuality. Some people are at odds with the gender they were assigned (their mind is truer than their body), but that doesn't get much attention until later. Still, keep an open mind to them and try to empathize with the struggle of being inside a body you don't believe to be yours. Act with compassion and consideration.
I believe in you,
MD
Michael D.M. @ 2024-10-29T20:31 (+1) in response to Exercise for 'Our Final Century?'
A good ancestor. One who has ancestors is a descendant. Descendants need not be a genetic lineage (not necessarily even human), but rather the constitution of the living world some time into the future. A good ancestor should act in the present, with a specific vision for the future. A good ancestor should ideally "add value" to the world, that is, reduce suffering one way or another. This of course could be through means of direct, focused work toward progression of a relevant and underserved cause, or by philanthropy of any means. A good ancestor is one who takes a look at every major action they take, every donation they make, beyond the numbers, and asks themselves, "how will this affect the world after I've gone?" They research this well and inform their opinions with such. A good ancestor provides.
MMathur @ 2024-10-29T20:24 (+2) in response to Who would you like to see speak at EA Global?
Have you considered having an open, competitive process to submit talk abstracts, similarly to academic conferences?
Hans Erickson @ 2024-10-26T17:58 (+9) in response to Open thread: October - December 2024
Hi everyone,
My name is Hans Erickson, I am a 65 year old IT professional that is semi-retired. I still own a small IT support company and have an employee who backfills for me, which allows me to travel.
On a trip to Africa in 2022, I was on a safari and was taken through a remote Botswana village that was the home of our tour guide. He pointed out the school house as we passed through. I had been in Africa once before 15 years earlier participating in a technology conference in Lagos, Nigeria. In my research at the time, I discovered the appalling lack of internet connectivity to the majority of the continent. I asked our tour guide about this, and he confirmed the school had no internet.
I volunteered to set up Starlink internet for the school when the service became available. Just a month ago Starlink officially began service in Botswana. I reached out to my contact and the school administrator that he had connected me to. Because it is a government school, they required formal approval, so I have written letters and responded to questions, but still no approval. I am hopeful now that it is in the hands of their IT administrators that a final approval is coming.
There are approx. 150 students attending the school. My plan is to install the starlink dish, Ubiquiti AP's and remote monitoring equipment, connect everything, and supply some chromebooks for student and administration use. I will also configure a google school account, which provides robust tools for school administrators and students.
I have volunteered to support the starlink subscription for a three year period, after which I hope to convince local authorities or Starlink to continue the service.
I only read the 'Doing Goog Better' book after having made this agreement. In the interest of effective altruism, I was hoping to learn from someone the metrics that would be most beneficial to track for a project like this. I am aware that risks are involved with providing high speed internet in a rural setting, but I am not sure exactly what those risks might be.
Any suggestions or thoughts would be appreciated.
NickLaing @ 2024-10-29T19:11 (+4)
Thanks so much for posting Hans and thanks for your efforts to help. I've lived in UgAnda for 10 years and worked in remote rural areas. This isn't my area of expertise but I might have something useful to share Have private messaged you and keen to have a chat if you are.
Engin ArÄąkan @ 2024-10-29T18:49 (+7) in response to What should EAIF Fund?
80000 Hours and Probably Good are great but their advice can be off putting, irrelevant or not useful enough for many people who are not their main audience. Having content about potentially many impactful careers in medicine, academia, or engineering, in Japan, Germany, Brazil, or India can be much more useful and engaging for those people who are in these categories. This can also be done at a relatively low cost - one or two able and willing writers per country/domain.
2. âBudget hawkâ organisation/consultancy that aims to propose budget cuts to EA organisations without compromising cost-effectiveness.
There is a lot of attention towards effective giving like %10 pledges. Another way of achieving similar outcomes is to make organisations spend less (%10 again?). We tend to assume that EA organisations are cost effective (which is true overall) but this does not mean that every EA organisation spends each penny with %100 cost-effectiveness. It is probable that many EA organisations can make cuts to their ineffective programs or manage their operations/taxes more efficiently. A lot of EA organisations have very large budgets, more than millions of dollars annually. So even modest improvements can be equivalent to adding many GWWC pledgers.
3. Historical case studies about movement or community building
Open philanthropy had commissioned some reports. But most of them are about certain policy reforms. Only a few are about movement or community building. I think more case studies can provide interesting insights. Sentience Instituteâs case studies were very useful for animal advocacy in my opinion.
4. Grand strategy research
This might be already being carried out by major EA organisations. But I can imagine that most leadership and key staff members in EA organisations typically focus on specific and urgent problems and never have enough time and focus on taking a (lot of) step back and think about the grand strategy. Other people might also have better skills to do this too. By the way, I am also more in favour of âlearning by doingâ and âmake decisions as you progressâ type of approaches but nevertheless at least having âsomeâ grand strategy can reveal important insights about what are the real bottlenecks and how to overcome them.
5. Commissioning impact evaluations of major EA organisations and EA funds.
I think the reasons for this are obvious. There are of course some impact evaluations in EA- GWWCâs evaluating the evaluators project was a good example (But note that this was done only last year, once - and from my perspective it evaluated the structure and framework of the funds, not the impact of the grants themselves). I definitely think there is a lot of room for improvement - especially on publicly accessible impact reports. I think this is all the more important for EA, since ânot assuming impact but looking for evidenceâ is one of the distinguishing features of it.
North And @ 2024-10-29T18:43 (+3) in response to We should prevent the creation of artificial sentienceÂ
I don't understand the core of your proposal. Like, to ban it you have to point at it. Do you have a pointer? Like, this post reads as "10 easy steps of how to ban X. What is X? Idk"
Is it a ban on use of loss functions or what? Like, if you say that pain is repulsive states and pleasure is attractive ones, the loss is always repulsive
Nathaniel @ 2024-10-29T18:17 (+3) in response to The default trajectory for animal welfare means vastly more suffering
Great post, James!
I'm curious if you have any sense of how the average conditions/welfare levels of farmed animals are expected to change on this default trajectory, or how they've changed in the last few decades. I imagine this is difficult to quantify, but seems important.
In particular, assuming market pressures stay as they are, how should we expect technological improvements to affect farmed animal welfare?
My uneducated guess: optimizing hard for (meat production / cost) generally leads to lower animal welfare. This seems roughly true of technological improvements in the past. For example:
ZY @ 2024-10-29T18:01 (+1) in response to New cause area: Violence against women and girls
Thanks for sharing this! I was looking at https://www.givingwhatwecan.org/best-charities-to-donate-to-2024# to find some good related NGOs to donate to for a friend's birthday but didn't find a section in the front page (maybe in some subpages?). But I will donate to some of the orgs mentioned here!
Silvan @ 2024-10-27T19:02 (+1) in response to Some reasons for not prioritising animal welfare very strongly
Thanks for your honesty Engin. This section truly reflects my doubts about animal welfare, which I guess has little to do with cost effectiveness or monitorability.. but more about the shadow of the the repugnant conclusion. The fear that we could end up prioritizing moths over humans simply because we keep insisting that the only thing that reflects value in the world is doing arithmetics with pain and pleasure.
I tried to express some of these fears in https://forum.effectivealtruism.org/posts/QFh6kiwv36mR8QSiE/are-we-as-rigorous-in-addressing-utilitarianism-s
Engin ArÄąkan @ 2024-10-29T17:31 (+1)
Thanks!
I definately agree there is a lot of room of improvement in animal ethics. Most animal welfare people are cool with being unconventional but I think this kind of misses the point which is that we might not currently have the right moral framework.
I also think utilitarianism got "some" things right like extreme pain is really immoral, or one should be seeking efficiency (within reasonable limits) etc. But it remains weak and weird by itself, without any additional (and multiple) values and principles.
ASuchy @ 2024-10-26T09:05 (+1) in response to Some reasons for not prioritising animal welfare very strongly
This is another great piece of writing from you Engin, thanks!
Engin ArÄąkan @ 2024-10-29T17:11 (+1)
Thank you @ASuchy
hbesceli @ 2024-10-29T11:57 (+1) in response to What should EAIF Fund?
Not sure I understand this part - curious if you could say more.
I like this idea. A related idea/ framing that comes to mind.
Brad West @ 2024-10-29T16:59 (+2)
Just when I have seen efforts to improve community relations it has typically been in the "Community Health" context relating to when people have had complaints about people in the community or other conflicts. I haven't seen as much concerted effort in connecting people working on different EA projects that might add value to each other.
Dylan Richardson @ 2024-10-26T20:19 (+3) in response to Our fight for the future of one billion chickens.
Great news!
I'm curious though if there has been any work done on the welfare math of this? Frankenchickens suffer more individually due to their size, but greater size also means less individual chickens are needed to satisfy demand. Furthermore, faster growth means less time spent alive and, presumably, suffering - or maybe more time, if slaughter makes up a large fraction of it?
It seems likely to me that Frankenchickens do entail more suffering and that banning them would mean less regardless, as increasing cost of production also lowers demand; plus the campaign is a good movement building endeavor. However, it would still be good to understand how much of priority this is relative to other policy changes.
Molly Archer-Zeff @ 2024-10-29T16:59 (+3)
Hi Dylan,
In response to your question, this RSPCA report explores the question of fast-growing breeds of broiler chicken. They highlighted the intense suffering that these birds face and the inefficiencies of this system of farming. It is a 36-page report so here are a few key bits of the text:
The Welfare Footprint Project used the Cumulative Pain Framework to investigate how the adoption of the Better Chicken Commitment (BCC) and similar welfare certification programs affect the welfare of broilers. Specifically, they examined concerns that the use of slower-growing breeds may increase suffering by extending the life of chickens for the production of the same amount of meat. From their main findings they stated:
'Our results strongly support the notion that adoption of BCC standards and slower-growing broiler strains have a net positive effect on the welfare of broiler chickens. Because most welfare offenses endured by broilers are strongly associated with fast growth, adoption of slower-growing breeds not only reduces the incidence of these offenses but also delays their onset. As a consequence, slower-growing birds are expected to experience a shorter, not longer, time in pain before being slaughtered.'
You can also read our own white paper on the welfare of broiler chickens.
I hope that helps answer your question.
Lorenzo Buonannođ¸ @ 2024-10-29T16:37 (+2) in response to Keynesian Altruism
Thanks for writing this! I would be curious to know what you think about this 4 years later, and now that interest rates are much higher.
Christopher Isu @ 2024-10-29T16:31 (+1) in response to Introducing Tech Governance Project
Awesome!..
Ben_Westđ¸ @ 2024-10-29T02:00 (+92) in response to Reflections and lessons from Effective Ventures
Thanks for writing this! One small comment:
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
Grayden đ¸ @ 2024-10-29T16:31 (+5)
Surely itâs not a case of either-or. EA exists because we all found that existing charity was not up to scratch, hence we do want EA to take different approaches. However, I think itâs important to also have people from outside EA (but with good value alignment) to provide diversity of thought and make sure there are no blindspots.
CJP @ 2024-10-29T15:54 (+1) in response to Marginal Revolution: Effective Altruists and Finance Theory
Follow up to: https://marginalrevolution.com/marginalrevolution/2024/10/a-funny-feature-of-the-ai-doomster-argument.html
Ben_Westđ¸ @ 2024-10-29T15:18 (+6) in response to Reflections and lessons from Effective Ventures
Thanks for sharing this Zach! I know it must have been a ton of work.
Joseph ADEBOYE @ 2024-10-29T12:42 (+3) in response to Introducing Tech Governance Project
Impressive! A big "Weldone" to the TGov team. With your approach, dedication and achievement in just few months, I believe the TGov initiative will be a game-changer for Africa's emerging technologies governance.
However, I have two concerns running through my mind :
African Leaders are used to receiving policy recommendations but with little political will for implementation. How will you ensure your policy suggestions are really implemented?
Also, Are there any contingency plan perhaps for unforseen obstacles that can hinder TGovs work like Geopolitical crises in pilot countries, key personnel or team member departure? etc...
Zakariyau Yusuf @ 2024-10-29T15:17 (+3)
Thanks for your comments, Joseph. Regarding your raised points.
Our bio aspect focuses primarily on advisory policies, such as guidelines and protocols, rather than improving the local domestication of already established international standards. Our AI aspect is more about getting African stakeholders impactfully engaged in the global forum that defines redlines and boundaries, where their participation is currently lacking, resulting in significant gaps.
I think effective local domestication/implementation is a problem of its own.
Adebayo Mubarak @ 2024-10-29T09:36 (+3) in response to Introducing Tech Governance Project
This is a great stride in the right direction. Looking forward to your exploration of the Africa Tech Space.
A quick question, how do you plan on bringing the seeming lack of government collaboration with this type of project especially in some parts of Africa?
Zakariyau Yusuf @ 2024-10-29T14:35 (+3)
Thanks, Adebayo.
We are actively building relationships with our identified stakeholders and aim to leverage these connections to further our mission. In some cases, we will utilize existing engagement and networks, such as those established by the APET, Africa CDCs, or agencies at the national level, to drive our mission.
L Rudolf L @ 2024-10-29T13:38 (+6) in response to Winners of the Essay competition on the Automation of Wisdom and Philosophy
I've now posted my entries on LessWrong:
I'd also like to really thank the judges for their feedback. It's a great luxury to be able to read many pages of thoughtful, probing questions about your work. I made several revisions & additions (and also split the entire thing into parts) in response to feedback, which I think improved the finished sequence a lot, and wish I had had the time to engage even more with the feedback.
ClaireZabel @ 2024-10-29T02:18 (+30) in response to Reflections and lessons from Effective Ventures
Seconding Ben, I did a similar exercise and got similarly mixed (with stark examples in both directions) results (including in some instances you allude to in the post)
Zachary Robinsonđ¸ @ 2024-10-29T13:33 (+3)
I think it's possible our views are compatible here. I want expertise to be valued more on the margin because I found EV and many other EA orgs to tilt towards an extreme of prioritizing value alignment, but I certainly believe there are cases where value alignment and general intelligence matter most and also that there are cases where expertise matters more.
I think the key lies in trying to figure out which situations are which in advance.
Hans Erickson @ 2024-10-26T17:58 (+9) in response to Open thread: October - December 2024
Hi everyone,
My name is Hans Erickson, I am a 65 year old IT professional that is semi-retired. I still own a small IT support company and have an employee who backfills for me, which allows me to travel.
On a trip to Africa in 2022, I was on a safari and was taken through a remote Botswana village that was the home of our tour guide. He pointed out the school house as we passed through. I had been in Africa once before 15 years earlier participating in a technology conference in Lagos, Nigeria. In my research at the time, I discovered the appalling lack of internet connectivity to the majority of the continent. I asked our tour guide about this, and he confirmed the school had no internet.
I volunteered to set up Starlink internet for the school when the service became available. Just a month ago Starlink officially began service in Botswana. I reached out to my contact and the school administrator that he had connected me to. Because it is a government school, they required formal approval, so I have written letters and responded to questions, but still no approval. I am hopeful now that it is in the hands of their IT administrators that a final approval is coming.
There are approx. 150 students attending the school. My plan is to install the starlink dish, Ubiquiti AP's and remote monitoring equipment, connect everything, and supply some chromebooks for student and administration use. I will also configure a google school account, which provides robust tools for school administrators and students.
I have volunteered to support the starlink subscription for a three year period, after which I hope to convince local authorities or Starlink to continue the service.
I only read the 'Doing Goog Better' book after having made this agreement. In the interest of effective altruism, I was hoping to learn from someone the metrics that would be most beneficial to track for a project like this. I am aware that risks are involved with providing high speed internet in a rural setting, but I am not sure exactly what those risks might be.
Any suggestions or thoughts would be appreciated.
Toby Tremlettđš @ 2024-10-29T13:28 (+2)
That's lovely Hans! Perhaps @NickLaing might have takes on your measurement question?
Thanks for joining the EA Forum.
I'm Toby, the Content Manager for the Forum (I run events, write newsletters, and talk with authors about their work).
Let me know if you have any questions about EA, or using the Forum.
Joseph ADEBOYE @ 2024-10-29T12:42 (+3) in response to Introducing Tech Governance Project
Impressive! A big "Weldone" to the TGov team. With your approach, dedication and achievement in just few months, I believe the TGov initiative will be a game-changer for Africa's emerging technologies governance.
However, I have two concerns running through my mind :
African Leaders are used to receiving policy recommendations but with little political will for implementation. How will you ensure your policy suggestions are really implemented?
Also, Are there any contingency plan perhaps for unforseen obstacles that can hinder TGovs work like Geopolitical crises in pilot countries, key personnel or team member departure? etc...
huw @ 2024-10-29T10:28 (+12) in response to Reflections and lessons from Effective Ventures
This is an awesome post, and it's a strong update in the direction of EV & CEA being much more transparent under your leadership. Very keen on hearing more from you in the future!
One other risk vector to EV stood out to me as concerning, but went somewhat unaddressed in this post. Consider:
I worry that the focus on legal risks is potentially missing a counterfactual here where a funding source is systematically upset. EV was not just banking on FTX to stay solvent / unfraudulent, but was also implicitly depending on cryptocurrency to remain frothy (the same can be said for EA, especially long-term risk cause areas, more broadly). Counterfactually, had FTX not been fraudulent, I still think that's it's likely that cryptocurrency would have collapsed over the following years. Assuming that the LTFF was receiving a proportion of FTX's funds, this still could've meant more than a 50% drop in funding from FTX (for example, Ethereum lost ~3/5ths of its market cap between November 2021 to November 2022).
You note:
I would love to understand more about these financial controls. I can imagine that EV could probably withstand a sudden halving in funding from a major donor, by reallocating funding between projects, which is probably what's alluded to here.
(It's outside the scope of this post, but I'm not so sure that the broader long-term risk cause areas could have withstood this, and indeed, in the present scenario many organisations did not. I sort of worry about this kind of systematic risk with Anthropic, who could be hit quite hard if the current AI bubble starts winding down, even if they aren't directly responsible for it; I'm sure there are others.)
Jason @ 2024-10-29T12:15 (+9)
FTX as a funding source also had plenty of non-fraudulent failure modes. Having "banked on receiving millions from FTX over the coming years" to the extent that not receiving those funds created a crisis seems like a serious misjudgment. That being said, it isn't clear to me the extent to which FTX's donation amounts would have tied into short-term fluctuations in crypto values.
The extent to which donations could be reallocated is unclear to me; it is possible for a donor to restrict donations to a specific purpose in a legally binding way. At least in some jurisdictions, those restrictions can often be binding even against the charity's creditors if the charity manages its finances correctly.
I read Zach to mean that projects need to have enough funding on hand to shut down in an orderly enough way -- which includes a way that does not create problems for sister projects -- in a near-worst case scenario. This could be a problem, for instance, if a project had financial commitments that bound EV but could not be satisfied out of resources allocated to the project.
There are, however, limits on what good financial controls can do for you if there's a massive funding shortfall and/or a massive unplanned liability. If (e.g.) a 50% revenue loss (not of a short-term nature) wouldn't seriously disrupt a charity's work, then that charity is probably too conservative on its spending or is raising excessive amounts of money that should go elsewhere.
Brad West @ 2024-10-28T15:51 (+6) in response to What should EAIF Fund?
A lot of what I have seen regarding "EA Community teams" seems to be be about managing conflicts between different individuals.
It would be interesting to see an organization or individual that was explicitly an expert in knowing different individuals and organizations and the projects that they are working on and could potentially connect people who might be able to add value to each other's projects. It strikes me that there are a lot of opportunities for collaboration but not as much organization around mapping out the EA space on a more granular level.
hbesceli @ 2024-10-29T11:57 (+1)
Not sure I understand this part - curious if you could say more.
I like this idea. A related idea/ framing that comes to mind.
Ben_Westđ¸ @ 2024-10-29T02:00 (+92) in response to Reflections and lessons from Effective Ventures
Thanks for writing this! One small comment:
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
Jason @ 2024-10-29T11:37 (+16)
Were these mostly situations in which EV had run into a major issue and then an outside expert was brought in? To the extent that the underlying developments that led to an issue came about from an EA / EV-insider way of thinking, I would expect significant performance costs associated with changing horses in midstream. So I wouldn't update much on the advisability of bringing in outside experts before a problem happens, or after a problem happens if the outside experts had played a role in setting up the underlying developments.
As a rough analogy, one can imagine a gridiron football offense that has been built (in terms of training, personnel, etc.) to align with a particular offensive strategy (e.g., the West Coast offense). If your team is set up that way, subbing in a key player whose skill set doesn't align to the previously chosen offensive strategy isn't usually going to work well in the short to medium run. This doesn't imply that the new player is bad -- just that your team has pre-committed to playing a particular offense. Ex ante, the new guy could have been the right player for your team contingent on your team having built a flexible enough system for him to work effectively in.
Vasco Grilođ¸ @ 2024-10-29T10:44 (+3) in response to Farmed animals may have positive lives now or in a few decades?
Thanks for another relevant question too! I do not think that alone would make dairy production net negative:
CBđ¸ @ 2024-10-29T10:53 (+3)
Thanks for the answer ! I wish more people thought about these questions.
CBđ¸ @ 2024-10-28T21:02 (+3) in response to Farmed animals may have positive lives now or in a few decades?
Thanks for the answer !
This is really interesting. Do you think that the fact cows are separated from their child, and arguably really don't like that, would change significantly the results?
Vasco Grilođ¸ @ 2024-10-29T10:44 (+3)
Thanks for another relevant question too! I do not think that alone would make dairy production net negative:
huw @ 2024-10-29T10:28 (+12) in response to Reflections and lessons from Effective Ventures
This is an awesome post, and it's a strong update in the direction of EV & CEA being much more transparent under your leadership. Very keen on hearing more from you in the future!
One other risk vector to EV stood out to me as concerning, but went somewhat unaddressed in this post. Consider:
I worry that the focus on legal risks is potentially missing a counterfactual here where a funding source is systematically upset. EV was not just banking on FTX to stay solvent / unfraudulent, but was also implicitly depending on cryptocurrency to remain frothy (the same can be said for EA, especially long-term risk cause areas, more broadly). Counterfactually, had FTX not been fraudulent, I still think that's it's likely that cryptocurrency would have collapsed over the following years. Assuming that the LTFF was receiving a proportion of FTX's funds, this still could've meant more than a 50% drop in funding from FTX (for example, Ethereum lost ~3/5ths of its market cap between November 2021 to November 2022).
You note:
I would love to understand more about these financial controls. I can imagine that EV could probably withstand a sudden halving in funding from a major donor, by reallocating funding between projects, which is probably what's alluded to here.
(It's outside the scope of this post, but I'm not so sure that the broader long-term risk cause areas could have withstood this, and indeed, in the present scenario many organisations did not. I sort of worry about this kind of systematic risk with Anthropic, who could be hit quite hard if the current AI bubble starts winding down, even if they aren't directly responsible for it; I'm sure there are others.)
TheAntithesis @ 2023-08-02T13:44 (+5) in response to If your AGI x-risk estimates are low, what scenarios make up the bulk of your expectations for an OK outcome?
I can think of a few scenarios where AGI doesn't kill us.
Greg_Colbourn @ 2024-10-29T10:04 (+2)
Isaac Fasipe @ 2024-10-29T09:44 (+1) in response to Why you think you're right - even when you're wrong
Thank you for the insightful talk on scout mindset.My key take away is that good judgement from evidence based information helps to make better decision.Also,embracing growth mindset is key to an effective life.
Adebayo Mubarak @ 2024-10-29T09:36 (+3) in response to Introducing Tech Governance Project
This is a great stride in the right direction. Looking forward to your exploration of the Africa Tech Space.
A quick question, how do you plan on bringing the seeming lack of government collaboration with this type of project especially in some parts of Africa?
Mo Putera @ 2024-10-29T09:35 (+2) in response to undefined
Also relevant: EA: Renaissance or Diaspora?
Bella @ 2024-10-29T09:35 (+11) in response to Reflections and lessons from Effective Ventures
Thanks for writing this Zach! The broad strokes of the dynamics here are not news to me (I work at 80k which is a project of EV) but lots of the detail was novel and feels good to know.
Ben_Westđ¸ @ 2024-10-29T02:00 (+92) in response to Reflections and lessons from Effective Ventures
Thanks for writing this! One small comment:
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
Lorenzo Buonannođ¸ @ 2024-10-29T09:17 (+6)
I would find it valuable if you could share some public version of the spreadsheet, or if you quickly remember some specific examples. Hiring/contracting is very hard but almost always necessary.
Michelle_Hutchinson @ 2024-10-29T08:46 (+4) in response to Reflections and lessons from Effective Ventures
Thanks for such a thorough update!
Chris Leong @ 2024-10-28T23:02 (+6) in response to Tentatively against making AIs 'wise'
My position (to be articulated in an upcoming sequence) is the exact opposite of this, but fascinating post anyway and congrats on winning a prize!
OscarDđ¸ @ 2024-10-29T08:24 (+2)
Cheers, OK look forward to reading it!
Chris Leong @ 2024-10-29T05:50 (+3) in response to Winners of the Essay competition on the Automation of Wisdom and Philosophy
Just wanted to mention that if anyone liked my submissions (3rd prize, An Overview of âObviousâ Approaches to Training Wise AI Advisors, Some Preliminary Notes on the Promise of a Wisdom Explosion),
I'll be running a project related to this work as part of AI Safety Camp. Join me if you want to help innovate a new paradigm in AI safety.
Ozzie Gooen @ 2024-10-29T03:10 (+15) in response to What should EAIF Fund?
I still think that EA Reform is pretty important. I believe that there's been very little work so far on any of the initiatives we discussed here.
My impression is that the vast majority of money that CEA gets is from OP. I think that in practice, this means that they represent OP's interests significantly more than I feel comfortable with. While I generally like OP a lot, I think OP's focuses are fairly distinct from those of the regular EA community.
Some things I'd be eager to see funded:
- Work with CEA to find specific pockets of work that the EA community might prioritize, but OP wouldn't. Help fund these things.
- Fund other parties to help represent / engage / oversee the EA community.
- Audit/oversee key EA funders (OP, SFF, etc); as these often aren't reviewed by third parties.
- Make sure that the management in key EA orgs are strong, including the boards.
- Make sure that many key EA employees and small donors are properly taken care of and are provided with support. (I think that OP has reason to neglect this area, as it can be difficult to square with naive cost-effectiveness calculations)
- Identify voices that want to tackle some of these issues head-on, and give them a space to do so. This could mean bloggers / key journalists / potential community leaders in the future.
- Help encourage or set up new EA organizations to sit apart from CEA, but help oversee/manage the movement.
- Help out the Community Health team at CEA. This seems like a very tough job that could arguably use more support, some of which might be best done outside of CEA.
Generally, I feel like there's a very significant vacuum of leadership and managerial visibility in the EA community. I think that this is a difficult area to make progress on, but also consider it much more important than other EA donation targets.
ClaireZabel @ 2024-10-29T02:22 (+25) in response to Reflections and lessons from Effective Ventures
I really appreciate this post! I have a few spots of disagreement, but many more of agreement, and appreciate the huge amount of effort that went into summarizing a very complicated situation with lots of stakeholders over an extended period of time in a way that feels sincere and has many points of resonance with my own experience.
Ben_Westđ¸ @ 2024-10-29T02:00 (+92) in response to Reflections and lessons from Effective Ventures
Thanks for writing this! One small comment:
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone professing to care about the organization's mission but not actually doing anything is not very valuable. And sometimes people confuse the two. [This is a general comment, not specific to EV.]
ClaireZabel @ 2024-10-29T02:18 (+30)
Seconding Ben, I did a similar exercise and got similarly mixed (with stark examples in both directions) results (including in some instances you allude to in the post)