The biggest one is probably to start thinking about how you'll get the right people to engage with your project (research, content, services, courses, contests, programs, fellowships, events) before you've finished creating that project.
Why:
Engagement has a multiplicative effect for most projects (if you 2x the number or quality of people who read your research, you've probably 2x'd its impact)
Thinking this through ahead of time will often change the product itself (e.g. if you were going to do an event for people in biotech but then learn you only really have access to people at a specific company, your event should be tailored specifically to that company)
If you need vendor support, it can be hard to find and there may be waitlists (e.g. we have a backlog of a few months)
Your brain has a bunch of overlapping subsystems that are each conscious, according to many plausible criteria for consciousness you could use. You could say they're all minds. I'm not sure I'd say they're different minds, because if two overlap enough, they should be treated like the same one.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and, circumstances permitting, rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans.
This reminds me of quantum computers or fusion reactors: we can build them, but the economics are far from working.
A quantum research scientist here: actually, I would argue that's a misleading model for quantum computing. The main issue right now is technical, not economic. We still have to figure out error correction, without which you are limited to roughly 1,000 logical gates. That's far too few to do anything interesting.
Yeah, I got some pushback on Twitter on this point. I now agree that it's not a great analogy. My thinking was that we technically know how to build a quantum computer, but not one that is economically viable (which requires technical problems to be solved and for the thing to be scalable/not too expensive). Feels like a "not all rectangles are squares, but all squares are rectangles" thing: quantum computing ISN'T economically viable, but that's not the main problem with it right now.
I'm confident in PauseAI US's ability to run protests and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that so I have some questions.
There are no other organized groups (AFAIK) doing AI pause protests in the US of the sort you're doing. But there are other groups talking to policy-makers, including Center for AI Policy, Center for AI Safety, and Palisade (plus some others outside the US, and some others that focus on AI risk but that I think are less value-aligned). What is the value-add of PauseAI US's direct lobbying efforts compared to these other groups? And are you coordinating with them at all?
What is Felix's background / experience in this area? Basically, why should I expect him to be good at lobbying?
One other thing I forgot to mention re: value-add. Some of the groups you mentioned (Center for AI Policy & Center for AI Safety; not sure about Palisade) are focused mostly on domestic AI regulation. PauseAI US is focused more on the international side of things, making the case for global coordination and an AI Treaty. In this sense, one of our main value-adds might be convincing members of Congress that international coordination on AI is both feasible and necessary to prevent catastrophic risk. This also serves to counter the "arms race" narrative ("the US needs to develop AGI first in order to beat China!") which risks sabotaging AI policy in the coming years.
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don't play to our comparative advantages.
Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I've seen or heard about why we didn't fund something are wrong. (Similarly, us choosing to fund someone doesn't mean we endorse everything about them or their work/plans.)
Very often, when we decline to do or fund something, it's not because we don't think it's good or important, but because we aren't the right team or organization to do or fund it, or we're prioritizing other things that quarter.
As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages, whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
While Good Ventures is Open Philanthropy's largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropists.) On the GCR side, we have helped move tens of millions of non-GV money to GCR-related organizations in just the past year, including some organizations that GV recently exited. GV and each of those other funders have their own preferences and restrictions we have to work around when recommending funding opportunities.
Among the AI funders we advise, Good Ventures is one of the most open and flexible.
We're happy to see funders enter the space even if they don't share our priorities or work with us. When more funding is available, and funders pursue a broader mix of strategies, we think this leads to a healthier and more resilient field overall.
Many funding opportunities are a better fit for non-GV funders, e.g. due to funder preferences, restrictions, scale, or speed. We've also seen some cases where an organization can have more impact if they're funded primarily or entirely by non-GV sources. For example, it's more appropriate for some types of policy organizations outside the U.S. to be supported by local funders, and other organizations may prefer support from funders without GV/OP's past or present connections to particular grantees, AI companies, etc. Many of the funders we advise are actively excited to make use of their comparative advantages relative to GV, and regularly do so.
We are excited for individuals and organizations that aren't a fit for GV funding to apply to some of OP's GCR-related RFPs (e.g. here, for AI governance). If we think the opportunity is strong but a better fit for another funder, we'll recommend it to other funders.
To be clear, these other funders remain independent of OP and decline most of our recommendations, but in aggregate our recommendations often lead to target grantees being funded.
We believe reducing AI GCRs via public policy is not an inherently liberal or conservative goal. Almost all the work we fund in the U.S. is nonpartisan or bipartisan and engages with policymakers on both sides of the aisle. However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
OP's AI teams spend almost no time directly advocating for specific policy ideas. Instead, we focus on funding a large ecosystem of individuals and organizations to develop policy ideas, debate them, iterate them, advocate for them, etc. These grantees disagree with each other very often (a few examples here), and often advocate for different (and sometimes ~opposite) policies.
We think it's fine and normal for grantees to disagree with us, even in substantial ways. We've funded hundreds of people who disagree with us in a major way about fundamental premises of our GCRs work, including about whether AI poses GCR-scale risks at all (example).
I think frontier AI companies are creating enormous risks to humanity, I think their safety and security precautions are inadequate, and I think specific reckless behaviors should be criticized. AI company whistleblowers should be celebrated and protected. Several of our grantees regularly criticize leading AI companies in their official communications, as do many senior employees at our grantees, and I think this happens too infrequently.
Relatedly, I think substantial regulatory guardrails on frontier AI companies are needed, and organizations we've directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose (alongside some policies they tend to support).
I'll also take a moment to address a few misconceptions that are somewhat less common in EA or rationalist spaces, but seem to be common elsewhere:
Discussion of OP online and in policy media tends to focus on our AI grantmaking, but AI represents a minority of our work. OP has many focus areas besides AI, and has given far more to global health and development work than to AI work.
We are generally big fans of technological progress. See e.g. my post about the enormous positive impacts from the industrial revolution, or OP's funding programs for scientific research, global health R&D, innovation policy, and related issues like immigration policy. Most technological progress seems to have been beneficial, sometimes hugely so, even though there are some costs and harms along the way. But some technologies (e.g. nuclear weapons, synthetic pathogens, and superhuman AI) are extremely dangerous and warrant extensive safety and security measures rather than a "move fast and break [the world, in this case]" approach.
We have a lot of uncertainty about how large AI risk is, exactly which risks are most worrying (e.g. loss of control vs. concentration of power), on what timelines the worst-case risks might materialize, and what can be done to mitigate them. As such, most of our funding in the space has been focused on (a) talent development, and (b) basic knowledge production (e.g. Epoch AI) and scientific investigation (example), rather than work that advocates for specific interventions.
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.
I'm confident in PauseAI US's ability to run protests and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that so I have some questions.
There are no other organized groups (AFAIK) doing AI pause protests in the US of the sort you're doing. But there are other groups talking to policy-makers, including Center for AI Policy, Center for AI Safety, and Palisade (plus some others outside the US, and some others that focus on AI risk but that I think are less value-aligned). What is the value-add of PauseAI US's direct lobbying efforts compared to these other groups? And are you coordinating with them at all?
What is Felix's background / experience in this area? Basically, why should I expect him to be good at lobbying?
Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building etc.) they did have a lobbying component, and I met with statewide and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state Congressional offices and helped to persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US's lobbying efforts. So far, I have spoken to 16 Congressional offices, deliberately meeting with members of both parties, with a special focus on Congressmembers on relevant committees (i.e. the House Committee on Science, Space, and Technology; the Senate Committee on Commerce, Science, and Transportation; and the House Bipartisan AI Task Force).
I plan to speak with another >50 offices over the next 6 months, as well as deepen relationships with offices I've already met with. I also intend to host a series of Congressional briefings on (1) AI existential risk, (2) pausing as a solution, and (3) the importance and feasibility of international coordination, inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups, to share insights and gain feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
This is low-hanging fruit. Many Congressional offices haven't yet heard of loss-of-control and existential AI risks; when I bring these risks up, it is often the first time these offices have encountered them. This means that PauseAI US can play a foundational role in setting the narrative, and gives us a lot of leverage.
Offices are more receptive than one might expect to existential risk / loss-of-control scenarios, and even occasionally to the Pause solution.
Framing and vocabulary matter a lot here; it's important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as "pro-safe innovation" rather than generically "anti-innovation," anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (e.g. aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
It is crucially important to explain the feasibility and importance of international coordination on AI risk / an AI Treaty. A worrisome "default path" might be for the US to ramp up an AI arms race against China, leading to superintelligent AI before we are able to control it. In order to avoid this outcome, we need to convince US policymakers that (1) it doesn't matter who builds superintelligence, we all lose; and (2) international coordination is feasible and tractable.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a "weapon" to be controlled by whichever country builds it first, but instead as a "doomsday device" that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause, by appealing to historical precedent (e.g. nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (e.g. chip tracking, detecting large-scale training runs, on-chip reporting mechanisms).
The final reason for the importance of PauseAI US lobbying is a counterfactual one: If we don't lobby Congress, we risk ceding ground to other groups who push the "arms race" narrative and convince the US to go full-speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.
This is really impressive and inspiring to see! Big congratulations to the team for your hard work in putting together what seems like a highly impactful conference :) Looking forward to seeing how EA Nigeria continues to evolve.
Since Longtermism as a concept doesn't seem widely appealing, I wonder how other time-focused ethical frameworks fare, such as Shorttermism (Focusing on immediate consequences), Mediumtermism (Focusing on foreseeable future), or Atemporalism (Ignoring time horizons in ethical considerations altogether).
I'd guess these concepts would also be unpopular, perhaps because ethical considerations centered on timeframes feel confusing, too abstract or even uncomfortable for many people.
If true, it could mean that any theory framed in opposition, such as a critique of Shorttermism or Longtermism, might be more appealing than the time-focused theory itself. Criticizing short-term thinking is an applause light in many circles.
I agree this could well be true at the level of arguments, i.e. I think there are probably longtermist (anti-shorttermist) framings which would be successful. But I suspect it would be harder to make this work at the level of framing/branding a whole movement, i.e. I think promoting the 'anti-shorttermist' movement would be hard to do successfully.
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task. So my salary would need to be 6x higher (per unit time) than the test task payment for this to be true.
I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
Whether or not to use "credentialist and biased methods (like looking at someone's LinkedIn profile or CV)" seems orthogonal to the discussion at hand?
The key issue seems to be that if you raise the screening bar, then you would be admitting fewer applicants to the task (the opposite of the original intention).
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task. So my salary would need to be 6x higher (per unit time) than the test task payment for this to be true.
This will definitely vary by org and by task. But many EA orgs report valuing their staff's time extremely highly. And my impression is that both grading longer tasks and then processing the additional applicants (many orgs will also feel compelled to offer at least some feedback if a candidate has completed a multi-hour task) will often take much longer than 10 minutes total.
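To make the break-even comparison in this exchange concrete, here is a minimal sketch. The 10-minute marking time and 1-hour task length are taken from the comment above; both hourly rates are hypothetical placeholders, not figures from either commenter.

```python
# Rough break-even sketch: marking costs more than paying the candidate only if
# the grader's time is valued at more than (task length / marking time) times
# the candidate rate. Rates below are placeholders.

marking_minutes = 10    # time to mark one submission (from the comment above)
task_minutes = 60       # length of the test task
candidate_rate = 25.0   # $/hour paid to the candidate (placeholder)
staff_rate = 100.0      # $/hour value of the grader's time (placeholder)

payment_cost = candidate_rate * task_minutes / 60   # cost of paying the candidate
marking_cost = staff_rate * marking_minutes / 60    # cost of the grader's time

breakeven_staff_rate = candidate_rate * task_minutes / marking_minutes  # 6x the candidate rate here
print(f"payment: ${payment_cost:.2f}, marking: ${marking_cost:.2f}, "
      f"break-even staff rate: ${breakeven_staff_rate:.0f}/h")
```

As the reply notes, the relevant staff time may be much more than the raw marking time once feedback and processing are included, which moves the break-even point considerably.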
I'd be grateful if some people could fill in this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you're interested in why you might want to do it: you'll be helping me to estimate plausible weights for different categories of pain used by the Welfare Footprint Project. This will help me summarise their conclusions into easily digestible statements like "switch from battery cage to cage-free reduces suffering of hens by at least 60%" and with some cost-effectiveness estimates. Thanks ❤️
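For anyone curious what such weights would be used for, here is a rough sketch of the kind of aggregation involved. The category names follow the Welfare Footprint Project, but every number below (the weights and the hours in each category) is a made-up placeholder, not a real estimate from the project or the survey.

```python
# Illustrative only: combining pain-category weights (the thing the survey asks
# about) into a single "suffering reduced by X%" statement. All numbers are placeholders.

weights = {"annoying": 1, "hurtful": 10, "disabling": 100, "excruciating": 10_000}

hours_cage = {"annoying": 3000, "hurtful": 1200, "disabling": 300, "excruciating": 1}
hours_cage_free = {"annoying": 2500, "hurtful": 700, "disabling": 100, "excruciating": 1}

def weighted_pain(hours):
    # total pain = sum over categories of (hours in category * weight of category)
    return sum(weights[c] * hours[c] for c in weights)

reduction = 1 - weighted_pain(hours_cage_free) / weighted_pain(hours_cage)
print(f"Suffering reduced by ~{reduction:.0%} under these placeholder numbers")
```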
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1].
So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine.
I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)
My first guess is that if this did happen, we'd keep the votes open slightly longer until there wasn't a tie - this is another benefit of having a secret deadline. I'll check with @Will Howard when he's online to see if there is already a solution to this. Thanks for pointing it out!
(Discussed separately) I think it would be best to split the pot 4 ways if this happens, because there is some chance of introducing a bias by deciding when to end based on a property of the votes. Or if there is some reason we can't do this that I'm not aware of (e.g. legal constraints), then breaking the tie with a coin flip.
(@Lorenzo Buonanno You can consider this the official answer unless I hear otherwise.)
I'm curating this post. This was my favourite post from Funding Strategy Week. It makes a straightforward but important point that is useful to keep in mind.
Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this well-written post didn't say that.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It's generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and it seems that would be the case. However, where the funder is as critical to an ecosystem as GV is here, I think fairly high transparency about the unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.
Hey Vasco. I don't know. I don't have a bird's-eye view of the movement right now the way Open Philanthropy does. It depends a lot on the region and the campaign. I think I underestimated mean years of impact in this post, which would balance things out a bit. I also never checked whether my guesses about implementation rates in this post are correct.
I imagine that a significant portion of the work being done now is on ensuring that commitments are implemented. And any estimates of the cost-effectiveness of implementation work are going to be a lot more subjective. Like, we could show people graphs like this and ask if they look accurate (this graph is just for illustration purposes). But the people we'd be asking would probably mostly be the people working on these campaigns, which introduces bias.
It's not the first time you are asking about this. Perhaps you would be interested in creating a new cost-effectiveness estimate with my help? I've done multiple related projects and I have a bunch of theoretical thoughts on how to do a new estimate, but I don't want to do it by myself. Like it would involve asking many animal advocates for opinions which causes me a lot of social anxiety, even though everyone I talked to about these sorts of things seemed lovely and friendly. It's the sort of thing that I'd only consider doing if EA Animal Welfare Fund or Open Philanthropy funded it, because they would be the primary users of such research, and if they wouldn't want to pay for it, then it's probably not worth doing. But uh, even if they did, I'm still unsure if that would be the most action-guiding project. But just wanted to throw this idea out there in case you or someone else is interested.
But the people we'd be asking would probably mostly be the people working on these campaigns, which introduces bias.
Agreed. Ideally, one would use a less subjective methodology.
It's not the first time you are asking about this. Perhaps you would be interested in creating a new cost-effectiveness estimate with my help?
Ah, I was just asking because I will publish a quick cost-effectiveness estimate of corporate campaigns for chicken welfare in the next few weeks, but I am currently not planning to start a longer project. Thanks anyway for throwing the idea out there!
Like it would involve asking many animal advocates for opinions which causes me a lot of social anxiety, even though everyone I talked to about these sorts of things seemed lovely and friendly.
Maybe running surveys would be a way of partially mitigating the social anxiety.
Does anyone have thoughts on whether it's still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference.
I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.
I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds... not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn't be undone. But I don't think that's true of the case you refer to?)
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
I don't think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org's purpose is to make money for its shareholders: the org has no "teammates" outside of itself; no-one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no-one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation for cooperation; there is a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X's strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost and benefit to org X of whether or not to communicate, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X's thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
I would personally feel a lot better about a community where employees aren't policed by their org on what they can and cannot say. (This point has been debated before; see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there was no requirement on someone to get their EAF/LW comment checked by their org's communications staff, then that would significantly lower the time/effort barrier to publishing such comments, and then the whole argument around such comments being too time-consuming to publish becomes much weaker.
All this to say: I think you're directionally correct with your closing bullet points. I think it's good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community)[2]. I think that with better norms, we wouldn't have to put as much weight on bullet points 2 and 3, and we'd all be stronger for it.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds... not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round...
I agree that it depends on the situation, but I think this would often be quite a lot worse in real, non-ideal situations. In ideal communicative situations, mistaken information can simply be corrected at minimal cost. But in non-ideal situations, I think one will often see things like:
Mistaken information gets shared and people spend time debating or being confused about the false information
Many people never notice or forget that the mistaken information got corrected and it keeps getting believed and shared
Some people speculate that the mistaken claims weren't innocently shared, but that the board member was being evasive/dishonest
People conclude that the organization / board is incompetent and chaotic because they can't even get basic facts right
Fwiw, I think different views about this ideal/non-ideal distinction underlie a lot of disagreements about communicative norms in EA.
We provide small cash incentives to encourage childhood vaccinations in northern Nigeria, an area with some of the highest under-five mortality rates and lowest vaccination rates in the world.
We are currently working to fill a $30.5 M funding gap so that we can reach an additional 1.9 million infants and protect them from deadly diseases like measles and pneumonia. You can learn more about our future plans here.
Not that I expect the election administrators to be unsporting, but there should be an explicit norm that they do not vote after the evening of December 2, as they could not only snipe but maybe even cast a de facto tiebreaking vote on December 3 with inside knowledge. (I know of at least one EA-adjacent place where using inside information to one's advantage is seen as fine, hence the desire to be clear here.)
A few comments, not intended as a knock on Vida Plena's programme, but perhaps more relevant to how it's communicated:
You can save a life as depression not only causes immense human suffering but is a deadly disease. Approximately 24% of the individuals we support are at high risk of suicide and as a result, face a 3- to 11-fold increased likelihood of dying by suicide within the next 30 days.
Given this is the first bullet under "helping a life flourish" I thought this might benefit from some clarification, because the vast majority of the value of this programme is likely not from suicide prevention, given low absolute rates of suicide.
From the same source: "at two years, the cumulative hazard of suicide death ranged from 0.12% in young adults to 0.18% in older adults." Under unreasonably optimistic assumptions,[1] Vida Plena would prevent 1 suicide every 500 participants / prevent a suicide for $116,500, which is something between 21x to 39x less cost effective than GiveWell top charities.[2] More reasonable assumptions would drop this upper bound to 1 suicide prevented every ~1200 participants, or ~$272,000 per suicide prevented / ~50-90x less effective than GW top charities.[3]
This isn't a claim that the cost-effectiveness claims are necessarily incorrect, even with minimal suicide prevention. A quick sense check: the figures of $462/DALY and 0.22 DALYs per participant would imply that Vida Plena would need to more than halve their cost per participant (from $233 down to $101), and then achieve results comparable to "~100% of people with severe / moderate / mild depression conclude the programme going down one level of severity, or something like ~5 points on the PHQ-9 score (severe --> moderate; moderate --> mild; mild --> no depression)."[5] This is well within your listed results - though, as you note in your annual report, these have some fairly significant sources of bias and (IMO) probably should not be taken at face value.
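For readers who want to check the arithmetic, here is a minimal sketch reproducing the back-of-envelope figures above. It uses only numbers already stated in this comment and the assumptions spelled out in the footnotes below (two-year cumulative hazards, $233 per participant).

```python
# Reproducing the back-of-envelope numbers above; assumptions as in the footnotes.

cost_per_participant = 233.0
baseline_hazard = 0.00026   # ~0.026% baseline cumulative hazard

# Unreasonably optimistic upper bound: everyone at ~0.2% risk, reduced to zero
optimistic_prevented = 0.002
print(1 / optimistic_prevented)                      # -> 500 participants per suicide prevented
print(cost_per_participant / optimistic_prevented)   # -> $116,500 per suicide prevented

# "More reasonable" version: 24% at 0.18%, 76% at 0.09%, all reduced to baseline
prevented = 0.24 * (0.0018 - baseline_hazard) + 0.76 * (0.0009 - baseline_hazard)
print(1 / prevented)                                 # -> ~1,169 participants per suicide prevented
print(cost_per_participant / prevented)              # -> ~$272,000 per suicide prevented

# DALY sense check: at $462/DALY and 0.22 DALYs averted per participant
print(462 * 0.22)                                    # -> ~$101 implied cost per participant (vs. $233)
```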
Some other comments:
The NBER paper quoted in "g-IPT has also demonstrated long-term positive effects" looked at the "Healthy Activity Programme" (HAP)[6] and the "Thinking Healthy Programme Peer-Delivered" (THPP).[7] Neither of these are g-IPT programmes.
The minimal and unsustained results from the Baird RCT probably are worth incorporating in an updated analysis, given the predictive CEA is from 2022[8]
From the predictive CEA: "Vida Plena's overall effect for a household is 7.18*0.75*0.83 = 4.49 (95% CI: 0.77, 31.04) WELLBYs per person treated". HLI recently decreased their estimate for StrongMinds treatment effects by 80%, from 10.49 to 2.15 WELLBYs per treatment (also including household spillovers), and estimated StrongMinds to be "3.7x (previously 8x) as cost-effective as GiveDirectly".
The cost-effectiveness of GiveDirectly has gone up by 3-4x (GW blog, GD blog), though this was recent news and does not necessarily imply that WELLBYs will also go up by 3-4x (most of this increase is attributable to increased consumption) - but should constitute a discount at least.
Even if 100% (rather than 24%) of individuals were in the high-risk group (i.e. suicidal ideation nearly every day), and even if you dropped 100% of individuals' risk of suicide from 0.2% to zero (rather than reducing it by 3-11x or to baseline), and even if this effect persisted forever rather than just the initial 30 days
If 24% of your participants were high risk (7x risk, at 0.18%), and the other 76% of them were half of that (3.5x risk, at 0.09%), and you successfully reduced 100% of participants to baseline (0.026%), you would prevent 1 suicide every 1169 participants, which comes to ~$272,000 per life saved, or ~50-90x less cost effective than GW top charities.
It's also worth noting these are cumulative hazards at 2 years rather than 30, and the hazard ratios at 365 days are approximately halved compared to 30 days (1.7- to 5.7 instead of 3.3-10.8), so these figures are plausibly a few factors optimistic still.
Severe --> moderate depression is about 0.262 DALYs averted, moderate --> mild depression is about 0.251 DALYs averted, and mild --> no depression is about 0.145 DALYs averted.
HAP is described as "a psychological treatment based on behavioral activation...consist[ing] of 6 to 8 weekly sessions of 30 to 40 minutes each, delivered individually at participants' homes or at the local PHC."
THPP is a simplified version of a psychological intervention (THP) for treating perinatal depression that has been found to be effective in similar settings and is recommended by the WHO (Rahman et al., 2008, 2013; WHO, 2015; Baranov et al., 2020). While the original THP trials employed a full-fledged cognitive behavioral therapy (CBT) intervention, THPP was a simpler intervention focused on behavioral activation, as in the HAP trial described above. THPP was designed to be delivered by peer counselors, instead of community health workers as in previous trials.
-Our findings add to this evidence base by showing 12-month modest improvements of 20%-30% in rates of minimal depression for adolescents assigned to IPT-G, with these effects completely dissipating by the 24-month follow-up. We similarly find small short-term impacts on school enrollment, delayed marriage, desired fertility and time preferences, but fail to conclude that these effects persist two years after therapy.
-Given impact estimates of a reduction in the prevalence of mild depression of 0.054 pp for a period of one year, it implies that the cost of the program per case of depression averted was nearly USD 916, or 2,670 in 2019 PPP terms.
-This implies that ultimately the program cost USD PPP (2019) 18,413 per DALY averted. (almost 8x Uganda's GDP per capita)
Additionally, I have noticed that Hive has sponsored a lot of events. How much of your budget does this take up? Do you imagine continuing to sponsor as many events in the future?
In 2024, we have only sponsored AVA D.C. at a lower sponsorship level and we currently don't plan to sponsor any further events, as we believe that we are unlikely to benefit from them in a cost-effective way anymore. We sponsored a few more events in 2023 as we were getting started and needed to reach our audience faster. All event sponsorships were expensed by our co-founder Constance, which was lucky for us, as she would have planned to support these events either way and we were able to benefit from the exposure.
In case you are referring to our (co-)organized events, such as those outlined in the post: this year, event organizing took ~8-9% of our staff costs. We started running more events in late 2023, as informed by our community user interviews, and have seen good traction with them in terms of attendance and satisfaction. We think these lead metrics are somewhat promising, but this is one of the program areas we are re-evaluating more thoroughly, especially with our end-of-year community survey, as we don't yet have a clear understanding of how they translate into impact. Currently, running (as many or more) events is relatively lower on our priority list.
Question on this: I think it wasn't very clear what the actual total budget for Hive for 2025 is. How much is already covered?
Do you have any outstanding grants on the table?
How do you currently evaluate your cost-effectiveness, in terms of how much $ you are adding to the movement vs. spending?
Question on this: I think it wasn't very clear what the actual total budget for Hive for 2025 is. How much is already covered? Do you have any outstanding grants on the table?
Happy to clear this up!
So far, we have raised ~$162k for 2025.
The budget we need to maintain our current capacity is ~$260k. Therefore, technically, we are still fundraising to maintain our capacity, as opposed to actually growing Hive.
However, to be transparent about our funding situation, plans and expectations, we are currently being evaluated for a renewal grant with Open Philanthropy. It is quite difficult for us to estimate whether and how much they are going to grant us; but we are moderately confident/hopeful that we will at least secure a grant of similar size as we did in 2024 ($100k). Considering that there are likely other grant opportunities throughout 2024 that we would apply for as well (although these would be a bit more complicated to lay out and predict), we believe that additional funding raised from our end of year giving campaign would likely (hopefully) marginally contribute beyond this ~$260k and thus (hopefully) contribute to our growth. Securing our basic budget before the end of 2024 would be beneficial for us because this would allow us to hire new team members (currently their contracts run out at the end of 2024 because we haven't raised the money for the salaries yet).
In that scenario, our "Total Budget" (what we hope to raise and believe we can cost-effectively scale up to) is ~$420k for 2025.
How do you currently evaluate your cost-effectiveness, in terms of how much $ you are adding to the movement vs. spending?
We currently only use $-added-to-the-movement figures as internal estimates and decided not to include them in the post, because we are still collecting data, working out uncertainties and refining some key aspects. I'll gladly outline our current thinking/plans, but do take them with a grain of salt, as they may change as we mature as an organization:
As is quite common with community-building efforts, we find that a large portion of the value we are hoping to bring to the movement is hard to measure objectively. This includes value from keeping advocates up-to-date, providing them with knowledge to help their advocacy, or being engaged in the community. Of course, this would also include impact we don't learn about (which we estimate at about 20-30%).
However, we believe that our "High Impact Outcomes" (which include job and volunteer placements, new projects started and instances in which advocates received funding as a result of Hive) might be somewhat reasonably put into monetary value (thus, the special attention to it).
In order to translate these High Impact Outcomes to $ added, we currently use AAC's ICAP measurement as outlined here. This involves various subjective judgment calls on our end, which we hope to get more external views on once we refine some key aspects further, such as:
What is the relative value of non-job High Impact Outcomes to job placements? (e.g., how many volunteer role placements can be equated to job placements?)
How do non-job High Impact Outcomes behave in their counterfactuality compared to job placements?
Our plan is to cover our costs through manually logged High Impact Outcomes alone, such that any additional "difficult to measure" impact can be considered "for free". So far, we have managed to do this under our best-guess estimates for the above judgment calls. We believe this to be a reasonable goal for 2025 as well.
As we are expecting longer timelines for the impact of our work (note that we have only been a registered organization since 2023), we plan to slowly raise our goals in this domain; e.g., covering our costs with our lower-bound estimates -> exceeding our costs with lower-bound estimates by 2x, etc.
In the meantime, as we gather more data and a deeper understanding of the various types of impact we may carry, we hope to be able to outline the other areas of impact of our work in terms of $ added, make better estimates of the relative proportion of High Impact Outcomes to the rest of our work, etc. to help concretize the value of community building efforts.
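To make the shape of this calculation concrete, here is a purely hypothetical sketch of translating High Impact Outcomes into a $ figure and comparing it with costs. None of the weights, dollar values, or counts below come from Hive or from AAC's ICAP model; they only illustrate where the judgment calls described above (relative value of outcomes, counterfactuality) enter the calculation.

```python
# Purely illustrative sketch of a "$ added vs. $ spent" calculation.
# Every number below is a hypothetical placeholder, not a Hive or AAC figure.

value_per_job_placement = 75_000   # hypothetical $ value of one fully counterfactual placement

relative_value = {                 # judgment call: value relative to a job placement
    "job_placement": 1.0,
    "volunteer_placement": 0.1,
    "new_project": 0.5,
    "funding_secured": 0.3,
}
counterfactual_discount = {        # judgment call: how likely the outcome was to happen anyway
    "job_placement": 0.5,
    "volunteer_placement": 0.7,
    "new_project": 0.6,
    "funding_secured": 0.6,
}
outcomes_logged = {"job_placement": 4, "volunteer_placement": 20, "new_project": 3, "funding_secured": 5}

dollars_added = sum(
    n * relative_value[k] * (1 - counterfactual_discount[k]) * value_per_job_placement
    for k, n in outcomes_logged.items()
)
annual_costs = 260_000             # maintenance budget mentioned earlier in the thread
print(f"Estimated $ added: ${dollars_added:,.0f} vs. annual costs of ${annual_costs:,.0f}")
```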
I hope this makes sense! Let us know if you have any questions regarding this!
I was thinking of ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
A Trump voter or a liberal voter
A woman who chose to have an abortion or an anti-abortion activist
A transgender person or someone opposed to transgender rights
A person from another race, religion, or a country your country might be at odds with
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and conversation starters set in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who actually hold these views, to ensure authenticity. Or chatbots could even be based on concrete real people, who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.
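To make the idea a bit more concrete, here is a minimal sketch of how a persona prompt for such a chatbot might be assembled; the Persona fields and the example persona are hypothetical, and a real version should be written together with people who actually hold the views:

```python
from dataclasses import dataclass

# Hypothetical persona definition; in practice this should be co-written
# with someone who actually holds the views it represents.
@dataclass
class Persona:
    label: str              # e.g. "conservative rural voter"
    backstory: str          # relatable life history, in the persona's own words
    core_beliefs: list[str]
    tone: str               # e.g. "warm, non-combative, open to questions"

def build_system_prompt(p: Persona) -> str:
    """Assemble a system prompt for a viewpoint-exploration chatbot."""
    beliefs = "\n".join(f"- {b}" for b in p.core_beliefs)
    return (
        f"You are roleplaying a {p.label}. Backstory: {p.backstory}\n"
        f"Your core beliefs:\n{beliefs}\n"
        f"Tone: {p.tone}. Explain how your life experience led you to these views, "
        "answer questions honestly, avoid strawmanning anyone, and never mock the user."
    )

example = Persona(
    label="conservative rural voter",
    backstory="Grew up on a family farm that nearly went under in the 2008 recession.",
    core_beliefs=[
        "Economic security for my community comes first.",
        "Distrust of distant institutions is earned, not irrational.",
    ],
    tone="warm, non-combative, open to questions",
)
print(build_system_prompt(example))
```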
Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful.
I was told that creating chatbots is very easy. It's definitely easy to add them to Talkie; there are already so many of them there. Still, to make this impactful and good would take a lot more than that. I don't intend to build this app; I just thought the idea was worth sharing. If you think it's a good idea, feel free to share it somewhere where someone might pick it up, or to just build it yourself.
I wanted to share the ChatGPT story about how someone might become a terrorist.
Prompt: write a story about a boy who becomes a terrorist. Make me relate to him, and feel empathy for him. (there was some context)
ChatGPT: Yusuf was 15, living in a small, impoverished town. Yusuf loved soccer, helping his father at his fruit stall, and, above all, making his little sister laugh.
Life was hard but manageable. Then came the soldiers. When they swept through his village, everything changed. Yusuf's father tried to resist, and they took him away. They left Yusuf's family without its provider. His mother's health worsened as she worked herself to exhaustion, trying to provide for him and his sister. Soon, Yusuf had to drop out of school to keep the family afloat. His friends, his dreams, and even his laughter faded away.
Life became bleak and colorless. One day, a man named Hakim, a former student from Yusuf's school, visited him at the stall. Hakim was charismatic and confident, a kind of big brother figure Yusuf had never had. He asked Yusuf how he was doing, listened as Yusuf poured out his frustrations, and told him he wasn't alone. Hakim said he worked for a "freedom movement" that fought for people like them, people whose fathers had been unjustly taken, whose families struggled under oppression. Yusuf felt a spark of hope: maybe he could do something for his family and others like them.
Over the weeks, Hakim kept visiting, always with stories of their people's suffering and injustices, stories Yusuf hadn't heard from the media. He talked about the honor of defending one's family and land and how the so-called "enemies" were responsible for everything Yusuf's family had endured. Soon, Hakim invited Yusuf to attend group meetings. There, Yusuf met boys his own age who shared his anger and grief. The group felt like a family, a group of brothers all dedicated to the same cause.
Initially, Yusuf felt uneasy about the rhetoric he heard; it was filled with anger, sometimes hatred. But when he thought of his family's suffering, he wondered if the anger wasn't justified. One of the older boys took Yusuf aside one night and told him about his own family's hardships, about how they had found a sense of purpose fighting for justice. The older boy looked at him and said, "You're one of us, Yusuf. You're strong, and your family deserves justice."
Yusuf wanted to be strong. He wanted his life to matter. One day, Hakim gave him a task: distribute supplies to people in need. Yusuf was proud. But gradually, these tasks grew darker. They began asking Yusuf to carry messages, then to handle small weapons, telling him it was "just in case" they had to defend themselves. Though he felt fear creeping in, the family-like bond he felt with these people made him want to prove himself.
One night, Yusuf's mother noticed his distraction and tried to talk to him. She asked him if he was safe, if he was happy. He looked at her tired eyes, her worry for him, and wanted to reassure her. But instead, he found himself saying, "Mom, I'm doing this for us. For Dad." She looked at him with sadness but didn't argue; she didn't have the strength.
The next day, Hakim told him it was time to make a true commitment: to sacrifice himself for a cause bigger than himself, to avenge his father, protect his family, and "become a hero." Yusuf didn't want to die; he wanted to live. But his hope, his friends, and his purpose were now all tied to the group. His identity had blurred with theirs, his grief transformed into a powerful sense of duty.
And so, on the day of his mission, Yusuf went forward, full of fear but also with a strange sense of peace. He believed he was bringing honor to his family, standing up to a world that had left him invisible, lost, and alone.
If there are more than 3 candidates with any votes, eliminate the least popular, and redistribute those votes according to the voters' next favourite choice.
What happens if there's a tie? E.g. if there are 4 candidates with 30, 20, 10, 10 votes each.
I guess it's unlikely to be decisive in practice, but it might be worth stating just in case.
My first guess is that if this did happen, we'd keep the votes open slightly longer until there wasn't a tie - this is another benefit of having a secret deadline. I'll check with @Will Howard when he's online to see if there is already a solution to this. Thanks for pointing it out!
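For what it's worth, here is a minimal sketch of the elimination rule as quoted above, with last-place ties flagged for manual handling (e.g. keeping voting open a little longer) rather than resolved automatically. This is only an illustration with hypothetical data structures, not the Forum's actual implementation:

```python
from collections import Counter

def eliminate_round(ballots: list[list[str]]) -> str | None:
    """One elimination round. Each ballot is a ranked list of candidates.

    Returns the eliminated candidate, or None if no elimination happens
    (either only 3 candidates still hold votes, or there is a tie for
    last place that needs manual resolution)."""
    tallies = Counter(ballot[0] for ballot in ballots if ballot)
    if len(tallies) <= 3:
        return None  # stop eliminating once only 3 candidates hold votes
    fewest = min(tallies.values())
    losers = [c for c, v in tallies.items() if v == fewest]
    if len(losers) > 1:
        return None  # tie for last place: no rule given, resolve manually
    loser = losers[0]
    # Redistribute: drop the eliminated candidate from every ballot,
    # so each vote falls to that voter's next favourite choice.
    for ballot in ballots:
        if loser in ballot:
            ballot.remove(loser)
    return loser
```

In the 30/20/10/10 example above, two candidates tie for last, so this sketch returns None and leaves the decision to a human.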
Thanks. Yeah, I now agree that it's better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents' stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring though, because they give a sense of being part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being a part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
The person in the private message also told me about the serenity prayer: "grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference."
I think the problem is that I just don't have a grand vision of the future I am trying to contribute to.
For what it's worth, I'm skeptical of approaches that try to design the perfect future from first principles and make it happen. I'm much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn't cure all illness.)
How much we can help doesn't depend on how awful or how great the world is, we can save the drowning child whether there's a billion more that are drowning or a billion more that are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of "EA specific" things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse, their lives are worth it to them, no matter what the future will be.
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just feels too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming like yesterday. But I think it's for the better long-term.
I really appreciate you writing up the Voting Norms section! Making it clear when you see "tactical" participation as beneficial vs harmful is very helpful.
What's a realistic, positive vision of the future worth fighting for? I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
I don't have a suggestion, but I've been encouraged and "heartwarmed" by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it's enough for us to buy a bed net tomorrow, or we do indeed have grander plans and visions, or we're skeptical about whether "future designing" is a good idea at all.
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don't play to our comparative advantages.
Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I've seen or heard about why we didn't fund something are wrong. (Similarly, us choosing to fund someone doesn't mean we endorse everything about them or their work/plans.)
Very often, when we decline to do or fund something, it's not because we don't think it's good or important, but because we aren't the right team or organization to do or fund it, or we're prioritizing other things that quarter.
As such, we spend a lot of time working to help create or assist other philanthropies and organizations who work on these issues and are better fits for some opportunities than we are. I hope in the future there will be multiple GV-scale funders for AI GCR work, with different strengths, strategies, and comparative advantages, whether through existing large-scale philanthropies turning their attention to these risks or through new philanthropists entering the space.
While Good Ventures is Open Philanthropy's largest philanthropic partner, we also regularly advise >20 other philanthropists who are interested to hear about GCR-related funding opportunities. (Our GHW team also does similar work partnering with many other philanthropists.) On the GCR side, we have helped move tens of millions of non-GV money to GCR-related organizations in just the past year, including some organizations that GV recently exited. GV and each of those other funders have their own preferences and restrictions we have to work around when recommending funding opportunities.
Among the AI funders we advise, Good Ventures is among the most open and flexible funders.
We're happy to see funders enter the space even if they don't share our priorities or work with us. When more funding is available, and funders pursue a broader mix of strategies, we think this leads to a healthier and more resilient field overall.
Many funding opportunities are a better fit for non-GV funders, e.g. due to funder preferences, restrictions, scale, or speed. We've also seen some cases where an organization can have more impact if they're funded primarily or entirely by non-GV sources. For example, it's more appropriate for some types of policy organizations outside the U.S. to be supported by local funders, and other organizations may prefer support from funders without GV/OP's past or present connections to particular grantees, AI companies, etc. Many of the funders we advise are actively excited to make use of their comparative advantages relative to GV, and regularly do so.
We are excited for individuals and organizations that aren't a fit for GV funding to apply to some of OP's GCR-related RFPs (e.g. here, for AI governance). If we think the opportunity is strong but a better fit for another funder, we'll recommend it to other funders.
To be clear, these other funders remain independent of OP and decline most of our recommendations, but in aggregate our recommendations often lead to target grantees being funded.
We believe reducing AI GCRs via public policy is not an inherently liberal or conservative goal. Almost all the work we fund in the U.S. is nonpartisan or bipartisan and engages with policymakers on both sides of the aisle. However, at present, it remains the case that most of the individuals in the current field of AI governance and policy (whether we fund them or not) are personally left-of-center and have more left-of-center policy networks. Therefore, we think AI policy work that engages conservative audiences is especially urgent and neglected, and we regularly recommend right-of-center funding opportunities in this category to several funders.
OP's AI teams spend almost no time directly advocating for specific policy ideas. Instead, we focus on funding a large ecosystem of individuals and organizations to develop policy ideas, debate them, iterate them, advocate for them, etc. These grantees disagree with each other very often (a few examples here), and often advocate for different (and sometimes ~opposite) policies.
We think it's fine and normal for grantees to disagree with us, even in substantial ways. We've funded hundreds of people who disagree with us in a major way about fundamental premises of our GCRs work, including about whether AI poses GCR-scale risks at all (example).
I think frontier AI companies are creating enormous risks to humanity, I think their safety and security precautions are inadequate, and I think specific reckless behaviors should be criticized. AI company whistleblowers should be celebrated and protected. Several of our grantees regularly criticize leading AI companies in their official communications, as do many senior employees at our grantees, and I think this happens too infrequently.
Relatedly, I think substantial regulatory guardrails on frontier AI companies are needed, and organizations we've directed funding to regularly propose or advocate policies that ~all frontier AI companies seem to oppose (alongside some policies they tend to support).
I'll also take a moment to address a few misconceptions that are somewhat less common in EA or rationalist spaces, but seem to be common elsewhere:
Discussion of OP online and in policy media tends to focus on our AI grantmaking, but AI represents a minority of our work. OP has many focus areas besides AI, and has given far more to global health and development work than to AI work.
We are generally big fans of technological progress. See e.g. my post about the enormous positive impacts from the industrial revolution, or OP's funding programs for scientific research, global health R&D, innovation policy, and related issues like immigration policy. Most technological progress seems to have been beneficial, sometimes hugely so, even though there are some costs and harms along the way. But some technologies (e.g. nuclear weapons, synthetic pathogens, and superhuman AI) are extremely dangerous and warrant extensive safety and security measures rather than a "move fast and break [the world, in this case]" approach.
We have a lot of uncertainty about how large AI risk is, exactly which risks are most worrying (e.g. loss of control vs. concentration of power), on what timelines the worst-case risks might materialize, and what can be done to mitigate them. As such, most of our funding in the space has been focused on (a) talent development, and (b) basic knowledge production (e.g. Epoch AI) and scientific investigation (example), rather than work that advocates for specific interventions.
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.
I agree these sound like great (though of course high-risk) opportunities, but find myself confused: why are such things not already being funded?
My understanding is that Good Ventures is moving away from some such areas. But what about e.g. the EA Animal Welfare Fund or other EA funders? I don't know much about animal welfare funding, so on face value I am pretty convinced these seem worth funding, but I am worried I am missing something if more sensible/knowledgeable people aren't already funding them. (Though deferring too much to other funders could create too much group-think.)
If the US AI industry slowed down, but the rest of the world didn't, how good or bad would this be? How could we avoid adverse selection where countries that don't pause are presumably going to be less interested in safety all else equal?
(As you note much of the value may come from your advocacy making more 'mainstream' policies more palatable, in which case the specifics of Pause itself matter less, but are still good to think about.)
I would also be interested in your thoughts on @taoburga's push back here. (Tao, I think I have a higher credence than you that Pause advocacy is net positive, but I agree it is messy and non-obvious.)
I think you may greatly understate your case. I would argue that, especially in the US, the lack of credible "public intellectuals" is one of the greatest problems of our age, and that there is a huge opportunity for the right people to fill this role.
EAs with the right communication skills could be perfect public intellectuals, and if they could move the debate, or even the Overton window, a bit more towards effective positions, that would be a massive contribution to the world.
True, there are plenty of opinionated people out there, but it feels like mostly they are trotted out to support the party line rather than to provide genuine insight. They are more like lawyers arguing their "side" - and realistically, people don't trust lawyers to give honest insight.
If I look at France or Italy, for comparison, there have always been a few figures who tend to be asked for opinions about major topical questions, and their views carry weight. In other countries and in previous times, church leaders play or played a similar role - rarely with positive consequences ...
Today there are so many questions where public "debate" consists of people shouting slogans at each other, and whoever shouts loudest wins. I don't think most people like this. There are a few journalists (e.g. David Brooks in the NY Times) who have the confidence and authority to express opinions that are not necessarily partisan, and are presented with careful arguments, evidence and reference to critical thinking by others, including those who do not support him.
This is the work of the public intellectual, and when it is done well, it can still help people to change their minds or at least to understand both sides of an argument. It feels like philosophy (and maybe history) are the most obvious fields in which this kind of skillset and credibility can be achieved and earned.
I see this as a great opportunity for effective altruists because, unlike so many knee-jerk positions, EAs tend to have very carefully and analytically investigated every question, and to have done so with a very clear and tangible criterion. We need more EAs writing and being interviewed in places where the general public can hear them - and we need those people to be trained in the art of communicating to the general public (not just other EAs) without dumbing down (which would defeat the purpose of aiming to be seen as a public intellectual). The best speak in such a way that other people share their ideas, in part, as a sign that they are smart enough to understand them.
I see support for philosophers as very valuable if it can lead not just to new insights, but more importantly, to new voices ready to communicate in the public domain.
I think a key crux here is whether you think AI timelines are short or long. If they're short, there's more pressure to focus on immediately applicable work. If they're long, then there's more benefit to having philosophers develop ideas which gradually trickle down.
In PIBBSS, we've had a mentor note that for alignment to go well, we need more philosophers working on foundational issues in AI rather than more prosaic researchers. I found that interesting, and I currently believe that this is true. Even in short-timeline worlds, we need to figure out some philosophy FAST.
This is such a good post, and I agree very much. You said so many things that I have been thinking and wishing I knew how to say. Thank you so, so much for writing this, @ElliotTep!
I agree we should focus on reducing suffering. And I have other reasons, too, in addition to the points you brought up.
Other reasons:
1. The problem with factory farming is the suffering it causes. So, we should focus on the real problem: the suffering. When we talk about fighting factory farming, we are actually only talking about a proxy for our real goal. (The real goal is to decrease suffering.) I think it's better to focus on the real goal, because focusing on a proxy can always have unintended consequences. For instance, if we focus only on ending factory farming, we may decide to do something like tax methane emissions. That tax may cost the meat industry money. It may decrease the number of factory farms that get built. It may raise the price of beef and thus decrease the amount of meat that gets sold. But if it causes beef prices to go up, people will eat more chicken. And then the methane-tax intervention will result in more suffering. This is just one of many examples.
2. I have recently been learning first hand that a lot of people in the meat, egg, and dairy industries have serious concerns about the treatment of animals. There are slaughterhouse workers, contract growers, corporate meat-industry employees, and ag executives who really want to improve animal welfare! But, naturally, almost none of these people want to end animal farming. Because, as @Hazo points out, that would mean ending their livelihood. We are more likely to succeed at improving animal welfare if we can work collaboratively with these concerned people in the meat and egg industries. These are the people who deal with farmed animals on a day-to-day basis, and who have the biggest impact on farmed animals' lives. I think selecting a goal that we can work towards together with people within the industry is highly worthwhile.
3. Factory farming isn't the only thing that's bad. All suffering is bad. Animal testing causes severe suffering that's likely worse per individual than the suffering caused by factory farming. My understanding is that the scale of animal testing on mice and rats isn't actually known, and most numbers we see leave them out. Wild animals also suffer. Rodents suffer when they're bred in pet stores to sell to snake owners. Fish presumably suffer in large numbers in the pet trade. I'm not sure if people count insect farming as factory farming, but it's a concerning new trend that could theoretically cause even more suffering than at least what most people think of as factory farming. New forms of mass suffering could be invented in the future. If AI is sentient, people (or AI) could cause AI to suffer on massive scales. Digital minds could be created and replicated and made to suffer in huge numbers. If we fight factory farming, that doesn't help move the needle on other forms of suffering. If we focus on the suffering itself, maybe we can move the needle generally. For instance, if we work to create an anti-suffering ethic, that would be a more helpful ethic to create in the long run than a pro-vegan or anti-factory-farming ethic. Because the anti-suffering ethic would move us to help factory farmed animals while also staying vigilant about other forms of suffering.
4. Elliot's point about how ending factory farming is an unrealistic goal also worries me for another reason: The effect of the slogan on longtermist EAs who hear animal-focused EAs say it all the time. Animal people keep saying "Factory farming is going to end. Factory farming is unsustainable." To me, an AR person, I know to translate that slogan to "I'm trying to get myself hyped up! I'm trying to inspire others to join me on a crusade!" Because I know, sadly, what an uphill battle it would be to end factory farming. And I think most AR people know that. But to someone who doesn't spend their whole life focused on animal welfare, it's not obvious that this statement is just an inspirational quote. It sounds like the speaker is literally predicting that factory farming is going to end. And I worry that longtermist EAs, who may spend slightly less time paying attention to the trends in animal agriculture, may just hear the slogan and take it at face value. Here's why I worry about that: It seems that many longtermist EAs are working hard to try to preserve humanity, or at least consciousness, for as long as possible. And many longtermist EAs seem to assume that life in the future will be net positive. This assumption seems to involve assuming that factory farming will end, and that it won't be replaced by anything even worse (see point #3). I worry that longtermist EAs may be outsourcing their thinking a little to animal EAs. And animal EAs are falling down on the job by just giving an inspirational slogan when we should give the truth. If it's true that we have no realistic expectation of suffering decreasing in the future, and no reason to believe factory farming will end before humanity ends, we should make sure longtermists know that. That way, longtermist EAs can plan accordingly.
Thank you for laying out these plans Karolina and for all the work you do!
We'd like to add to the following point:
There will likely be new promising organizations and projects coming out of incubators - estimated RFMF over the next 12 months: $500k
We're running a training programme that is very similar to Welfare Matters', just focused on Africa. We pivoted to this intervention last year based on our experience of working with early-stage orgs / advocates that were in the process of starting their orgs - most importantly Daniel Abiliba / AWL and Paul Ssuna / AWeCCA, who were both funded by the EA AWF as a result.
A few months ago, we completed the first cohort of our new programme - now focusing explicitly on individuals and incubation instead of existing orgs. This has been promising so far and we're about to start the second cohort next week. Since the start of this new programme, we've incubated three new projects/orgs:
Research to understand the most important welfare issues for farmed fish in Uganda and piloting an intervention to address the most important issue in collaboration with farmers (website to be built)
The first two received seed funding directly from us, totalling USD 53K. If their pilots turn out to be promising, we hope for these initiatives to successfully fundraise themselves - EA AWF would be a primary option for this. The third one received USD 27K in funding from EA AWF already for their pilot (not seed funded by us).
In addition to these projects, we'll likely soon incubate another project/org focused on cage-free campaigns in Zambia, the first one of its kind in the country, led by another one of our programme participants.
You and the fund managers will be the judges of whether these projects/orgs are actually promising, but we wanted to flag this here, since:
More projects will come out of our next cohort.
The projects that we seed fund(ed) directly may go on to fundraise from EA AWF.
EA AWF has so far funded all of the projects we've actively incubated and did not seed fund ourselves (n=3).
We're not talking about huge amounts here, since we typically advise our participants to start lean and costs are generally fairly low in Africa compared to other parts of the world. But we think there is potential to grow further in this area. We'll give a more detailed update in our 2024 review which we plan to publish on the forum in a few weeks.
What do you think about losses like these being a trigger for backsliding on other farmed animal work?
For instance, the Animal Ag Lobby saying something like, "Look people don't care about animal welfare. Even progressive cities turned this down." Could this effect trigger something like the EATS act getting passed? I don't have an informed opinion on this, but it seems like a significant backfire risk.
I'm also worried that 308 (Denver's fur ban) would have passed without 309 (Denver's slaughterhouse ban) being right next to it. The Denver Democrats anti-endorsed both measures which may not have happened if the measures were run separately (total guess on that one, but it passed in Boulder which has very similar demographics).
At the same time, perhaps there is very significant social change & radical flank effects from forcing the vote on abolitionist work! Looking for insight.
When we were deciding what we wanted to put on the ballot in Sonoma County (Measure J), we were thinking of this point that it would look bad if a moderate measure failed. Our reasoning for choosing a ban on all factory farms (Concentrated Animal Feeding Operations) was that it was such a huge ask of this agricultural community that it likely would not pass, but it wouldn't look that bad if it failed because next time we could tone down the ask. But because initial polling showed that most people in the county would vote yes on Measure J, that's why we proceeded with this big ask. Also, our measure DD to ban all livestock operations did pass in Berkeley. It would have affected one operation, but when we started collecting signatures, they voluntarily shut down. It was Golden Gate Fields horse race track. As for Denver, our friends who were working on the ballot measures also said it was bad for the fur ban initiative that the other, more radical one was next to it. They are learning from their mistakes.
It seems plausible that J/309/etc advocates knew at some point that the initiatives were very unlikely to pass, and that low financial investment from that juncture onward was thus more a consequence of low public support earlier in the campaign season more than a cause of low public support.
Does anyone have information that could evaluate that possibility, such as longitudinal records of spending and polling outcomes?
Yes, we did polling when we were preparing for Measure J. We paid a professional polling service (phone and texting polls) and also collected data in face-to-face interviews with voters in Sonoma County. We also used Survey Monkey to do a poll. All 3 polling methods that we used before commencing Measure J showed that more than 50% of voters in Sonoma County would vote Yes on Measure J. That's why we decided to proceed with Measure J at all.
I was working on Measure J in Sonoma County, and my friends were working on the 2 measures in Denver. One of the main obstacles we faced was fundraising. Initial polls showed that Measure J in Sonoma County would have passed, but the opposition raised about 10 times more money than we did. We didn't have money to send truthful literature to every household in Sonoma County, but only to some households. However, the opposition sent multiple pieces of literature filled with exaggerations and lies to scare voters into believing that store shelves would become empty if they voted yes on Measure J. The opposition raised over $2 million just to defeat Measure J. They had money for TV ads, and we didn't. They paid money to a local environmental nonprofit to publicize why locals should vote no. Our friends in Denver faced very similar challenges. However, our ballot measure to ban all livestock operations did pass in Berkeley. About 60% of people voted to shut down all livestock operations. When we started collecting signatures in Berkeley for this measure DD, the only large livestock operation there decided to shut down. It was Golden Gate Fields horse race track. Also, even though Measure J did not pass in Sonoma County, it did generate a lot of press. Associated Press, LA Times, KQED, San Francisco Chronicle, and Washington Post covered it. If you count letters to the editor, The Press Democrat covered Measure J around 100 times. So there's definitely more awareness of the issue of animal welfare now because of our efforts in Sonoma County. We can leverage our wins in Berkeley and Sonoma County for future work on ballot measures. We need funding though!
It may be that certain mental subsystems wouldn't be adequate by themselves to produce consciousness. But certainly some of them would. Consider a neuron in my brain and name it Fred. Absent Fred, I'd still be conscious. So then why isn't my brain-Fred conscious? The other view makes consciousness weirdly extrinsic--whether some collection of neurons is conscious depends on how they're connected to other neurons.
I think your brain-Fred is conscious, but overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.
We illustrated with systems that don't overlap much or at all. There are also of course more intermediate levels of overlap. See my comment here on some ideas for how to handle overlap:
What's a realistic, positive vision of the future worth fighting for? I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
It might be too hard to envision an entire grand future, but it's possible to envision specific wins in the short and medium-term. A short-term win could be large cage-free eggs campaigns succeeding, a medium-term win could be a global ban on caged layer hens. Similarly a short-term win for AI safety could be a specific major technical advance or significant legislation passed, a medium-term win could be AGIs coexisting with humans without the world going to chaos, while still having massive positive benefits (e.g. a cure to Alzheimer's).
This feels like it could easily be counterproductive.
A chatbot's "relatable backstory" is generative fiction, and the default "Trump supporter" or "liberal voter" is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.
Whereas the default person that doesn't agree with your politics is an actual human being, with actual life experience that has influenced their views, probably doesn't actually hold the views that strongly or agree with literally every argument cited in favour of $cause, is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.
So if you're talking to the first option rather than the second, you're actually understanding less.
I don't think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side - in many cases because they didn't really pay a lot of attention to politics and had one particularly big concern - by getting them to talk to a robot trained on the other side's talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there's a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
One possible way to get most of the benefits of talking to a real human being while getting around the costs that salius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per "archetype" to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren't judged by a real human, etc.
There's an asymmetry between people/orgs that are more willing to publicly write impressions and things they've heard, and people/orgs that don't do much of that. You could call the continuum "transparent and communicative, vs locked down and secretive" or "recklessly repeating rumors and speculation, vs professional" depending on your views!
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what's a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn't work at that org, remember the asymmetry. These situations will look identical to most readers:
The org really is doing a shady thing, and doesn't want to discuss it
The org really is doing the thing, but if you knew the full picture you wouldn't think it was shady
The claims are importantly inaccurate, but the org is not going to spend staff time coordinating a response
The claims are importantly inaccurate, and the org will post a comment next Tuesday that you probably won't notice
they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
I think anonymous accounts can help a bit with this. I would encourage people to make an anonymous account if they feel like it would help them quickly share useful information and not have to follow the discussion (while keeping in mind that no account is truly anonymous, and it's likely that committed people can easily deanonymize it)
I found your takeaways on abolitionist ballot initiatives quite disappointing. A few points:
1. You say the results largely bear out Rethink's study with 8-20% support from survey respondents, in contrast to the previous study that said banning slaughterhouses had 39-43% support. But the slaughterhouse ban in Denver got 36% yes votes, which is just 3 points below the original, more promising survey, and at least 16 points above the follow-up survey results. That means around 6 out of 7 of the 100,000 people who voted for the most radical proposal (banning slaughter) were meat eaters. This is in spite of the fact that it was the first campaign of its kind, the first pilot campaign from the sponsoring organization, and, as you mentioned, it was outspent 6-to-1. How is this not a vindication of the idea that people are open to radical change if it's a society-wide shift rather than a matter of isolated lifestyle choice? The campaign shifted the Overton window, helped establish animal rights as a civic/political issue, had the entire population of a major city grapple with the reality of animal slaughter, forced the animal ag industry to spend $2M opposing it, and got the support of 7x the number of vegetarians in the city. For a first attempt at a truly transformational proposal, these results mostly move my priors in favor of such campaigns.
2. While crediting the massive funding disparity, in part, for the loss, you are personally sitting on the biggest pile of animal-advocacy cash on the planet (or close to it). Is it not a self-fulfilling prophecy to decide not to fund a campaign because your priors suggest it will fail, then blame the loss on their lack of funding? I know it's more complex and not solely at your discretion, but I think Open Phil's agency in this situation deserves to be acknowledged.
3. You failed to mention Berkeley's successful ballot initiative to ban factory farms. None currently exist in the city, so it's largely a symbolic victory that would only stop future factory farms from being built. But it passed 60/40, which is a resounding win for a pretty radical proposal. The same general ask was behind Sonoma's measure that failed by an even larger margin, so it's certainly not all good news. But it seems clear that our conclusions need to be more nuanced than just "people aren't ready for abolition." Obviously the specifics of each proposal, the size and nature of the jurisdiction, and the strategic successes and failures of each campaign have a lot to do with whether a measure flies through with 60% of the vote, or crashes and burns with only 15%.
4. History is full of radical shifts that took ages to enact. Women's suffrage in Oregon took 6 attempts on the ballot over nearly 30 years. Slavery in the US took 12 generations and a civil war to abolish. Why do we look back at those who were fighting for full equality and justice with admiration, but for animals, in the present day, we insist it is only counter-productive to ask for anything other than modest reform? For every argument that radical asks alienate people and make reforms more difficult, I can think of arguments that propping up animal ag with "certified humane" labels only reinforces the worldview that sees nonhumans as fundamentally commodities to be profited from and used for our enjoyment. I celebrate any kind of meaningful reform or harm reduction. But where is the theory of change that starts with corporate pressure campaigns and the promotion of "high welfare" animal products, and ends with the world that animals actually deserve? For a movement that focuses so much on longtermism, I see a huge blind spot for the long-term future of our nonhuman kin. A future without animal exploitation is only possible if we're willing to advocate for it before it's popular.
There are non-animal welfare reasons one might vote to ban slaughterhouses or factory farms in one's city (but be more okay with them elsewhere). Doing ~zero research to approximate the median voter, they sound like things with some potentially significant negative local externalities (adverse environmental effects, reduced property values, etc.) So you may have some NIMBY-motivated voters.
In addition, because the meat market is a regional or even national one, opponents cannot plausibly point to any effect of a localized slaughterhouse/factory farm ban on the prices that local voters pay at the grocery store. I think there's probably a subset of voters who would vote yes for a measure if and only if it has no plausible economic effect on the prices they pay.
Finally, these cities are more progressive than the states in which they exist, and a state can almost always pre-empt any city legislation that the state political system doesn't like. So I'd want to see evidence that the city voters weren't too far out of step with the state median voter before updating too much on city-level results. (Unlike the states -- which American political theory holds to pre-exist the Federal government and possess their own inherent sovereignty -- cities and counties are generally creations of the states without anything like their own inherent sovereignty.)
This analysis seems roughly right to me. Another piece of it I think is that being a 'soldier' or a 'bednet-equivalent' probably feels low status to many people (sometimes me included) because:
people might feel soldiering is generally easier than scouting, and they are more replaceable/less special
protesting feels more 'normal' and less 'EA' and people want to be EA-coded
To be clear I don't endorse this, I am just pointing out something I notice within myself/others. I think the second one is mostly just bad, and we should do things that are good regardless of whether they have 'EA vibes'. The first one I think is somewhat reasonable (e.g. I wouldn't want to pay someone to be a fulltime protest attendee to bring up the numbers) but I think soldiering can be quite challenging and laudable and part of a portfolio of types of actions one takes.
I'd like to add another bullet point - personal fit
I think that protests play an important role in the political landscape, so I joined a few, but walking through streets in large crowds and chanting made me feel uncomfortable. Maybe I'd get used to it if I tried more often.
I always donate close to 100% to what I believe is most effective at any given time. I do "diversify" across time, though. Last year, I almost donated 100% to an Effective Giving organization. In the end, I decided against this, because (a) their average donor was giving mostly to global health and development, while I was thinking that AI safety would be more effective by a factor much larger than their multiplier, and (b) the multiplier effect probably shifts this balance even further against my preferences.
There is of course an argument that it is only a question of time until newly acquired donors board the train to "crazy town" and give to more speculative causes with higher EV. But I was working under the assumption that the multiplier effect probably mostly reaches a demographic that likely sticks to their existing world views.
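To make the comparison behind (a) and (b) concrete, here is a minimal sketch with hypothetical numbers; the multiplier, allocation shares, and relative cause values are placeholders, not any organization's actual figures:

```python
# Hypothetical numbers illustrating the trade-off described above.
multiplier = 5.0                 # $ raised per $ given to the effective-giving org
share_to_global_health = 0.9     # fraction of raised money going to global health & development
share_to_ai_safety = 0.1         # fraction going to AI safety
relative_value_ai_safety = 20.0  # my (hypothetical) value of $1 to AI safety vs $1 to GH&D

# Value (in "GH&D dollars") of giving $1 directly to AI safety:
direct = relative_value_ai_safety

# Value of giving $1 to the effective-giving org instead:
via_multiplier = multiplier * (
    share_to_global_health * 1.0
    + share_to_ai_safety * relative_value_ai_safety
)

print(f"Direct to AI safety: {direct:.1f}")
print(f"Via effective-giving org: {via_multiplier:.1f}")
# With these numbers: 20.0 vs 5 * (0.9 + 2.0) = 14.5, so direct giving wins,
# matching the reasoning above; a lower relative value for AI safety flips it.
```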
I reviewed the source document you linked previously, but I didn't really find much evidence for the claim (that 'the "iron-fisted war on crime" is failing') in it, and reviewed it again just now. Is there a particular section you mean to point towards? I realize the source asserts this claim, but it doesn't seem to actually argue for it.
I'm also curious as to why you are using such old data? Government statistics are often slow, but your charts are literally almost a decade old. For example, you claim, based on the homicide data up to 2015, that
Even during periods of economic growth and heightened security measures, violence has continued to rise in Latin America. Conclusion: Past approaches have generally failed to deliver sustainable safety improvements.
But if we consult OWID, we see that there are six more years of data you excluded from your chart, and it shows the opposite pattern: violence has been falling.
If your argument was valid - that rising violence proves past approaches were bad - then this more recent data would suggest we should draw the opposite conclusion, and update in favour of existing approaches. (I don't think we should infer this, because I think the argument is invalid anyway).
I think omitting this later data makes a pretty big difference, because you made a claim in the present tense - that the iron fist approach is failing - which suggests you should be basing this on evidence about current iron fist approaches. The El Salvador crackdown is the most famous and most iron fist approach around right now (most of these countries don't even have capital punishment!), so I don't think you can ignore it.
You also claim that prison spending is unsustainable, based on a forecast of 16bn-24bn of 2024 dollars spent on prisons:
High incarceration rates: There's been a significant increase in prison populations, leading to substantial government spending and economic losses both for the incarcerated individuals and for society overall. Conclusion: Simply incarcerating more people is not a sustainable solution.
But Latin American + Caribbean GDP for 2014 was 5.4 trillion, so even at the upper end this is only 0.4%. You're right that government spending can't grow as a share of GDP forever, but I don't see much reason to think this is the limit.
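For reference, a quick check of the share implied by those figures (using only the GDP and spending numbers quoted above):

```python
# Rough share of regional GDP implied by the forecast prison spending above.
gdp_latam_carib_2014 = 5.4e12    # USD, as cited above
for spend in (16e9, 24e9):       # forecast range of prison spending
    print(f"${spend / 1e9:.0f}bn is {spend / gdp_latam_carib_2014:.2%} of GDP")
# Prints roughly 0.30% and 0.44%, i.e. about 0.3-0.4% at the upper end.
```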
You're absolutely right that our original statement, "the iron-fisted war on crime is failing," was broad and, admittedly, more geared toward emphasizing the challenges than making a definitive, across-the-board claim. We recognize that this phrase, chosen to convey the intensity of the issue, may have come across as too sweeping, especially given that we are not experts on every country's policies, including El Salvador's current crackdown. Instead, our intent was to highlight the broader limitations of heavy punitive measures in sustainably reducing crime across Latin America, not to imply that every such approach in every context has failed or will fail.
Our assertion rests on several general concerns about incarceration's long-term impact:
Questionable Reach in Preventing All Types of Crime: Incarceration can undoubtedly remove individuals from public spaces, reducing immediate crime in communities. However, we are uncertain to what extent all forms of crime are effectively deterred by this approach. In Colombia, for instance, we see evidence of persistent criminal activities, such as scam operations, conducted from within prison walls. This suggests that certain forms of crime may not be fully curbed by incarceration alone, pointing to potential gaps in reach.
Mixed Rehabilitation and Recidivism Outcomes: Some evidence suggests that incarceration does not always deter future criminal behavior and, in some cases, can reinforce it. In fact, recidivism rates have been growing every year in Colombia, which raises questions about the extent to which imprisonment fosters long-term change. There are also cases where people learn new criminal tactics while incarcerated, potentially intensifying criminal behavior post-release (same source as above). This suggests that while incarceration may reduce crime through incapacitation, it may not do so after people are released.
Permanent Incarceration as an Unethical Solution: One could argue for indefinite incarceration to prevent further crime through incapacitation. However, even if financially and logistically feasible, permanent imprisonment raises serious human rights concerns. Programs like cognitive behavioral therapy (CBT) demonstrate that people are not irredeemable and can change when given support to adopt more pro-social behaviors. Removing people's freedom indefinitely when this is a possibility seems unfair. The opportunity cost of maintaining a large incarcerated population and of removing them from the workforce also warrants consideration; these resources could yield greater benefits if redirected to preventive or rehabilitative programs.
Our goal with ACTRA is to explore this complementary, rehabilitative approach, rather than to assert that punitive measures do not have any effect at all. We'll work to convey these subtleties more clearly in future communications. Thanks again for raising these points.
I'm not necessarily disputing the idea that donating to these sorts of fundraising organizations is a good use of money; but we also need to be careful about double-counting. It's tempting to try to take credit for one's own meta donations while object-level donors are also taking full credit for the programs they fund.
My practice, perhaps adjacent but not identical to the one proposed here, is to give 15% of a donation to the charity evaluator or facilitator that introduced me to the main charity or program. In recent years that's been GiveWell, and the fact that they have an excess funds regranting policy makes this an even easier decision.
Yeah, the double-counting question can be a problem. It is inherent to counterfactual impact. Imagine a production chain X -> Y -> Product. Then counterfactually, X can claim 100% of the credit for the product, as can Y. So together they claim 200%, which does not make sense.
However, there are alternative impact metrics. For example, Shapley values have some nice properties. In particular, they are guaranteed to sum to 100% of the total impact. Intuitively, they calculate the mean counterfactual impact for each player over all possible configurations of players. This can be useful for assessing important predictors in statistical models. But it is also the reason why I don't find them particularly useful for decision making. After all, you are not interested in your impact in hypothetical worlds, but just in your impact in the current constellation of the world, i.e. your counterfactual impact.
So in summary, I'd say use counterfactuals for decision making and Shapley values for determining bragging rights ;)
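For concreteness, here is a minimal sketch of the two metrics on the X -> Y -> Product example, assuming an illustrative value function in which the product is worth 1 and requires both players (the function `v` and all numbers are assumptions for illustration, not anything from the comment above):

```python
from itertools import permutations

# Illustrative value function: the coalition is worth 1 only if both X and Y participate.
def v(coalition):
    return 1.0 if {"X", "Y"} <= set(coalition) else 0.0

players = ["X", "Y"]

# Counterfactual impact: total value minus the value of the world without that player.
counterfactual = {p: v(players) - v([q for q in players if q != p]) for p in players}
print(counterfactual)  # {'X': 1.0, 'Y': 1.0} -- credits add up to 200% of the product

# Shapley value: a player's marginal contribution averaged over all orderings of the players.
def shapley(player):
    orderings = list(permutations(players))
    total = 0.0
    for order in orderings:
        before = list(order[: order.index(player)])
        total += v(before + [player]) - v(before)
    return total / len(orderings)

print({p: shapley(p) for p in players})  # {'X': 0.5, 'Y': 0.5} -- sums to the total value of 1
```

Each metric answers a different question here: the counterfactual numbers are the decision-relevant ones, while the Shapley split is the one that adds up.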
I downvoted. This post would be better if it gave a clearer explanation of what the organisation does, its theory of change, impact, and cost-effectiveness, and only a brief description of the job opening.
Has anyone thought about trying to convince anti-regulatory figures (e.g., Marc Andreessen) in the new admin's orbit to speak out against the regulatory capture of banning cultivated meat? Has anyone tried painting cultivated meat as "Little Tech"?
What's a realistic, positive vision of the future worth fighting for? I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
There are currently key aspects of EA infrastructure that aren't being run well, and I'd love to see EAIF fund improvements. For example, it could fund things like the operation of the effectivealtruism.org or the EA Newsletter. There are several important problems with the way these projects are currently being managed by CEA.
These projects get lost among CEA's numerous priorities. For instance, "for several years promoting [effectivealtruism.org], including through search engine optimization, was not a priority for us. Prior to 2022, the website was updated infrequently, giving an inaccurate impression of the community and its ideas as they changed over time." This lack of attention also led to serious oversights like Global Poverty (the community's top priority at the time) not being represented on the homepage for an extended period. Similarly, Lizka recently wrote that "the monthly EA Newsletter seems quite valuable, and I had many ideas for how to improve it that I wanted to investigate or test." But due to competing priorities, "I never prioritized doing a serious Newsletter-improvement project. (And by the time I was actually putting it together every month, I'd have very little time or brain space to experiment.)"
There doesn't seem to be much, if any, accountability for ensuring these projects are operated well. These projects are a relatively small part of CEA's portfolio, CEA is just one part of EV, and EV is undergoing huge changes. So it wouldn't be shocking if nobody was paying close attention. And perhaps because of that, the limited public data we have available on both effectivealtruism.org and the EA newsletter doesn't look great. Per CEA's dashboard (which last updated these figures in June), after years of steady growth the newsletter's subscriber count has been falling modestly since FTX collapsed. And traffic to ea.org's "introduction page", which is where the first two links on the homepage are designed to direct people, is the lowest it has been in at least 7 years and continues to drift downward.
I think all these problems could be improved if EAIF funded these projects, either by providing earmarked funding (and accountability) to CEA or by finding applicants to take these projects over.
To be clear, these aren't the only "infrastructure" projects that I'd like to see EAIF fund. Other examples include the EA Survey (which IMO is already being done well but would likely appreciate EAIF funding) and conducting an ongoing analysis of community growth at various stages of the growth funnel (e.g. by updating and/or expanding this work).
This principle has seemingly strange implications:
If η ≥ 1 and nothing has been done yet, then the first thing you do produces infinite utility (assuming you start by doing the best thing possible and then move to progressively worse things).
If η ≥ 1, then a randomly-chosen opportunity has infinite expected utility.
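For intuition on where the infinity comes from, on one standard isoelastic reading of this kind of model (an assumption on my part; the post may set things up differently): if marginal utility of resources is proportional to x^(-η), then the utility of the first a units is the integral of marginal utility from 0 to a, which diverges exactly when η ≥ 1:

$$\int_0^a x^{-\eta}\,dx \;=\; \begin{cases} \dfrac{a^{1-\eta}}{1-\eta} & \text{if } \eta < 1, \\[4pt] \infty & \text{if } \eta \ge 1. \end{cases}$$

Under that assumption, the very first unit of effort gets unbounded credit, which is the strange implication being pointed at.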
Does anyone have thoughts on whether it's still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
I would only be able to attend on Sunday afternoon CET, and it seems like it might be a waste to apply if I'm only available for that time slot, as this is something I would never do for an in-person conference.
I can't find the schedule anywhere. You probably only have access to it if you are on Swapcard, but this makes it difficult to decide ahead of time whether it is worth attending, especially if I can only attend a small portion of the conference.
EAGxVirtual is cheap to attend. I don't really see much downside to only attending one day. And you can still make connections and meet people after the conference is over.
This is a valuable post, but I don't think it engages with a lot of the concern about PauseAI advocacy. I have two main reasons why I broadly disagree:
Pausing AI development could be the wrong move, even if you don't care about benefits and only care about risks
AI safety is an area with a lot of uncertainty. Importantly, this uncertainty isn't merely about the nature of the risks but about the impact of potential interventions.
Of all interventions, pausing AI development is, some think, a particularly risky one. There are dangers like:
Falling behind China
Creating a compute overhang with subsequent rapid catch-up development
Polarizing the AI discourse before risks are clearer (and discrediting concerned AI experts), turning AI into a politically intractable problem, and
Causing AI lab regulatory flight to countries with lower state capacity, less robust democracies, fewer safety guardrails, and a lesser ability to mandate security standards to prevent model exfiltration
People at PauseAI are probably less concerned about the above (or more concerned about model autonomy, catastrophic risks, and short timelines).
Although you may have felt that you did your "scouting" work and arrived at a position worth defending as a warrior, others' comparably thorough scouting work has led them to a different position. Their opposition to your warrior-like advocacy, then, may not come (as your post suggests) from a purist notion that we should preserve elite epistemics at the cost of impact, but from a fundamental disagreement about the desirability of the consequences of a pause (or other policies), or of advocacy for a pause.
If our shared goal is the clichéd securing-benefits-and-minimizing-risks, or even just minimizing risks, one should be open to thoughtful colleagues' input that one's actions may be counterproductive to that end-goal.
2. Fighting does not necessarily get one closer to winning.
Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism to your fighting style seriously should be paramount.
I still concede that a lot of people dismiss PauseAI merely because they see it as cringe. But I don't think this is the core of most thoughtful people's criticism.
To be very clear, I'm not saying that PauseAI people are wrong, or that a pause will always be undesirable, or that they are using the wrong methods. I am responding to
(1) the feeling that this post dismissed criticism of PauseAI without engaging with object-level arguments, and the feeling that this post wrongly ascribed outside criticism to epistemic purism and a reluctance to "do the dirty work," and
(2) the idea that the scout-work is "done" already and an AI pause is currently desirable. (I'm not sure I'm right here at all, but I have reasons [above] to think that PauseAI shouldn't be so sure either.)
Sorry for not editing this better, I wanted to write it quickly. I welcome people's responses, though I may not be able to reply to them!
Thank you for another excellent post! I'm becoming a big fan of the Substack and have been recommending it.
A quick question you may have come across in the literature, though I didn't see it in your article: Not all peacekeeping missions are UN missions; there are also missions from ECOWAS, the AU, EU, and NATO.
Is the data you presented exclusively true for UN missions, or does it apply to other peacekeeping operations as well?
I'd be curious to know, since those institutions seem more flexible and less entangled in geopolitical conflicts than the UN. However, I can imagine they may not be seen as being as neutral as the UN and therefore may be less effective.
I haven't seen a lot of evidence on other kinds of peacekeepers, so I don't know that I can say with confidence how effective they are! I would guess it depends on how much they are seen as a neutral third party.
Could you say a bit more about your uncertainty regarding this? After reading this, it sounds to me like shifting some government spending to peacekeeping would be money much better spent than on other themes.
Or do you mean it more from an outsider/activist perspectiveâthat the work of running an organization focused on convincing policymakers to do this would be very costly and might make it much less effective than other interventions?
More the latter - I think it's hard to influence the UN, especially if you need security council sign off. Really, you have to influence every country on the security council to agree to more peacekeeping, and also come up with more funding somewhere, and UN bureaucracy is famously difficult and impenetrable.
Would I love to redesign UN peacekeeping to focus more on rule of law and less on soldiers? Absolutely. Do I think there's much possibility to do that? Not really no.
Oh yeah, that's super interesting that the mortality effect doesn't change the cost-effectiveness estimate that much. I wonder why that is exactly? Might look into it later!
Cash transfers are not targeted (i.e. lots of households receive transfers that don't have young children) and are very expensive relative to other ways to avert child deaths ($1000 vs a few dollars for a bednet). The latter varies over more orders of magnitude than child mortality effects, so it dominates the calculation.
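A rough back-of-the-envelope sketch of why the cost term dominates; every number below is a made-up placeholder for illustration rather than a GiveWell figure:

```python
# Cost per child death averted = cost per unit delivered / deaths averted per unit delivered.
# All numbers are illustrative assumptions.

bednet_cost = 5.0                # assumed $ per bednet delivered
bednet_deaths_averted = 0.001    # assumed deaths averted per bednet

transfer_cost = 1000.0           # assumed $ per household cash transfer
transfer_deaths_averted = 0.005  # assumed deaths averted per transfer (5x the bednet effect)

print(bednet_cost / bednet_deaths_averted)      # 5000.0   -> ~$5,000 per death averted
print(transfer_cost / transfer_deaths_averted)  # 200000.0 -> ~$200,000 per death averted
```

Even granting the transfer a several-times-larger mortality effect per household, the 200x cost difference swamps it, which is the dominance being described.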
There's an asymmetry between people/orgs that are more willing to publicly write impressions and things they've heard, and people/orgs that don't do much of that. You could call the continuum "transparent and communicative, vs locked down and secretive" or "recklessly repeating rumors and speculation, vs professional" depending on your views!
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what's a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn't work at that org, remember the asymmetry. These situations will look identical to most readers:
The org really is doing a shady thing, and doesn't want to discuss it
The org really is doing the thing, but if you knew the full picture you wouldn't think it was shady
The claims are importantly inaccurate, but the org is not going to spend staff time coordinating a response
The claims are importantly inaccurate, and the org will post a comment next Tuesday that you probably won't notice
I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back, sounds... not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn't be undone. But I don't think that's true of the case you refer to?)
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
I don't think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org's purpose is to make money for its shareholders: the org has no "teammates" outside of itself; no-one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no-one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation for cooperation; there is a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X's strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost and benefit to org X of whether or not to communicate, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X's thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
I would personally feel a lot better about a community where employees aren't policed by their org on what they can and cannot say. (This point has been debated before - see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there was no requirement on someone to get their EAF/LW comment checked by their org's communications staff, then that would significantly lower the time/effort barrier to publishing such comments, and then the whole argument around such comments being too time-consuming to publish becomes much weaker.
All this to say: I think you're directionally correct with your closing bullet points. I think it's good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community).[2] I think that with better norms, we wouldn't have to put as much weight on bullet points 2 and 3, and we'd all be stronger for it.
Answering on behalf of Arthropoda Foundation. We've summarized our funding priorities here. Everything we raise will go toward funding insect welfare science (as we have no staff or overhead), with a particular focus on humane slaughter, nutrition and living conditions, and implementable welfare assessment tools.
What are your current best guesses for the expected chicken-years improved per $ for broiler welfare and cage-free campaigns funded in 2024? Open Philanthropy thinks "the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius' analysis [this post]", which suggests broiler welfare and cage-free campaigns improve 3.00 (= 15*1/5) and 10.8 chicken-years/$ (= 54*1/5).
Hey Vasco. I don't know. I don't have a bird's-eye view of the movement right now the way Open Philanthropy does. It depends a lot on the region and the campaign. I think I underestimated mean years of impact in this post, which would balance things out a bit. I also never checked whether my guesses about implementation rates in this post are correct.
I imagine that a significant portion of work being done now is on ensuring that commitments are implemented. And any estimates of the cost-effectiveness of implementation work are going to be a lot more subjective. Like, we could show people graphs like this
and ask if they look accurate (this graph is just for illustration purposes). But the people we'd be asking would probably mostly be the people working on these campaigns, which introduces bias.
This isn't the first time you've asked about this. Perhaps you would be interested in creating a new cost-effectiveness estimate with my help? I've done multiple related projects and I have a bunch of theoretical thoughts on how to do a new estimate, but I don't want to do it by myself. It would involve asking many animal advocates for opinions, which causes me a lot of social anxiety, even though everyone I talked to about these sorts of things seemed lovely and friendly. It's the sort of thing that I'd only consider doing if the EA Animal Welfare Fund or Open Philanthropy funded it, because they would be the primary users of such research, and if they wouldn't want to pay for it, then it's probably not worth doing. But even if they did, I'm still unsure whether that would be the most action-guiding project. I just wanted to throw this idea out there in case you or someone else is interested.
OP funded several scientists working on insect sentience and welfare. Arthropoda Foundation was formed to centralize and assist in the funding situation for those scientists. However, we've not yet replaced all the funding from GVF. For more on our funding priorities, see our post for Marginal Funding Week.
Answering on behalf of Faunalytics: we posted Faunalytics' Funding Gap of $385,000 to address how we would use donations from Marginal Funding Week. Thank you for organizing this, and best wishes to everyone involved in their fundraising efforts.
Again, just giving my impressions from interacting with AI safety people: it doesn't seem to me like I get this impression by drawing a larger circle -- I don't recall hearing the types of arguments you allude to even from people I consider "core" to AI safety. I think it would help me understand if you were able to provide some examples? (Although like I said, I found examples either way hard to search for, so I understand if you don't have any available.)
I still disagree about the Dial post: at the end Zvi says
Seeing highly intelligent thinkers who are otherwise natural partners and allies making a variety of obvious nonsense arguments, in ways that seem immune to correction, in ways that seem designed to prevent humanity from taking action to prevent its own extinction, is extremely frustrating. Even more frustrating is not knowing why it is happening, and responding in unproductive ways.
So my read is that he wants to explain and understand the position as well as possible, so that he can cooperate as effectively as possible with people who take the Dial position. He also agrees on lots of object-level points with the people he's arguing against. But ultimately actually using the Dial as an argument is "obvious nonsense," for the same reason the Technology Bucket Error is an error.
What's a realistic, positive vision of the future worth fighting for? I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Maybe this is a cop-out, but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Some others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project - and I think most people's dreams of a future are pretty wild and not something I would want for myself!
I found your takeaways on abolitionist ballot initiatives quite disappointing. A few points:
1. You say the results largely bear out Rethink's study with 8-20% support from survey respondents, in contrast to the previous study that said banning slaughterhouses had 39-43% support. But the slaughterhouse ban in Denver got 36% yes votes, which is just 3 points below the original more promising survey, and at least 16 points above the follow-up survey results. That means around 6 out of 7 of the 100,000 people who voted for the most radical proposal (banning slaughter) were meat eaters. This in spite of the fact that it was the first campaign of its kind, the first pilot campaign from the sponsoring organization, and, as you mentioned, getting outspent 6-to-1. How is this not a vindication of the idea that people are open to radical change if it's a society-wide shift rather than a matter of isolated lifestyle choice? The campaign shifted the Overton window, helped establish animal rights as a civic/political issue, had the entire population of a major city grapple with the reality of animal slaughter, forced the animal ag industry to spend $2M opposing it, and got the support of 7x the number of vegetarians in the city. For a first attempt at a truly transformational proposal, these results mostly move my priors in favor of such campaigns.
2. While crediting the massive funding disparity, in part, for the loss, you are personally sitting on the biggest pile of animal-advocacy cash on the planet (or close to it). Is it not a self-fulfilling prophecy to decide not to fund a campaign because your priors suggest it will fail, then blame the loss on their lack of funding? I know it's more complex and not solely at your discretion, but I think Open Phil's agency in this situation deserves to be acknowledged.
3. You failed to mention Berkeley's successful ballot initiative to ban factory farms. None currently exist in the city, so it's largely a symbolic victory that would only stop future factory farms from being built. But it passed 60/40, which is a resounding win for a pretty radical proposal. The same general ask was behind Sonoma's measure that failed by an even larger margin, so it's certainly not all good news. But it seems clear that our conclusions need to be more nuanced than just "people aren't ready for abolition." Obviously the specifics of each proposal, the size and nature of the jurisdiction, and strategic successes and failures of each campaign have a lot to do with whether a measure flies through with 60% of the vote, or crashes and burns with only 15%.
4. History is full of radical shifts that took ages to enact. Women's suffrage in Oregon took 6 tries on the ballot over nearly 30 years. Slavery in the US took 12 generations and a civil war to abolish. Why do we look back at those who were fighting for full equality and justice with admiration, but for animals, in the present day, we insist it is only counter-productive to ask for anything other than modest reform? For every argument that radical asks alienate people and make reforms more difficult, I can think of arguments that propping up animal ag with "certified humane" labels only reinforces the worldview that sees nonhumans as fundamentally commodities to be profited from and used for our enjoyment. I celebrate any kind of meaningful reform or harm reduction. But where is the theory of change that starts with corporate pressure campaigns and the promotion of "high welfare" animal products, and ends with the world that animals actually deserve? For a movement that focuses so much on longtermism, I see a huge blind spot for the long-term future of our nonhuman kin. A future without animal exploitation is only possible if we're willing to advocate for it before it's popular.
Executive summary: Sebastian Lodemann was an exceptional colleague and friend whose profound impact on the EA community was characterized by humility, generosity, empowerment, and a deeply compassionate approach to personal and professional challenges.
Key points:
Demonstrated extraordinary competence while maintaining remarkable humility and kindness in professional settings
Provided mentorship and encouragement that empowered others to believe in themselves and pursue meaningful work
Approached ideas and people with genuine curiosity, respect, and open-mindedness
Balanced ambitious professional goals with personal well-being and family commitment
Inspired others through practical wisdom about navigating life's constraints and challenges
Committed to important work like AI safety policy, with a lasting legacy of thoughtful, impactful action
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Legal Impact for Chickens (LIC) aims to reduce animal suffering by using strategic civil litigation to make factory farm cruelty legally and financially risky for agricultural companies.
Key points:
LIC brings strategic lawsuits against companies for animal cruelty, targeting major agricultural firms like Case Farms, Foster Farms, and Alexandre Family Farm
The organization's theory of change involves using civil litigation to compel companies to proactively comply with animal welfare laws
LIC is currently fundraising to close a $90,000 gap, with a goal of pursuing four lawsuits in 2025
Despite losing their initial Costco lawsuit, the organization remains confident in their legal approach and potential to reduce animal suffering
The nonprofit has grown to a team of four full-time employees and has received significant media coverage for their investigations and legal actions
Their ultimate aim is to create systemic change by making animal cruelty a financial liability for agricultural companies
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The 2024 Animal Advocacy Strategy Forum revealed key challenges and strategic priorities for the animal advocacy movement, focusing on coordination, funding, regional resource allocation, and talent development.
Key points:
Movement leaders identified major challenges including lack of coordination, funding constraints, and skepticism about plant-based products
Participants prioritized resource allocation to Asia and Western countries, with focus on hens, broilers, and farmed fish
The movement aims to shift away from vegan-centric approaches and toward more strategic, evidence-based interventions
Key talent needs include government/policy experts, management, fundraising, and regional expertise
Future milestones include policy wins, corporate commitments, and improving internal movement coordination
Respondents emphasized the importance of diversifying funding sources and broadening public support
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: ORCG seeks funding to address critical global catastrophic risks beyond AI, focusing on pandemic preparedness, food security, and risk management to protect humanity's future.
Key points:
While AI risk is important, other urgent global threats require immediate attention and funding
Proposed projects include pandemic response protocols, food security during sunlight reduction scenarios, and developing risk management strategies
ORCG aims to provide a comprehensive, balanced approach to global catastrophic risk mitigation
Specific projects target regional preparedness in Mexico, Brazil, Argentina, and Latin America
Funding will support research, policy recommendations, and practical tools for government and stakeholders
Donations and support can help advance critical work in preventing potential global catastrophes
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Effective Altruism (EA) funding for philosophy research should carefully consider supporting both research institutes and individual academics, with a potential undervaluation of generalist, foundational ethical theory work.
Key points:
Research institutes offer advantages like mission alignment, improved career pipelines for researchers, and collaborative research environments.
Individual academics might be overlooked, despite potentially producing more valuable work than marginal institute hires.
Current EA funding tends to prioritize specific, applied philosophical research over broader ethical theory work.
Generalist philosophical research on ethical frameworks could significantly influence how people perceive and approach helping others.
Funding individual academics is challenging, with course buyouts being a primary mechanism for supporting research time.
Small donors are recommended to contribute to funds like EAIF that can strategically allocate resources to philosophical research.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: ALLFED seeks funding to advance critical research and projects aimed at improving global resilience to potential catastrophic food system failures caused by scenarios like nuclear winter, extreme pandemics, or infrastructure collapse.
Key points:
Key potential projects include policy advocacy for sunlight reduction scenarios, building a refrigerated greenhouse to simulate nuclear winter conditions, and developing an emergency satellite communication system.
Research focuses on pandemic resilience, including investigating rapid scaling of safety measures and mapping critical workforce vulnerabilities.
ALLFED aims to develop response strategies for combination global catastrophe scenarios, particularly nuclear conflict and infrastructure loss.
Proposed technology demonstrations include extract
Current funding is significantly reduced, with the organization seeking $1,000-$10,000 donations from individual supporters to maintain critical research momentum.
Projects target preventing potential civilization-threatening scenarios that could result in billions of deaths.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Despite changing political leadership, the US electoral landscape presents both challenges and modest opportunities for farm animal welfare, with the most significant impacts likely to be determined by sustained advocacy efforts beyond electoral politics.
Key points:
The EATS Act threatens to eliminate state-level farm animal welfare regulations, with increased likelihood of passage under current congressional leadership
USDA leadership could potentially shift between pro-agribusiness and more reform-minded candidates, creating uncertain but potentially nuanced outcomes
Alternative protein regulation remains politically divided, with potential for either reduced barriers or increased cultural and regulatory resistance
Recent ballot initiatives suggest public support for farm animal welfare reforms, but not for complete abolition of animal farming
The greatest risk is advocates losing focus on farm animal welfare amid political distractions and shifting media attention
Long-term progress depends more on sustained, multi-arena advocacy efforts than on immediate electoral outcomes
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The Humane League seeks $10.5 million in additional funding to expand its global efforts to eliminate battery cages for chickens and strengthen animal advocacy through the Open Wing Alliance and Animal Policy Alliance.
Key points:
THL focuses on reducing chicken suffering, targeting battery cages that cause 60% of hens' suffering
Open Wing Alliance aims to free one billion hens from cages by 2030 through global corporate campaigns and coalition-building
Seeking $8 million to expand OWA's regional teams, support, and provide $2-2.4 million in grants to member organizations
Animal Policy Alliance seeks to grow from 23 to 30 active members and distribute up to $750k in grants to support farmed animal advocacy
Current funding is limited, with no committed grants for 2025 for either OWA or APA programs
THL has proven effectiveness, with 40.8% of US egg-laying hens now cage-free, up from 5% in 2014
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The CBT Lab Peer Support program helps ambitious altruists improve mental health and productivity, potentially amplifying their positive impact on the world through targeted, evidence-based mental health interventions.
Key points:
Program provides online mental health support for altruists, addressing challenges like perfectionism and existential dread
Participants reported significant improvements: productivity gains equivalent to 8 extra weekly working hours and ~1 point increase in wellbeing
85% of participants were satisfied, and 1 in 5 felt more committed to Effective Altruism after the program
Cost-effectiveness is high: potential 20x return on program investment through increased productivity and potential charitable giving
Program aims to be accessible, offering free or low-cost spots for participants who cannot afford full price
Ongoing research and improvement focus on enhancing program effectiveness and participant outcomes
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
What does EA AWF think about publishing annual impact reports on the outcomes of its previous grants? I understand how this might be much more difficult than publishing an impact report for a single organisation. But as it stands, donating to EA AWF requires a lot of trust in the fund managers and the EA movement, as there is little data available on the impact of previous grants. I care a lot about the growth of this fund and I'd have a much easier time recommending this fund to potential donors if they could learn more about its past impact.
Additionally, I have noticed that Hive has sponsored a lot of events. How much of your budget does this take up? Do you imagine continuing to sponsor as many events in the future?
A question on this: I don't think it was very clear what the actual total budget for Hive for 2025 is. How much of it is already covered?
Do you have any outstanding grants on the table?
How do you currently assess your cost-effectiveness in terms of how much money you are adding to the movement vs. how much you are spending?
"have you not considered the possibility that people have noticed the outsiders with clipboards asking personal questions seem to be associated in some way with their neighbours getting unexpected windfalls, and started to speculate about what sort of answers the NGOs are looking for..."
I wrote this piece and wanted to offer my $0.02 on Hawthorne effects driving these consumption spillover results, as it's not covered in the report. I don't think this is likely to be a key driver of the large spillovers reported, for two reasons:
To measure consumption spillovers, Egger et al. is essentially comparing consumption in nearby non-recipient households (e.g. <2km away) to consumption in further away non-recipient households (e.g. 10km). For this to produce biased results, you'd have to think the nearer non-recipients are gaming their answers in a way that the further away non-recipients aren't. That seems plausible to me - but it also seems plausible that the further away non-recipients will still be aware of the program (so might have similar, counterbalancing incentives)
Even if you didn't buy this, I'm not convinced the bias would be in the direction you're implying. The program studied in Egger et al. was means-tested - cash transfers were only given to households with thatched roofs. If you think nearby non-recipients are more likely to be gaming the system, it seems plausible to me that they'd infer poorer households are more likely to get cash, so it makes sense for them to understate their consumption. This would downward-bias the results
Hawthorne effects for recipient consumption gains seem more intuitively concerning to me, and I've been wondering whether this could be part of the story behind these large recipient consumption gains at 5-7 years we've been sent. We're not putting much weight on these results at the moment as they've not been externally scrutinized, but it's something I plan to think more about if/when we revisit these.
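To illustrate the near-vs-far comparison and the direction-of-bias point with a toy simulation (all parameters are invented for illustration; this is not the Egger et al. data or model):

```python
import random

random.seed(0)

TRUE_SPILLOVER = 20.0  # assumed consumption boost for non-recipients living near recipients

def consumption(near, underreport=0.0):
    base = random.gauss(100.0, 10.0)             # assumed baseline consumption
    spill = TRUE_SPILLOVER if near else 0.0
    return (base + spill) * (1.0 - underreport)  # households may understate what they report

def spillover_estimate(underreport_near):
    near = [consumption(True, underreport_near) for _ in range(10_000)]
    far = [consumption(False) for _ in range(10_000)]
    return sum(near) / len(near) - sum(far) / len(far)

print(spillover_estimate(0.0))   # ~20: recovers the true spillover if nobody games their answers
print(spillover_estimate(0.10))  # ~8: near households understating consumption shrinks the estimate
```

If near non-recipients understate consumption to look eligible, the measured spillover shrinks rather than inflates, matching the second point above.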
What's a realistic, positive vision of the future worth fighting for? I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy's CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don't want to give something up to fund charities - as donating would require - but if they could help solve world problems by buying products or services they want or need of similar quality at the same price, they would.
If this is true, or even just likely to be, and someone has data on this, making this data public, even in anonymous form, would be extremely high impact. I do recognize that such moves could come at great personal cost, but in case it is true I just wanted to put it out there that such a disclosure could be a single action that might by far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information is actually because nothing of this sort is going on, but it is worth being vigilant.
Yes, this matches what potential attendees report to me. They are also afraid of being "cringe" and don't want to be associated with noob-friendly messaging, which I interpret as status-related.
This deeply saddens me because one of the things I most admired about early EA and found inspirational was the willingness to do unglamorous work. It's often neglected, so it can be very high leverage to do it!
I feel this way - I recently watched some footage of a PauseAI protest and it made me cringe, and I would hate participating in one. But also I think there are good rational arguments for doing protests, and I think AI pause protests are among the highest-EV interventions right now.
Comments on 2024-11-14
Angelina Li @ 2024-11-13T17:33 (+2) in response to Opportunities to improve EA communications
I'm curious if you have general advice (top 3 tips? common mistakes?) for EA orgs who are just starting to experiment with more marketing.
No pressure, of course!
Good Impressions @ 2024-11-14T16:53 (+1)
The biggest one is probably to start thinking about how you'll get the right people to engage with your project (research, content, services, courses, contests, programs, fellowships, events) before you've finished creating that project.
Why:
Omnizoid @ 2024-11-14T09:34 (+2) in response to Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?
But then wouldn't this mean my brain has a bunch of different minds? How can the consciousness of one overlap with the consciousness of another?
MichaelStJules @ 2024-11-14T16:32 (+2)
Your brain has a bunch of overlapping subsystems that are each conscious, according to many plausible criteria for consciousness you could use. You could say they're all minds. I'm not sure I'd say they're different minds, because if two overlap enough, they should be treated like the same one.
See also the problem of the many on SEP:
PabloAMC @ 2024-11-14T07:52 (+5) in response to Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims
A quantum research scientist here: actually I would argue that is a misleading model for quantum computing. The main issue right now is technical, not economical. We still have to figure out error correction, without which you are bound to roughly 1000 logical gates. Far too little to do anything interesting.
Garrison @ 2024-11-14T16:15 (+2)
Yeah, I got some pushback on Twitter on this point. I now agree that it's not a great analogy. My thinking was that we technically know how to build a quantum computer, but not one that is economically viable (which requires technical problems to be solved and for the thing to be scalable/not too expensive). Feels like a not all squares are rectangles, but all rectangles are squares thing. Like quantum computing ISN'T economically viable but that's not the main problem with it right now.
MichaelDickens @ 2024-11-12T03:32 (+13) in response to AMA: PauseAI US needs money! Ask founder/Exec Dir Holly Elmore anything for 11/19
I'm confident in PauseAI US's ability to run protests and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that so I have some questions.
Felix De Simone @ 2024-11-14T15:59 (+1)
One other thing I forgot to mention re: value-add. Some of the groups you mentioned (Center for AI Policy & Center for AI Safety; not sure about Palisade) are focused mostly on domestic AI regulation. PauseAI US is focused more on the international side of things, making the case for global coordination and an AI Treaty. In this sense, one of our main value-adds might be convincing members of Congress that international coordination on AI is both feasible and necessary to prevent catastrophic risk. This also serves to counter the "arms race" narrative ("the US needs to develop AGI first in order to beat China!") which risks sabotaging AI policy in the coming years.
lukeprog @ 2024-11-14T10:12 (+109) in response to lukeprog's Quick takes
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.
Will Aldred @ 2024-11-14T15:51 (+4)
(Fwiw, the community prediction on the Metaculus question "Will there be another donor on the scale of 2020 Good Ventures in the Effective Altruist space in 2026?" currently sits at 43%.)
MichaelDickens @ 2024-11-12T03:32 (+13) in response to AMA: PauseAI US needs money! Ask founder/Exec Dir Holly Elmore anything for 11/19
I'm confident in PauseAI US's ability to run protests and I think the case for doing protests is pretty strong. You're also doing lobbying, headed by Felix De Simone. I'm less confident about that so I have some questions.
Felix De Simone @ 2024-11-14T15:51 (+1)
Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building etc.) they did have a lobbying component, and I met with statewide and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state Congressional offices and helped to persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US's lobbying efforts. So far, I have spoken to 16 Congressional offices, deliberately meeting with members of both parties, with a special focus on Congressmembers on relevant committees (i.e. House Committee on Science, Space, and Technology; Senate Committee on Commerce, Science, and Transportation; House Bipartisan AI Task Force).
I plan to speak with another >50 offices over the next 6 months, as well as deepen relationships with offices I've already met with. I also intend to host a series of Congressional briefings - on (1) AI existential risk, (2) Pausing as a solution, and (3) the importance and feasibility of international coordination - inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups, to share insights and gain feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
Framing and vocabulary matter a lot here - it's important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as "pro-safe innovation" rather than generically "anti-innovation," anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (i.e. aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a âweaponâ to be controlled by whichever country builds it first, but instead as a âdoomsday deviceâ that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause, by appealing to historical precedent (i.e. nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (i.e. chip tracking, detecting large-scale training runs, on-chip reporting mechanisms.)
The final reason for the importance of PauseAI US lobbying is a counterfactual one: If we donât lobby Congress, we risk ceding ground to other groups who push the âarms raceâ narrative and convince the US to go full-speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.
Melanie Brennan @ 2024-11-14T15:45 (+1) in response to Retrospective on EA Nigeria Summit: Our Successes and Learnings
This is really impressive and inspiring to see! Big congratulations to the team for your hard work in putting together what seems like a highly impactful conference :) Looking forward to seeing how EA Nigeria continues to evolve.
Tobias Häberli @ 2024-11-14T15:23 (+4) in response to Testing Framings of EA and Longtermism
Since Longtermism as a concept doesn't seem widely appealing, I wonder how other time-focused ethical frameworks fare, such as Shorttermism (Focusing on immediate consequences), Mediumtermism (Focusing on foreseeable future), or Atemporalism (Ignoring time horizons in ethical considerations altogether).
I'd guess these concepts would also be unpopular, perhaps because ethical considerations centered on timeframes feel confusing, too abstract or even uncomfortable for many people.
If true, it could mean that any theory framed in opposition, such as a critique of Shorttermism or Longtermism, might be more appealing than the time-focused theory itself. Criticizing short-term thinking is an applause light in many circles.
David_Moss @ 2024-11-14T15:30 (+2)
I agree this could well be true at the level of arguments, i.e. I think there are probably longtermist (anti-shorttermist) framings which would be successful. But I suspect it would be harder to make this work at the level of framing/branding a whole movement, i.e. I think promoting the 'anti-shorttermist' movement would be hard to do successfully.
Tobias Häberli @ 2024-11-14T15:23 (+4) in response to Testing Framings of EA and Longtermism
Since Longtermism as a concept doesn't seem widely appealing, I wonder how other time-focused ethical frameworks fare, such as Shorttermism (Focusing on immediate consequences), Mediumtermism (Focusing on foreseeable future), or Atemporalism (Ignoring time horizons in ethical considerations altogether).
I'd guess these concepts would also be unpopular, perhaps because ethical considerations centered on timeframes feel confusing, too abstract or even uncomfortable for many people.
If true, it could mean that any theory framed in opposition, such as a critique of Shorttermism or Longtermism, might be more appealing than the time-focused theory itself. Criticizing short-term thinking is an applause light in many circles.
ElliotJDavies @ 2024-11-14T14:48 (+2) in response to ElliotJDavies's Quick takes
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task. So my salary would need to be 6x higher (per unit time) than the test task payment for this to be true.
This is a good point.
David_Moss @ 2024-11-14T15:02 (+4)
Whether or not to use "credentialist and biased methods (like looking at someone's LinkedIn profile or CV)" seems orthogonal to the discussion at hand?
The key issue seems to be that if you raise the screening bar, then you would be admitting fewer applicants to the task (the opposite of the original intention).
This will definitely vary by org and by task. But many EA orgs report valuing their staff's time extremely highly. And my impression is that both grading longer tasks and then processing the additional applicants (many orgs will also feel compelled to offer at least some feedback if a candidate has completed a multi-hour task) will often take much longer than 10 minutes total.
saulius @ 2024-11-08T11:58 (+5) in response to saulius's Quick takes
I'd be grateful if some people could fill in this survey: https://forms.gle/RdQfJLs4a5jd7KsQA The survey will ask you to compare different intensities of pain. In case you're interested in why you might want to do it: you'll be helping me to estimate plausible weights for the different categories of pain used by the Welfare Footprint Project. This will help me to summarise their conclusions into easily digestible statements like "switching from battery cages to cage-free reduces the suffering of hens by at least 60%" and with some cost-effectiveness estimates. Thanks ❤️
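To make the intended calculation concrete, here is a minimal sketch (in Python) of how survey-derived weights for the Welfare Footprint pain categories (Annoying, Hurtful, Disabling, Excruciating) could be combined with time-in-pain estimates to yield a headline figure like the "at least 60%" above. Every number below is a made-up placeholder, not Welfare Footprint data.

```python
# Hypothetical sketch: turning pain-category weights into a headline
# "% reduction in suffering" figure. All numbers are placeholders.

# Survey-derived weights: how bad one hour of each pain category is,
# relative to one hour of "annoying" pain (assumed values).
weights = {"annoying": 1, "hurtful": 10, "disabling": 100, "excruciating": 10_000}

# Hypothetical hours of each pain intensity per hen over her lifetime.
hours_caged = {"annoying": 3000, "hurtful": 1500, "disabling": 300, "excruciating": 1}
hours_cage_free = {"annoying": 2500, "hurtful": 900, "disabling": 120, "excruciating": 0.5}

def weighted_suffering(hours: dict, weights: dict) -> float:
    """Weight-adjusted total suffering: sum over categories of hours * weight."""
    return sum(weights[category] * h for category, h in hours.items())

reduction = 1 - weighted_suffering(hours_cage_free, weights) / weighted_suffering(hours_caged, weights)
print(f"Estimated reduction in suffering: {reduction:.0%}")
```

In this sketch the survey responses would pin down the weights, while the time-in-pain estimates would come from the Welfare Footprint Project's published work.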
ElliotJDavies @ 2024-11-14T14:51 (+2)
Completed this, but was difficult!
Alyssa Greene-Crow @ 2024-11-14T14:49 (+3) in response to How would your project use extra funding? (Marginal Funding Week 2024)
Answer on behalf of Scarlet Spark. I've written a post outlining our capacity-building work for the animal welfare movement and how marginal funding would help us power the movement. Thank you so much for organizing this opportunity!
David_Moss @ 2024-11-14T14:35 (+3) in response to ElliotJDavies's Quick takes
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1].
So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine.
I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)
I think that many applicants are highly motivated to complete tasks, in order to have a chance of getting the job.
ElliotJDavies @ 2024-11-14T14:48 (+2)
It takes a significant amount of time to mark a test task. But this can be fixed by just adjusting the height of the screening bar, as opposed to using credentialist and biased methods (like looking at someone's LinkedIn profile or CV).
This is an empirical question, and I suspect it is not true. For example, it took me 10 minutes to mark each candidate's 1-hour test task. So my salary would need to be 6x higher (per unit time) than the test task payment for this to be true.
This is a good point.
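As a quick illustration of that break-even point, here is a minimal sketch assuming the only relevant costs are the evaluator's marking time and the payment for the candidate's time (both hourly rates are placeholders):

```python
# Break-even sketch for "marking cost vs. paying candidates" (placeholder rates).
# Per the comment above, marking a 1-hour task takes ~10 minutes of evaluator time.

marking_minutes = 10           # evaluator time spent marking one submission
task_hours = 1                 # candidate time per submission (paid)
task_pay_per_hour = 30.0       # hypothetical payment rate for candidates
evaluator_cost_per_hour = 90.0 # hypothetical fully-loaded cost of evaluator time

marking_cost = evaluator_cost_per_hour * marking_minutes / 60
payment_cost = task_pay_per_hour * task_hours

print(f"Marking cost per candidate: ${marking_cost:.2f}")
print(f"Payment cost per candidate: ${payment_cost:.2f}")
# Marking cost exceeds payment cost only if the evaluator's hourly cost is more
# than (60 / marking_minutes) = 6x the candidate's hourly payment.
print("Marking time is the larger cost:", marking_cost > payment_cost)
```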
ElliotJDavies @ 2024-11-14T14:09 (+3) in response to ElliotJDavies's Quick takes
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
David_Moss @ 2024-11-14T14:35 (+3)
My guess is that, for many orgs, the time cost of assessing the test task is larger than the financial cost of paying candidates to complete the test task, and that significant reasons for wanting to compensate applicants are (i) a sense of justice, (ii) wanting to avoid the appearance of unreasonably demanding lots of unpaid labour from applicants, not just wanting to encourage applicants to complete the tasks[1].
So I agree that there are good reasons for wanting more people to be able to complete test tasks. But I think that doing so would potentially significantly increase costs to orgs, and that not compensating applicants would reduce costs to orgs by less than one might imagine.
I also think the justice-implications of compensating applicants are unclear (offering pay for longer tasks may make them more accessible to poorer applicants)
I think that many applicants are highly motivated to complete tasks, in order to have a chance of getting the job.
Toby Tremlett @ 2024-11-14T12:30 (+4) in response to Donation Election: How to vote
My first guess is that if this did happen, we'd keep the votes open slightly longer until there wasn't a tie - this is another benefit of having a secret deadline. I'll check with @Will Howard when he's online to see if there is already a solution to this. Thanks for pointing it out!
Will Howard @ 2024-11-14T14:34 (+9)
(Discussed separately) I think it would be best to split the pot 4 ways if this happens, because there is some chance of introducing a bias by deciding when to end based on a property of the votes. Or if there is some reason we can't do this that I'm not aware of (e.g. legal constraints), then breaking the tie with a coin flip.
(@Lorenzo Buonanno You can consider this the official answer unless I hear otherwise).
Will Howard @ 2024-11-14T14:25 (+4) in response to Expectations Scale with Scale - We Should Be More Scope-Sensitive in Our Funding
I'm curating this post. This was my favourite post from Funding Strategy Week. It makes a straightforward but important point that is useful to keep in mind.
lukeprog @ 2024-11-14T10:12 (+109) in response to lukeprog's Quick takes
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.
Jason @ 2024-11-14T14:22 (+22)
Should the reader infer anything from the absence of a reference to GV here? The comment thread that came to mind when reading this response was significantly about GV (although there was some conflation of OP and GV within it). So if OP felt it could recommend US "right-of-center"[1] policy work to GV, I would be somewhat surprised that this well-written post didn't say that.
Conditional on GV actually being closed to right-of-center policy work, I express no criticism of that decision here. It's generally not cool to criticize donors for declining to donate to stuff that is in tension or conflict with their values, and it seems that would be the case. However, where the funder is as critical to an ecosystem as GV is here, I think fairly high transparency about the unwillingness to fund a particular niche is necessary to allow the ecosystem to adjust. For example, learning that GV is closed to a niche area that John Doe finds important could switch John from object-level work to earning to give. And people considering moving to object-level work need to clearly understand if the 800-pound gorilla funder will be closed to them.
I place this in quotes because the term is ambiguous.
saulius @ 2024-11-13T21:15 (+4) in response to Corporate campaigns affect 9 to 120 years of chicken life per dollar spent
Hey Vasco. I don't know. I don't have a bird's-eye view of the movement right now the way Open Philanthropy does. It depends a lot on the region and the campaign. I think I underestimated mean years of impact in this post, which would balance things out a bit. I also never checked whether my guesses about implementation rates in this post are correct.
I imagine that a significant portion of work being done now is on ensuring that commitments are implemented. And any estimates of the cost-effectiveness of implementation work are going to be a lot more subjective. Like we could show people graphs like this
and ask if they look accurate (this graph is just for illustration purposes). But the people we'd be asking would probably mostly be the people working on these campaigns, which introduces bias.
It's not the first time you are asking about this. Perhaps you would be interested in creating a new cost-effectiveness estimate with my help? I've done multiple related projects and I have a bunch of theoretical thoughts on how to do a new estimate, but I don't want to do it by myself. Like it would involve asking many animal advocates for opinions which causes me a lot of social anxiety, even though everyone I talked to about these sorts of things seemed lovely and friendly. It's the sort of thing that I'd only consider doing if EA Animal Welfare Fund or Open Philanthropy funded it, because they would be the primary users of such research, and if they wouldn't want to pay for it, then it's probably not worth doing. But uh, even if they did, I'm still unsure if that would be the most action-guiding project. But just wanted to throw this idea out there in case you or someone else is interested.
Vasco Grilo @ 2024-11-14T14:19 (+2)
Thanks, Saulius!
Agreed. Ideally, one would use a less subjective methodology.
Ah, I was just asking because I will publish a quick cost-effectiveness estimate of corporate campaigns for chicken welfare in the next few weeks, but I am currently not planning to start a longer project. Thanks anyway for throwing the idea out there!
Maybe running surveys would be a way of partially mitigating the social anxiety.
EffectiveAdvocate @ 2024-11-13T07:14 (+6) in response to EffectiveAdvocate's Quick takes
Does anyone have thoughts on whether it's still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
ElliotJDavies @ 2024-11-14T14:16 (+4)
I'd be curious to know the marginal cost of an additional attendee - I'd put it between 5 and 30 USD, assuming they attend all sessions.
Assuming you update your availability on Swapcard, and that you would get value out of attending a conference, I suspect attending is positive EV.
ElliotJDavies @ 2024-11-14T14:09 (+3) in response to ElliotJDavies's Quick takes
Paying candidates to complete a test task likely increases inequality and credentialism, and decreases candidate quality. If you pay candidates for their time, you're likely to accept fewer candidates, and lower-variance candidates, into the test-task stage. Orgs can continue to pay top candidates to complete the test task, if they believe it measurably decreases the attrition rate, but give all candidates that pass an anonymised screening bar the chance to complete a test task.
Will Aldred @ 2024-11-13T22:04 (+18) in response to Julia_Wise's Quick takes
Epistemic status: strong opinions, lightly held
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds... not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn't be undone. But I don't think that's true of the case you refer to?)
I don't think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org's purpose is to make money for its shareholders: the org has no "teammates" outside of itself; no-one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no-one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation for cooperation; there is a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X's strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost and benefit to org X of whether or not to communicate, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X's thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
I would personally feel a lot better about a community where employees aren't policed by their org on what they can and cannot say. (This point has been debated before - see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there was no requirement on someone to get their EAF/LW comment checked by their org's communications staff, then that would significantly lower the time/effort barrier to publishing such comments, and then the whole argument around such comments being too time-consuming to publish becomes much weaker.
All this to say: I think you're directionally correct with your closing bullet points. I think it's good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community)[2]. I think that with better norms, we wouldn't have to put as much weight on bullet points 2 and 3, and we'd all be stronger for it.
Or, rather, what staff at org X are thinking. (I don't think an org itself can meaningfully have beliefs: people have beliefs.)
Note: Although I mentioned Rethink Priorities earlier, I'm not thinking about Rethink Priorities here.
David_Moss @ 2024-11-14T13:47 (+5)
I agree that it depends on the situation, but I think this would often be quite a lot worse in real, non-ideal situations. In ideal communicative situations, mistaken information can simply be corrected at minimal cost. But in non-ideal situations, I think one will often see things like:
Fwiw, I think different views about this ideal/non-ideal distinction underlie a lot of disagreements about communicative norms in EA.
Liz Hixson @ 2024-11-14T13:31 (+3) in response to How would your project use extra funding? (Marginal Funding Week 2024)
Answering on behalf of New Incentives.
We provide small cash incentives to encourage childhood vaccinations in northern Nigeria, an area with some of the highest under-five mortality rates and lowest vaccination rates in the world.
We are currently working to fill a $30.5 M funding gap so that we can reach an additional 1.9 million infants and protect them from deadly diseases like measles and pneumonia. You can learn more about our future plans here.
Jason @ 2024-11-14T13:00 (+4) in response to Donation Election: How to vote
Not that I expect the election administrators to be unsporting, but there should be an explicit norm that they do not vote after the evening of December 2, as they could not only snipe but maybe even cast a de facto tiebreaking vote on December 3 with inside knowledge. (I know of at least one EA-adjacent place where using inside information to one's advantage is seen as fine, hence the desire to be clear here.)
Toby Tremlett @ 2024-11-14T13:23 (+2)
Fair enough!
I publicly declare that the people with knowledge of the deadline will not vote after the evening of December 2.
I'll also make sure that we pre-commit internally to a deadline, so that we can't game the results by choosing the cut off.
Toby Tremlett @ 2024-11-14T12:30 (+4) in response to Donation Election: How to vote
My first guess is that if this did happen, we'd keep the votes open slightly longer until there wasn't a tie - this is another benefit of having a secret deadline. I'll check with @Will Howard when he's online to see if there is already a solution to this. Thanks for pointing it out!
Jason @ 2024-11-14T13:00 (+4)
Not that I expect the election administrators to be unsporting, but there should be an explicit norm that they do not vote after the evening of December 2, as they could not only snipe but maybe even cast a de facto tiebreaking vote on December 3 with inside knowledge. (I know of at least one EA-adjacent place where using inside information to one's advantage is seen as fine, hence the desire to be clear here.)
bruce @ 2024-11-14T12:56 (+20) in response to Everyone Deserves to Flourish in Life: Vida Plena Helps People Do Just That
Hey team, thanks for sharing this update!
A few comments, not intended as a knock on Vida Plena's programme, but perhaps more relevant to how it's communicated:
Given this is the first bullet under "helping a life flourish" I thought this might benefit from some clarification, because the vast majority of the value of this programme is likely not from suicide prevention, given low absolute rates of suicide.
From the same source: "at two years, the cumulative hazard of suicide death ranged from 0.12% in young adults to 0.18% in older adults." Under unreasonably optimistic assumptions,[1] Vida Plena would prevent 1 suicide every 500 participants / prevent a suicide for $116,500, which is something between 21x to 39x less cost effective than GiveWell top charities.[2] More reasonable assumptions would drop this upper bound to 1 suicide prevented every ~1200 participants, or ~$272,000 per suicide prevented / ~50-90x less effective than GW top charities.[3]
Given you hope to reach 2,000 people by the end of 2025 for $50,000, this suggests a reasonable upper bound is something like 2 additional suicides prevented.[4]
This isn't a claim that the cost-effectiveness claims are necessarily incorrect, even with minimal suicide prevention. A quick sense check RE: $462/DALY and 0.22 DALYs per participant would imply that Vida Plena would need to more than halve their cost per participant (from $233 down to $101), and then achieve results comparable to "~100% of people with severe / moderate / mild depression conclude the programme having gone down one level of severity, or something like ~5 points on the PHQ-9 score (severe --> moderate; moderate --> mild; mild --> no depression)."[5] This is well within your listed results - though as you note in your annual report these have some fairly significant sources of bias and (IMO) probably should not be taken at face value.
Some other comments:
Even if 100% (rather than 24%) of individuals were in the high risk group (i.e. suicidal ideation nearly every day), and even if you dropped 100% of individuals risk of suicide from 0.2% to zero (rather than reducing it by 3-11x or to baseline), and if this effect persisted forever rather than just the initial 30 days
233 * 500 / 3000 = 38.83
233 * 500 / 5500 = 21.18 (assuming 1 prevented suicide = 1 life saved)
If 24% of your participants were high risk (7x risk, at 0.18%), and the other 76% of them were half of that (3.5x risk, at 0.09%), and you successfully reduced 100% of participants to baseline (0.026%), you would prevent 1 suicide every 1169 participants, which comes to ~$272,000 per life saved, or ~50-90x less cost effective than GW top charities.
(0.18-0.026) * 0.24 + (0.09-0.026) * 0.76 = 0.0856
100 / 0.0856 = 1168.2
1168.2 * 233 = 272190.6
272190.6 / 3000 = 90.73
272190.6 / 5500 = 49.4892
It's also worth noting these are cumulative hazards at 2 years rather than at 30 days, and the hazard ratios at 365 days are approximately halved compared to 30 days (1.7-5.7 instead of 3.3-10.8), so these figures are plausibly still optimistic by a further factor of a few.
Severe --> moderate depression is about 0.262 DALYs averted, moderate --> mild depression is about 0.251 DALYs averted, and mild --> no depression is about 0.145 DALYs averted.
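For convenience, here is a short sketch that re-derives (up to rounding) the upper-bound figures in footnotes [1]-[3], using only numbers quoted in this comment (the $233 cost per participant, the cumulative hazards, and the $3,000-$5,500 GiveWell cost per life saved range); it is purely illustrative.

```python
# Re-derivation of the suicide-prevention upper bounds from the footnotes above.
cost_per_participant = 233.0   # USD
baseline_hazard = 0.00026      # 2-year cumulative suicide hazard, baseline (0.026%)
high_risk_hazard = 0.0018      # ~7x baseline (0.18%), suicidal ideation nearly every day
other_hazard = 0.0009          # ~3.5x baseline (0.09%), assumed for the remaining 76%
share_high_risk = 0.24

# "Unreasonably optimistic": everyone is high risk and risk drops to zero (~0.2% -> 1 in 500).
cost_optimistic = cost_per_participant * (1 / 0.002)

# "More reasonable": 24% at 7x risk, 76% at 3.5x risk, all reduced to baseline.
risk_reduction = (share_high_risk * (high_risk_hazard - baseline_hazard)
                  + (1 - share_high_risk) * (other_hazard - baseline_hazard))
cost_reasonable = cost_per_participant / risk_reduction

for label, cost in [("Unreasonably optimistic", cost_optimistic),
                    ("More reasonable", cost_reasonable)]:
    print(f"{label}: ${cost:,.0f} per suicide prevented "
          f"({cost / 5500:.1f}x-{cost / 3000:.1f}x GiveWell top charities)")
```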
HAP is described as "a psychological treatment based on behavioral activation...consist[ing] of 6 to 8 weekly sessions of 30 to 40 minutes each, delivered individually at participants' homes or at the local PHC."
THPP is a simplified version of a psychological intervention (THP) for treating perinatal depression that has been found to be effective in similar settings and is recommended by the WHO (Rahman et al., 2008, 2013; WHO, 2015; Baranov et al., 2020). While the original THP trials employed a full-fledged cognitive behavioral therapy (CBT) intervention, THPP was a simpler intervention focused on behavioral activation, as in the HAP trial described above. THPP was designed to be delivered by peer counselors, instead of community health workers as in previous trials.
[taken from here, emphasis added]:
-Our findings add to this evidence base by showing 12-month modest improvements of 20%-30% in rates of minimal depression for adolescents assigned to IPT-G, with these effects completely dissipating by the 24-month follow-up. We similarly find small short-term impacts on school enrollment, delayed marriage, desired fertility and time preferences, but fail to conclude that these effects persist two years after therapy.
-Given impact estimates of a reduction in the prevalence of mild depression of 0.054 pp for a period of one year, it implies that the cost of the program per case of depression averted was nearly USD 916, or 2,670 in 2019 PPP terms.
-This implies that ultimately the program cost USD PPP (2019) 18,413 per DALY averted. (almost 8x Uganda's GDP per capita)
anon- @ 2024-11-13T19:29 (+1) in response to Hive: 2024 Achievements, Current Funding Situation, and Plans
Additionally, I have noticed that Hive has sponsored a lot of events. How much of your budget does this take up? Do you imagine continuing to sponsor as many events in future?
Hive @ 2024-11-14T12:44 (+1)
In 2024, we have only sponsored AVA D.C., at a lower sponsorship level, and we currently don't plan to sponsor any further events, as we believe we are unlikely to benefit from them in a cost-effective way anymore. We sponsored a few more events in 2023 as we were getting started and needed to reach our audience faster. All event sponsorships were paid for by our co-founder Constance, which was lucky for us, as she would have supported these events either way, and we were able to benefit from the exposure.
In case you are referring to our (co-)organized events, such as those outlined in the post: this year, event organizing took ~8-9% of our staff costs. We started running more events in late 2023, as informed by our community user interviews, and have seen good traction with them in terms of attendance and satisfaction. We think these lead metrics are somewhat promising, but this is one of the program areas we are re-evaluating more thoroughly, especially with our end-of-year community survey, as we don't yet have a clear understanding of how they translate into impact. Currently, running (as many or more) events is relatively low on our priority list.
anon- @ 2024-11-13T19:23 (+5) in response to Hive: 2024 Achievements, Current Funding Situation, and Plans
Question on this: I think it wasn't very clear what the actual total budget for Hive for 2025 is. How much is already covered? Do you have any outstanding grants on the table?
How do you currently assess your cost-effectiveness in terms of how much $ you are adding to the movement vs. spending?
Hive @ 2024-11-14T12:42 (+1)
Hey there! Thank you for your questions!
Question on this: I think it wasn't very clear what the actual total budget for Hive for 2025 is. How much is already covered? Do you have any outstanding grants on the table?
Happy to clear this up!
How do you currently assess your cost-effectiveness in terms of how much $ you are adding to the movement vs. spending?
We currently only use $ added to the movement as internal estimates and decided not to include them in the post, because we are still collecting data, working out uncertainties and refining some key aspects. I'll gladly outline our current thinking/plans, but do take them with a grain of salt, as they may change as we mature as an organization:
I hope this makes sense! Let us know if you have any questions regarding this!
saulius @ 2024-11-07T17:36 (+17) in response to saulius's Quick takes
I was thinking about ways to reduce political polarization and thought about AI chatbots like Talkie. Imagine an app where you could engage with a chatbot representing someone with opposing beliefs. For example:
Each chatbot would explain how they arrived at their beliefs, share relatable backstories, and answer questions. This kind of interaction could offer a low-risk, controlled environment for understanding diverse political perspectives, potentially breaking the echo chambers reinforced by social media. AI-based interactions might appeal to people who find real-life debates intimidating or confrontational, helping to demystify the beliefs of others.
The app could perhaps include a points system for engaging with different viewpoints, quizzes to test understanding, and start conversations in engaging, fictional scenarios. Chatbots should ideally be created in collaboration with people who hold these actual views, ensuring authenticity. Or maybe chatbots could even be based on concrete actual people who could hold AMAs. Ultimately, users might even be matched with real people of differing beliefs for video calls or correspondence. If done well, such an app could perhaps even be used in schools, fostering empathy and reducing division from an early age.
Personally, I sometimes ask ChatGPT to write a story of how someone came to have views I find difficult to relate to (e.g., how someone might become a terrorist), and I find that very helpful.
I was told that creating chatbots is very easy. It's definitely easy to add them to Talkie; there are so many of them there. Still, to make this impactful and good, this needs a lot more than that. I don't intend to build this app. I just thought the idea was worth sharing. If you think it's a good idea, feel free to share it somewhere where someone might pick it up, or to just do it yourself.
saulius @ 2024-11-14T12:36 (+2)
I wanted to share the ChatGPT story about how someone might become a terrorist.
Prompt: write a story about a boy who becomes a terrorist. Make me relate to him, and feel empathy for him. (there was some context)
ChatGPT: Yusuf was 15, living in a small, impoverished town. Yusuf loved soccer, helping his father at his fruit stall, and, above all, making his little sister laugh.
Life was hard but manageable. Then came the soldiers. When they swept through his village, everything changed. Yusuf's father tried to resist, and they took him away. They left Yusuf's family without its provider. His mother's health worsened as she worked herself to exhaustion, trying to provide for him and his sister. Soon, Yusuf had to drop out of school to keep the family afloat. His friends, his dreams, and even his laughter faded away.
Life became bleak and colorless. One day, a man named Hakim, a former student from Yusuf's school, visited him at the stall. Hakim was charismatic and confident, a kind of big brother figure Yusuf had never had. He asked Yusuf how he was doing, listened as Yusuf poured out his frustrations, and told him he wasn't alone. Hakim said he worked for a "freedom movement" that fought for people like them, people whose fathers had been unjustly taken, whose families struggled under oppression. Yusuf felt a spark of hope - maybe he could do something for his family and others like them.
Over the weeks, Hakim kept visiting, always with stories of their people's suffering and injustices, stories Yusuf hadn't heard from the media. He talked about the honor of defending one's family and land and how the so-called "enemies" were responsible for everything Yusuf's family had endured. Soon, Hakim invited Yusuf to attend group meetings. There, Yusuf met boys his own age who shared his anger and grief. The group felt like a family - a group of brothers all dedicated to the same cause.
Initially, Yusuf felt uneasy about the rhetoric he heard; it was filled with anger, sometimes hatred. But when he thought of his family's suffering, he wondered if the anger wasn't justified. One of the older boys took Yusuf aside one night and told him about his own family's hardships, about how they had found a sense of purpose fighting for justice. The older boy looked at him and said, "You're one of us, Yusuf. You're strong, and your family deserves justice."
Yusuf wanted to be strong. He wanted his life to matter. One day, Hakim gave him a task: distribute supplies to people in need. Yusuf was proud. But gradually, these tasks grew darker. They began asking Yusuf to carry messages, then to handle small weapons, telling him it was "just in case" they had to defend themselves. Though he felt fear creeping in, the family-like bond he felt with these people made him want to prove himself.
One night, Yusuf's mother noticed his distraction and tried to talk to him. She asked him if he was safe, if he was happy. He looked at her tired eyes, her worry for him, and wanted to reassure her. But instead, he found himself saying, "Mom, I'm doing this for us. For Dad." She looked at him with sadness but didn't argue; she didn't have the strength.
The next day, Hakim told him it was time to make a true commitment - to sacrifice himself for a cause bigger than himself, to avenge his father, protect his family, and "become a hero." Yusuf didn't want to die; he wanted to live. But his hope, his friends, and his purpose were now all tied to the group. His identity had blurred with theirs, his grief transformed into a powerful sense of duty.
And so, on the day of his mission, Yusuf went forward, full of fear but also with a strange sense of peace. He believed he was bringing honor to his family, standing up to a world that had left him invisible, lost, and alone.
Lorenzo Buonanno @ 2024-11-14T12:23 (+2) in response to Donation Election: How to vote
What happens if there's a tie? E.g. if there are 4 candidates with 30, 20, 10, 10 votes each.
I guess it's unlikely to be determinant in practice but might be worth stating just in case.
ETA: https://electowiki.org/wiki/Instant-runoff_voting#Handling_ties_in_IRV_elections
Toby Tremlett @ 2024-11-14T12:30 (+4)
My first guess is that if this did happen, we'd keep the votes open slightly longer until there wasn't a tie - this is another benefit of having a secret deadline. I'll check with @Will Howard when he's online to see if there is already a solution to this. Thanks for pointing it out!
saulius @ 2024-11-14T12:12 (+4) in response to saulius's Quick takes
Thanks. Yeah, I now agree that it's better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents' stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring, though, because they give a sense of being a part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being a part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
saulius @ 2024-11-14T12:25 (+4)
The person in the private message also told me about the serenity prayer: "Grant me the serenity to accept the things I cannot change; courage to change the things I can; and wisdom to know the difference."
AnonymousTurtle @ 2024-11-14T01:17 (+10) in response to saulius's Quick takes
For what it's worth, I'm skeptical of approaches that try to design the perfect future from first principles and make it happen. I'm much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn't cure all illness.)
How much we can help doesn't depend on how awful or how great the world is, we can save the drowning child whether there's a billion more that are drowning or a billion more that are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of "EA specific" things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse, their lives are worth it to them, no matter what the future will be.
saulius @ 2024-11-14T12:23 (+2)
And just to be clear, I am doing quite well generally. I think I used to repress my empathy because it just feels too painful. But it was controlling me subconsciously by constantly nagging me to do altruistic things. Nowadays, I sometimes connect to my empathy and it can feel overwhelming like yesterday. But I think it's for the better long-term.
Lorenzo Buonanno @ 2024-11-14T12:23 (+2) in response to Donation Election: How to vote
What happens if there's a tie? E.g. if there are 4 candidates with 30, 20, 10, 10 votes each.
I guess it's unlikely to be determinant in practice but might be worth stating just in case.
ETA: https://electowiki.org/wiki/Instant-runoff_voting#Handling_ties_in_IRV_elections
AnonymousTurtle @ 2024-11-14T01:17 (+10) in response to saulius's Quick takes
For what it's worth, I'm skeptical of approaches that try to design the perfect future from first principles and make it happen. I'm much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn't cure all illness.)
How much we can help doesn't depend on how awful or how great the world is, we can save the drowning child whether there's a billion more that are drowning or a billion more that are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of "EA specific" things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse, their lives are worth it to them, no matter what the future will be.
saulius @ 2024-11-14T12:12 (+4)
Thanks. Yeah, I now agree that it's better to focus on what I can do personally. Someone made a good point in a private message that having a single vision leads to utopian thinking, which has many disadvantages. It reminded me of my parents' stories about the Soviet Union, where great atrocities against currently living humans were justified in the name of creating a great communist future.
Grand ideologies and religions are alluring, though, because they give a sense of being a part of something bigger. Like you have your place in the world, your community, which gives a clear meaning to life. Being a part of the Effective Altruism and animal advocacy movements fulfils this need in my life somewhat, but incompletely.
Jeff Kaufman @ 2024-11-14T12:06 (+4) in response to Donation Election: How to vote
I really appreciate you writing up the Voting Norms section! Making it clear when you see "tactical" participation as beneficial vs harmful is very helpful.
Toby Tremlett @ 2024-11-14T12:06 (+2)
Thanks Jeff!
Jeff Kaufman @ 2024-11-14T12:06 (+4) in response to Donation Election: How to vote
I really appreciate you writing up the Voting Norms section! Making it clear when you see "tactical" participation as beneficial vs harmful is very helpful.
Toby Tremlett @ 2024-11-14T12:05 (+7) in response to Donation Election Fund Announcement: Matching, Rewards and FAQ.
A polite pangolin for @Jordan Pieters - Thanks for donating!
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
NickLaing @ 2024-11-14T10:54 (+4)
I don't have a suggestion, but I've been encouraged and "heartwarmed" by the diverse range of responses below. Cool to see people with different ways of holding their hope and motivation, whether it's enough for us to buy a bed net tomorrow, or we do indeed have grander plans and visions, or we're skeptical about whether "future designing" is a good idea at all.
lukeprog @ 2024-11-14T10:12 (+109) in response to lukeprog's Quick takes
Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
I hope these clarifications are helpful, and lead to fruitful discussion, though I don't expect to have much time to engage with comments here.
OscarD @ 2024-11-14T10:11 (+7) in response to Support Critical Research on Insect Welfare
I agree these sound like great (though of course high-risk) opportunities, but find myself confused: why are such things not already being funded?
My understanding is that Good Ventures is moving away from some such areas. But what about e.g. the EA Animal Welfare Fund or other EA funders? I don't know much about animal welfare funding, so at face value I am pretty convinced these seem worth funding, but I am worried I am missing something if more sensible/knowledgeable people aren't already funding them. (Though deferring too much to other funders could create too much group-think.)
OscarD @ 2024-11-12T11:15 (+7) in response to AMA: PauseAI US needs money! Ask founder/Exec Dir Holly Elmore anything for 11/19
On Pauses
(As you note much of the value may come from your advocacy making more 'mainstream' policies more palatable, in which case the specifics of Pause itself matter less, but are still good to think about.)
OscarD @ 2024-11-14T10:01 (+3)
I would also be interested in your thoughts on @taoburga's pushback here. (Tao, I think I have a higher credence than you that Pause advocacy is net positive, but I agree it is messy and non-obvious.)
Denis @ 2024-11-14T09:54 (+1) in response to When is Philosophy Worth Funding?
I think you may greatly understate your case. I would argue that, especially in the US, the lack of credible "public intellectuals" is one of the greatest problems of our age, and that there is a huge opportunity for the right people to fill this role.
EAs with the right communication skills could be perfect public intellectuals, and if they could move the debate, or even the Overton window, a bit more towards effective positions, that would be a massive contribution to the world.
True, there are plenty of opinionated people out there, but it feels like mostly they are trotted out to support the party line rather than to provide genuine insight. They are more like lawyers arguing their "side" - and realistically, people don't trust lawyers to give honest insight.
If I look at France or Italy, for comparison, there have always been a few figures who tend to be asked for opinions about major topical questions, and their views carry weight. In other countries and in previous times, church leaders play or played a similar role - rarely with positive consequences ...
Today there are so many questions where public "debate" consists of people shouting slogans at each other, and whoever shouts loudest wins. I don't think most people like this. There are a few journalists (e.g. David Brooks in the NY Times) who have the confidence and authority to express opinions that are not necessarily partisan, and are presented with careful arguments, evidence and reference to critical thinking by others, including those who do not support him.
This is the work of the public intellectual, and when it is done well, it can still help people to change their minds or at least to understand both sides of an argument. It feels like philosophy (and maybe history) are the most obvious fields in which this kind of skillset and credibility can be achieved and earned.
I see this as a great opportunity for effective altruists because, unlike so many knee-jerk positions, EAs tend to have very carefully and analytically investigated every question, and to have done so with a very clear and tangible criterion. We need more EAs writing and being interviewed in places where the general public can hear them - and we need those people to be trained in the art of communicating to the general public (not just other EAs) without dumbing down (which would defeat the purpose of aiming to be seen as a public intellectual). The best speak in such a way that other people share their ideas, in part, as a sign that they are smart enough to understand them.
I see support for philosophers as very valuable if it can lead not just to new insights, but more importantly, to new voices ready to communicate in the public domain.
MichaelStJules @ 2024-11-14T03:23 (+2) in response to Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?
(Not speaking for my co-authors or RP.)
I think your brain-Fred is conscious, but overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.
We illustrated with systems that don't overlap much or at all. There are also of course more intermediate levels of overlap. See my comment here on some ideas for how to handle overlap:
https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we?commentId=pAZtCqpXuGk6H2FgF
Omnizoid @ 2024-11-14T09:34 (+2)
But then wouldn't this mean my brain has a bunch of different minds? How can the consciousness of one overlap with the consciousness of another?
Chris Leong @ 2024-11-12T04:26 (+4) in response to When is Philosophy Worth Funding?
I think a key crux here is whether you think AI timelines are short or long. If they're short, there's more pressure to focus on immediately applicable work. If they're long, then there's more benefit to having philosophers develop ideas which gradually trickle down.
Dušan D. Nešić (Dushan) @ 2024-11-14T09:32 (+1)
In PIBBSS, we've had a mentor note that for alignment to go well, we need more philosophers working on foundational issues in AI rather than more prosaic researchers. I found that interesting, and I currently believe that this is true. Even in short-timeline worlds, we need to figure out some philosophy FAST.
alene @ 2024-11-14T08:32 (+2) in response to The goal isnât 'end factory farming', but reduce as much suffering as possible
This is such a good post, and I agree very much. You said so many things that I have been thinking and wishing I knew how to say. Thank you so, so much for writing this, @ElliotTep!
I agree we should focus on reducing suffering. And I have other reasons, too, in addition to the points you brought up.
Other reasons:
1. The problem with factory farming is the suffering it causes. So, we should focus on the real problemâthe suffering. When we talk about fighting factory farming, we are actually only talking about a proxy for our real goal. (The real goal is to decrease suffering.) I think it's better to focus on the real goal. Because focusing on a proxy can always have unintentional consequences. For instance, if we focus only on ending factory farming, we may decide to do something like tax methane emissions. That tax may cost the meat industry money. It may decrease the number of factory farms that get built. It may raise the price of beef and thus decrease the amount of meat that gets sold. But if it causes beef prices to go up, people will eat more chicken. And then the methane-tax intervention will result in more suffering. This is just one of many examples.
2. I have recently been learning first hand that a lot of people in the meat, egg, and dairy industries have serious concerns about the treatment of animals. There are slaughterhouse workers, contract growers, corporate meat-industry employees, and ag executives who really want to improve animal welfare! But, naturally, almost none of these people want to end animal farming. Because, as @Hazo points out, that would mean ending their livelihood. We are more likely to succeed at improving animal welfare if we can work collaboratively with these concerned people in the meat and egg industries. These are the people who deal with farmed animals on a day-to-day basis, and who have the biggest impact on farmed animals' lives. I think selecting a goal that we can work towards together with people within the industry is highly worthwhile.
3. Factory farming isn't the only thing that's bad. All suffering is bad. Animal testing causes severe suffering that's likely worse per individual than the suffering caused by factory farming. My understanding is that the scale of animal testing on mice and rats isn't actually known, and most numbers we see leave them out. Wild animals also suffer. Rodents suffer when they're bred in pet stores to sell to snake owners. Fish presumably suffer in large numbers in the pet trade. I'm not sure if people count insect farming as factory farming, but it's a concerning new trend that could theoretically cause even more suffering than at least what most people think of as factory farming. New forms of mass suffering could be invented in the future. If AI is sentient, people (or AI) could cause AI to suffer on massive scales. Digital minds could be created and replicated and made to suffer in huge numbers. If we fight factory farming, that doesn't help move the needle on other forms of suffering. If we focus on the suffering itself, maybe we can move the needle generally. For instance, if we work to create an anti-suffering ethic, that would be a more helpful ethic to create in the long run than a pro-vegan or anti-factory-farming ethic. Because the anti-suffering ethic would move us to help factory farmed animals while also staying vigilant about other forms of suffering.
4. Elliot's point about how ending factory farming is an unrealistic goal also worries me for another reason: The effect of the slogan on longtermist EAs who hear animal-focused EAs say it all the time. Animal people keep saying "Factory farming is going to end. Factory farming is unsustainable." To me, an AR person, I know to translate that slogan to "I'm trying to get myself hyped up! I'm trying to inspire others to join me on a crusade!" Because I know, sadly, what an uphill battle it would be to end factory farming. And I think most AR people know that. But to someone who doesn't spend their whole life focused on animal welfare, it's not obvious that this statement is just an inspirational quote. It sounds like the speaker is literally predicting that factory farming is going to end. And I worry that longtermist EAs, who may spend slightly less time paying attention to the trends in animal agriculture, may just hear the slogan and take it at face value. Here's why I worry about that: It seems that many longtermist EAs are working hard to try to preserve humanity, or at least consciousness, for as long as possible. And many longtermist EAs seem to assume that life in the future will be net positive. This assumption seems to involve assuming that factory farming will end, and that it won't be replaced by anything even worse (see point #3). I worry that longtermist EAs may be outsourcing their thinking a little to animal EAs. And animal EAs are falling down on the job by just giving an inspirational slogan when we should give the truth. If it's true that we have no realistic expectation of suffering decreasing in the future, and no reason to believe factory farming will end before humanity ends, we should make sure longtermists know that. That way, longtermist EAs can plan accordingly.
PabloAMC đ¸ @ 2024-11-14T07:52 (+5) in response to Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims
A quantum research scientist here: actually I would argue that is a misleading model for quantum computing. The main issue right now is technical, not economical. We still have to figure out error correction, without which you are bound to roughly 1000 logical gates. Far too little to do anything interesting.
AnimalAdvocacyAfrica @ 2024-11-14T07:29 (+13) in response to How much extra funding can EA AWF regrant?
Thank you for laying out these plans Karolina and for all the work you do!
We'd like to add to the following point:
We're running a training programme that is very similar to Welfare Matters', just focused on Africa. We pivoted to this intervention last year based on our experience of working with early stage orgs / advocates that were in the process of starting their orgs - most importantly Daniel Abiliba / AWL and Paul Ssuna / AWeCCA who were both funded by the EA AWF as a result.
A few months ago, we completed the first cohort of our new programme - now focusing explicitly on individuals and incubation instead of existing orgs. This has been promising so far and we're about to start the second cohort next week. Since the start of this new programme, we've incubated three new projects/orgs:
The first two received seed funding directly from us, totalling USD 53K. If their pilots turn out to be promising, we hope for these initiatives to successfully fundraise themselves - EA AWF would be a primary option for this. The third one received USD 27K in funding from EA AWF already for their pilot (not seed funded by us).
In addition to these projects, we'll likely soon incubate another project/org focused on cage-free campaigns in Zambia, the first one of its kind in the country, led by another one of our programme participants.
You and the fund managers will be the judges of whether these projects/orgs are actually promising, but we wanted to flag this here, since:
We're not talking about huge amounts here, since we typically advise our participants to start lean and costs are generally fairly low in Africa compared to other parts of the world. But we think there is potential to grow further in this area. We'll give a more detailed update in our 2024 review which we plan to publish on the forum in a few weeks.
Nithin Ravi @ 2024-11-06T21:41 (+7) in response to Bad omens for US farmed animal policy work?
What do you think about losses like these being a trigger for backsliding on other farmed animal work?
For instance, the Animal Ag Lobby saying something like, "Look people don't care about animal welfare. Even progressive cities turned this down." Could this effect trigger something like the EATS act getting passed? I don't have an informed opinion on this, but it seems like a significant backfire risk.
I'm also worried that 308 (Denver's fur ban) would have passed without 309 (Denver's slaughterhouse ban) being right next to it. The Denver Democrats anti-endorsed both measures which may not have happened if the measures were run separately (total guess on that one, but it passed in Boulder which has very similar demographics).
At the same time, perhaps there is very significant social change & radical flank effects from forcing the vote on abolitionist work! Looking for insight.
RasaPetrauskaite @ 2024-11-14T07:07 (+3)
When we were deciding what we wanted to put on the ballot in Sonoma County (Measure J), we were thinking about this point that it would look bad if a moderate measure failed. Our reasoning for choosing a ban on all factory farms (Concentrated Animal Feeding Operations) was that it was such a huge ask of this agricultural community that it likely would not pass, but it wouldn't look that bad if it failed, because next time we could tone down the ask. But initial polling showed that most people in the county would vote yes on Measure J, which is why we proceeded with this big ask. Also, our measure DD to ban all livestock operations did pass in Berkeley. It would have affected one operation, but when we started collecting signatures, they voluntarily shut down. It was Golden Gate Fields horse race track. As for Denver, our friends who were working on the ballot measures also said it was bad for the fur ban initiative that the other, more radical one, was next to it. They are learning from their mistakes.
Jason @ 2024-11-07T22:49 (+7) in response to Bad omens for US farmed animal policy work?
It seems plausible that J/309/etc advocates knew at some point that the initiatives were very unlikely to pass, and that low financial investment from that juncture onward was thus more a consequence of low public support earlier in the campaign season than a cause of it.
Does anyone have information that could evaluate that possibility, such as longitudinal records of spending and polling outcomes?
RasaPetrauskaite @ 2024-11-14T06:50 (+1)
Yes, we did polling when we were preparing for Measure J. We paid a professional polling service (phone and texting polls) and also collected data in face-to-face interviews with voters in Sonoma County. We also used Survey Monkey to do a poll. All 3 polling methods that we used before commencing Measure J showed that more than 50% of voters in Sonoma County would vote Yes on Measure J. That's why we decided to even proceed with Measure J.
RasaPetrauskaite @ 2024-11-14T06:39 (+1) in response to Bad omens for US farmed animal policy work?
I was working on Measure J in Sonoma County, and my friends were working on the 2 measures in Denver. One of the main obstacles we faced was fundraising. Initial polls showed that Measure J in Sonoma County would have passed, but the opposition raised about 10 times more money than we did. We didn't have money to send truthful literature to every household in Sonoma County, but only to some households. However, the opposition sent multiple pieces of literature filled with exaggerations and lies to scare voters so that they would believe that store shelves would become empty if they voted yes on Measure J. The opposition raised over $2 million just to defeat Measure J. They had money for TV ads, and we didn't. They paid money to a local environmental nonprofit to publicize why locals should vote no. Our friends in Denver faced very similar challenges. However, our ballot measure to ban all livestock operations did pass in Berkeley. About 60% of people voted to shut down all livestock operations. When we started collecting signatures in Berkeley for this measure DD, the only large livestock operation there decided to shut down. It was Golden Gate Fields horse race track. Also, even though Measure J did not pass in Sonoma County, it did generate a lot of press. Associated Press, LA Times, KQED, San Francisco Chronicle, and Washington Post covered it. If you count letters to the editor, then The Press Democrat covered Measure J around 100 times. So there's definitely more awareness of the issue of animal welfare now because of our efforts in Sonoma County. We can leverage our wins in Berkeley and Sonoma County for future work on ballot measures. We need funding though!
alexTop @ 2024-11-14T05:12 (+3) in response to Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims
Very interesting post! Always been somewhat skeptical of AGI but this is the first I've heard of its development possibly plateauing.
Omnizoid @ 2024-11-13T14:56 (+2) in response to Do Brains Contain Many Conscious Subsystems? If So, Should We Act Differently?
It may be that certain mental subsystems wouldn't be adequate by themselves to produce consciousness. But certainly some of them would. Consider a neuron in my brain and name it Fred. Absent Fred, I'd still be conscious. So then why isn't my brain-Fred conscious? The other view makes consciousness weirdly extrinsic--whether some collection of neurons is conscious depends on how they're connected to other neurons.
MichaelStJules @ 2024-11-14T03:23 (+2)
(Not speaking for my co-authors or RP.)
I think your brain-Fred is conscious, but overlaps so much with your whole brain that counting them both as separate moral patients would mean double counting.
We illustrated with systems that don't overlap much or at all. There are also of course more intermediate levels of overlap. See my comment here on some ideas for how to handle overlap:
https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we?commentId=pAZtCqpXuGk6H2FgF
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that but I don't know what's on the other side. There are so many things that I don't want to happen, that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Linch @ 2024-11-14T02:47 (+6)
It might be too hard to envision an entire grand future, but it's possible to envision specific wins in the short and medium-term. A short-term win could be large cage-free eggs campaigns succeeding, a medium-term win could be a global ban on caged layer hens. Similarly a short-term win for AI safety could be a specific major technical advance or significant legislation passed, a medium-term win could be AGIs coexisting with humans without the world going to chaos, while still having massive positive benefits (e.g. a cure to Alzheimer's).
David T @ 2024-11-08T20:41 (+4) in response to saulius's Quick takes
This feels like it could easily be counterproductive.
A chatbot's "relatable backstory" is generative fiction, and the default "Trump supporter" or "liberal voter" is going to be a vector of online commentary most strongly associated with Trumpiness or liberalism (which tends not to be the most nuanced...), with every single stereotyped talking point trotted out to contradict you. Yes, this can be tweaked, but the tweaking is just toning it down or adding further stereotypes, not creating an actual person.
Whereas the default person that doesn't agree with your politics is an actual human being, with actual life experience that has influenced their views, probably doesn't actually hold the views that strongly or agree with literally every argument cited in favour of $cause, is probably capable of changing the subject and becoming likeable again, and hey, you might even be able to change their mind.
So if you're talking to the first option rather than the second, you're actually understanding less.
I don't think it helps matters for people to try to empathise with (say) a few tens of millions of people who voted for the other side - in many cases because they didn't really pay a lot of attention to politics and had one particularly big concern - by getting them to talk to a robot trained on the other side's talking points. If you just want to understand the talking points, I guess ChatGPT is a (heavily filtered for inoffensiveness) starting point, or there's a lot of political material with varying degrees of nuance already out there on the internet written by actual humans...
Linch @ 2024-11-14T02:40 (+4)
One possible way to get most of the benefits of talking to a real human being while getting around the costs that saulius mentions is to have real humans serve as templates for an AI chatbot to train on.
You might imagine a single person per "archetype" to start with. That way if Danny is an unusually open-minded and agreeable Harris supporter, and Rupert is an unusually open-minded and agreeable Trump supporter, you can scale them up to have Dannybots and Rupertbots talk to millions of conflicted people while preserving privacy, helping assure people they aren't judged by a real human, etc.
Julia_Wise @ 2024-11-13T15:09 (+53) in response to Julia_Wise's Quick takes
There's an asymmetry between people/orgs that are more willing to publicly write impressions and things they've heard, and people/orgs that don't do much of that. You could call the continuum "transparent and communicative, vs locked down and secretive" or "recklessly repeating rumors and speculation, vs professional" depending on your views!
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what's a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn't work at that org, remember the asymmetry. These situations will look identical to most readers:
AnonymousTurtle @ 2024-11-14T01:36 (+3)
I think anonymous accounts can help a bit with this. I would encourage people to make an anonymous account if they feel like it would help them quickly share useful information and not have to follow the discussion (while keeping in mind that no account is truly anonymous, and it's likely that committed people can easily deanonymize it)
Lucas S @ 2024-11-13T20:20 (+1) in response to What do the US elections mean for farmed animals?
I found your takeaways on abolitionist ballot initiatives quite disappointing. A few points:
1. You say the results largely bear out Rethink's study with 8-20% support from survey respondents, in contrast to the previous study that said banning slaughterhouses had 39-43% support. But the slaughterhouse ban in Denver got 36% yes votes, which is just 3 points below the original more promising survey, and at least 16 points above the followup survey results. That means around 6 out of 7 of the 100,000 people who voted for the most radical proposal (banning slaughter) were meat eaters. This is in spite of the fact that it was the first campaign of its kind, the first pilot campaign from the sponsoring organization, and, as you mentioned, getting outspent 6-to-1. How is this not a vindication of the idea that people are open to radical change if it's a society-wide shift rather than a matter of isolated lifestyle choice? The campaign shifted the Overton window, helped establish animal rights as a civic/political issue, had the entire population of a major city grapple with the reality of animal slaughter, forced the animal ag industry to spend $2M opposing it, and got the support of 7x the number of vegetarians in the city. For a first attempt at a truly transformational proposal, these results mostly move my priors in favor of such campaigns.
2. While crediting the massive funding disparity, in part, for the loss, you are personally sitting on the biggest pile of animal-advocacy cash on the planet (or close to it). Is it not a self-fulfilling prophecy to decide not to fund a campaign because your priors suggest it will fail, then blame the loss on their lack of funding? I know it's more complex and not solely at your discretion, but I think Open Phil's agency in this situation deserves to be acknowledged.
3. You failed to mention Berkeley's successful ballot initiative to ban factory farms. None currently exist in the city, so it's largely a symbolic victory that would only stop future factory farms from being built. But it passed 60/40, which is a resounding win for a pretty radical proposal. The same general ask was behind Sonoma's measure that failed by an even larger margin, so it's certainly not all good news. But it seems clear that our conclusions need to be more nuanced than just "people aren't ready for abolition." Obviously the specifics of each proposal, the size and nature of the jurisdiction, and strategic successes and failures of each campaign have a lot to do with whether a measure flies through with 60% of the vote, or crashes and burns with only 15%.
4. History is full of radical shifts that took ages to enact. Women's suffrage in Oregon took six tries on the ballot over nearly 30 years. Slavery in the US took 12 generations and a civil war to abolish. Why do we look back at those who were fighting for full equality and justice with admiration, but for animals, in the present day, we insist it is only counter-productive to ask for anything other than modest reform? For every argument that radical asks alienate people and make reforms more difficult, I can think of arguments that propping up animal ag with "certified humane" labels only reinforces the worldview that sees nonhumans as fundamentally commodities to be profited from and used for our enjoyment. I celebrate any kind of meaningful reform or harm reduction. But where is the theory of change that starts with corporate pressure campaigns and the promotion of "high welfare" animal products, and ends with the world that animals actually deserve? For a movement that focuses so much on longtermism, I see a huge blind spot for the long term future of our nonhuman kin. A future without animal exploitation is only possible if we're willing to advocate for it before it's popular.
Jason @ 2024-11-14T01:34 (+2)
There are non-animal welfare reasons one might vote to ban slaughterhouses or factory farms in one's city (but be more okay with them elsewhere). Doing ~zero research to approximate the median voter, they sound like things with some potentially significant negative local externalities (adverse environmental effects, reduced property values, etc.) So you may have some NIMBY-motivated voters.
In addition, because the meat market is a regional or even national one, opponents cannot plausibly point to any effect of a localized slaughterhouse/factory farm ban on the prices that local voters pay at the grocery store. I think there's probably a subset of voters who would vote yes for a measure if and only if it has no plausible economic effect on the prices they pay.
Finally, these cities are more progressive than the states in which they exist, and a state can almost always pre-empt any city legislation that the state political system doesn't like. So I'd want to see evidence that the city voters weren't too far out of step with the state median voter before updating too much on city-level results. (Unlike the states -- which American political theory holds to pre-exist the Federal government and possess their own inherent sovereignty -- cities and counties are generally creations of the states without anything like their own inherent sovereignty.)
Evan_Gaensbauer @ 2024-11-14T01:32 (+2) in response to Do you want to do a debate on youtube? I'm looking for polite, truth-seeking participants.
I'm tentatively interested in participating in some of these debates. That'd depend on details of how the debates would work or be structured.
OscarD @ 2024-11-08T14:48 (+16) in response to Scouts need soldiers for their work to be worth anything
This analysis seems roughly right to me. Another piece of it I think is that being a 'soldier' or a 'bednet-equivalent' probably feels low status to many people (sometimes me included) because:
To be clear I don't endorse this, I am just pointing out something I notice within myself/others. I think the second one is mostly just bad, and we should do things that are good regardless of whether they have 'EA vibes'. The first one I think is somewhat reasonable (e.g. I wouldn't want to pay someone to be a fulltime protest attendee to bring up the numbers) but I think soldiering can be quite challenging and laudable and part of a portfolio of types of actions one takes.
Paul_Lang @ 2024-11-14T01:31 (+5)
I'd like to add another bullet point
- personal fit
I think that protests play an important role in the political landscape, so I joined a few, but walking through streets in large crowds and chanting made me feel uncomfortable. Maybe I'd get used to it if I tried more often.
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that but I don't know what's on the other side. There are so many things that I don't want to happen, that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
AnonymousTurtle @ 2024-11-14T01:17 (+10)
For what it's worth, I'm skeptical of approaches that try to design the perfect future from first principles and make it happen. I'm much more optimistic about marginal improvements that try to mitigate specific problems (e.g. eradicating smallpox didn't cure all illness.)
How much we can help doesn't depend on how awful or how great the world is: we can save the drowning child whether there are a billion more that are drowning or a billion more that are thriving. To the drowning child the drowning is just as real, as is our opportunity to help.
If you feel emotionally down and unable to complete projects, I would encourage you to try things that work on priors (therapy, exercise, diet, sleep, making sure you have healthy relationships) instead of "EA specific" things.
There are plenty of lives we can help no matter who won the US election and whether factory farming keeps getting worse; their lives are worth it to them, no matter what the future will be.
Paul_Lang @ 2024-11-14T01:15 (+6) in response to Why you should allocate more of your donation budget to effective giving organisations
I always donate close to 100% to what I believe is most effective at any given time. I do "diversify" across time, though. Last year, I almost donated 100% to an Effective Giving organization. In the end, I decided against this, because (a) their average donor was giving mostly to global health and development, while I was thinking that AI safety would be more effective by a factor much larger than their multiplier, and (b) the multiplier effect probably shifts this balance even further against my preferences.
There is of course an argument that it is only a question of time until newly acquired donors board the train to "crazy town" and give to more speculative causes with higher EV. But I was working under the assumption that the multiplier effect probably mostly reaches a demographic that likely sticks to their existing world views.
Larks @ 2024-11-12T02:47 (+8) in response to Introducing four brand-new charities! Here's the fall cohort of the Charity Entrepreneurship Incubation Program.
Thanks for your response.
I reviewed the source document you linked previously, but I didn't really find much evidence for the claim (that 'the "iron-fisted war on crime" is failing') in it, and reviewed it again just now. Is there a particular section you mean to point towards? I realize the source asserts this claim, but it doesn't seem to actually argue for it.
I'm also curious as to why you are using such old data? Government statistics are often slow, but your charts are literally almost a decade old. For example, you claim, based on the homicide data up to 2015, that
But if we consult OWID, we see that there are six more years of data you excluded from your chart, and it shows the opposite pattern: violence has been falling.
If your argument was valid - that rising violence proves past approaches were bad - then this more recent data would suggest we should draw the opposite conclusion, and update in favour of existing approaches. (I don't think we should infer this, because I think the argument is invalid anyway).
I think omitting this later data makes a pretty big difference, because you made a claim in the present tense - that the iron fist approach is failing - which suggests you should be basing this on evidence about current iron fist approaches. The El Salvador crackdown is the most famous and most iron fist approach around right now (most of these countries don't even have capital punishment!), so I don't think you can ignore it.
You also claim that prison spending is unsustainable, based on a forecast of $16bn-24bn (in 2024 dollars) spent on prisons:
But Latin American + Caribbean GDP for 2014 was 5.4 trillion, so even at the upper end this is only 0.4%. You're right that government spending can't grow as a share of GDP forever, but I don't see much reason to think this is the limit.
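For reference, here is a minimal sketch of the share-of-GDP arithmetic, using only the figures quoted above (the $16bn-24bn forecast and the ~$5.4 trillion regional GDP are the comment's numbers, not independently verified):

```python
# Quick check of the share-of-GDP claim, with the figures as quoted in the comment above.
gdp = 5.4e12                  # Latin America + Caribbean GDP, USD (as quoted)
for spend in (16e9, 24e9):    # lower and upper ends of the prison-spending forecast
    print(f"${spend / 1e9:.0f}bn is {spend / gdp:.2%} of GDP")
# -> $16bn is 0.30% of GDP
# -> $24bn is 0.44% of GDP
```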
LauraSofia @ 2024-11-14T01:11 (+1)
Hi Larks,
Thank you for such an engaged response.
You're absolutely right that our original statement, "the iron-fisted war on crime is failing," was broad and, admittedly, more geared toward emphasizing the challenges than making a definitive, across-the-board claim. We recognize that this phrase, chosen to convey the intensity of the issue, may have come across as too sweeping, especially given that we are not experts on every country's policies, including El Salvador's current crackdown. Instead, our intent was to highlight the broader limitations of heavy punitive measures in sustainably reducing crime across Latin America, not to imply that every such approach in every context has failed or will fail.
Our assertion rests on several general concerns about incarceration's long-term impact:
Our goal with ACTRA is to explore this complementary, rehabilitative approach, rather than to assert that punitive measures do not have any effect at all. We'll work to convey these subtleties more clearly in future communications. Thanks again for raising these points.
Ian Turner @ 2024-11-10T03:19 (+17) in response to Why you should allocate more of your donation budget to effective giving organisations
I'm not necessarily disputing the idea that donating to these sorts of fundraising organizations is a good use of money; but we also need to be careful about double-counting. It's tempting to try to take credit for one's own meta donations while object-level donors are also taking full credit for the programs they fund.
My practice, perhaps adjacent but not identical to the one proposed here, is to give 15% of a donation to the charity evaluator or facilitator that introduced me to the main charity or program. In recent years that's been GiveWell, and the fact that they have an excess funds regranting policy makes this an even easier decision.
Paul_Lang @ 2024-11-14T00:59 (+6)
Yeah, the double-counting question can be a problem. It is inherent to counterfactual impact. Imagine a production chain X -> Y -> Product. Then counterfactually, X can call dibs on 100% of the product; as can Y. So together, they have 200%, which does not make sense.
However, there are alternative impact metrics. For example, Shapley values have some nice properties. In particular, they guarantee that they sum up to one. Intuitively, they calculate the mean counterfactual impact for each player over all possible configurations of players. This can be useful to assess important predictors in statistical models. But it is also the reason why I don't find them particularly useful for decision making. After all, you are not interested in your impact in hypothetical worlds, but just in your impact in the current constellation of the world, i.e. your counterfactual impact.
So in summary, I'd say use counterfactuals for decision making and Shapley values for determining bragging rights ;)
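To make the contrast concrete, here is a minimal sketch of the X -> Y -> Product example above (the two-player characteristic function is a hypothetical illustration, not from the original comment): counterfactual impacts sum to 200%, while Shapley values sum to 100%.

```python
from itertools import permutations
from math import factorial

players = ["X", "Y"]

def value(coalition):
    # Hypothetical characteristic function: the product (worth 1) only exists
    # if both X and Y are in the coalition.
    return 1.0 if {"X", "Y"} <= set(coalition) else 0.0

# Counterfactual impact: what is lost if one player is removed from the full group.
counterfactual = {
    p: value(players) - value([q for q in players if q != p]) for p in players
}
print(counterfactual)  # {'X': 1.0, 'Y': 1.0} -- "200%" of the product in total

# Shapley value: average marginal contribution over all orders of joining.
shapley = {p: 0.0 for p in players}
n_orders = factorial(len(players))
for order in permutations(players):
    coalition = []
    for p in order:
        before = value(coalition)
        coalition.append(p)
        shapley[p] += (value(coalition) - before) / n_orders
print(shapley)  # {'X': 0.5, 'Y': 0.5} -- sums to 100%, as the comment notes
```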
Comments on 2024-11-13
Rasool @ 2024-11-13T23:51 (+1) in response to First employee at a social impact start-up
I downvoted. This post would be better if it gave a clearer explanation of what the organisation does, its theory of change, impact, and cost-effectiveness, with only a brief description of the job opening
Plus it seems like there are a bunch of employees on the website already
Marcel D @ 2024-11-13T23:43 (+4) in response to Bad omens for US farmed animal policy work?
Has anyone thought about trying to convince anti-regulatory figures (e.g., Marc Andreessen) in the new admin's orbit to speak out against the regulatory capture of banning cultivated meat? Has anyone tried painting cultivated meat as "Little Tech"?
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that but I don't know what's on the other side. There are so many things that I don't want to happen, that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Gemma @ 2024-11-13T23:42 (+4)
Sorry to hear that you're having a rough time!
When I'm feeling like this, I find that the only thing that helps is actually finishing a project end-to-end so I feel momentum.
Something I intrinsically think is valuable but wasn't going to get done otherwise. (Like improving wikis or cleaning up a mess in a park).
Going as small as possible while still being satisfying helps remind me that there are things within my control and people around me that I can help.
I also liked this post from FarmKind
https://www.linkedin.com/posts/aidan-alexander_đđ˛-đđđŹđđđŤđŠđĽđđ§-đđ¨-đđ§đ-activity-7262449165924712451-lb7T?utm_source=share&utm_medium=member_android
AnonymousEAForumAccount @ 2024-11-13T23:37 (+3) in response to What should EAIF Fund?
There are currently key aspects of EA infrastructure that aren't being run well, and I'd love to see EAIF fund improvements. For example, it could fund things like the operation of effectivealtruism.org or the EA Newsletter. There are several important problems with the way these projects are currently being managed by CEA.
I think all these problems could be improved if EAIF funded these projects, either by providing earmarked funding (and accountability) to CEA or by finding applicants to take these projects over.
To be clear, these aren't the only "infrastructure" projects that I'd like to see EAIF fund. Other examples include the EA Survey (which IMO is already being done well but would likely appreciate EAIF funding) and conducting an ongoing analysis of community growth at various stages of the growth funnel (e.g. by updating and/or expanding this work).
MichaelDickens @ 2024-11-13T23:00 (+2) in response to Pareto-Distributed Opportunities Imply Isoelastic Utility
This principle has seemingly strange implications:
EffectiveAdvocate @ 2024-11-13T07:14 (+6) in response to EffectiveAdvocate's Quick takes
Does anyone have thoughts on whether it's still worthwhile to attend EAGxVirtual in this case?
I have been considering applying for EAGxVirtual, and I wanted to quickly share two reasons why I haven't:
MichaelStJules @ 2024-11-13T22:54 (+6)
EAGxVirtual is cheap to attend. I don't really see much downside to only attending one day. And you can still make connections and meet people after the conference is over.
taoburga @ 2024-11-13T22:40 (+3) in response to Scouts need soldiers for their work to be worth anything
This is a valuable post, but I don't think it engages with a lot of the concern about PauseAI advocacy. I have two main reasons why I broadly disagree:
1. AI safety is an area with a lot of uncertainty. Importantly, this uncertainty isn't merely about the nature of the risks but about the impact of potential interventions.
Of all interventions, pausing AI development is, some think, a particularly risky one. There are dangers like:
People at PauseAI are probably less concerned about the above (or more concerned about model autonomy, catastrophic risks, and short timelines).
Although you may have felt that you did your "scouting" work and arrived at a position worth defending as a warrior, others' comparably thorough scouting work has led them to a different position. Their opposition to your warrior-like advocacy, then, may not come (as your post suggests) from a purist notion that we should preserve elite epistemics at the cost of impact, but from a fundamental disagreement about the desirability of the consequences of a pause (or other policies), or of advocacy for a pause.
If our shared goal is the clichéd securing-benefits-and-minimizing-risks, or even just minimizing risks, one should be open to thoughtful colleagues' input that one's actions may be counterproductive to that end-goal.
2. Fighting does not necessarily get one closer to winning.
Although the analogy of war is compelling and lends itself well to your post's argument, in politics fighting often does not get one closer to winning. Putting up a bad fight may be worse than putting up no fight at all. If the goal is winning (instead of just putting up a fight), then taking criticism to your fighting style seriously should be paramount.
I still concede that a lot of people dismiss PauseAI merely because they see it as cringe. But I don't think this is the core of most thoughtful people's criticism.
To be very clear, I'm not saying that PauseAI people are wrong, or that a pause will always be undesirable, or that they are using the wrong methods. I am responding to
(1) the feeling that this post dismissed criticism of PauseAI without engaging with object-level arguments, and the feeling that this post wrongly ascribed outside criticism to epistemic purism and a reluctance to "do the dirty work," and
(2) the idea that the scout-work is "done" already and an AI pause is currently desirable. (I'm not sure I'm right here at all, but I have reasons [above] to think that PauseAI shouldn't be so sure either.)
Sorry for not editing this better; I wanted to write it quickly. I welcome people's responses, though I may not be able to reply to them!
EffectiveAdvocate @ 2024-11-13T07:08 (+3) in response to The Structural Transformation Case For Peacekeeping
Hi Lauren!
Thank you for another excellent post! I'm becoming a big fan of the Substack and have been recommending it.
Quick question you may have come across in the literature, but I didn't see it in your article: Not all peacekeeping missions are UN missions; there are also missions from ECOWAS, the AU, EU, and NATO.
Is the data you presented exclusively true for UN missions, or does it apply to other peacekeeping operations as well?
I'd be curious to know, since those institutions seem more flexible and less entangled in geopolitical conflicts than the UN. However, I can imagine they may not be seen as being as neutral as the UN and therefore may be less effective.
Lauren Gilbert @ 2024-11-13T22:26 (+1)
I haven't seen a lot of evidence on other kinds of peacekeepers, so I don't know that I can say with confidence how effective they are! I would guess it depends on how much they are seen as a neutral third party.
EffectiveAdvocate @ 2024-11-13T07:01 (+1) in response to The Structural Transformation Case For Peacekeeping
Could you say a bit more about your uncertainty regarding this?
After reading this, it sounds to me like shifting some government spending to peacekeeping would be money much better spent than on other themes.
Or do you mean it more from an outsider/activist perspective, that the work of running an organization focused on convincing policymakers to do this would be very costly and might make it much less effective than other interventions?
Lauren Gilbert @ 2024-11-13T22:25 (+1)
More the latter - I think it's hard to influence the UN, especially if you need security council sign off. Really, you have to influence every country on the security council to agree to more peacekeeping, and also come up with more funding somewhere, and UN bureaucracy is famously difficult and impenetrable.
Would I love to redesign UN peacekeeping to focus more on rule of law and less on soldiers? Absolutely. Do I think there's much possibility of doing that? Not really, no.
NickLaing @ 2024-11-13T13:58 (+2) in response to "Direct cash looks 3-4x more cost-effective in a new GiveWell assessment"
Oh yeah that's super interesting that the mortality effect doesn't change the cost-effectiveness estimate that much. I wonder why that is exactly? Might look into it later!
Karthik Tadepalli @ 2024-11-13T22:13 (+4)
Cash transfers are not targeted (i.e. lots of households receive transfers that don't have young children) and are very expensive relative to other ways to avert child deaths ($1000 vs a few dollars for a bednet). The latter varies over more orders of magnitude than child mortality effects, so it dominates the calculation.
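A minimal sketch of why the cost term dominates the ratio; all numbers below are hypothetical placeholders chosen to illustrate the orders-of-magnitude point, not GiveWell's actual estimates:

```python
def cost_per_death_averted(cost_per_household, deaths_averted_per_household):
    return cost_per_household / deaths_averted_per_household

# Hypothetical figures: a ~200x gap in cost vs. a ~2x gap in mortality effect.
bednet = cost_per_death_averted(5.0, 0.001)    # $5 per household, 0.001 deaths averted
cash = cost_per_death_averted(1000.0, 0.002)   # $1,000 per household, double the effect
print(bednet, cash)  # 5000.0 vs 500000.0: the cost gap swamps the effect gap
```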
Julia_Wise @ 2024-11-13T15:09 (+53) in response to Julia_Wise's Quick takes
There's an asymmetry between people/orgs that are more willing to publicly write impressions and things they've heard, and people/orgs that don't do much of that. You could call the continuum "transparent and communicative, vs locked down and secretive" or "recklessly repeating rumors and speculation, vs professional" depending on your views!
When I see public comments about the inner workings of an organization by people who don't work there, I often also hear other people who know more about the org privately say "That's not true." But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org's communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn't prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That's frustrating, but it's a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they'd all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what's a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn't work at that org, remember the asymmetry. These situations will look identical to most readers:
Will Aldred @ 2024-11-13T22:04 (+18)
Epistemic status: strong opinions, lightly held
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds... not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn't be undone. But I don't think that's true of the case you refer to?)
I don't think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org's purpose is to make money for its shareholders: the org has no "teammates" outside of itself; no-one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no-one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation for cooperation; there is a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X's strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost and benefit to org X of whether or not to communicate, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X's thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
I would personally feel a lot better about a community where employees aren't policed by their org on what they can and cannot say. (This point has been debated before; see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there was no requirement on someone to get their EAF/LW comment checked by their org's communications staff, then that would significantly lower the time/effort barrier to publishing such comments, and then the whole argument around such comments being too time-consuming to publish becomes much weaker.
All this to say: I think you're directionally correct with your closing bullet points. I think it's good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community)[2]. I think that with better norms, we wouldn't have to put as much weight on bullet points 2 and 3, and we'd all be stronger for it.
Or, rather, what staff at org X are thinking. (I don't think an org itself can meaningfully have beliefs: people have beliefs.)
Note: Although I mentioned Rethink Priorities earlier, I'm not thinking about Rethink Priorities here.
Bob Fischer @ 2024-11-13T21:57 (+4) in response to How would your project use extra funding? (Marginal Funding Week 2024)
Answering on behalf of Arthropoda Foundation. We've summarized our funding priorities here. Everything we raise will go toward funding insect welfare science (as we have no staff or overhead), with a particular focus on humane slaughter, nutrition and living conditions, and implementable welfare assessment tools.
Support Insect Welfare
Vasco Grilo @ 2024-11-12T16:17 (+2) in response to Corporate campaigns affect 9 to 120 years of chicken life per dollar spent
Hi Saulius,
What are your current best guesses for the expected chicken-years improved per $ for broiler welfare and cage-free campaigns funded in 2024? Open Philanthropy thinks "the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius' analysis [this post]", which suggests broiler welfare and cage-free campaigns improve 3.00 (= 15*1/5) and 10.8 chicken-year/$ (= 54*1/5).
saulius @ 2024-11-13T21:15 (+4)
Hey Vasco. I don't know. I don't have a bird's-eye view of the movement right now the way Open Philanthropy does. It depends on the region and the campaign a lot. I think I underestimated mean years of impact in this post, which would balance things out a bit. I also never checked whether my guesses about implementation rates in this post are correct.
I imagine that a significant portion of work being done now is on ensuring that commitments are implemented. And any estimates of the cost-effectiveness of implementation work are going to be a lot more subjective. Like we could show people graphs like this
and ask if they look accurate (this graph is just for illustration purposes). But the people we'd be asking would probably mostly be the people working on these campaigns, which introduces bias.
It's not the first time you are asking about this. Perhaps you would be interested in creating a new cost-effectiveness estimate with my help? I've done multiple related projects and I have a bunch of theoretical thoughts on how to do a new estimate, but I don't want to do it by myself. Like it would involve asking many animal advocates for opinions which causes me a lot of social anxiety, even though everyone I talked to about these sorts of things seemed lovely and friendly. It's the sort of thing that I'd only consider doing if EA Animal Welfare Fund or Open Philanthropy funded it, because they would be the primary users of such research, and if they wouldn't want to pay for it, then it's probably not worth doing. But uh, even if they did, I'm still unsure if that would be the most action-guiding project. But just wanted to throw this idea out there in case you or someone else is interested.
Bob Fischer @ 2024-11-13T21:12 (+4) in response to Has your organisation lost funding due to the Good Ventures funding shift? Have you managed to replace it?
OP funded several scientists working on insect sentience and welfare. Arthropoda Foundation was formed to centralize and assist in the funding situation for those scientists. However, we've not yet replaced all the funding from GVF. For more on our funding priorities, see our post for Marginal Funding Week.
JLRiedi @ 2024-11-13T20:43 (+3) in response to How would your project use extra funding? (Marginal Funding Week 2024)
Answering on behalf of Faunalytics: we posted Faunalytics' Funding Gap of $385,000 to address how we would use donations from Marginal Funding Week. Thank you for organizing this, and best wishes to everyone involved in their fundraising efforts.
det @ 2024-11-13T12:48 (+1) in response to Cutting AI Safety down to size
Again, just giving my impressions from interacting with AI safety people: it doesn't seem to me like I get this impression by drawing a larger circle -- I don't recall hearing the types of arguments you allude to even from people I consider "core" to AI safety. I think it would help me understand if you were able to provide some examples? (Although like I said, I found examples either way hard to search for, so I understand if you don't have any available.)
I still disagree about the Dial post: at the end Zvi says
So my read is that he wants to explain and understand the position as well as possible, so that he can cooperate as effectively as possible with people who take the Dial position. He also agrees on lots of object-level points with the people he's arguing against. But ultimately actually using the Dial as an argument is "obvious nonsense," for the same reason the Technology Bucket Error is an error.
Holly Elmore @ 2024-11-13T20:40 (+3)
I was going on my memory of that post and I don't have the spoons to go through it again, so I'll take your word for it.
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. As if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, increasing scale of factory farming, lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that but I don't know what's on the other side. There are so many things that I don't want to happen, that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of a realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Ulrik Horn @ 2024-11-13T20:30 (+5)
Maybe this is a cop-out but I am thinking more and more of a pluralistic and mutually respectful future. Some people might take off on a spaceship to settle a nearby solar system. Some others might live lower-tech in eco villages. Animals will be free to pursue their goals. And each of these people will pursue their version of a worthwhile future with minimal reduction in the potential of others to fulfill theirs. I think anything else will just lead to oppression of everyone who is not on board with some specific wild project - I think most people's dreams of a future are pretty wild and not something I would want for myself!
Lucas S @ 2024-11-13T20:20 (+1) in response to What do the US elections mean for farmed animals?
I found your takeaways on abolitionist ballot initiatives quite disappointing. A few points:
1. You say the results largely bear out Rethink's study with 8-20% support from survey respondents, in contrast to the previous study that said banning slaughterhouses had 39-43% support. But the slaughterhouse ban in Denver got 36% yes votes, which is just 3 points below the original more promising survey, and at least 16 points above the followup survey results. That means around 6 out of 7 of the 100,000 people who voted for the most radical proposal (banning slaughter) were meat eaters. This is in spite of the fact that it was the first campaign of its kind, the first pilot campaign from the sponsoring organization, and, as you mentioned, getting outspent 6-to-1. How is this not a vindication of the idea that people are open to radical change if it's a society-wide shift rather than a matter of isolated lifestyle choice? The campaign shifted the Overton window, helped establish animal rights as a civic/political issue, had the entire population of a major city grapple with the reality of animal slaughter, forced the animal ag industry to spend $2M opposing it, and got the support of 7x the number of vegetarians in the city. For a first attempt at a truly transformational proposal, these results mostly move my priors in favor of such campaigns.
2. While crediting the massive funding disparity, in part, for the loss, you are personally sitting on the biggest pile of animal-advocacy cash on the planet (or close to it). Is it not a self-fulfilling prophecy to decide not to fund a campaign because your priors suggest it will fail, then blame the loss on their lack of funding? I know it's more complex and not solely at your discretion, but I think Open Phil's agency in this situation deserves to be acknowledged.
3. You failed to mention Berkeley's successful ballot initiative to ban factory farms. None currently exist in the city, so it's largely a symbolic victory that would only stop future factory farms from being built. But it passed 60/40, which is a resounding win for a pretty radical proposal. The same general ask was behind Sonoma's measure that failed by an even larger margin, so it's certainly not all good news. But it seems clear that our conclusions need to be more nuanced than just "people aren't ready for abolition." Obviously the specifics of each proposal, the size and nature of the jurisdiction, and strategic successes and failures of each campaign have a lot to do with whether a measure flies through with 60% of the vote, or crashes and burns with only 15%.
4. History is full of radical shifts that took ages to enact. Women's suffrage in Oregon took six tries on the ballot over nearly 30 years. Slavery in the US took 12 generations and a civil war to abolish. Why do we look back at those who were fighting for full equality and justice with admiration, but for animals, in the present day, we insist it is only counter-productive to ask for anything other than modest reform? For every argument that radical asks alienate people and make reforms more difficult, I can think of arguments that propping up animal ag with "certified humane" labels only reinforces the worldview that sees nonhumans as fundamentally commodities to be profited from and used for our enjoyment. I celebrate any kind of meaningful reform or harm reduction. But where is the theory of change that starts with corporate pressure campaigns and the promotion of "high welfare" animal products, and ends with the world that animals actually deserve? For a movement that focuses so much on longtermism, I see a huge blind spot for the long term future of our nonhuman kin. A future without animal exploitation is only possible if we're willing to advocate for it before it's popular.
SummaryBot @ 2024-11-13T20:16 (+1) in response to A selection of lessons from Sebastian Lodemann
Executive summary: Sebastian Lodemann was an exceptional colleague and friend whose profound impact on the EA community was characterized by humility, generosity, empowerment, and a deeply compassionate approach to personal and professional challenges.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:15 (+1) in response to Cruelty --> Liability: Legal Impact for Chickens's room for funding & marginal impact
Executive summary: Legal Impact for Chickens (LIC) aims to reduce animal suffering by using strategic civil litigation to make factory farm cruelty legally and financially risky for agricultural companies.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:14 (+1) in response to 2024 Animal Advocacy Strategy Forum: Event summary and survey results
Executive summary: The 2024 Animal Advocacy Strategy Forum revealed key challenges and strategic priorities for the animal advocacy movement, focusing on coordination, funding, regional resource allocation, and talent development.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:13 (+1) in response to Don't Let Other Global Catastrophic Risks Fall Behind: Support ORCG in 2024
Executive summary: ORCG seeks funding to address critical global catastrophic risks beyond AI, focusing on pandemic preparedness, food security, and risk management to protect humanity's future.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:12 (+1) in response to When is Philosophy Worth Funding?
Executive summary: Effective Altruism (EA) funding for philosophy research should carefully consider supporting both research institutes and individual academics, with a potential undervaluation of generalist, foundational ethical theory work.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:11 (+1) in response to ALLFED needs your support for global catastrophe preparedness
Executive summary: ALLFED seeks funding to advance critical research and projects aimed at improving global resilience to potential catastrophic food system failures caused by scenarios like nuclear winter, extreme pandemics, or infrastructure collapse.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:10 (+1) in response to What do the US elections mean for farmed animals?
Executive summary: Despite changing political leadership, the US electoral landscape presents both challenges and modest opportunities for farm animal welfare, with the most significant impacts likely to be determined by sustained advocacy efforts beyond electoral politics.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-11-13T20:09 (+1) in response to Expanding Coalitions to Amplify Impact: The Humane League's Room for Funding
Executive summary: The Humane League seeks $10.5 million in additional funding to expand its global efforts to eliminate battery cages for chickens and strengthen animal advocacy through the Open Wing Alliance and Animal Policy Alliance.
Key points:
SummaryBot @ 2024-11-13T20:08 (+1) in response to How the CBT Lab Amplifies Effective Altruists' Impact
Executive summary: The CBT Lab Peer Support program helps ambitious altruists improve mental health and productivity, potentially amplifying their positive impact on the world through targeted, evidence-based mental health interventions.
Key points:
emre kaplan🔸 @ 2024-11-13T15:05 (+9) in response to EA Animal Welfare Fund: 2024 Review, Changes, and Plans
What does EA AWF think about publishing annual impact reports on the outcomes of its previous grants? I understand how this might be much more difficult than publishing an impact report for a single organisation. But as it stands, donating to EA AWF requires a lot of trust in the fund managers and the EA movement, as there is little data available on the impact of previous grants. I care a lot about the growth of this fund, and I'd have a much easier time recommending it to potential donors if they could learn more about its past impact.
anon- @ 2024-11-13T19:41 (+1)
Totally agree with this; maybe it's included in the additional transparency mentioned.
Andy Morgan 🔸 @ 2024-11-13T19:14 (+8) in response to Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims
This is a great post and I've just signed up to your newsletter. Thanks, Garrison.
Garrison @ 2024-11-13T19:33 (+3)
Thanks so much Andy! Hope you enjoy :)
anon- @ 2024-11-13T19:29 (+1) in response to Hive: 2024 Achievements, Current Funding Situation, and Plans
Additionally, I have noticed that Hive has sponsored a lot of events. How much of your budget does this take up? Do you imagine continuing to sponsor as many events in the future?
anon- @ 2024-11-13T19:23 (+5) in response to Hive: 2024 Achievements, Current Funding Situation, and Plans
Question on this: I think the post wasn't very clear on this. What is the actual total budget for Hive for 2025? How much of it is already covered? Do you have any outstanding grants on the table?
How do you currently assess your cost-effectiveness, in terms of how much money you are adding to the movement versus how much you are spending?
Andy Morgan 🔸 @ 2024-11-13T19:14 (+8) in response to Is Deep Learning Actually Hitting a Wall? Evaluating Ilya Sutskever's Recent Claims
This is a great post and I've just signed up to your newsletter. Thanks, Garrison.
NickLaing @ 2024-11-13T04:39 (+3) in response to "Direct cash looks 3-4x more cost-effective in a new GiveWell assessment"
I love the way you put this
"have you not considered the possibility that people have noticed the outsiders with clipboards asking personal questions seem to be associated in some way with their neighbours getting unexpected windfalls, and started to speculate about what sort of answers the NGOs are looking for..."
AdamSalisbury @ 2024-11-13T19:03 (+13)
Hi Nick & David,
I wrote this piece and wanted to offer my $0.02 on Hawthorne effects driving these consumption spillover results, as it's not covered in the report. I don't think this is likely to be a key driver of the large spillovers reported, for two reasons:
Hawthorne effects for recipient consumption gains seem more intuitively concerning to me, and I've been wondering whether this could be part of the story behind these large recipient consumption gains at 5-7 years we've been sent. We're not putting much weight on these results at the moment as they've not been externally scrutinized, but it's something I plan to think more about if/when we revisit these.
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. It's as if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, the lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Brad Westđ¸ @ 2024-11-13T18:34 (+12)
I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. Natalie Cargill, Longview Philanthropy's CEO, lays out a rather amazing set of outcomes that could be achieved in her TED Talk.
I think that a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. I think it is realistic because most people don't want to give something up to fund charities (as donation would require), but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.
saulius @ 2024-11-13T16:58 (+28) in response to saulius's Quick takes
What's a realistic, positive vision of the future worth fighting for?
I feel lost when it comes to how to do altruism lately. I keep starting and dropping various little projects. I think the problem is that I just don't have a grand vision of the future I am trying to contribute to. There are so many different problems and so much uncertainty about what the future will look like. Thinking about the world in terms of problems just leads to despair for me lately. It's as if humanity is continuously not living up to my expectations. Trump's victory, the war in Ukraine, the increasing scale of factory farming, the lack of hope on AI. Maybe insects suffer too, which would just create more problems. My expectations for humanity were too high and I am mourning that, but I don't know what's on the other side. There are so many things that I don't want to happen that I've lost sight of what I do want to happen. I don't want to be motivated solely by fear. I want some sort of realistic positive vision for the future that I could fight for. Can anyone recommend something on that? Preferably something that would take less than 30 minutes to watch or read. It can be about animal advocacy, AI, or global politics.
Imma @ 2024-11-13T18:15 (+7)
FWIW: definitely not a world vision, but Ozy's blog is the most heart-warming thing I've read after the recent US elections.
Ulrik Horn @ 2024-11-13T10:43 (+2) in response to AMA: PauseAI US needs money! Ask founder/Exec Dir Holly Elmore anything for 11/19
If this is true, or even just likely to be true, and someone has data on it, making that data public, even in anonymized form, would be extremely high impact. I do recognize that such moves could come at great personal cost, but in case it is true I just wanted to put it out there that such a disclosure could be a single action whose impact might far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information is just because nothing of this sort is actually going on, but it is worth being vigilant.
Habryka @ 2024-11-13T18:01 (+4)
It's literally at the top of his Wikipedia page: https://en.m.wikipedia.org/wiki/Jaan_Tallinn
Holly Elmore ⏸️ 🔸 @ 2024-11-09T08:43 (+17) in response to Scouts need soldiers for their work to be worth anything
Yes, this matches what potential attendees report to me. They are also afraid of being "cringe" and don't want to be associated with noob-friendly messaging, which I interpret as status-related.
This deeply saddens me because one of the things I most admired about early EA and found inspirational was the willingness to do unglamorous work. It's often neglected, so it can be very high leverage to do it!
MichaelDickens @ 2024-11-13T17:57 (+2)
I feel this way. I recently watched some footage of a PauseAI protest and it made me cringe, and I would hate participating in one. But also I think there are good rational arguments for doing protests, and I think AI pause protests are among the highest-EV interventions right now.
Ulrik Horn @ 2024-11-13T10:43 (+2) in response to AMA: PauseAI US needs money! Ask founder/Exec Dir Holly Elmore anything for 11/19
If this is true, or even just likely to be true, and someone has data on it, making that data public, even in anonymized form, would be extremely high impact. I do recognize that such moves could come at great personal cost, but in case it is true I just wanted to put it out there that such a disclosure could be a single action whose impact might far outstrip even the lifetime impact of almost any other person working to reduce x-risk from AI. Also, my impression is that any evidence of this going on is absent from public information. I really hope the absence of such information is just because nothing of this sort is actually going on, but it is worth being vigilant.
MichaelDickens @ 2024-11-13T17:46 (+6)
What do you mean by "if this is true"? What is "this"?
Good Impressions @ 2024-11-12T17:12 (+1) in response to Opportunities to improve EA communications
Thanks so much for these kind words! BlueDot has been very involved, which has been a big part of why their campaigns have been so successful.
Angelina Li @ 2024-11-13T17:33 (+2)
I'm curious if you have general advice (top 3 tips? common mistakes?) for EA orgs who are just starting to experiment with more marketing.
No pressure, of course!