Latest comments on the EA Forum

Comments on 2024-02-24

MarcusAbramovitch @ 2024-02-24T00:30 (+2) in response to New Open Philanthropy Grantmaking Program: Forecasting

Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment;


  1. I don't think it's necessary to talk in terms of an ITN framework, but something being neglected isn't nearly reason enough to fund it; neglectedness is perhaps the least important part of the framework. Getting six-year-olds into race cars, for example, seems like a neglected cause, but one that isn't worth pursuing.
  2. I think something not getting funding outside of EA is probably a medium-sized update toward it not being important enough to work on. Things start to get EA funding once enough of the community finds the arguments for working on a problem sufficiently convincing. But many, many problems have come across EA's eyes, and very few of them have stuck. For something to not get funding from others suggests that very few others found it to be important.
  3. Forecasting still seems to get a fair amount of dollars, probably about half as much as animal welfare. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit?usp=sharing

Your points on helping future people (and non-human animals) are well taken.

Austin @ 2024-02-24T02:07 (+2)
  1. Yeah, I agree neglectedness is less important, but it does capture something real; I think eg climate change is both important and tractable but not neglected. In my head, "importance" is about "how many resources would a perfectly rational world direct at this?" while "neglectedness" is "how far are we from that world?".
  2. Also agreed that the lack of external funding is an update that forecasting (as currently conceived) has more hype than real utility. I tend to think this is because of the narrowness of how forecasting is currently framed, though (see my comments on tractability above).
  3. That's a great resource I wasn't aware of, thanks (did you make it?). I do think that OpenPhil has spent a commendable amount of money on forecasting to date (though nowhere near half of animal welfare - more like a tenth). But I think this has been done very unsystematically, with no dedicated grantmaker. My understanding is that it was, like, a side project of Luke Muehlhauser's for a long time; when I reached out in Jan '23 he said they were not making new forecasting grants until they filled this role. Even if it took a year, I'm glad this program is now launched!


Arepo @ 2024-02-23T11:47 (+2) in response to Let's advertise EA infrastructure projects, Feb 2024

Thanks Saul. I've added Manifund. I'm unsure whether to add the map, since I want to keep this list to 'services' (or products) that are actively being worked on, rather than lists that might go stale. How much ongoing work are you putting into keeping the map up to date?

Saul Munn @ 2024-02-24T02:06 (+1)

ahh, sorry — i meant that there are a bunch of things on the map that you might consider adding, particularly in the "forecasting tools" section (e.g. manifold, metaculus, squiggle, guesstimate, metaforecast, etc). i didn't necessarily mean to imply that you should also add the map, though i could be persuaded either way.

also re: manifund, this is sorta hard to convey concisely, but we do both of:

  1. fund impactful projects (e.g. you can submit an application and get funded)
  2. provide infrastructure to fund projects (e.g. we're hosting the ACX Grants on manifund)

not sure exactly how to describe this, and i think you did a pretty good job in your description!

(edit: added the last sentence of the first paragraph)

emre kaplan @ 2024-02-20T22:33 (+2) in response to James Özden's Quick takes

Hi James, did you make this?

James Özden @ 2024-02-24T01:04 (+2)

No I didn't, sadly - I started using Readwise instead to capture learnings from books & other media, as it's got better UX than Anki in my opinion. I've still yet to make a good list of concepts/facts though, so ideas welcome!

Austin @ 2024-02-23T17:20 (+8) in response to New Open Philanthropy Grantmaking Program: Forecasting

Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment; outside of random one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was $30k in "minigrants" funded by Scott Alexander out of pocket.

But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today's world without understanding the future would be like trying to help people in Africa without overseas reporting to guide you - you'll obviously do worse if you can't see the outcomes of your actions.

You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn't been great; I agree that the most common approaches of "tournament-setting forecasting" or "superforecaster consulting" haven't produced much that is decision-relevant. But there are many other possible approaches (eg FutureSearch.ai is doing interesting things using an LLM to forecast), and I'm again excited to see what Ben and Javier do here.

MarcusAbramovitch @ 2024-02-24T00:30 (+2)

Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment;


  1. I don't think it's necessary to talk in terms of an ITN framework, but something being neglected isn't nearly reason enough to fund it; neglectedness is perhaps the least important part of the framework. Getting six-year-olds into race cars, for example, seems like a neglected cause, but one that isn't worth pursuing.
  2. I think something not getting funding outside of EA is probably a medium-sized update toward it not being important enough to work on. Things start to get EA funding once enough of the community finds the arguments for working on a problem sufficiently convincing. But many, many problems have come across EA's eyes, and very few of them have stuck. For something to not get funding from others suggests that very few others found it to be important.
  3. Forecasting still seems to get a fair amount of dollars, probably about half as much as animal welfare. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit?usp=sharing

Your points on helping future people (and non-human animals) are well taken.



Comments on 2024-02-23

SiebeRozendal @ 2024-02-23T21:21 (+3) in response to Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism.

This suggests people's expected x-risk levels are really small ('extreme levels of caution'), which isn't what people believe.

I think "if you believe the probability that a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'. It's not at all a fringe moral position.

Matthew_Barnett @ 2024-02-23T23:31 (+2)

I think "if you believe the probability that a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'.

I'm not sure we disagree. A lot seems to depend on what is meant by "very very cautious". If it means shutting down AI as a field, I'm pretty skeptical. If it means regulating AI, then I agree - but I think Sam Altman advocates regulation too.

I agree the general population would probably endorse the statement "if a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" if given to them in a survey of some kind, but I think this statement is vague, and somewhat misleading as a frame for how people would think about AI if they were given more facts about the situation.

Firstly, we're not merely talking about any technology here; we're talking about a technology that has the potential both to disempower humans and to make their lives dramatically better. Almost every technology has risks as well as benefits. Probably the most common method people use when deciding whether to adopt a technology themselves is to check whether the risks outweigh the benefits. Just looking at the risks alone gives a misleading picture.

The relevant statistic is the risk-to-benefit ratio, and here it's really not obvious that most people would endorse shutting down AI if they were aware of all the facts. Yes, the risks are high, but so are the benefits.

If elites were made aware of both the risks and the benefits of AI development, most of them seem likely to want to proceed cautiously, rather than not proceed at all or pause AI for many years, as many EAs have suggested. To test this claim empirically, we can just look at what governments are already doing with regard to AI risk policy, after having been advised by experts; and as far as I can tell, all of the relevant governments are substantially interested in both innovation and safety regulation.

Secondly, there's a persistent and often large gap between what people say through their words (e.g. when answering surveys) and what they actually want as measured by their behavior. For example, plenty of polling has indicated that a large fraction of people are very cautious regarding GMOs, but in practice most people are willing to eat GM foods happily without much concern. People are often largely thoughtless when answering many types of abstract questions posed to them, especially about topics they have little knowledge of; this makes sense, because their responses typically have almost no impact on anything that might immediately or directly affect them. Bryan Caplan has discussed these issues with surveys and voting systems before.

Holly Morgan @ 2024-02-15T19:32 (+4) in response to On being an EA for decades

A lot of pop, a lot of musicals... I'd like to say that my music taste has become a lot more sophisticated over the past 12 years, but that would be false.

And shout-out to this old favourite from @Raemon ✨.

lauragreen @ 2024-02-23T22:37 (+3)

♥️ that's great taste!

Matthew_Barnett @ 2024-02-22T19:39 (+1) in response to Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

There's an IMO fairly simple and compelling explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.

It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.

David Mathers @ 2024-02-23T21:33 (+3)

I think that whilst utilitarian but not longtermist views might well justify full speed ahead, normal people are quite risk-averse, and are not likely to react well to someone saying "let's take a 7% chance of extinction if it means we reach immortality slightly quicker and it benefits current people, rather than being a bit slower so that some people die and miss out". That's just a guess though. (Maybe Altman's probability is actually way lower - mine would be - but I don't think a probability more than an order of magnitude lower than that fits with the sort of stuff about X-risk he's said in the past.)

Matthew_Barnett @ 2024-02-22T19:39 (+1) in response to Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

There's an IMO fairly simple and compelling explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.

It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.

SiebeRozendal @ 2024-02-23T21:21 (+3)

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism.

This suggests people's expected x-risk levels are really small ('extreme levels of caution'), which isn't what people believe.

I think "if you believe the probability that a technology will make humanity go extinct with a probability of 1% or more, be very very cautious" would be endorsed by a large majority of the general population & intellectual 'elite'. It's not at all a fringe moral position.

Yonatan Cale @ 2023-11-25T11:52 (+16) in response to Yonatan Cale's Quick takes

I have thoughts on how to deal with this. My prior is that this won't work if I communicate it through text (but I have no idea why). Still, it seems like the friendly thing would be to write it down.


My recommendation on how to read this:

  1. If this advice fits you, it should read as "ah, obviously, how didn't I think of that?". If it reads as "this is annoying, I guess I'll do it, okay...." - then something doesn't fit you well; I missed some preference of yours. Please don't make me a source of annoying social pressure
  2. Again, for some reason this works better when speaking than in writing. So, eh, ... idk.. imagine me speaking?? get a friend to read this to you?
    1. (whatever you chose, consider telling me how it went? this part is a mystery to me)


So,

TL;DR:

  1. The goal of interviews is not to pass them (that's the wrong goal, I claim). The goals I recommend are:
    1. Reducing uncertainty regarding what places will accept you. (so you should get many rejections, it's by-design, otherwise you're not searching well)
    2. Practicing interviews. Interviews are different from actual work, and there's a skill to build there. So after interviews, I'll review stuff I didn't know, and I'll ask for feedback about my blind spots. I have some embarrassing stories about blind spots I had in interviews and would never have noticed without asking for feedback. Like, eh, taking off my shoes and walking around the room, including near the interviewer 🫣 - these are actual blind spots I had which are absolutely unrelated to my profession of software development
  2. Something about the framing of "people who interview a lot beat others in getting better jobs" - and motivation to be one of those
  3. Get yourself ice cream or so after interviewing
    1. Important sub-point: Positive reinforcement should be for "doing good moves" (like scheduling an interview, or like reviewing what you could do better), and NOT for passing interviews (which would imply to your brain that not-passing is negative, and so if your brain has uncertainty about this - it will want to avoid interviewing)
  4. Asking a close friend / partner / roommate what they think could work for you. They might say something like "play beat saber, that always makes you feel good" which I couldn't guess
  5. Sometimes people spend a lot of time on things like writing cover letters (or other things that I think are a poor use of time and frustrating (and in my model of people: some part of them knows this isn't a good idea and it manifests as stress/avoidance, though I'm no therapist)). I'd just stop doing those things; few things are (imo) worth the tradeoff of having more stress from interviews. It's a tradeoff, not a game of "do interviews perfectly and sacrifice everything else"

Richard_Leyba_Tejada @ 2024-02-23T20:27 (+1)

"The goal of interviews is not to pass them (that's the wrong goal, I claim). The goals I recommend are:

  1. Reducing uncertainty regarding what places will accept you. (so you should get many rejections, it's by-design, otherwise you're not searching well)"

I get very anxious the closer I am to interview day. I am researching how to get really good, and I've started doing mock interviews to practice.


Shifting to reducing uncertainty/research vs passing seems helpful.

SanteriK @ 2024-02-21T08:00 (+31) in response to New Open Philanthropy Grantmaking Program: Forecasting

As the program is about forecasting, what is your stance on the broader field of foresight & futures studies? Why is forecasting more promising than some other approaches to foresight?

EffectiveAdvocate @ 2024-02-23T20:08 (+3)

I'm not OP, obviously, and I am only speaking from experience here, so I have no data to back this up, but:

My feeling is that foresight projects have a tendency to become political very quickly, and they are much more about stakeholder engagement than they are about finding the truth, whereas forecasting can remain relatively objective for longer.

That being said: I am very excited about combining these approaches.

AmAristizabal @ 2024-02-22T21:50 (+27) in response to Coworking space in Mexico City

Hey Sandra, thanks for your questions. Hopefully the following clarifications will help give useful context as to why we’re excited about this space. 

The scope of our program

  • The office space and our broader project is a university program focused exclusively on AI. It is not an EA space, and it’s not meant to do EA community building in Mexico. Many of our fellows and visitors are not part of the EA community. We would be happy to see other initiatives aimed at EA community building in Mexico and Mexico City. 
  • We would like to point out that the program is part of a Mexican university. Jaime and I (the two primary staff members) are from Colombia, and the vast majority of our colleagues at ITAM who have worked closely with us on various aspects of the fellowship are Mexicans. We're really grateful for their work and want to make sure their work is acknowledged.

Some benefits of this space

  • We have carefully considered the upsides and downsides of the current coworking space, and are now pretty confident about choosing it. This is both for logistical reasons and because we’ve had overwhelmingly positive feedback from fellows and visitors (several of them Latin Americans). 
  • We’ve found the space is worth the cost and in practice cheaper than many alternatives because it offers all the operational facilities that the fellowship needs. If we had picked a different coworking space, we would have had to compensate by hiring an additional staff member to figure out things like catering, hosting talks, furniture, etc. It is worth noting that the staff curates a weekly menu for us to accommodate vegans. From our experiences with other event spaces in CDMX and LMICs, this is quite hard to find. Given this is a university program, there are additional constraints and requirements for the space(s) we use. 

We have also considered locals, and people from LatAm and LMICs more generally

  • We have thought a lot about the effects of programs like these on locals, and much of our work is aimed at diversifying the pool of people working on important problems within AI. 
  • The current set up of the coworking space has meant we have been able to accept visitors from LMICs and subsidize spots for those who wouldn’t be able to attend otherwise. 
  • Condesa is a more gentrified and international area of Mexico City. In our experience, that has come with some benefits for a global program like ours. For example amenities as you mention, but also allowing fellows and visitors from other low and middle income countries and underrepresented backgrounds to move comfortably around the area (e.g. non-spanish speakers from other LMICs).
  • We were surprised to hear your concerns, as we haven't received any similar feedback so far (just for quick context to readers: the writer of these comments has never been to our office space). We aren't aware of any incidents of discrimination experienced during our fellowship or the co-working space more generally - we've found the staff (most are Mexican) of the broader co-working space (imagine a WeWork) to be very kind and welcoming. If there are specific incidents you're aware of, we'd encourage you to let us, or the Community Health team, know.

While we are part of a Mexican university, and are mindful and respectful of local norms, we are also proud of having kickstarted a programme with a truly global focus in which members from various cultural backgrounds feel welcome.

Sandra Malagon @ 2024-02-23T19:05 (–4)

Hi Angela, thank you for your response. I never questioned this space for the AI fellowship; I'm glad you chose the best space for your program.

But now you are talking about a coworking space in Mexico or Mexico City (obviously related to the EA/alignment fields - that's why you are posting it on this forum). It would be naive to think that it wouldn't become the main space that people and organizations related to the themes we work on would attend; that's what you are inviting them to in the post.

Moreover, you know that for people in Mexico, coworking is very important; you reviewed a project presented by locals about it, and you know what the local community wants and our concerns about that space (and last year you believed that was not a good idea). If it's a coworking space exclusively for the AI fellowship, you should clarify that, and also that at least part of the community in Mexico does not feel comfortable attending. And I'd invite you to reflect on the negative impact of this situation - again, not because of the space you chose for your program, but because you want this space to be "the space in Mexico" while ignoring several complaints that people have raised since last year.

Grayden @ 2024-02-21T06:53 (+76) in response to New Open Philanthropy Grantmaking Program: Forecasting

I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I've yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I'm worried we are not being objective enough and are trying to find the facts that support the conclusion rather than the other way round.

EffectiveAdvocate @ 2024-02-23T19:00 (+8)

I'm considering elaborating on this in a full post, but I will do so quickly here as well: It appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.

I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions policymakers want to see answered.

I (and I hope this aligns with OP's vision) would want to see a greater emphasis on advancing forecasting specifically tailored for decision-makers. This focus diverges significantly from the casual or hobbyist approach observed on these platforms. The questions you ask should probably not be public, and they are usually far more boring. In practice, it looks more like an advanced Delphi method than it looks like Manifold Markets. I'm somewhat surprised to see interpretations of this post suggesting a need for more funding in the type of forecasting that is more recreational, which, in my view, is and should not be a priority.

Edit: One obvious exception to the dichotomy I describe above is that the more fun forecasting platforms can be a good way of identifying good forecasters.

NickLaing @ 2024-02-21T04:21 (+2) in response to Can we help individual people cost-effectively? Our trial with three sick kids

Thanks Ian, that's an interesting reflection; to be honest, I hadn't really thought that way before. Can you share the kind of things you think might sometimes be cost-effective in everyday life, if you are comfortable with that? All good if not!

Ian Turner @ 2024-02-23T18:34 (+11)

So, to be clear, it's not like I have a back-of-the-envelope calculation or anything.

The way I see it, charity is hard mainly because it's hard to identify opportunities that scale, and even when we do, most of our efforts are wasted. With Deworm The World, for example, only about half of treated children have any worm infection at all. Targeting charitable interventions is usually not cost-effective because the best beneficiaries can be hard to find. This is even harder if we need the reasoning and evidence to be legible.

But, if we are able to identify targeted cases "by accident" (or, in the course of living life), then we get the benefits of targeting for free, without either the cost of finding beneficiaries or the cost of legible/rigorous impact evaluation.

In the rich world, I think this sort of impact usually comes from behaviors that are free or very low cost to the donor. An example is giving CPR in a public place — it could potentially save a life, for a pretty small opportunity cost, but it wouldn't be worth it to give up your career just to be around in case someone needs CPR. Or a more minor (but also maybe more common) example might be introducing two people who are well positioned to help one another, where the potential connection is discovered incidentally, or by accident.

Does that make sense?

NickLaing @ 2024-02-23T12:45 (+9) in response to From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program

Thanks David, appreciate the article - I think it's a good indication of how complex the question of immigration is, and I don't think it's a slam dunk in either direction.

My impression, though, is that the article is a pretty poorly researched and misleading piece - even though some of its arguments might still stand in many cases despite that.

First, it's weird that the article makes zero mention of the state of the Nigerian health system, nor of how this mass emigration might be affecting it. Is staffing getting better or worse? Are outcomes getting better or worse? How many nurses are actually needed in the system? Building your entire argument on "nurses trained" vs "nurses emigrating to England" seems quite short-sighted and reductionist.

Second (probably most important), they only take into account nurses leaving for England - a weird comparison decision. That 2,300 nurses left for England that year is fairly irrelevant; what matters is the total number. Nurses leave for other European countries and the Middle East too. The Nigerian government says 42,000 nurses left in the last 3 years - that's 14,000 a year, more than they are even training per year.

https://africa.cgtn.com/nigeria-says-42000-nurses-left-the-country-in-3-years/

So their basic argument that enough new nurses are being trained is bogus.

In addition, you must consider the growing population. The population of Nigeria grew by 5,000,000 people in that one year (a 2.5% increase). Nigeria has something like 180,000 nurses. This means that just to maintain their already poor nursing ratios, they would need to train and put into the workforce an extra 4,500 or so nurses each year (2.5% of 180,000), without even improving nurse/population ratios.
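
A minimal sketch of that back-of-the-envelope arithmetic (the population figure is my assumption; the growth and nurse figures are the rough ones quoted above):

```python
# Back-of-the-envelope check of the nurse numbers above.
# Figures are the rough ones quoted in the comment, not official statistics.
population = 200_000_000    # approximate population of Nigeria (assumption)
annual_growth = 5_000_000   # quoted population growth in one year
nurses = 180_000            # quoted size of the nursing workforce

growth_rate = annual_growth / population    # ~2.5% per year
extra_nurses_needed = nurses * growth_rate  # new nurses needed to hold the ratio steady

print(f"Growth rate: {growth_rate:.1%}")                            # 2.5%
print(f"Extra nurses needed per year: {extra_nurses_needed:,.0f}")  # ~4,500
```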

It's also likely that it's many of the best and brightest who are leaving Nigeria. They are more likely to pass English exams and be accepted (unless they cheat, as often happens), and to have the drive and gumption to try to move overseas. My guess would be that England is most likely taking the better nurses to work in a health system that is 10x better, while leaving the lower-quality nurses in Nigeria - the health system which really needs the best nurses to lead and drive it. The qualification itself is only a small part of the story; the difference in ability, skills and leadership potential between nurses is immense.

There are also other second-order effects. If you've ever been in a country where many people are trying to emigrate: because many people are leaving, it's hard to retain stability in your hospitals and health systems. People are distracted, staff turnover is high, and morale can be low. This can really hurt the productivity of those who remain.

I'm also more concerned about Doctors than nurses - but that's a whole nother story.

I probably wasted too much time hacking away at this poor article, but it annoyed me a little ;). I'm not anti-immigration at all, but I am when it comes to medical staff in this kind of scenario, and there are many, many factors to consider in the discussion.

Filip_Murar @ 2024-02-23T18:01 (+11)

Thanks for raising this point, Nick, and for the many good arguments you’re making!

Out of all the forms of labor emigration, I find physician and nurse migration to be the most concerning. I’d stress that the idea proposed in our report doesn’t focus on skilled workers (only as a potential later extension, needing careful consideration), so it largely avoids this concern. We focus on low- and mid-skilled workers, as those are poorer to begin with, much more numerous, and there’s an oversupply of them in many LMICs (as opposed to shortages).

I did spend a little bit of time looking into the literature on brain drain and didn’t arrive at a clear conclusion. There are many factors pointing in different directions, and whether the overall effect is net positive or net negative may vary between countries and professions.

Aside from the considerations that you and David mentioned, there are also remittances, the effects of return migration (the rates of which vary a lot) and the associated "brain gain", and the fact that emigrating physicians are more likely to come from well-staffed urban areas. E.g. this (very old) article by Clemens and McKenzie says that, in Kenya, some 66% of physicians live in Nairobi, where only 8% of the national population lives. They argue that low incentives to work in rural areas are a much bigger problem than the total supply of physicians (and how that supply is affected by emigration).

Concerning the CGD, I’m actually quite excited about their efforts to push for so-called global skills partnerships in the skilled space. Within these programs, countries like the UK would pay countries like Nigeria to train nurses and have agreed quotas on how many nurses can stay vs migrate. This seems like a more sophisticated solution to the issue than saying “nurse emigration is good.” Here is their proposal specifically for Nigeria.

In any case, this is not a topic that we at CE decided to focus on at this point. If we do look into skilled migration in the future, we will do a much more thorough dive (and will be keen to get your input!).

VictorW @ 2024-02-22T20:19 (+1) in response to Detachment vs attachment [AI risk and mental health]

An example of invested but not attached: I'm investing time/money/energy into taking classes about subject X. I chose subject X because it could help me generate more value Y that I care about. But I'm not attached to getting good at X, I'm invested in the process of learning it.

I feel more confused after reading your other points. What is your definition of rationality? Is this definition also what EA/LW people usually mean? (If so, who introduces this definition?)

When you say rationality is "what gets you good performance", that seems like it could lead to arbitrary circular reasoning about what is and isn't rational. If I exaggerate this concern and define rationality as "what gets you the best life possible", that's not a helpful definition, because it leads to the unfalsifiable claim that rationality is optimal while providing no practical insight.

Neil Warren @ 2024-02-23T17:51 (+1)

Okay forget what I said, I sure can tie myself up in knots. Here's another attempt:

If a person is faced with the decision to either save 100 out of 300 people for sure, or have a 60% chance of saving everyone, they are likely (in my experience asking friends) to answer something like "I don't gamble with human lives" or "I don't see the point of thought experiments like this". Eliezer Yudkowsky claims in his "something to protect" post that if those same people were faced with this problem and a loved one was among the 300, they would have more incentive to 'shut up and multiply'. People are more likely to choose what has more expected value if they are more entangled with the end result (and less likely to eg signal indignation at having to gamble with lives). 
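
To spell out the "shut up and multiply" step in that thought experiment, here is a minimal sketch of the expected-value arithmetic, using the numbers given above:

```python
# Expected number of lives saved under each option in the thought experiment.
group_size = 300
p_gamble = 0.6                     # chance the gamble saves everyone

ev_certain = 100                   # save 100 out of 300 for sure
ev_gamble = p_gamble * group_size  # 0.6 * 300 = 180 lives in expectation

# The gamble saves 80 more lives in expectation - the choice that
# "shut up and multiply" recommends.
print(ev_certain, ev_gamble)
```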

I see this in practice, and I'm sure you can relate: I've often been told by family members that putting numbers on altruism takes the whole spirit out of it, or that "malaria isn't the only important thing, coral is important too!", or that "money is complicated and you can't equate wasted money with wasted opportunities for altruism".

These ideas look perfectly reasonable to them, but I don't think they would hold up for a second if their child had cancer: "putting numbers on cancer treatment for your child takes the whole spirit out of saving them (like you could put a number on love)", or "your child surviving isn't the only important thing, coral is important too", or "money is complicated, and you can't equate wasting money with spending less on your child's treatment".

Those might be a bit personal. My point is that entangling the outcome with something you care about makes you more likely to try to make the right choice. Perhaps I shouldn't have used the word "rationality" at all. "Rationality" might be a valuable component in making the right choice, but for my purposes I only care about making the right choice, no matter how you get there.

The practical insight is that you should start by thinking about what you actually care about, and then backchain from there. If I start off deciding that I want to maximize my family's odds of survival, I think I am more likely to take AI risk seriously (in no small part, I think, because signalling sanity by scoffing at 'sci-fi scenarios' is no longer something that matters). 

I am designing a survey I will send tonight to some university students to test this claim. 

Nate Soares excellently describes this process. 

Probably Good @ 2024-02-23T17:41 (+5) in response to Join Probably Good for a live conversation and Q&A with Alec Stapp on careers in US policy

Yes, this will be recorded. If you'd like access to the recording after the session, reach out to Vaish at vaishnav@probablygood.org. 

Jack Malde @ 2024-02-23T17:47 (+2)

Thank you!

Jack Malde @ 2024-02-23T17:39 (+2) in response to Join Probably Good for a live conversation and Q&A with Alec Stapp on careers in US policy

Will this be recorded?

Probably Good @ 2024-02-23T17:41 (+5)

Yes, this will be recorded. If you'd like access to the recording after the session, reach out to Vaish at vaishnav@probablygood.org. 

Jack Malde @ 2024-02-23T17:39 (+2) in response to Join Probably Good for a live conversation and Q&A with Alec Stapp on careers in US policy

Will this be recorded?

Grayden @ 2024-02-23T08:39 (+11) in response to New Open Philanthropy Grantmaking Program: Forecasting

Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.

Austin @ 2024-02-23T17:20 (+8)

Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment; outside of random one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was $30k in "minigrants" funded by Scott Alexander out of pocket.

But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today's world without understanding the future would be like trying to help people in Africa without overseas reporting to guide you - you'll obviously do worse if you can't see the outcomes of your actions.

You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn't been great; I agree that the most common approaches of "tournament-setting forecasting" or "superforecaster consulting" haven't produced much that is decision-relevant. But there are many other possible approaches (eg FutureSearch.ai is doing interesting things using an LLM to forecast), and I'm again excited to see what Ben and Javier do here.

Arepo @ 2024-02-23T10:58 (+4) in response to Let's advertise EA infrastructure projects, Feb 2024

Thanks Oliver. I've added Forum Magnum and Lighthaven.

I'm reluctant to have links to other forums through some combination of

a) it being a potential floodgate to start linking to communities that aren't fairly explicitly a subset of EA, and 

b) those forums being already well known, and I want to highlight projects that people might realistically miss after a couple of months in the EA fold.

I'm wondering whether I should take some of the bigger organisations off the list for the latter reason, but I haven't managed to come up with a consistent principle here. I'm open to being persuaded that there's a way to be more consistent in either direction.

Habryka @ 2024-02-23T17:10 (+3)

Nah, seems reasonable to me.

Daniel_Friedrich @ 2024-02-23T17:04 (+1) in response to Is waste management a neglected cause area?

If you're especially motivated by environmental problems, I recommend reading Hannah Ritchie's newly released book Not the End of the World (here's her TED talk as a trailer).

I'd like to correct something I mentioned in my post - I implied that one reason I didn't find plastic pollution impactful is that it just doesn't have an easy fix. I no longer think that's quite true - Hannah says it actually could be solved tomorrow, if Western leaders decided to finance waste infrastructure in developing countries. Most ocean plastic pollution comes from a handful of rivers in Asia. Since we have this kind of infrastructure in Europe and North America, our waste is only responsible for ~5% of the ocean plastic (Our World in Data). Presumably, such infrastructure would also lay the ground for reducing the harms coming from other waste.

I think there are two other reasons for the low attention to waste:

  1. EA is a young do-ocracy - i.e. everybody is trying to spot their "market advantage" that allows them to nudge the world in a way that triggers a positive ripple effect - and so far, everybody's attention has been caught by problems that seem bigger. While I have identified ~4 possibly important problems that come with waste in my post (diseases, air pollution, heavy metal pollution, animal suffering), if you asked a random person who lives in extreme poverty how to help them, waste probably wouldn't be at the top of their mind.
  2. Most people are often reminded of the aesthetic harm of waste. Since people's moral actions are naturally motivated by disgust, I would presume that a lot of smart people who do not take much time to reflect on their moral prioritization would already have found a way to trigger the ripple effect in this area - if there were one.

While I think one would do more good by convincing a politician to target development aid at alleviating diseases and extreme poverty than by convincing them to run the project suggested by Hannah, perhaps - given the bias I mentioned in point 2 - politicians are more willing to provide funding for a project with the ambition to eradicate ocean plastic (constituting one of these ripple effects). So if you feel motivated to embark on a similar project, best of luck! :)

(The same potentially goes for the other 2 waste projects I've suggested - supporting biogas plants and improving e-waste monitoring/worker equipment)

saulius @ 2024-02-23T15:37 (+7) in response to Short agony or long ache: comparing sources of suffering that differ in duration and intensity

I see why you made your decisions, but I still think that it would be very useful if people could cite you to say stuff like "According to Welfare Footprint Project, broiler reforms decrease chicken suffering by very roughly 40%-60%". It's not for researchers at OpenPhil to decide what should be the next welfare ask. It's for donors, volunteers, researchers who want to mention your conclusions in passing, and even retailers considering whether to sign the Better Chicken Commitment. I don't know if animal charities would do it, but such a sentence could even be included in petitions urging retailers and restaurants to implement welfare reforms. Yes, it would be more accurate to write that according to your research, "broiler reforms increase annoying pain by 4.5% but decrease hurtful pain by…". But that is clunky, and then the reader might be confused about whether the welfare reforms are even good, given that there is more annoying pain. So it's more difficult to make the point that these reforms are very impactful using your research. Hence, your great work is cited less and has less impact than it could have. And yes, you can't accurately say whether the reform decreases suffering by 30% or 60%, because it might depend on what weights for the different categories of pain you use. But I think that many people assume it's more like 5%, so whatever you write on the subject would be useful.

Also, if you don’t do it well, someone else will do it poorly. I wrote sentences based on your research like “broiler reforms avert 50% of suffering” in this comment but I had to use my own weights for categories of pain. But your weights would be much better than mine because clearly you thought about it more. I think I later saw my weights being used in some serious cost-effectiveness estimate, but I don’t remember where.

Also, I want to say that I really appreciate and respect your work, thank you for doing it :)

cynthiaschuck @ 2024-02-23T16:35 (+8)

Thanks for your nice words about our work :). Yes, I see it can be frustrating to have estimates disaggregated (it is very much for us too), and that it can reduce the use and impact of the work. At this moment, though, we feel it is important to have a solid evidence-based model to quantify animal suffering - that is, a model that is very robust to scrutiny by academics (so they are more likely to adopt it) and by the industry, one in which all estimates can be justified thoroughly. Traction in the academic community is important because, as a small team, we would be unable to analyse all situations of animal suffering (in farming contexts, research, etc.) by ourselves, so ideally academics should adopt it too, to enable increasing the coverage of the analyses substantially. Robustness against criticism by the industry is also important to ensure the credibility of this new type of evidence, as used by advocates, in this early stage. So while we can justify the estimates of time spent in the four intensities of suffering well, the knowledge is not yet available for us to do the same regarding equivalence weights. That said, we have been using summaries of the estimates like "there is a decrease of about 60% of the time in pain for every hen raised in an aviary instead of a cage". Will try to add summaries like this in the forthcoming work, thanks!

Nick Whitaker @ 2024-02-22T10:52 (+23) in response to New Open Philanthropy Grantmaking Program: Forecasting

If they mostly care about AI timelines, subsidize some markets on it. Funding platforms and research doesn’t seem particularly useful here (as opposed to much more direct research).

David Mathers @ 2024-02-23T16:27 (+1)

Fair point. 

Arepo @ 2024-02-23T11:12 (+3) in response to Let's advertise EA infrastructure projects, Feb 2024

A general paired criterion I have is that the services either have to be targeted to EA individuals (which I don't think research qualifies as) or to offer pretty substantial discounts to them (the logic being that I want to support people supporting the little guy) - Arb assured me in an earlier post they do the latter. Do you know if any of these people would do so?

niplav @ 2024-02-23T15:58 (+1)

Ah, makes sense. I don't know whether others do this. I will have to think on how I handle this myself, but I want to make it cheaper for individuals & EA topics.

Joseph Pusey @ 2024-02-23T15:55 (+3) in response to Can we help individual people cost-effectively? Our trial with three sick kids

Nick, this is one of the best posts I've ever read on the Forum. As you already know, I have huge respect for your commitment to living out your values and I can't wait to read more about your efforts.

saulius @ 2024-02-23T15:37 (+7) in response to Short agony or long ache: comparing sources of suffering that differ in duration and intensity

I see why you made your decisions, but I still think that it would be very useful if people could cite you to say stuff like "According to Welfare Footprint Project, broiler reforms decrease chicken suffering by very roughly 40%-60%". It's not for researchers at OpenPhil to decide what should be the next welfare ask. It's for donors, volunteers, researchers who want to mention your conclusions in passing, and even retailers considering whether to sign the Better Chicken Commitment. I don't know if animal charities would do it, but such a sentence could even be included in petitions urging retailers and restaurants to implement welfare reforms. Yes, it would be more accurate to write that according to your research, "broiler reforms increase annoying pain by 4.5% but decrease hurtful pain by…". But that is clunky, and then the reader might be confused about whether the welfare reforms are even good, given that there is more annoying pain. So it's more difficult to make the point that these reforms are very impactful using your research. Hence, your great work is cited less and has less impact than it could have. And yes, you can't accurately say whether the reform decreases suffering by 30% or 60%, because it might depend on what weights for the different categories of pain you use. But I think that many people assume it's more like 5%, so whatever you write on the subject would be useful.

Also, if you don’t do it well, someone else will do it poorly. I wrote sentences based on your research like “broiler reforms avert 50% of suffering” in this comment but I had to use my own weights for categories of pain. But your weights would be much better than mine because clearly you thought about it more. I think I later saw my weights being used in some serious cost-effectiveness estimate, but I don’t remember where.

Also, I want to say that I really appreciate and respect your work, thank you for doing it :)

calebp @ 2024-02-21T23:43 (+9) in response to Rethink Priorities’ Cross-Cause Cost-Effectiveness Model: Introduction and Overview

Here are some very brief takes on the CCM web app now that RP has had a chance to iron out any initial bugs. I'm happy to elaborate more on any of these comments.

  • Some praise
    • This is an extremely ambitious project, and it's very surprising that this is the first unified model of this type I've seen (though I'm sure various people have their own private models).
      • I have a bunch of quantitative models on cause prio sub-questions, but I don't like to share these publicly because of the amount of context that's required to interpret them (and because the methodology is often pretty unrefined) - props to RP for sharing theirs!
    • I could see this product being pretty useful to new funders who have a lot of flexibility over where donations go.
    • I think the intra-worldview models (e.g. comparing animal welfare interventions) seem reasonable to me (though I only gave these a quick glance)
    • I see this as a solid contribution to cause prioritisation efforts and I admire the focus on trying to do work that people might actually use - rather than just producing a paper with no accompanying tool.
  • Some critiques
    • I think RP underrates the extent to which their default values will end up being the defaults for model users (particularly some of the users they most want to influence)
      • I think the default values are (in my personal view) pretty far from my values or the mean of values for people who have thought hard about these topics in the EA community.
    • The x-risk model in particular seems to bake in quite conservative assumptions (medium-high confidence)
      • I found it difficult to provide very large numbers on future population per star - I think with current rates of economic and compute growth, the number of digital people could be extremely high very quickly.
      • I think some x-risk interventions could plausibly have very long run effects on x-risk (e.g. by building an aligned super intelligence)
    • The x-risk model seems to confuse existential risk and extinction risk (medium confidence - maybe this was explained somewhere, and I missed it)
    • Using the model felt clunky to me: it didn't handle extremely large values well, and it made iterating on values difficult; it's not the kind of thing that you can "play with", imo.
  • Some improvements I'd like to see
    • I'd be interested in seeing RP commission some default values from researchers/EAs who can explain their suggested values well.
    • I would like for the overall app to feel more polished/responsive/usable - idk how much this would cost. I'd guess it's at least a month's work for a competent dev, maybe more.
Derek Shiller @ 2024-02-23T15:19 (+3)

Thanks for recording these thoughts!

Here are a few responses to the criticisms.

I think RP underrates the extent to which their default values will end up being the defaults for model users (particularly some of the users they most want to influence)

This is a fair criticism: we started this project with the plan of providing somewhat authoritative numbers but discovered this to be more difficult than we initially expected and instead opted to express significant skepticism about the default choices. Where there was controversy (for instance, in how many years forward we should look), we opted for middle-of-the-road choices. I agree that it would add a lot of value to get reasonable and well-thought-out defaults. Maybe the best way to approach controversy would be to opt for different sets of parameter defaults that users could toggle between based on what different people in the community think.

I found it difficult to provide very large numbers on future population per star - I think with current rates of economic and compute growth, the number of digital people could be extremely high very quickly.

The ability to try to represent digital people with populations per star was a last-minute choice. We originally just aimed for that parameter to represent human populations. (It isn't even completely obvious to me that stars are the limiting factor on the number of digital people.) However, I also think these things don't matter, since the main aim of the project isn't really affected by exactly how valuable x-risk projects are in expectation. If you think there may be large populations, the model is going to imply incredibly high rates of return on extinction risk work. Whether those are the obvious choice or not depends not on exactly how high the return is, but on how you feel about the risk, and the risks won't change with massively higher populations.

I think some x-risk interventions could plausibly have very long run effects on x-risk (e.g. by building an aligned super intelligence)

If you think we’ll likely have an aligned super-intelligence within 100 years, then you might try to model this by setting risks very low after the next century and treating your project as just a small boost on its eventual discovery. However, you might not think that either superaligned AI or extinction is inevitable. One thing we don’t try to do is model trajectory changes, and those seem potentially hugely significant, but also rather difficult to model with any degree of confidence.

The x-risk model seems to confuse existential risk and extinction risk (medium confidence - maybe this was explained somewhere, and I missed it)

We distinguish extinction risk from risks of sub-extinction catastrophes, but we don’t model any kind of as-bad-as-extinction risks.

Vasco Grilo @ 2024-02-23T14:43 (+2) in response to How We Think about Expected Impact in Climate Philanthropy

Hi,

Could you share your best guess for the expected/mean cost-effectiveness of the Climate Change Fund in tCO₂eq/$?

BTW, you might have missed this comment.

Stan Pinsent @ 2024-02-23T10:38 (+3) in response to African Heads of States Ban Donkey Skin Trade

Can anyone convince me that this is a robustly good move for donkey welfare? Working donkeys seem to have quite bad lives, so a falling population because people are deciding to sell donkeys for slaughter might be a good thing.

Erich_Grunewald @ 2024-02-23T14:16 (+3)

I don't know if these things make it robustly good, but some considerations:

  • Raising and killing donkeys for their skin seems like it could scale up more than the use of working donkeys, since (1) there may be increasing demand for donkey skin as China develops economically, and (2) there may be diminishing demand for working donkeys as Africa develops economically. So it could be valuable to have a preemptive norm/ban against slaughtering donkeys for this use, even if the short-term effect is net-negative.
  • It is not obvious that working donkeys have net-negative lives. My impression is that their lives are substantially better than the lives of most factory farmed animals, though that is a low bar. One reason to think that is the case is that working donkeys' owners live more closely to, and are more dependent on, their animals, than operators of factory farms, meaning they benefit more from their animals being healthy and happy.
  • Markets in donkey skin could have some pretty bad externalities, e.g., with people who rely on working donkeys for a living seeing their animals illegally poached. (On the other hand, this ban could also make such effects worse, by pushing the market underground.) Meanwhile, working donkeys do useful work, so they probably improve human welfare a bit. (I doubt donkey skin used for TCM improves human welfare.)
  • On non-utilitarian views, you may place relatively more value on not killing animals, and/or relatively less value on reducing suffering. So if you give some weight to those views, that may be another reason to think this ban is net positive.
Maja Nenadov Webster @ 2024-02-23T14:05 (+4) in response to Let's advertise EA infrastructure projects, Feb 2024

Hope it's okay to share Freelancing For Good, which I co-founded. This is a new EA-aligned community specifically aimed at freelancers. Our mission is to introduce freelancers both within and outside EA to different high-impact pathways for doing good, e.g. earning to give, working on projects addressing pressing issues, or starting their own charitable project.

We also have a Slack channel if anyone wants to join! 

SummaryBot @ 2024-02-23T13:59 (+1) in response to Could Transparency International Be a Model to Improve Farm Animal Welfare?

Executive summary: The post argues for establishing an organization modeled after Transparency International to improve farm animal welfare by increasing transparency in the production chain through standard auditing methods, public reporting, ranking systems, traceability measures, and labeling schemes.

Key points:

  1. Lack of transparency enables companies to circumvent reforms and prevents assessing the true effectiveness of animal welfare policies.
  2. Increased transparency would promote accountability, compliance with standards, and enhancement of welfare practices.
  3. An organization focused on transparency could develop reporting frameworks, auditing processes, traceability systems, sourcing policies requiring transparency, and welfare labeling schemes.
  4. Transparency initiatives could include public sharing of independent audit results, animal-based health monitoring, stockmanship qualifications, slaughter line inspections, and transparency rankings of companies.
  5. Increased consumer awareness through transparent labeling, traceability, sourcing policies, and educational campaigns could help bridge the gap between preferences and realities.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

SummaryBot @ 2024-02-23T13:54 (+3) in response to AI Policy Insights from the AIMS Survey

Executive summary: The AIMS survey provides key insights into US public opinion on AI risks and governance, showing high concern about the pace of AI development, expectations for advanced AI soon, widespread worries about existential and catastrophic threats, support for regulations and bans to slow AI advancement, and concern for AI welfare.

Key points:

  1. 49% believe the pace of AI development is too fast, showing public support for slowing things down.
  2. The public expects advanced AIs like AGI, HLAI, and ASI within the next 5 years.
  3. 48-53% are concerned about existential threats, human extinction risks, and harm to AIs from AI.
  4. 63-72% support various bans and regulations to slow AI advancement and development.
  5. 53-68% want to protect AI welfare through campaigns, standards, and avoiding unnecessary suffering.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

alex lawsen (previously alexrjl) @ 2024-02-17T07:47 (+26) in response to "No-one in my org puts money in their pension"

I'm still saving for retirement in various ways, including by making pension contributions.

If you're working on GCR reduction, you can always consider your pension savings a performance bonus for good work :)

NickLaing @ 2024-02-23T13:48 (+2)

Sometimes I wish we had a laughing emoji on the forum for nice comments like this. But I get the downsides too :D.

Dane Magaway @ 2024-02-23T13:14 (+4) in response to undefined

Sharing this podcast interview with Mark Zuckerberg. He chats about his thoughts on the Apple Vision Pro, shares some of his AI predictions, and even spills some beans on what's next for Meta. Here's the link.

DavidNash @ 2024-02-23T10:46 (+6) in response to From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program

CGD has a different take on this type of migration.

"Between the start of 2021 and 2022, the number of Nigerian-born nurses joining the UK nursing register more than quadrupled, an increase of 2,325. Becker’s human capital theory would suggest that this increase in the potential wages earned by Nigerian-trained nurses should lead to an increase in Nigerians choosing to train as nurses. So what happened? Between late 2021 and 2022, the number of successful national nursing exam candidates increased by 2,982—that is, more than enough to replace those who had left for the UK."

"To fully realise these benefits, Nigeria would need to embrace emigration, realising that nurses are likely going to leave anyway and doing everything they can to reap the benefits. Yet, they appear to be doing the opposite. New guidelines announced on 7 February 2024 state that nurses must work for two and half years before being allowed to work overseas, a move nurses contest. This policy is far from optimal; restrictions on emigration are inefficient, inequitable, and unethical. Indeed, Ghana had a similar scheme, but ended up scrapping it because they were unable to employ all of their nurse trainees at home."

NickLaing @ 2024-02-23T12:45 (+9)

Thanks David, appreciate the article - I think it's a good indication of how complex the question of immigration is, and of why I don't think it's a slam dunk in either direction.

My impression, though, is that the article is a pretty poorly researched and misleading piece - even though some of its arguments might still stand despite that.

First, it's weird that the article makes zero mention of the state of the Nigerian health system, nor of how this mass emigration might be affecting it. Is staffing getting better or worse? Are outcomes getting better or worse? How many nurses does the system actually need? Building your entire argument on "nurses trained" vs "nurses immigrating to England" seems quite short-sighted and reductionist.

Second (probably most important), they only take into account nurses leaving for England - a weird comparison decision. That 2,300 nurses left for England that year is fairly irrelevant; what matters is the total number. Nurses leave for other European countries and the Middle East too. The Nigerian government says 42,000 nurses left in the last 3 years - that's 14,000 a year, more than they are even training per year.

https://africa.cgtn.com/nigeria-says-42000-nurses-left-the-country-in-3-years/

So their basic argument that enough new nurses are being trained is bogus.

In addition, you must consider the increasing population. The population of Nigeria has grown by 5,000,000 people in that one year (a 2.5% increase). Nigeria has something like 180,000 nurses. This means that even just to maintain their already poor nurse/population ratio - without improving it at all - they would need to train and put into the workforce an extra 3,000 or so nurses each year.
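
A rough back-of-the-envelope sketch of this arithmetic, using only the figures quoted in this thread (on these inputs, the ratio-maintenance figure actually comes out nearer 4,500 than 3,000):

```python
# Back-of-the-envelope check using only the figures quoted above
# (~180,000 nurses, ~2.5% population growth, ~14,000 emigrating per year).

nurses = 180_000
population_growth = 0.025
emigrating_per_year = 14_000

# New nurses needed per year just to hold the nurse/population ratio flat:
to_maintain_ratio = nurses * population_growth  # 4,500

# Adding replacement of emigrating nurses on top of that:
total_needed_per_year = to_maintain_ratio + emigrating_per_year  # 18,500

print(f"To hold the ratio: {to_maintain_ratio:,.0f}/year")
print(f"Including emigration: {total_needed_per_year:,.0f}/year")
```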

It's also likely that many of the best and brightest are the ones leaving Nigeria. They are more likely to pass English exams and be accepted (unless they cheat, as often happens), and to have the drive and gumption to try to move overseas. My guess is that England is most likely taking the better nurses to work in a health system that is 10x better, while leaving the lower-quality nurses in Nigeria - the health system which really needs the best nurses to lead and drive it. The qualification itself is only a small part of the story; the difference in ability, skills and leadership potential between nurses is immense.

There are also other second-order effects. If you've ever been in a country where many people are trying to emigrate because many others are leaving, you'll know it's hard to retain stability in your hospitals and health systems. People are distracted, staff turnover is high, and morale can be low. This can really hurt the productivity of those who remain.

I'm also more concerned about doctors than nurses - but that's a whole other story.

I probably wasted too much time hacking away at this poor article, but it annoyed me a little ;). I'm not anti-immigration at all, but I am wary of it for medical staff in this kind of scenario, and there are many, many factors to consider in the discussion.

CAISID @ 2024-02-23T11:45 (+1) in response to Could Transparency International Be a Model to Improve Farm Animal Welfare?

This is actually a really well-thought-out, feasible, implementable idea. One aspect of leveraging impact to consider would be the supply chain impacts of the Transparency Index. I can certainly see valuable buy-in from stores that buy the products and want to show they are ethically sourcing them, and so would potentially buy in to the transparency index model. Some would likely then have a 'minimum transparency rating' policy in their procurement and compliance rules, which would be a good avenue for impact, as it then forces producers to achieve that level or lose major contracts to competitors.

cynthiaschuck @ 2024-02-23T12:18 (+4)

Thanks! I like the idea of a 'minimum transparency rating' policy in the supply chain.

saulius @ 2024-02-23T11:55 (+2) in response to Could Transparency International Be a Model to Improve Farm Animal Welfare?

Thanks for the post, it's an interesting idea. I slightly worry about corruption in "unannounced animal welfare audits by accredited and independent third parties". Someone told me years ago that such audits by government agents in Lithuania were a farce. Do you know if such audits work well in practice? One way I see it working is if the auditor is an animal advocate. Is that what tends to happen in practice?

cynthiaschuck @ 2024-02-23T12:12 (+1)

You are absolutely right. From my personal experience in Spain, animal welfare audits are often announced to farmers weeks in advance, so even if they happen (often they happen only on paper, without an actual visit), farmers have time to correct whatever needs to be corrected, just for the visit. Hence the idea of creating mechanisms to enable auditing by independent parties (other than the companies' own vets or governmental auditors). There is also potential for corruption here, but if there is an organization behind certifying auditors, creating standards for how these audits should happen, or simply reporting the willingness of companies to adhere to these standards (e.g., through a transparency index or something of the sort), the risk could be reduced somewhat.

saulius @ 2024-02-23T11:55 (+2) in response to Could Transparency International Be a Model to Improve Farm Animal Welfare?

Thanks for the post, it's an interesting idea. I slightly worry about corruption in "unannounced animal welfare audits by accredited and independent third parties". Someone told me years ago that such audits by government agents in Lithuania were a farce. Do you know if such audits work well in practice? One way I see it working is if the auditor is an animal advocate. Is that what tends to happen in practice?

Saul Munn @ 2024-02-21T20:21 (+3) in response to Let's advertise EA infrastructure projects, Feb 2024

you might find a number of good resources — specifically within forecasting — here: predictionmarketmap.com. i would particularly highlight Manifund as a way for EAs to get funding~

coi: i built the aforementioned map, and i currently work at manifund.

Arepo @ 2024-02-23T11:47 (+2)

Thanks Saul. I've added Manifund. I'm unsure whether to add the map, since I want to keep this list to 'services' (or products) that are actively being worked on rather than lists that might go stale. How much continuing work are you putting into upkeeping the map?

CAISID @ 2024-02-23T11:45 (+1) in response to Could Transparency International Be a Model to Improve Farm Animal Welfare?

This is actually a really well-thought-out, feasible, implementable idea. One aspect of leveraging impact to consider would be the supply chain impacts of the Transparency Index. I can certainly see valuable buy-in from stores that buy the products and want to show they are ethically sourcing them, and so would potentially buy in to the transparency index model. Some would likely then have a 'minimum transparency rating' policy in their procurement and compliance rules, which would be a good avenue for impact, as it then forces producers to achieve that level or lose major contracts to competitors.

Jessica Wen @ 2022-10-20T15:04 (+1) in response to Let's advertise infrastructure projects

This is a great idea! Would also add Magnify Mentoring, which provides (free!) services to support more people with mentorship, particularly those from traditionally underrepresented groups.

Arepo @ 2024-02-23T11:33 (+2)

Sorry Jessica, I somehow missed this comment until now. I've just added them to the latest version of this post.

Adam Binks @ 2024-02-22T19:23 (+3) in response to Let's advertise EA infrastructure projects, Feb 2024

As well as Fatebook for Slack, at Sage we've made other infrastructure aimed at EAs (amongst others!):

  • Fatebook: the fastest way to make and track predictions
  • Fatebook for Chrome: Instantly make and embed predictions, in Google Docs and anywhere else on the web
  • Quantified Intuitions: Practice assigning credences to outcomes with a quick feedback loop

Arepo @ 2024-02-23T11:30 (+2)

Thanks Adam. I've edited those in.

niplav @ 2024-02-21T23:05 (+1) in response to Let's advertise EA infrastructure projects, Feb 2024

Reach heaven through research consulting.

People other than Arb also offer it (at various rates):

I remember Sarah Constantin having been available for this too, but I don't know whether she still does research consulting.

Arepo @ 2024-02-23T11:12 (+3)

A general paired criterion I have is that the services either have to be targeted at EA individuals (which I don't think research qualifies as) or have to offer pretty substantial discounts to them (the logic being that I want to support people supporting the little guy) - Arb assured me in an earlier post that they do the latter. Do you know if any of these people would do so?

lynettebye @ 2024-02-23T10:11 (+2) in response to Let's advertise EA infrastructure projects, Feb 2024

Can we add EA Mental Health Navigator, especially the provider database? It's a list of coaches and therapists recommended by EAs. It is available as a resource, and would also benefit from more people leaving reviews of providers they've worked with! 

Arepo @ 2024-02-23T10:58 (+2)

Thanks Lynette. I've added them now.

Habryka @ 2024-02-21T19:51 (+7) in response to Let's advertise EA infrastructure projects, Feb 2024

A bunch of projects by Lightcone Infrastructure that likely qualify: 

  • We run LessWrong.com and the AI Alignment Forum (alignmentforum.org)
  • We also built and continue to maintain the codebase that runs LessWrong and the EA Forum (together with the EA Forum team), which is now also being used by a bunch of other forums (like the Progress Forum, the recently launched Animal Advocacy Forum, and the Sam Harris "Waking Up" community)
  • We also run Lighthaven, a large event and office space in Downtown Berkeley, which provides heavily subsidized event space for various EA-aligned programs and events (currently hosting the MATS program)

Arepo @ 2024-02-23T10:58 (+4)

Thanks Oliver. I've added Forum Magnum and Lighthaven.

I'm reluctant to have links to other forums through some combination of

a) it being a potential floodgate to start linking to communities that aren't fairly explicitly a subset of EA, and 

b) those forums being already well known, and I want to highlight projects that people might realistically miss after a couple of months in the EA fold.

I'm wondering whether I should take some of the bigger organisations off the list for the latter reason, but I haven't managed to come up with a consistent principle here. I'm open to being persuaded that there's a way to be more consistent in either direction.

NickLaing @ 2024-02-23T09:42 (+4) in response to From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program

That's great that you are focusing on low- and mid-skill workers. It's a complicated question, but I think that moderately high-skill workers leaving low- and middle-income countries for Western jobs can be net-negative, especially in the healthcare field - for example, the flood of West African doctors and nurses heading to Europe and the Middle East at the moment. I really like that some countries like Nigeria are bonding medical staff for 2+ years after they complete their training.

https://www.semafor.com/article/02/15/2024/nigerian-nurses-reject-rules-attacking-japa

Sometimes, though, there can be claims of "overproduction" of positions such as nurses, like in Kenya - but when public hospitals are grossly understaffed, why is the government spending money on training nurses rather than putting the resources directly into their medical system?

https://ntvkenya.co.ke/news/we-are-overproducing-graduates-health-ps-defends-govt-plan-to-export-nurses/

DavidNash @ 2024-02-23T10:46 (+6)

CGD has a different take on this type of migration.

"Between the start of 2021 and 2022, the number of Nigerian-born nurses joining the UK nursing register more than quadrupled, an increase of 2,325. Becker’s human capital theory would suggest that this increase in the potential wages earned by Nigerian-trained nurses should lead to an increase in Nigerians choosing to train as nurses. So what happened? Between late 2021 and 2022, the number of successful national nursing exam candidates increased by 2,982—that is, more than enough to replace those who had left for the UK."

"To fully realise these benefits, Nigeria would need to embrace emigration, realising that nurses are likely going to leave anyway and doing everything they can to reap the benefits. Yet, they appear to be doing the opposite. New guidelines announced on 7 February 2024 state that nurses must work for two and half years before being allowed to work overseas, a move nurses contest. This policy is far from optimal; restrictions on emigration are inefficient, inequitable, and unethical. Indeed, Ghana had a similar scheme, but ended up scrapping it because they were unable to employ all of their nurse trainees at home."

SuperDuperForecasting @ 2024-02-21T11:22 (+26) in response to New Open Philanthropy Grantmaking Program: Forecasting

This will be a total waste of time and money unless OpenPhil actually pushes the people it funds towards achieving real-world impact. The typical pattern in the past has been to launch yet another forecasting tournament to try to find better forecasts and forecasters. No one cares; we've known how to do this since at least 2012!

The unsolved problem is translating the research into real-world impact. Does the Forecasting Research Institute have any actual commercial paying clients? What is Metaculus's revenue from actual clients rather than grants? Who are they working with and where is the evidence that they are helping high-stakes decision makers improve their thought processes?

Incidentally, I note that forecasting is not actually successful even within EA at changing anything: superforecasters are generally far more relaxed about x-risk than the median EA, but has this made any kind of difference to how EA spends its money? It seems very unlikely.

David Mathers @ 2024-02-23T10:42 (+4)

It's worth saying also that we already have a commercial forecasting organisation, Good Judgment (I do a little bit of professional forecasting for them, though it's not my main job). Not clear why we need another. (I don't know who GJ's clients actually are, though - plus presumably I wouldn't be allowed to tell you even if I did. EDIT: Actually, in some cases I think client info became public and/or we were internally told who they were, but I have just forgotten who.)

Tyner @ 2024-02-23T04:50 (+3) in response to Let's advertise EA infrastructure projects, Feb 2024

I don't think that SEADS still exists. They haven't posted in a while, and their website is dead:

https://seads-ai.org/portfolio.html

Arepo @ 2024-02-23T10:40 (+2)

Thanks Tyner - I've removed them.

Stan Pinsent @ 2024-02-23T10:38 (+3) in response to African Heads of States Ban Donkey Skin Trade

Can anyone convince me that this is a robustly good move for donkey welfare? Working donkeys seem to have quite bad lives, so a falling population because people are deciding to sell donkeys for slaughter might be a good thing.

lynettebye @ 2024-02-23T10:11 (+2) in response to Let's advertise EA infrastructure projects, Feb 2024

Can we add EA Mental Health Navigator, especially the provider database? It's a list of coaches and therapists recommended by EAs. It is available as a resource, and would also benefit from more people leaving reviews of providers they've worked with! 

Filip_Murar @ 2024-02-22T16:31 (+3) in response to From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program

Hi Gemma, thanks for sharing! That platform indeed has several similarities with our proposed nonprofit idea (though also some differences, such as our focus on low- and mid-skilled workers and on a specific country of origin rather than a specific destination country). Exciting to see more work being done in this otherwise quite neglected space!

NickLaing @ 2024-02-23T09:42 (+4)

That's great that you are focusing on low- and mid-skill workers. It's a complicated question, but I think that moderately high-skill workers leaving low- and middle-income countries for Western jobs can be net-negative, especially in the healthcare field - for example, the flood of West African doctors and nurses heading to Europe and the Middle East at the moment. I really like that some countries like Nigeria are bonding medical staff for 2+ years after they complete their training.

https://www.semafor.com/article/02/15/2024/nigerian-nurses-reject-rules-attacking-japa

Sometimes, though, there can be claims of "overproduction" of positions such as nurses, like in Kenya - but when public hospitals are grossly understaffed, why is the government spending money on training nurses rather than putting the resources directly into their medical system?

https://ntvkenya.co.ke/news/we-are-overproducing-graduates-health-ps-defends-govt-plan-to-export-nurses/

Richard Okoe @ 2024-02-23T09:05 (+1) in response to Farmed animal funding towards Africa is growing but remains highly neglected

Great work. This is illuminating for some of us in Africa who are ready to get involved in the advocacy. Thank you.

Austin @ 2024-02-20T00:23 (+3) in response to New Open Philanthropy Grantmaking Program: Forecasting

Awesome to hear! I'm happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I'm hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I've spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!

Grayden @ 2024-02-23T08:39 (+11)

Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.

Vasco Grilo @ 2024-02-22T20:14 (+4) in response to New Open Philanthropy Grantmaking Program: Forecasting

Thanks for the comment, Grayden. For context, readers may want to check the question post Why is EA so enthusiastic about forecasting?.

Grayden @ 2024-02-23T08:35 (+4)

Thanks for sharing, but nobody on that thread seems to be able to explain it! Most people there, like here, seem very sceptical.

Matthew_Barnett @ 2024-02-22T19:39 (+1) in response to Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

There's an IMO fairly simple and compelling explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.

It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.

Nick K. @ 2024-02-23T07:59 (+1)

You don't need to be an extreme longtermist to be sceptical about AI; it suffices to care about the next generation and not to want extreme levels of change. I think looking too much into differing morals is the wrong lens here.

The most obvious explanation for how Altman and people more concerned about AI safety (not specifically EAs) differ seems to be in their estimates about how likely AI risk is vs other risks.

That being said, the point that it's disingenuous to ascribe cognitive bias to Altman for having whatever opinion he has is a fair one, and one shouldn't go too far with it in view of general discourse norms. Still, given Altman's exceptional capability for unilateral action due to his position, it's reasonable to be at least concerned about it.

Grayden @ 2024-02-21T06:53 (+76) in response to New Open Philanthropy Grantmaking Program: Forecasting

I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I’m yet to see a robust case for it being an effective use of charitable funds (if there is, please point me to it). I’m worried we are not being objective enough and trying to find the facts that support the conclusion rather than the other way round.

MWStory @ 2024-02-23T06:53 (+30)

I think the fact that forecasting is a popular hobby probably distorts priorities quite a bit.

There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.

For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people figuring out what to forecast understand the range of policy options available and prioritise forecasts to inform them? Is there regular contact and a trusting relationship at senior executive level? Would it help more if the forecasting were faster, or broader in scope?

These are all very important questions but are invisible to forecaster participants so end up not being talked about much.

MichaelDickens @ 2024-02-23T05:05 (+4) in response to New Open Philanthropy Grantmaking Program: Forecasting

IMO if a forecasting org does manage to make money selling predictions to companies, that's a good positive update, but if they fail, that's only a weak negative update—my prior is that the vast majority of companies don't care about getting good predictions even if those predictions would be valuable. (Execs might be exposed as making bad predictions; good predictions should increase the stock price, but individual execs only capture a small % of the upside to the stock price vs. 100% of the downside of looking stupid.)

MWStory @ 2024-02-23T06:37 (+5)

I think if you extend this belief outwards it starts to look unwieldy and “proves too much”. Even if you think that executives don’t care about having access to good predictions the way business owners do, why not ask why business owners aren’t paying?

Matthew_Barnett @ 2024-02-22T19:39 (+1) in response to Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

There's an IMO fairly simple and compelling explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.

It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.

NickLaing @ 2024-02-23T05:48 (+2)

I agree that's possible, but I'm not sure I've seen his rhetoric put that view forward in a clear way.

MWStory @ 2024-02-22T04:54 (+8) in response to New Open Philanthropy Grantmaking Program: Forecasting

What better test of the claim "we are producing useful/actionable information about the future, and/or developing workable processes for others to do the same" do we have than some of the thousands of organisations whose survival depends on this kind of information being willing to pay for it? 

MichaelDickens @ 2024-02-23T05:05 (+4)

IMO if a forecasting org does manage to make money selling predictions to companies, that's a good positive update, but if they fail, that's only a weak negative update—my prior is that the vast majority of companies don't care about getting good predictions even if those predictions would be valuable. (Execs might be exposed as making bad predictions; good predictions should increase the stock price, but individual execs only capture a small % of the upside to the stock price vs. 100% of the downside of looking stupid.)

Tyner @ 2024-02-23T04:50 (+3) in response to Let's advertise EA infrastructure projects, Feb 2024

I don't think that SEADS still exists. They haven't posted in a while, and their website is dead:

https://seads-ai.org/portfolio.html



Comments on 2024-02-22

havequick @ 2024-02-22T23:19 (+1) in response to A moral backlash against AI will probably slow down AGI development

If an anti-AI backlash gets formalized into strong laws and regulations against AGI development, leading governments could make it prohibitively difficult, costly, and risky to develop AGI. This doesn’t necessarily require a global totalitarian government panopticon monitoring all computer research. Instead, the moral stigmatization automatically imposes the panopticon. If most people in the world agree that AGI development is evil, they will be motivated to monitor their friends, family, colleagues, neighbors, and everybody else who might be involved in AI. They become the eyes and ears ensuring compliance. They can report evil-doers (AGI developers) to the relevant authorities – just as they would be motivated to report human traffickers or terrorists. And, unlike traffickers and terrorists, AI researchers are unlikely to have the capacity or willingness to use violence to deter whistle-blowers from whistle-blowing.

Something to add is that this sort of outcome can be augmented/bootstrapped into reality with economic incentives that make it risky to work to develop AGI-like systems while simultaneously providing economic incentives to report those doing so -- and again, without any sort of nightmare global world government totalitarian thought police panopticon (the spectre of which is commonly invoked by certain AI accelerationists as a reason not to regulate/stop work towards AGI).

These two posts (by the same person, I think) give an example of a scheme like this (ironically inspired by Hanson's writings on fine-insured-bounties): https://andrew-quinn.me/ai-bounties/ and https://www.lesswrong.com/posts/AAueKp9TcBBhRYe3K/fine-insured-bounties-as-ai-deterrent

Things to note that are not in either of those posts (though possibly in other writings by the author[s]):

  • the technical capabilities to allow for decentralized robust coordination that creates/responds to real-world money incentives have drastically improved in the past decade. It is an incredibly hackneyed phrase but...cryptocurrency does provide a scaffold onto which such systems can be built.

  • even putting aside the extinction/x-risk stuff, there are financial incentives for the median person to support systems which can peaceably yet robustly deter the creation of AI systems that would take any of the jobs they could get ("AGI"), and thereby leave them in an abyssal state of dependence, without income and without a stake or meaningful role in society for the rest of their life

Jeff Kaufman @ 2022-12-20T13:46 (+34) in response to A Case for Voluntary Abortion Reduction

From a perspective where embryos are moral patients, I think preventing otherwise healthy embryos from failing to implant or otherwise make it to term looks pretty promising, especially since these are generally wanted pregnancies. A few years ago I took some time with a med student thinking through some options:

  • Reduce pelvic inflammatory disease. This causes uterine scarring, which leads to implantation failure, and the main causes are STIs like gonorrhea and chlamydia. Cervical cancer is also an issue, and we do have the HPV vaccine for this. These are already bad things we'd like to prevent, but this raises the stakes a lot.

  • Decrease the C-section rate. Abdominal surgery is another thing that gives uterine scarring. The US C-section rate is much higher than the rest of the world, for reasons that seem to be more about how we allocate medical providers and less about people's health, and there are already good options here like providing doulas for anyone who wants one. C-sections are worse than other abdominal surgeries from this perspective because the embryo can implant into the surgical scar, which gives you an ectopic pregnancy and generally requires an abortion to save the life of the mother. Appendectomies might be another good candidate here, because outside the US people have way fewer of them. Though some of that is that different health systems respond differently to appendicitis: it recurs about 20% of the time, so different places have made different calls about whether removing it the first time it happens is worth it.

  • Come up with better detection methods for fibroids. These are benign tumors in the uterus that grow with estrogen, and compete with the fetus for space. This is more of a problem later in the pregnancy when space is tighter. This one might make sense if you don't think embryos matter but do think second or third trimester fetuses matter. This also disproportionately affects black mothers, so it may be underfunded.

  • Encourage people to switch to methods of birth control that prevent ovulation or fertilization instead of implantation. For example, estrogen over progesterone. This one is the odd one out, in that it's one where you're not just more highly prioritizing something people already think would be good. IUDs, for example, are really good in many ways, but they work by preventing implantation. This is at least less tractable socially than the others, because you'd get a huge fight.

Pat Myron @ 2024-02-22T22:58 (+5)

@Ariel Simnegar air pollution's another significant factor in pregnancy loss:
https://www.thelancet.com/journals/lanplh/article/PIIS2542-5196(20)30268-0/fulltext

"exposure contributed to 29.2% of total annual pregnancy loss in this region"

MaxRa @ 2024-02-22T11:35 (+6) in response to New Open Philanthropy Grantmaking Program: Forecasting

I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.

My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.

CAISID @ 2024-02-22T22:40 (+3)

You are correct in that I was referring more to the natural risks associated with disagreeing with a major funder in a public space (even though OP have a reputation for taking criticism very well), and wasn't referring to friendships. I could well have been more clear, and that's on me.

StochasticCat @ 2024-02-22T21:57 (+1) in response to More to explore on 'Our Final Century'

The critical review of The Precipice links to a domain that is no longer up.

Sandra Malagon @ 2024-02-21T21:00 (+14) in response to Coworking space in Mexico City

I would like to have some answers about the concerns that several local members, including myself, have expressed about this place, including cost, optics, etc. Additionally, in November, we were advised not to go to this space because there was not enough room. Has this changed in any way? I am very concerned that what could be the main EA-aligned workspace in Mexico does not represent what those of us working here are looking for. But perhaps my bias is large, and I am not understanding what should be prioritized in opening this space. I have detailed my questions more thoroughly in this inquiry.

AmAristizabal @ 2024-02-22T21:50 (+27)

Hey Sandra, thanks for your questions. Hopefully the following clarifications will help give useful context as to why we’re excited about this space. 

The scope of our program

  • The office space and our broader project is a university program focused exclusively on AI. It is not an EA space, and it’s not meant to do EA community building in Mexico. Many of our fellows and visitors are not part of the EA community. We would be happy to see other initiatives aimed at EA community building in Mexico and Mexico City. 
  • We would like to point out that the program is part of a Mexican university. Jaime and I (the two primary staff members) are from Colombia, and the vast majority of our colleagues at ITAM who have worked closely with us on various aspects of the fellowship are Mexicans. We're really grateful for their work and want to make sure their work is acknowledged.

Some benefits of this space

  • We have carefully considered the upsides and downsides of the current coworking space, and are now pretty confident about choosing it. This is both for logistical reasons and because we’ve had overwhelmingly positive feedback from fellows and visitors (several of them Latin Americans). 
  • We’ve found the space is worth the cost and in practice cheaper than many alternatives because it offers all the operational facilities that the fellowship needs. If we had picked a different coworking space, we would have had to compensate by hiring an additional staff member to figure out things like catering, hosting talks, furniture, etc. It is worth noting that the staff curates a weekly menu for us to accommodate vegans. From our experiences with other event spaces in CDMX and LMICs, this is quite hard to find. Given this is a university program, there are additional constraints and requirements for the space(s) we use. 

We also have considered locals, and people from latam and LMICs more generally

  • We have thought a lot about the effects of programs like these on locals, and much of our work is aimed at diversifying the pool of people working on important problems within AI. 
  • The current set up of the coworking space has meant we have been able to accept visitors from LMICs and subsidize spots for those who wouldn’t be able to attend otherwise. 
  • Condesa is a more gentrified and international area of Mexico City. In our experience, that has come with some benefits for a global program like ours - for example, amenities as you mention, but also allowing fellows and visitors from other low- and middle-income countries and underrepresented backgrounds to move comfortably around the area (e.g. non-Spanish speakers from other LMICs).
  • We were surprised to hear your concerns, as we haven’t received any similar feedback so far (just for quick context to readers: the writer of these comments has never been to our office space). We aren’t aware of any incidents of discrimination experienced during our fellowship or the co-working space more generally -  we’ve found the staff (most are Mexican) of the broader co-working space (imagine a WeWork) to be very kind and welcoming. If there are specific incidents you’re aware of, we’d encourage you to let us, or the Community Health team know. 

While we are part of a Mexican university, and are mindful and respectful of local norms, we are also proud of having kickstarted a programme with a truly global focus in which members from various cultural backgrounds feel welcome.

Nithin Ravi @ 2024-02-22T21:26 (+1) in response to The Case for Animal-Inclusive Longtermism

I'm excited to see where animal-inclusive longtermism can go and what interventions begin to develop in this space.

InvisibleGPT @ 2024-02-22T21:13 (–10) in response to EA Aligned Coworking Spaces in Low- and Middle-Income Countries (Mexico in particular): With or Without Local Members?

Great post. It doesn’t just apply to workspaces but also startups.

I would also like to ask such EA organizations: why are you choosing Mexico as your base? Is it simply because it is more cost-effective for you and has a certain “authentic” factor that works for you?

Or is it because you are actually interested in applying your EA ideals to make an impact locally?

Linch @ 2024-02-19T05:35 (+44) in response to Linch's Quick takes

My default story is one where government actors eventually take an increasing (likely dominant) role in the development of AGI. Some assumptions behind this default story:

1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI becomes percentage points of GDP.

2. Takeoff speeds (from the perspective of the State) are relatively slow.

3. Timelines are moderate to long (after 2030 say). 

If what I say is broadly correct, I think this may have some underrated downstream implications. For example, we may currently be overestimating the role of values or institutional processes at labs, or the value of getting gov'ts to intervene (since the default outcome is that they'd intervene anyway). Conversely, we may be underestimating the value of clear conversations about AI that government actors or the general public can easily understand (since if they'll intervene anyway, we want the interventions to be good). More speculatively, we may also be underestimating the value of making sure 2-3 are true (if you share my belief that gov't actors will broadly be more responsible than the existing corporate actors).

Happy to elaborate if this is interesting.

Sharmake @ 2024-02-22T21:03 (+1)

I basically grant 2, sort of agree with 1, and drastically disagree with 3 (that timelines will be long).

Which makes me a bit weird, since while I do have real confidence in the basic story that governments are likely to influence AI a lot, I have my doubts that governments will try to regulate AI seriously, especially if timelines are short enough.

James Herbert @ 2024-02-22T18:53 (+7) in response to Are there any EAs in Nottingham who want to (re)start a local group?

Are you in touch with Joris and the university group support team at the Centre for Effective Altruism? They might be able to help. More info here: https://www.notion.so/centreforeffectivealtruism/Organizer-Support-Program-OSP-0aa5dc55d97e444da347d8d227db93d4?pvs=4

Good luck!

Joris P @ 2024-02-22T20:51 (+2)

Hey Tom, welcome to the Forum! 

I happened to see this comment, excited to get in touch! And thanks for the recommendation James :)

Want to shoot us an email at unigroups [at] centreforeffectivealtruism [dot] org? I've already asked someone to do a search in our system for any people in your area!

Neil Warren @ 2024-02-22T14:01 (+1) in response to Detachment vs attachment [AI risk and mental health]

Hello! Thanks for commenting! 

  1. How does that work? In your specific case, what are you invested in while also being detached from the outcome? I can imagine enjoying life working like this: e.g. I don't care what I'm learning about if I'm reading a book for pleasure. Parts of me also enjoy the work I tell myself helps with AI safety. But there are certainly some parts of it that I dislike, yet that I do anyway, because I attach a lot of importance to the outcome. 
  2. Those are interesting points! 
    1. 1) Mud-dredging makes rationality a necessity. If you've taken DMT and had a cosmic revelation where you discovered that everything is connected and death is an illusion, then you don't need to actively not die. I know people to whom death or life is all the same: my point is that if you care about the life/death outcome, you must be on the offensive, somewhat. If you sit in the same place for long enough, you die. There are posts about "rationality = winning", and I'm not going to get into semantics, but what I meant here by rationality was "that which gets you what you want". You can't afford to, e.g., ignore truth when something you value is at risk. Part of it was referencing this post, which made clear for me that entangling my rationality with reality more thoroughly would force me into improving it. 
    2. 2) I'm not sure what you mean. We may be talking about two different things: what I meant by "rationality" was specifically what gets you good performance. I didn't mean some daily applied system which has both pros and cons to mental health or performance. I'm thinking about something wider than that.

As for that last point, I seem to have regrettably framed creativity and rationality as mutually incompatible. I wrote in the drawbacks of mud-dredging that aiming at something can impede creativity, which I think is true. The solution for me is splitting time up into "should"-injunction time and free time for fooling around. Not a novel solution or anything. Again, it's a spectrum, so I'm not advocating for full-on mud-dredging: that would be bad for performance (and mental health) in the long run. This post is the best I've read that explores this failure mode. I certainly don't want to appear like I'm disparaging creativity. 

(However, I do think that rationality is more important than creativity. I care more about making sure my family members don't die than about me having fun, and so when I reflect on it all I decide that I'll be treating creativity as a means, not an end, for the time being. It's easy to say I'll be using creativity as a means, but in practice, I love doing creative things and so it becomes an end.) 

VictorW @ 2024-02-22T20:19 (+1)

An example of invested but not attached: I'm investing time/money/energy into taking classes about subject X. I chose subject X because it could help me generate more value Y that I care about. But I'm not attached to getting good at X, I'm invested in the process of learning it.

I feel more confused after reading your other points. What is your definition of rationality? Is this definition also what EA/LW people usually mean? (If so, who introduces this definition?)

When you say rationality is "what gets you good performance", that seems like it could lead to arbitrary circular reasoning about what is and isn't rational. If I exaggerate this concern and define rationality as "what gets you the best life possible", that's not a helpful definition, because it leads to the unfalsifiable claim that rationality is optimal while providing no practical insight.

SanteriK @ 2024-02-21T08:00 (+31) in response to New Open Philanthropy Grantmaking Program: Forecasting

As the program is about forecasting, what is your stance on the broader field of foresight & futures studies? Why is forecasting more promising than some other approaches to foresight?

Vasco Grilo @ 2024-02-22T20:17 (+4)

Thanks for asking, SanteriK! For context, readers may want to check the (great!) post A practical guide to long-term planning – and suggestions for longtermism.

Grayden @ 2024-02-21T06:53 (+76) in response to New Open Philanthropy Grantmaking Program: Forecasting

I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I’m yet to see a robust case for it being an effective use of charitable funds (if there is, please point me to it). I’m worried we are not being objective enough and trying to find the facts that support the conclusion rather than the other way round.

Vasco Grilo @ 2024-02-22T20:14 (+4)

Thanks for the comment, Grayden. For context, readers may want to check the question post Why is EA so enthusiastic about forecasting?.

Rethink Priorities @ 2024-02-22T20:00 (+8) in response to Who's hiring? (Feb-May 2024)

Rethink Priorities is seeking a Global Health and Development Director who will work closely with all team members and, at least initially, have two Senior Research Managers as direct reports. The Department typically carries out short research projects, often commissioned by philanthropic organizations and other relevant actors in the global health and development space. 

Salary: $125,000 - $125,250 / year for a full-time position (prorated for part-time work)

Location: Remote (we can hire from many countries; some international travel is required)

Deadline: February 25, 2024, at the end of the day (11:59 PM) in US/Eastern (EST) time zone 

Learn more here on our website!

Paul_Lang @ 2024-02-22T19:56 (+1) in response to Fixing the vegetarian plate: A new guide aims to correct misconceptions and educate the health-care community about the vegetarian diet

Hi @Leandro Franz thank you very much for this post. I'd be curious to have a look at your document or a summarized version of it. Could you double check the link to the document? It does not work for me.

cacudaback @ 2024-02-22T19:48 (+1) in response to Who's hiring? (Feb-May 2024)

One for the World is hiring an Executive Director to lead the development and execution of One for the World’s strategy and be responsible for the day-to-day leadership of the organization.

Salary: US$110k-$150k per annum, plus a 0-20% performance-related bonus each year. Pay will be location-adjusted by up to 15% if you live outside the US - let us know if you want to understand your location-based pay in advance of applying.

Location: Strong preference for someone based in the mainland US, ideally on the Northeast/Acela corridor. We will consider strong applications for remote work but would expect commensurate ability to travel and accommodate synchronous meetings across US time zones. Accordingly, we anticipate appointing someone in the US.

Apply here: Application Form and Pack, apply by March 18th to be considered in the first round.  

About the role: We are excited to find an ambitious leader, with a track record of high-level execution, who will bring a clear vision and strategic plan for expanding One for the World's influence and quickly multiplying money moved to our nonprofit Partners.

You should apply if you can galvanize our team, manage and deepen our relationship with our donors, effectively fundraise in corporate environments, and have a strong personal commitment to addressing global poverty. It would be advantageous if you were an exceptional public speaker.

About One for the World: We envision a world where everyone fully embraces their opportunity to give effectively. Our work is to build a movement of people revolutionizing charitable giving to end extreme poverty through education, training, and community building. You can learn more about One for the World on our website here.

More about this position: If you are looking for a role with immense potential impact, this is an opportunity to lead a fast-growing organization, raising money for the world’s most cost-effective nonprofit organizations.

One for the World members have, to date, donated $6 million to our nonprofit partners, preventing nearly 1,000 deaths. We have created communities of donors in workplaces and college campuses across the US, UK, Canada, and Australia.

If you have any questions about the role or would like to recommend a candidate, please reach out to jobs@1fortheworld.org.  

SiebeRozendal @ 2024-02-16T10:57 (+9) in response to Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy

So, what do we think Altman's mental state/true belief is? (Wish this could be a poll)

  1. Values safety, but values personal status & power more
  2. Values safety, but believes he needs to be in control of everything & has a messiah complex
  3. Doesn't really care about safety, it was all empty talk
  4. Something else

I'm also very curious what the internal debate on this is - if I were working on safety inside OpenAI, I'd be very upset.

Matthew_Barnett @ 2024-02-22T19:39 (+1)

There's an IMO fairly simple and compelling explanation for why Sam Altman would want to accelerate AI that doesn't require positing massive cognitive biases or dark motives. The explanation is simply: according to his moral views, accelerating AI is a good thing to do.

It wouldn't be unusual for him to have such a moral view. If one's moral view puts substantial weight on the lives and preferences of currently existing humans, then plausible models of the tradeoff between safety and capabilities say that acceleration can easily be favored. This idea was illustrated by Nick Bostrom in 2003 and more recently by Chad Jones.

Arguably, it is effective altruists who are the unusual ones here. The standard EA theory employed to justify extreme levels of caution around AI is strong longtermism. But most people, probably including Sam Altman, are not strong longtermists.

Shelly Steffler @ 2024-02-22T19:33 (+1) in response to How the idea of "doing good better" makes me feel

"I’ve never felt much pressure in that, just a lot of aching hope." I think that's my new goal; I sometimes feel a lot of pressure. Thank you for sharing!

Shelly Steffler @ 2024-02-22T19:27 (+5) in response to Can we help individual people cost-effectively? Our trial with three sick kids

Wow. I don't have a medical background, but I was volunteering in northern Ghana for several months, and had several opportunities to contribute to individual medical care: an (unsuccessful) brain cancer operation for a 9 year old girl (may she rest in peace), malaria and typhoid medication for a few friends who couldn't afford it, and antibiotics (or pain relief?) for a boy who stepped on a nail. And I worried about the same complexities as you did! Thank you for taking the time, effort, and resources to complete this project and share it with us.

I met a group of nurses who are teaching First Aid to people the way we do in North America... do you see that as something that could be more cost-effective?

Adam Binks @ 2024-02-22T19:23 (+3) in response to Let's advertise EA infrastructure projects, Feb 2024

As well as Fatebook for Slack, at Sage we've made other infrastructure aimed at EAs (amongst others!):

  • Fatebook: the fastest way to make and track predictions
  • Fatebook for Chrome: Instantly make and embed predictions, in Google Docs and anywhere else on the web
  • Quantified Intuitions: Practice assigning credences to outcomes with a quick feedback loop

James Herbert @ 2024-02-22T18:53 (+7) in response to Are there any EAs in Nottingham who want to (re)start a local group?

Are you in touch with Joris and the university group support team at the Centre for Effective Altruism? They might be able to help. More info here: https://www.notion.so/centreforeffectivealtruism/Organizer-Support-Program-OSP-0aa5dc55d97e444da347d8d227db93d4?pvs=4

Good luck!

Jakub Stencel @ 2024-02-22T18:20 (+2) in response to In memory of Steven M. Wise

Really devastating news. I had the pleasure of meeting Steven. His dedication and warmth were deeply inspiring to me, and his down-to-earth character made him fun to be around. You will be missed. :(

Jason @ 2024-02-22T16:37 (+2) in response to FTX expects to return all customer money; clawbacks may go away

SBF also claimed that he could have raised enough liquidity to make customers substantially whole given a few more weeks, but was under extreme pressure to declare bankruptcy. I think there's a good chance this is accurate [. . . .]

 

This seems unlikely to me. The books were just in too bad of a shape for anyone conducting even a minimum amount of due diligence to fork over the needed liquid assets. Selling the illiquid assets would have taken time, and in many cases doing so quickly would have depressed the value of those assets. Moreover, suspending withdrawals until liquidity could be obtained would have been the death knell for FTX's enterprise value. So, contra the earlier situations in which investors poured money into FTX, the potential upside would be fairly limited for accepting the risk of whatever landmines might be buried in FTX's financials.

The estate hopes everyone can be made whole as far as recovering the value in USD on the date of filing, but that is based in part on appreciation in the value of crypto and to a lesser extent on use of the trustee's muscular powers in bankruptcy (such as clawing back ~$30M from EVF, getting out of expensive sponsorship deals, etc.).

Finally, even assuming it was possible to get FTX into shape to attract liquidity, that would have involved massive effort. The universe in which SBF hires an army of forensic accountants to untangle FTX's disastrous accounting very quickly is a universe in which a lot of outsiders now have proof of very serious fraud. Those people are not likely to allow SBF to hide the extent of the fraud from would-be saviors.

bern @ 2024-02-22T18:10 (+1)

Sorry, I just meant the second part ("was under extreme pressure to declare bankruptcy")

Chris Leong @ 2024-02-20T23:52 (+3) in response to Meta EA Regional Organizations (MEAROs): An Introduction

I’d imagine the natural functions of city and national groups to vary substantially.

Rockwell @ 2024-02-22T17:57 (+2)

I think that's a common intuition! I'm curious if there were particular areas covered (or omitted) from this post that you see as more clearly the natural function of one versus the other.

I'll note that a couple factors seem to blur the lines between city and national MEARO functions:

-Size of region (e.g. NYC's population is about 8 million, Norway's is about 5.5 million)
-Composition of MEAROs in the area (e.g. many national MEAROs end up with a home base city or grew out of a city MEARO, some city MEAROs are in countries without a national MEARO)

I could see this looking very different if more resources went toward assessing and intentionally developing the global MEARO landscape in years to come.

Ramiro @ 2024-02-22T17:40 (+2) in response to Estimates on expected effects of movement/pressure group/field building?

I think there's a relevant distinction to be made between field building (i.e., developing a new area of expertise to provide advice to decision-makers - think of the history of gerontology) and movement building (which makes me think of advocacy groups, Freemasons, etc.). Of course, many things lie in between, such as the neoliberals and the Mont Pelerin Society.

Yanni Kyriacos @ 2024-02-21T23:26 (+3) in response to I have a some questions for the people at 80,000 Hours

I don't think it is fair to assume anything about my intentions without first asking - maybe you have a question about my intentions? I'm happy to answer any.

Am I right in saying that you think my use of the word "partnering" is inaccurate when describing 80k listing jobs at labs, having people on their podcast to promote their initiatives and their people, etc.?

I'd be happy to use another word, as it doesn't really change the substance of my claims.

Rebecca @ 2024-02-22T16:58 (+2)

I do think it’s inaccurate to say that 80k listing a job at an organisation indicates a partnership with them. Otherwise you’d have to say that 80k is partnering with e.g. the US, UK, Singapore and EU governments and the UN.

Re the podcast, I don’t think that’s the central purpose or effect. On the podcast homepage, the only lab employee in the highlighted episode section works on information security, and that is pitched as the focus of the episode.

I am disappointed at how soft-balled some of the podcast episodes have been, and I agree it’s plausible that for some guests it would be better if they weren’t interviewed, if that’s the trade-off. However I also think that overstating the case by describing it in a way that would give a mistaken impression to onlookers is unlikely to do anything to persuade 80k about it.

Jason @ 2024-02-22T14:40 (+6) in response to New Open Philanthropy Grantmaking Program: Forecasting

Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness -- that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input -- that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).

I don't think the drivers of low "societal sanity" are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift's love life is part of a conspiracy to re-elect Biden isn't that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your "team" runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.

MaxRa @ 2024-02-22T16:43 (+8)

I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that it won't help a lot here to develop tools to refine probabilities slightly more.

That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. As other commenters say, it's currently mostly attracting a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it's not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g. journalism, politics, academia). Kinda similar to how tracking KPIs was probably once a niche new practice and is now standard practice.

bern @ 2024-02-22T16:04 (+1) in response to FTX expects to return all customer money; clawbacks may go away

Agree. In fact, SBF himself described FTX International as insolvent on his substack. 

Although I think people may be using the term "solvency" in slightly different ways in discussions around FTX. I think that in FTX's case, illiquidity effectively amounted to insolvency, and that it's uncertain how much they could have sold their illiquid assets for. If for some reason you were to trust SBF's own estimate of $8b, their total assets would have (just) covered their total liabilities.

Sullivan & Cromwell's John Ray said in December 2022 "We’ve lost $8bn of customer money" and I think most people have interpreted this as FTX having a net asset value of minus $8b. Presumably, though, Ray was referring either to the temporary shortfall in liquid funds or to the accounting discrepancy that was uncovered that summer/fall.

SBF also claimed that he could have raised enough liquidity to make customers substantially whole given a few more weeks, but was under extreme pressure to declare bankruptcy. I think there's a good chance this is accurate, in part because most of the pressure came from Sullivan & Cromwell and a former partner of the firm, who are now facing a class action lawsuit for their alleged role in the fraud.

(If anyone has evidence that FTX's liabilities did in fact exceed its assets by $8b at the time of the bankruptcy, I would be interested in seeing it.)

Jason @ 2024-02-22T16:37 (+2)

SBF also claimed that he could have raised enough liquidity to make customers substantially whole given a few more weeks, but was under extreme pressure to declare bankruptcy. I think there's a good chance this is accurate [. . . .]

 

This seems unlikely to me. The books were simply in too bad a shape for anyone conducting even a minimal amount of due diligence to fork over the needed liquid assets. Selling the illiquid assets would have taken time, and in many cases doing so quickly would have depressed their value. Moreover, suspending withdrawals until liquidity could be obtained would have been the death knell for FTX's enterprise value. So, contra the earlier situations in which investors poured money into FTX, the potential upside would have been fairly limited for accepting the risk of whatever landmines might be buried in FTX's financials.

The estate hopes everyone can be made whole as far as recovering the value in USD on the date of filing, but that is based in part on appreciation in the value of crypto and to a lesser extent on use of the trustee's muscular powers in bankruptcy (such as clawing back ~$30M from EVF, getting out of expensive sponsorship deals, etc.).

Finally, even assuming it was possible to get FTX into shape to attract liquidity, that would have involved massive effort. The universe in which SBF hires an army of forensic accountants to untangle FTX's disastrous accounting very quickly is a universe in which a lot of outsiders now have proof of very serious fraud. Those people are not likely to allow SBF to hide the extent of the fraud from would-be saviors.

Yanni Kyriacos @ 2024-02-21T23:26 (+3) in response to I have a some questions for the people at 80,000 Hours

I don't think it is fair to assume anything about my intentions without first asking - maybe you have a question about my intentions? I'm happy to answer any.

Am I right in saying that you think my use of the word "partnering" is inaccurate when describing 80k listing jobs at labs, having people on their podcast to promote their initiatives and their people, etc.?

I'd be happy to use another word, as it doesn't really change the substance of my claims.

Rebecca @ 2024-02-22T16:35 (+2)

I’m not assuming anything - I’m stating how it appears to me (i.e. I said ‘this seems like X to me’, not ‘this is X’).

Yanni Kyriacos @ 2024-02-21T23:30 (+3) in response to I have a some questions for the people at 80,000 Hours

If everyone I targeted with marketing initiatives listened to an entire 3-hour podcast, my job (as a marketer) would be a lot easier.

Of 80k's entire reach, I'd be surprised if 1% had listened to an entire 3-hour podcast with a lab in the last 6 months.

Most people will glance at their content and see that they're "working together" (you can replace "working together" with "partnership" or whatever phrase you think is more accurate).

Rebecca @ 2024-02-22T16:33 (+4)

Most people will glance at their content and see that they're "working together"

I still don’t see how that would be the conclusion people would draw

Gemma Paterson @ 2024-02-22T15:45 (+2) in response to From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program

On the LMIC migration platform, I found this UK based for-profit doing what looks like a similar thing:

https://www.getborderless.io/about-us

Filip_Murar @ 2024-02-22T16:31 (+3)

Hi Gemma, thanks for sharing! That platform indeed has several similarities with our proposed nonprofit idea (though also some differences, such as our focus on low- and mid-skilled workers and on a specific country of origin rather than a specific destination country). Exciting to see more work being done in this otherwise quite neglected space!

Paul_Lang @ 2024-02-22T16:28 (+1) in response to Veg*ns, what supplements do you take?

I am also lacto-vegetarian and wanted to buy https://veganpowah.com/product/vegan-powah-180/. They have some good info about their ingredients on that website. However, they are out of stock, so I purchased most ingredients in powder form. The exceptions are things I take separately or don't need: Omega-3 (I have a product with higher EPA; I also don't know how Vegan Powah got oil into powder form, and I have concerns about chemical stability if I mix it in myself), iron (it inhibits zinc absorption, so I take it separately), selenium (I just eat ~2 brazil nuts/day), and B vitamins (I have high B12; I should probably check the others some time). I mixed everything together in increasing order of amount (i.e. added the ingredient with the lowest mass to the ingredient with the second-lowest mass in an empty yoghurt bucket, rolled that around, added the ingredient with the third-lowest mass, rolled the bucket around again, and so on). I hope everything is mixed reasonably well; at least when I mix my exercise-recovery shake like that, the brown cocoa powder is smoothly distributed. I was thinking about putting the mixture into capsules, but that seems like a big effort, so I just put the powder into my breakfast cereal. Maybe I should check if this is OK.

Jason @ 2024-02-22T14:40 (+6) in response to New Open Philanthropy Grantmaking Program: Forecasting

Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness -- that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input -- that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).

I don't think the drivers of low "societal sanity" are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift's love life is part of a conspiracy to re-elect Biden isn't that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your "team" runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.

MaxRa @ 2024-02-22T16:28 (+8)

Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:

a) relevant markets are simply making an error in neglecting quantified forecasts

  • e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic potential
  • I personally find it plausible when looking, e.g., at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if made more concrete and quantified

b) relevant players already train the relevant skills into their employees well enough themselves (e.g. that's my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)

c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be nailed down on giving a number if you are not practiced in it

d) forecasting is a nerdy practice, and those practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)

e) maybe more systematically I'm thinking that it's often not in the interest of entrenched powers to have forecasters call bs on whatever they're doing.

  • in corporate hierarchies people in power prefer the existing credentialism, and oppose new dimensions of competition
  • in other arenas there seems to be a constant risk of forecasters raining on your parade

f) maybe previous forecast-like practices ("futures studies", "scenario planning") didn't yield many benefits and made companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I've seen associated with these words)

Ben Millwood @ 2024-02-14T22:12 (+12) in response to FTX expects to return all customer money; clawbacks may go away

My understanding (for whatever it's worth) is that most of the reason why a full repayment looks feasible now is a combination of:

  • Creditors are paid back the dollar value of their assets at the time of bankruptcy. Economically it's a bit like everyone was forced to sell all their crypto to FTX at bankruptcy date, and then the crypto FTX held appreciated a bunch in the meantime.
  • FTX held a stake in Anthropic, and for general AI hype reasons that's likely to have appreciated a lot too.

I think it's reasonable to think of both of these as luck, and certainly a company relying on them to pay their debts is not solvent.
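
A toy numeric sketch of the first bullet above, under made-up assumptions (the prices and quantities are illustrative, not actual FTX figures):

```python
# Toy illustration: customer claims are fixed in petition-date dollars,
# so crypto appreciation after the filing accrues to the estate.
# All numbers below are hypothetical.

PETITION_DATE_BTC_PRICE = 17_000  # assumed USD price at the filing date
CURRENT_BTC_PRICE = 50_000        # assumed USD price today

customer_btc_deposited = 1.0
# The claim is dollarized at the petition date, not repaid in kind:
customer_claim_usd = customer_btc_deposited * PETITION_DATE_BTC_PRICE

# Suppose the estate recovered only half the coins owed to customers:
estate_btc_recovered = 0.5
estate_assets_usd = estate_btc_recovered * CURRENT_BTC_PRICE

print(f"Claim (fixed at filing): ${customer_claim_usd:,.0f}")  # $17,000
print(f"Estate assets today: ${estate_assets_usd:,.0f}")       # $25,000
print("Payable in full:", estate_assets_usd >= customer_claim_usd)  # True
```

On these made-up numbers, the estate can pay 100% of the dollarized claim while holding only half the coins, because the price roughly tripled in the meantime. That is why full repayment now need not imply solvency at the time of filing.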

bern @ 2024-02-22T16:21 (+3)

Perhaps. But it sounds like many[1] have been treating the fact that FTX did in fact face a liquidity crisis as strong (conclusive?) evidence of SBF's excessive risk-taking in a way that's relevant for intent. And now they claim that the extent to which customers are made whole or FTX was insolvent is not relevant.

It feels like people in general are happy to attribute good luck to his decisions but not bad luck.

  1. ^

    Including the prosecution: "its customers were left with billions of dollars in losses", "the defendant talked with his inner circle about...how customers could never be repaid", "Billions of dollars from thousands of people gone", "there is no serious dispute that around $10 billion went missing"...

JoshuaBlake @ 2024-02-14T08:27 (+31) in response to FTX expects to return all customer money; clawbacks may go away

Bear in mind that even if FTX can pay everyone back now, that does not mean they were solvent at the point they were put into bankruptcy.

bern @ 2024-02-22T16:04 (+1)

Agree. In fact, SBF himself described FTX International as insolvent on his substack. 

Although I think people may be using the term "solvency" in slightly different ways in discussions around FTX. I think that in FTX's case, illiquidity effectively amounted to insolvency, and that it's uncertain how much they could have sold their illiquid assets for. If for some reason you were to trust SBF's own estimate of $8b, their total assets would have (just) covered their total liabilities.

Sullivan & Cromwell's John Ray said in December 2022 "We’ve lost $8bn of customer money" and I think most people have interpreted this as FTX having a net asset value of minus $8b. Presumably, though, Ray was referring either to the temporary shortfall in liquid funds or to the accounting discrepancy that was uncovered that summer/fall.

SBF also claimed that he could have raised enough liquidity to make customers substantially whole given a few more weeks, but was under extreme pressure to declare bankruptcy. I think there's a good chance this is accurate, in part because most of the pressure came from Sullivan & Cromwell and a former partner of the firm, who are now facing a class action lawsuit for their alleged role in the fraud.

(If anyone has evidence that FTX's liabilities did in fact exceed its assets by $8b at the time of the bankruptcy, I would be interested in seeing it.)

Ulrik Horn @ 2024-02-20T11:57 (+1) in response to An Analysis of Engineering Jobs on the 80,000 Hours Job Board

Excellent post Jessica, and intriguing! On the point about jobs outside the West, I would be curious to learn more if anyone has looked into it:
-A lot of disease outbreaks happen in poorer countries - could there be opportunities to deliver engineered disease management tools in these locations? Might even overlap a bit with AIM/CE work.
-What about engineering work on semiconductors and/or chips? I am super naive about this but would guess Taiwan and the Netherlands would also be potential locations to have lots of impact as an engineer?

I also have a question which is more personal:
-As a mech eng myself, I am curious about the mapping of engineering disciplines to cause areas. I would naively think that mech eng maps onto CC and nuclear, while bio eng maps more onto biosec and alt proteins.

Jessica Wen @ 2024-02-22T15:53 (+6)

Thanks for your comment Ulrik! Some thoughts on your bullet points:

- I think there probably are opportunities to do biosecurity work in LMICs (e.g. Africa CDC's Biosafety and Biosecurity initiative, the Southeast Asia Strategic Multilateral Dialogue on Biosecurity) but these seem mostly policy-based rather than focused on technical interventions (likely because there's just more private and public money for developing technical interventions in high-income countries).

- 80k mentioned that they would be adding more technical governance jobs (i.e., more roles in semiconductors/chips) in the near future, so hopefully the geographical bias might shift somewhat. However, my intuition is that the US will continue to be a hotspot for jobs in this sector because of the sheer size and concentration of semiconductor companies (and maybe because of the higher likelihood of actually affecting governance/regulations/standards?)

- We did some mapping of engineering disciplines to cause areas, which you can see here and on our Resource Portal (it is by no means comprehensive – we even miss out nuclear!) Turns out mechanical engineers are pretty useful in a lot of cause areas. Hope that's helpful!

Gemma Paterson @ 2024-02-22T15:45 (+2) in response to From salt intake reduction to labor migration: Announcing top ideas for the AIM 2024 CE Incubation Program

On the LMIC migration platform, I found this UK based for-profit doing what looks like a similar thing:

https://www.getborderless.io/about-us

ezrah @ 2024-02-21T23:35 (+12) in response to Can we help individual people cost-effectively? Our trial with three sick kids

Loved this post. Like sawyer wrote - it made me emotional and made me think, and feels like a great example of what EA should be.

There actually is a non-profit I'm aware of (no affiliation) that hits a lot of the criteria mentioned in the comments - https://saveachildsheart.org/. They treat life-threatening heart disease in developing countries, often by paying for transportation to Israel, where the children receive pro-bono treatment from a hospital the nonprofit has a partnership with. From a (very) quick look at their financial statements and annual report, it appears to cost them around $6,300 to save a life, although that number could be significantly off in either direction. On the one hand, the annual report suggests the nonprofit is not especially focused on the most cost-effective parts of its programming and does many activities that look like PR (which is probably morally good if it allows them to scale). On the other hand, it's not clear from the AR how severe the disease is in the children treated, or what share of their treatments are actually life-saving.

Your post, and nonprofits like this, make me think of something EA often misses with its bird's-eye approach to solutions - leverage. Both you and saveachildsheart use leverage (your proximity, their partnership with a first-world medical institution) to be impressively cost-effective, but leverage is hard to spot in a priori spreadsheets.

Jason @ 2024-02-22T15:27 (+6)

Could you say a bit more about the ~$6,300 figure? I have 547 lives saved from the annual report (p. 5) and about $9.5MM USD in expenses from the financial statements, which naively implies roughly $17,400 per life. Admittedly, most of this is related to the establishment of a "Children's Hospital at Wolfson" -- but it's not clear to me that these costs should be excluded. I suppose that the organization is doing its current work without said hospital existing yet, but the presence and magnitude of that expenditure makes me wonder -- at a minimum -- whether they have room for more funding at ~$6,300.

By rough analogy, it wouldn't be appropriate for an organization to fundraise separately for bednets and for distribution costs, and quote a cost-effectiveness figure to distribution-cost donors of (distribution costs / total impact).
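
For concreteness, a rough sketch of the division behind this question, using the figures quoted in this thread (treat them as approximate):

```python
# Back-of-the-envelope check of the ~$6,300-per-life figure, using the
# numbers cited above (annual report p. 5; financial statements).

lives_saved = 547               # from the annual report
total_expenses_usd = 9_500_000  # ~$9.5MM USD in total expenses

cost_per_life = total_expenses_usd / lives_saved
print(f"Naive cost per life: ${cost_per_life:,.0f}")  # ~$17,400

# The ~$6,300 figure only follows if roughly two-thirds of expenses
# (e.g. the hospital-related spending) are excluded from the numerator:
implied_expense_base = 6_300 * lives_saved
print(f"Expense base implied by $6,300/life: ${implied_expense_base:,.0f}")
# -> ~$3.4MM, i.e. about a third of the ~$9.5MM in total expenses
```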

MHR @ 2024-02-22T14:11 (+9) in response to Upcoming EA conferences in 2024

FYI the EAG Boston link currently goes to the 2023 event

OllieBase @ 2024-02-22T15:03 (+6)

Thank you! Fixed. I had forgotten what year we're in.

Devon Fritz @ 2024-02-22T14:37 (+15) in response to New Open Philanthropy Grantmaking Program: Forecasting

I think one of the major problems I see in EA as a whole is a fairly loose definition of 'impact'.

I find the statement is more precise if you put "longtermism" where "EA" is. Is that your sense as well?

CAISID @ 2024-02-22T14:49 (+4)

I think that's a good modification of my initial point, you may well be right.