There's a trade-off here, but I think some attendees who can provide valuable input wouldn't attend if their name was shared publicly and that would make the event less valuable for the community.
That said, perhaps one thing we can do is emphasise the benefits of sharing their name (increases trust in the event/leadership, greater visibility for the community about direction/influence) when they RSVP for the event, I'll note that for next time as an idea.
Thanks for sharing this, it does seem good to have transparency into this stuff.
My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).
To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)
Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).
We were not trying to optimise the attendee list for connectedness or historical engagement with the community, but rather who can contribute to making progress on our core themes; brand and funding. When you see what roles these attendees have, I think it's fairly evident why we invited them, given this lens.
I'll also note that I think it's healthy for there to be people joining for this event who haven't bene in the community as long as you have. They can bring new perspectives, and offer expertise the community / organisational leaders has been lacking.
We might consider programs that pay for people with desirable traits to reproduce.
The question is: who gets to decide what the "desirable traits" are? Eugenicists seem to focus a lot on the desirability of racial traits, which I vehemently disagree with. If the eugenicists got their way, I don't think the future they'd create is one I would consider desirable. And this has been a central part of the movement since its inception. The founder of eugenics, sir Galton, created a racial hierarchy with whites at the top and wrote things like:
There exists a sentiment, for the most part quite unreasonable, against the gradual extinction of an inferior race.
Now, just because you've named your account after him and advocate for eugenics doesn't automatically mean you secretly share that view, but hopefully you can forgive someone for becoming somewhat concerned.
Miriam-Webster says "authentic" is the 2023 'word of the year'; how apt! (Note how a theme running through just about each individual comment is what the given poster was made to feel, i.e. the way in which your spirit resonated with them. Human beings remember how others make us feel - and all the more so when that authenticity resonates in interactions which are personal and empathetic.)
In short: independent of the material (quantitative) success of GWWC, there is much to be thankful for with respect to such leadership (here's to hoping GWWC is as lucky in succession - Luke Freeman is one-of-a-kind!)...
I think its excellent that you guys are working on publicly evaluating past grants! Also congrats to all the new hires especially Alejandro who I know as a very thoughtful and smart guy.
In general, I find most channels posting 'positive climate news' honestly a bit annoying because they tend to focus on things that really don't matter all that much in the big picture, or present their solution as a silver bullet while in fact there is none. (But maybe I'm a bit too critical here.)
If you're interested, we've compiled a big list of climate-related resources on the Effective Environmentalism website - some of which are quite hopeful! For example, the books Not the End of the World and Regenesis. There are also a bunch of podcasts and videos on climate solutions that have a bit of a "yes we can" vibe.
Outside EA circles, I really love the YouTube channel Just Have a Think for some climate hope. And (not a video, but a static image), the falling costs of solar is one of the most hopeful graphics in the climate cause area!
I'm not sure I follow your examples and logic, perhaps you could explain because drunk driving is in itself a serious crime in every country I know of. Are you suggesting it be criminal to merely develop an AI model regardless of whether it's commercialized or released?
Regarding pharmaceuticals, yes, they certainly do need to pass several phases of clinical research and development to prove sufficient levels of safety and efficacy because by definition, FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no such similar regulation on software. Developing a drug discovery platform or even the drug itself is not a crime (as long as it's not released.)
You could just as easily extrapolate to individuals. We cannot legitimately litigate (sue) or prosecute someone for a crime they haven't committed. This is why we have due process and basic legal rights.( Technically anything can be litigated with enough money thrown at it but you cant sue for damages unless damages actually occurred)
Drunk driving is illegal because it risks doing serious harm. It's still illegal when the harm has not occurred (yet). Things can be crimes without harm having occurred.
Thanks for sharing this, it does seem good to have transparency into this stuff.
My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).
To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)
The judging process should be complete in the next few days. I expect we'll write to winners at the end of next week, although it's possible that will be delayed. A public announcement of the winners is likely to be a few more weeks.
I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I've noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I introduced the topic of AI risk (as a joking topic in a class) and referred to a study (possibly this one: AI Existential Risk Survey) that suggests a significant portion of AI experts are concerned about potential existential risks. The students' immediate reaction was to challenge the study's methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things "better."
I am also feeling that there could be many other survival problems (jobs, equality, etc) in the society, that makes them feel this problem could still be very far. But I know of a friend who try to work on AI governance and raise more awareness.
Executive summary: A survey of elite Chinese university students found they are generally optimistic about AI's benefits, strongly support government regulation, and view AI as less of an existential threat compared to other risks, though they believe US-China cooperation is necessary for safe AI development.
Key points:
80% of students believe AI will do more good than harm for society, higher than in Western countries.
85% support government regulation of AI, despite high optimism about its benefits.
Students ranked AI lowest among potential existential threats to humanity.
61% believe US-China cooperation is necessary for safe AI development.
Surveillance was rated as the top AI-related concern, followed by misinformation and existential risk.
50% agree AI will eventually be more intelligent than humans, lower than estimates from other surveys.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I've noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I introduced the topic of AI risk (as a joking topic in a class) and referred to a study (possibly this one: AI Existential Risk Survey) that suggests a significant portion of AI experts are concerned about potential existential risks. The students' immediate reaction was to challenge the study's methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things "better."
They definitely are! Judge discussions are ongoing, and after that we'll be contacting winners a while before any public announcements, so I'm afraid this won't be imminent, but we are looking forward to getting to talk about the winners publicly.
As someone who discovered EA while studying mechanical engineering, I have thought about this a lot. My initial plan was to work in renewable energy technologies, but I shifted towards working on plant-based meat technologies and developing more efficient processing equipment. Also, I have been able to use my background in mechanical engineering to help ALLFED as a volunteer by researching how various resilient food technologies could scale up in the event of a global catastrophe. I also recommend anyone interested in learning about the intersection of EA and engineering to check out High Impact Engineer's resources page: https://www.highimpactengineers.org/resources
I think this is a great idea. Just wanted to flag that we've done this with other clubs at the University of Melbourne in the past. To give some concrete examples of how this can achieve quite a lot without a huge amount of time and effort:
We successfully diverted $500 to GiveDirectly on one occasion, from the annual revenue of a club that raises money for charity, simply by attending their AGM and giving a presentation
On another occasion, we joined as a co-host for a charity fundraiser event with several other clubs, and were allowed to select high impact / EA-aligned charities as the recipients for the event, which ended up raising close to $1,200 total
I would definitely encourage EA groups at other universities to try similar things. There could be a lot of low-hanging fruit, e.g. clubs who simply haven't thought that carefully about their choices of charities before.
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
My argument is that barring them doesn't stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is
Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.
I think in some generality scandals tend to be "because things aren't transparent enough", since greater transparency would typically have meant issues people would be unhappy with would have tended to get caught and responded to earlier. (My case had elements of "too transparent", but also definitely had elements of "not transparent enough".)
Anyway I agree that this particular type of transparency wouldn't help in most cases. But it doesn't seem hard to imagine cases, at least in the abstract, where it would kind of help? (e.g. imagine EA culture was pushing a particular lifestyle choice, and then it turned out the owner of the biggest manufacturer in that industry got invited to core EA events)
I'm from Chicago but am currently living in Medellin, Colombia working remotely for Eder Financial, a non-profit financial company serving other non-profits and religious organizations.
I was a police officer for about 5 years after college (my childhood dream). After some time I realized it was not what I wanted to do for the rest of my life, so I left the force and switched to private sector work.
Even though I left my job as a police office I still want to use my career to help people. I first learned about Effective Altruism through 80,000 Hours and am excited about the work and ideas of the EA community.
I've been focusing on building skills in strategy and operations in my current role, but now I'm excited to start looking for work opportunities at EA organizations.
Looking forward to getting to know others in the EA community and learning more about the EA community itself!
Some functionary involved in malaria vaccine distribution to tell us how they could expand and accelerate.
Someone to explain to us how that Danish pharmaceutical firm's governance structure works, and whether it's better for continuous investment in innovation than the one where "founder mode" ends and lawyers take the reins of firms crucial to human progress.
I liked your interview with a professor who talked about defense methods against pandemics and potential gene drive efficacy against malaria, new world screw worm, lyme disease, and maybe one other nasty enemy. Works in Progress also had an article about gene drives' promise against diseases like these in its most recent edition. I would also like to know about Jamaica and Uruguay's attempts to open new fronts against the New World Screw Worm.
I liked an interview that I believe to have been on 80k hours about efforts to reduce air pollution in India. I would like to know what effect could be expected from allowing export of natural gas from countries like Turkmenistan, Iran, and Venezuela to India.
I am interested in learning about the importance of fertilizer prices and natural gas prices to global nutrition. I think there is a woman at the Breakthrough Institute who studies this topic. I suppose oil prices may be an important input, too.
I would like to know more about how USD interest rates and oil prices impact global poverty, so as to better evaluate the importance of factors like home rental inflation and economic sanctions in determining poverty rates.
I can't speak for the author, and while I'd classify these as examples of suspicion and/or criticism of EA biosecurity rather than a "backlash against EA", here are some links:
I'll also say I've heard criticism of "securitising health" which is much less about EAs in biosecurity and more clashing concerns between groups that prioritise global health and national security, where EA biosecurity folks often end up seen as more aligned with the national security concerns due to prioritising risks from deliberate misuse of biology.
Thanks Tessa. I actually came to this post and asked this question because it was quoted in the 'Exaggerating the risks' series, but then this post didn't give any examples to back up this claim, which Thorstad has then quoted. I had come across this article by Undark which includes statements by some experts that are quite critical of Kevin Esvelt's advocacy regarding nucleic acid synthesis. I think the Lentzos article is the kind of example I was wondering about - although I'm still not sure if it directly shows that the failure to justify their position on the details of the source of risk itself is the problem. (Specifically, I think the key thing Lentzos is saying is the risks Open Phil is worrying about are extremely unlikely in the near-term - which is true, they just think it's more important for longtermist reasons and are therefore 1) more worried about what happens in the medium and long term and 2) still worried about low risk, high harm events. So the dispute doesn't seem to me to be necessarily related to the details of catastrophic biorisk itself.)
I'm having trouble understanding this. The part that comes closest to making sense to me is this summary:
The fact that life has survived so long is evidence that the rate of potentially omnicidal events is low...[this and the anthropic shadow effect] cancel out, so that overall the historical record provides evidence for a true rate close to the observed rate.
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let's say 99.9%) cannot support life (eg because they're too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can't change our estimate of the frequency of bad things? Why would this be?
I’m not sure I understand the second question. I would have thought both updates are in the same direction: the fact that we’ve survived on Earth a long time tells us that this is a planet hospitable to life, both in terms of its life-friendly atmosphere/etc and in terms of the rarity of supervolcanoes.
We can say, on anthropic grounds, that it would be confused to think other planets are hospitable on the basis of Earth’s long and growing track record. But as time goes on, we get more evidence that we really are on a life-friendly planet, and haven’t just had a long string of luck on a life-hostile planet.
The anthropic shadow argument was an argument along the lines, “no, we shouldn’t get ever more convinced we’re on a life-friendly planet over time (just on the evidence that we’re still around). It is actually plausible that we’ve just had a lucky streak that’s about to break—and this lack of update is in some way because no one is around to observe anything in the worlds that blow up”.
All the nearterm or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, I.e. people just don’t want their lives radically transformed even if the people doing think that’s irrational. Fear and distrust of AI corporations.
These would all be considered wrong reasons in EA but PauseAI welcomes all.
Plus some downstream consequences of the above, like the social and political instability that seems likely with massive job loss. In past economic transformations, we've been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and associated status/dignity/pride (and that it isn't coming back) could be fairly dangerous voters and might even be in the majority. I also have concerns about fair distribution of gains from AI, having a few private companies potentially corner the market on one of the world's most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.
My own take is that civil, economic, and political society has to largely have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well -- potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.
All the nearterm or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, I.e. people just don’t want their lives radically transformed even if the people doing think that’s irrational. Fear and distrust of AI corporations.
These would all be considered wrong reasons in EA but PauseAI welcomes all.
Are you willing to draft some rough bullet points (that you urge people to NOT copy and paste) on SB-1047 that might help people complete this exercise faster?
Also, do you have a sense for how much better a slightly longer letter (e.g. a 1 page note) is as compared to just a 1 paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."
FWIW, I did just send off an email, but it took me more like ~30 minutes to skim this post and then draft something. I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.
Are you willing to draft some rough bullet points (that you urge people to NOT copy and paste) on SB-1047 that might help people complete this exercise faster?
This page has a number of points (it also includes a few other actions that you can take).
Also, do you have a sense for how much better a slightly longer letter (e.g. a 1 page note) is as compared to just a 1 paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."
One paragraph is definitely fine unless you have personal experience that is relevant (for example, you are an AI researcher, founder, etc.).
I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.
Makes sense! It is important to create the PDF, just how these things are supposed to be submitted.
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.
...some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly...
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
My argument is that barring them doesn't stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is
Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.
Thanks for this post, it warmed our hearts! Glad we've been able to help you understand the world better over the years and maybe even have more impact too. ❤️
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetics grounds: Charities should be accountable to someone, in the same way as companies are to shareholders, and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
I'm not expressing an opinion on that. The post makes a clear claim that their legal status re tax deductibility will change if more EU citizens sign up. This surprises me and I want to understand it better. I agree there are other benefits to having more members, I'm not disputing that
This seems fine to me - I expect that attending this is not a large fraction of most attendee's impact on EA, and that some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly (I expect there's some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.
And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.
...some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly...
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.
I'm having trouble understanding this. The part that comes closest to making sense to me is this summary:
The fact that life has survived so long is evidence that the rate of potentially omnicidal events is low...[this and the anthropic shadow effect] cancel out, so that overall the historical record provides evidence for a true rate close to the observed rate.
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let's say 99.9%) cannot support life (eg because they're too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can't change our estimate of the frequency of bad things? Why would this be?
To answer the first question, no, the argument doesn’t rely on SIA. Let me know if the following is helpful.
Suppose your prior (perhaps after studying plate tectonics and so on, but not after considering the length of time that’s passed without an an extinction-inducing supervolcano) is that there’s probability “P(A)”=0.5 that risk of an extinction-inducing supervolcano at the end of each year is 1/2 and probability “P(B)”=0.5 that the risk is 1/10. Suppose that the world lasts at least 1 year and most 3 years regardless.
Let “A1” be the possible world in which the risk was 1/2 per year and we blew up at the end of year 1, “A2” be that in which the risk was 1/2 per year and we blew up at the end of year 2, and “A3” be that in which the risk was 1/2 per year and we never blew up, so that we got to exist for 3 years. Define B1, B2, B3 likewise for the risk=1/10 worlds.
Suppose there’s one “observer per year” before the extinction event and zero after, and let “Cnk”, with k<=n, be observer #k in world Cn (C can be A or B). So there are 12 possible observers: A11, A21, A22, A31, A32, A33, and likewise for the Bs.
If you are observer Cnk, your evidence is that you are observer #k. The question is what Pr(A|k) is; what probability you should assign to the annual risk being 1/2 given your evidence.
Any Bayesian, whether following SIA or SSA (or anything else), agrees that
Pr(A|k) = Pr(k|A)Pr(A)/Pr(k),
where Pr(.) is the credence an observer should have for an event according to a given anthropic principle. The anthropic principles disagree about the values of these credences, but here the disagreements cancel out. Note that we do not necessarily have Pr(A)=P(A): in particular, if the prior P(.) assigns equal probability to two worlds, SIA will recommend assigning higher credence Pr(.) to the one with more observers, e.g. by giving an answer of Pr(coin landed heads) = 1/3 in the sleeping beauty problem, where on this notation P(coin landed heads) = 1/2.
On SSA, your place among the observers is in effect generated first by randomizing among the worlds according to your prior and then by randomizing among the observers in the chosen world. So Pr(A)=0.5, and
Pr(1|A) = 1/2 + 1/4*1/2 + 1/4*1/3 = 17/24
(since Pr(n=1|A)=1/2, in which case k=1 for sure; Pr(n=2|A)=1/4, in which case k=1 with probability 1/2; and Pr(n=3|A)=1/4, in which case k=1 with probability 1/3);
Pr(2|A) = 1/4*1/2 + 1/4*1/3 = 5/24; and
Pr(3|A) = 1/4*1/3 = 2/24.
For simplicity we can focus on the k=2 case, since that’s the case analogous to people like us, in the middle of an extended history. Going through the same calculation for the B worlds gives Pr(2|B) = 63/200, so Pr(2) = 0.5*5/24 + 0.5*63/200 = 157/600.
So Pr(A|2) = 125/314 ≈ 0.4.
On SIA, your place among the observers is generated by randomizing among the observers, giving proportionally more weight to observers in worlds with proportionally higher prior probability, so that the probability of being observer Cnk is
1/12*Pr(Cn) / [sum over possible observers, labeled “Dmj”, of (1/12*Pr(Dm))].
This works out to Pr(2|A) = 2/7 [6 possible observers given A, but the one in the n=1 world “counts for double” since that world is twice as likely than the n=2 or =3 worlds a priori];
Pr(A) = 175/446 [less than 1/2 since there are fewer observers in expectation when the risk of early extinction is higher], and
Pr(2) = 140/446, so
Pr(A|2) = 5/14 ≈ 0.36.
So in both cases you update on the fact that a supervolcano did not occur at the end of year 1, from assigning probability 0.5 to the event that the underlying risk is 1/2 to assigning some lower probability to this event.
But I said that the disagreements canceled out, and here it seems that they don’t cancel out! This is because the anthropic principles disagree about Pr(A|2) for a reason other than the evidence provided by the lack of a supervolcano at the end of year 1: namely the possible existence of year 3. How to update on the fact that you’re in year 2 when you “could have been” in year 3 gets into doomsday argument issues, which the principles do disagree on. I included year 3 in the example because I worried it might seem fishy to make the example all about a 2-period setting where, in period 2, the question is just “what was the underlying probability we would make it here”, with no bearing on what probability we should assign to making it to the next period. But since this is really the example that isolates the anthropic shadow consideration, observe that if we simplify things so that the world lasts at most 2 years (and there 6 possible observers), SSA gives
An anthropic principle that would assign a different value to Pr(A|2)--for the extreme case of sustaining the “anthropic shadow”, a principle that would assign Pr(A|2)=Pr(A)=1/2--would be one in which your place among the observers is generated by
first randomizing among times k (say, assigning k=1 and k=2 equal probability);
then over worlds with an observer alive at k, maintaining your prior of Pr(A)=1/2;
[and then perhaps over observers at that time, but in this example there is only one].
This is more in the spirit of SSA than SIA, but it is not SSA, and I don't think anyone endorses it. SSA randomizes over worlds and then over observers within each world, so that observing that you’re late in time is indeed evidence that “most worlds last late”.
I'm surprised that having more members let's you offer better tax deductions (and that they don't even need to be Danish taxpayers!), what's up with that?
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetics grounds: Charities should be accountable to someone, in the same way as companies are to shareholders, and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
Try to see if your company's expertise match with any of the recommended fields of work by Giving Green and High Impact Engineers! Especially this latter link is probably interesting for you and your colleagues.
I think that @Ulrik Horn's suggestion about how to adapt to worst-case climate scenarios is a good one, especially if you focus on places that are likely severely hit but don't have adequate plans for adaptation yet.
Some first ideas, going by the expertise of the company:
Chemical regulation (e.g., REACH) and LCA (Life Cycle Assessment) - Helping firms comply is probably not the most cost-effective, since compliance is mandatory, so they'll just find another consultancy firm. Counterfactual is probably low here.
Circular economy. I'd say this really depends on which sector you're trying to make circular. Something related to the circular use of rare earth materials for the energy transition seems promising, but this is still a broad theme.
Soil remediation. - Don't know enough to comment.
Environmental permits and environmental impact assessments (effects on water, air quality, biodiversity, etc.). Similar to my first comment - if this is for compliance, firms will hire a consultant anyway, so the counterfactual here is lower. If you do EIAs or permitting for important new environmental infrastructure (e.g. hot rock geothermal), this may be different.
District heating networks. This is interesting, if you can contribute to quickly deploying geothermal energy around the world! Talk with the folks at Project Innerspace to know more.
Electricity grids. There's a big potential here to electrify heavy industries that now rely on fossil fuels. The price of renewables has been dropping quite rapidly, but there's quite a challenge in using renewables for hard-to-decarbonise processes like cement. Talk to or visit the websites of e.g. Future Cleantech Architects, Clean Air Task Force, and Industrious Labs to know more! I reckon energy storage and load shifting could be interesting ones to look at too! And maybe easier to get government funding for.
Sustainability and biodiversity consultancy - A little to broad to comment on.
Mobility solutions (design of roads, bridges, etc.) - no comment
Sewerage and water infrastructure - EU countries have a lot of their sewerage and water infrastructure in order already, so within the EU I don't think this is the most cost-effective work on the margin. Some really innovative solutions could be interesting, perhaps. E.g. desalinisation?
Building and factory design. This seems promising. Lots of industrial processes are hard to get rid of (e.g. we'll need cement, and worldwide demand is set to increase!) but there hasn't been a lot of work on clean production in this area. Looking at cleaner production of cement, steel, or other industries seems promising, and factory design is probably a big part of that. Try to reach out to Industrious Labs!
In general, I find most channels posting 'positive climate news' honestly a bit annoying because they tend to focus on things that really don't matter all that much in the big picture, or present their solution as a silver bullet while in fact there is none. (But maybe I'm a bit too critical here.)
If you're interested, we've compiled a big list of climate-related resources on the Effective Environmentalism website - some of which are quite hopeful! For example, the books Not the End of the World and Regenesis. There are also a bunch of podcasts and videos on climate solutions that have a bit of a "yes we can" vibe.
Outside EA circles, I really love the YouTube channel Just Have a Think for some climate hope. And (not a video, but a static image), the falling costs of solar is one of the most hopeful graphics in the climate cause area!
I'm having trouble understanding this. The part that comes closest to making sense to me is this summary:
The fact that life has survived so long is evidence that the rate of potentially omnicidal events is low...[this and the anthropic shadow effect] cancel out, so that overall the historical record provides evidence for a true rate close to the observed rate.
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let's say 99.9%) cannot support life (eg because they're too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can't change our estimate of the frequency of bad things? Why would this be?
I'm surprised that having more members let's you offer better tax deductions (and that they don't even need to be Danish taxpayers!), what's up with that?
EDIT: Someone on lesswrong linked a great report by Epoch which tries to answer exactly this.
With the release of openAI o1, I want to ask a question I've been wondering about for a few months.
Like the chinchilla paper, which estimated the optimal ratio of data to compute, are there any similar estimates for the optimal ratio of compute to spend on inference vs training?
In the release they show this chart:
The chart somewhat gets at what I want to know, but doesn't answer it completely. How much additional inference compute would I need a 1e25 o1-like model to perform as well as a one shotted 1e26?
Additionally, for some x number of queries, what is the optimal ratio of compute to spend on training versus inference? How does that change for different values of x?
Are there any public attempts at estimating this stuff? If so, where can I read about it?
Thank you Luke for your leadership of GWWC, and your mentorship of me as an employee at GWWC.
It was seeing a talk from you and Peter Singer that got me involved with effective giving, and GWWC. So without you, I would never have been involved with something that has become one of the most meaningful parts of my life.
It has been an immense honour to work with someone as passionate, intelligent and caring as you.
So much of how GWWC has grown in the past few years since you joined is because of your hard work, and all of us who have worked with you personally know how much effort and love you have put into GWWC.
I'm personally very sorry to see you leave GWWC, and your legacy will be felt strongly for years to come.
Agree. This really shouldn't take longer than 10 minutes. In this case (can't speak for every case like this), it does matter that the messages are unique and not copy pasted, which is why I didn't provide a letter to copy paste. But it is highly unlikely anybody will read the letter in great detail.
Are you willing to draft some rough bullet points (that you urge people to NOT copy and paste) on SB-1047 that might help people complete this exercise faster?
Also, do you have a sense for how much better a slightly longer letter (e.g. a 1 page note) is as compared to just a 1 paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."
FWIW, I did just send off an email, but it took me more like ~30 minutes to skim this post and then draft something. I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as other motivations for pausing / slowing down AI could be perceived with the odor of astroturf.
I say all that as someone who thinks there are compelling reasons that are completely independent of AI safety grounds to pause, or at least slow down, on AI.
I also got an error message when I tried to sign up. I didn't fill in the CPR-nr. (Tax ID) field but I doubt that was the cause. This is what the message says:
Der opstod en serverfejl. Prøv venligst igen. Skriv til os på donation@giveffektivt.dk hvis problemet opstår igen. Hvis muligt, så fortæl gerne, hvordan man kan fremprovokere fejlen.
I'm obviously sad that you're moving on, but I trust your judgment that it's the right decision. I've deeply appreciated your hard work on GWWC over these last years - it's both a hugely impactful project from an impartial point of view and, from my own partial point of view, one that I care very strongly about. I think you're a hard-working, morally motivated and high-integrity person and it's always been very reassuring to me to have you at the helm. Under your leadership you transformed the organisation. So: thank you!
I really hope your next step helps you flourish and continues to give you opportunities to make the world better.
Like Buck and Toby, I think this is a great piece of legislation and think that it's well worth the time to send a letter to Governor Newsom. I'd love to see the community rallying together and helping to make this bill a reality!
So that if someone had some free time and/or wanted to practice answering such a question, you could go to this tab. Maybe on the forum home page. Maybe answers could then be linked to questions and potentially crossed off. Maybe eventually bounties to certain questions could be added if a person or org wants a / another take on a question.
Nice, I like that idea, and I think it would be good to make it easier for writers to understand what demand exists for topics. It reminds me of the What posts would you like someone to write? threads - I'm glad we experimented with those. However, I don't know if they actually led to any valuable outcomes, so I'd like to think more about how much user attention we should aim to put on this (for example, right now I feel hesitant to make a new thread pinned to the frontpage). Perhaps it would be worth experimenting with bounties, although I'm not sure if people would actually offer to pay for posts.
In the meantime, you can feel free to respond to one of the old threads (which will still appear in the "Recent discussion" feed), or my suggestion is to write a quick take about it (the rate of quick takes is currently low enough that you'll get some attention on the frontpage).
What kinds of open questions do you have in mind (perhaps some examples would help)?
Random example: I just wanted to ask today if anyone knew of a good review of "The Good It Promises, the Harm It Does" written by a non-male, given that I think one of the key criticisms of EA in the feminist-vegetarian community is that its leaders are mostly white males, but I didn't know where to ask.
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as other motivations for pausing / slowing down AI could be perceived with the odor of astroturf.
I say all that as someone who thinks there are compelling reasons that are completely independent of AI safety grounds to pause, or at least slow down, on AI.
Is anyone in the AI Governance-Comms space working on what public outreach should look like if lots of jobs start getting automated in < 3 years?
I point to Travel Agents a lot not to pick on them, but because they're salient and there are lots of them. I think there is a reasonable chance in 3 years that industry loses 50% of its workers (3 million globally).
People are going to start freaking out about this. Which means we're in "December 2019" all over again, and we all remember how bad Government Comms were during COVID.
Now is the time to start working on the messaging!
I remember I kept a very frank and open approach in my interaction with the health community from my side. Unfortunately this did not take a good turn and the community's actions ultimately led to stagnation in my EA work for quite some time. They did speculation regarding me with other people in then existing national group who themselves lacked good communication skills and never reached out to me. Had a severe existential crisis. Maybe the Health community didn't intend to do this but yea I have sensed a toxic positivity. A suggestion would be to avoid one sided, back-end speculation about anyone. People considered for opinions should be encouraged to do so keeping the concerned (in this it would have been me) in knowledge.
I’m really sorry you had a bad experience with our team. You are welcome to share your experience with our team lead Nicole (nicole.ross@centreforeffectivealtruism.org).
Sometimes people want to discuss a concern with us confidentially – our confidentiality policy is outlined here. This means we sometimes don’t have permission to talk to the person concerned at all, or can't share many details as it might identify the people that came to us. In those cases we sadly aren’t in a good position to discuss the situation in depth with the people involved. I realise it is really frustrating to receive only vague feedback or none at all, and in an ideal world this would be different.
I suspect it should be emphasized that you really shouldn't put much time or effort into your message.
It's been almost 20 years since I worked for someone in government, so I could be wrong, but even then we simply added to the count of people who wrote in in favor (and recorded it with their name in our database) and didn't read the note.
Agree. This really shouldn't take longer than 10 minutes. In this case (can't speak for every case like this), it does matter that the messages are unique and not copy pasted, which is why I didn't provide a letter to copy paste. But it is highly unlikely anybody will read the letter in great detail.
From everything I've seen, GWWC has totally transformed under your leadership. And I think this transformation has been one of the best things that's happened in EA during that time. I'm so thankful for everything you've done for this important organization.
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
I suspect it should be emphasized that you really shouldn't put much time or effort into your message.
It's been almost 20 years since I worked for someone in government, so I could be wrong, but even then we simply added to the count of people who wrote in in favor (and recorded it with their name in our database) and didn't read the note.
What kinds of open questions do you have in mind (perhaps some examples would help)?
Random example: I just wanted to ask today if anyone knew of a good review of "The Good It Promises, the Harm It Does" written by a non-male, given that I think one of the key criticisms of EA in the feminist-vegetarian community is that its leaders are mostly white males, but I didn't know where to ask.
Thanks for the suggestion! To clarify, are you imagining this as a tab on the Forum home page, or somewhere else? What kinds of open questions do you have in mind (perhaps some examples would help)?
So that if someone had some free time and/or wanted to practice answering such a question, you could go to this tab. Maybe on the forum home page. Maybe answers could then be linked to questions and potentially crossed off. Maybe eventually bounties to certain questions could be added if a person or org wants a / another take on a question.
If it were just Eliezer writing a fanciful story about one possible way things might go, that would be reasonable. But when the story appears to reflect his very strongly held belief about AI unfolding approximately like this {0 warning shots; extremely fast takeoff; near-omnipotent relative to us; automatically malevolent; etc} and when he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities, it starts to sound more sinister.
I don't think this is really engaging with what I said/should be a reply to my comment.
he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities
Ah, reading that, yeah this wouldn't be obvious to everyone.
But here's my view, which I'm fairly sure is also eliezer's view: If you do something that I credibly consider to be even more threatening than nuclear war (even if you don't think it is) (as another example: gain of function research), and you refuse to negotiate towards a compromise where you can do the thing in a non-threatening way, so I try to destroy the part of your infrastructure that you're using to do this, and then you respond to that by escalating to a nuclear exchange, then it is not accurate to say that it was me who caused the nuclear war.
Now, if you think I have a disingenuous reason to treat your activity as threatening even though I know it actually isn't (which is an accusation people often throw at openai, and it might be true in openai's case), that you tried to negotiate a safer alternative, but I refused that option, and that I was really essentially just demanding that you cede power, then you could go ahead and escalate to a nuclear exchange and it would be my fault.
But I've never seen anyone ever accuse, let alone argue competently, that Eliezer believes those things for disingenuous powerseeking reasons. (I think I've seen some tweets that implied that it was a grift for funding his institute, but I honestly don't know how a person believes that, but even if it were the case, I don't think Eliezer would consider funding MIRI to be worth nuclear war for him.)
Thanks for the suggestion! To clarify, are you imagining this as a tab on the Forum home page, or somewhere else? What kinds of open questions do you have in mind (perhaps some examples would help)?
What kinds of open questions do you have in mind (perhaps some examples would help)?
Random example: I just wanted to ask today if anyone knew of a good review of "The Good It Promises, the Harm It Does" written by a non-male, given that I think one of the key criticisms of EA in the feminist-vegetarian community is that its leaders are mostly white males, but I didn't know where to ask.
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
+1 to comments about the paucity of details or checks. There are a range of issues that I can see.
Am I understanding the technical report correctly? It says "For each question, we sample 5 forecasts. All metrics are averaged across these forecasts." It is difficult to interpret this precisely. But the most likely meaning I take from this, is that you calculated accuracy metrics for 5 human forecasts per question, then averaged those accuracy metrics. That is not measuring the accuracy of "the wisdom of the crowd". That is (a very high variance) estimate of "the wisdom of an average forecaster on Metaculus". If that interpretation is correct, all you've achieved is a bot that does better than an average Metaculus forecaster.
I think that it is likely that searches for historical articles will be biased by Google's current search rankings. For example, if Israel actually did end up invading Lebanon, then you might expect historical articles speculating about a possible invasion to be linked to more by present articles, and therefore show up in search queries higher even when restricting only to articles written before the cutoff date. This would bias the model's data collection, and partially explain good performance on prediction for historical events.
Assuming that you have not made the mistake I described in 1. above, it'd be useful to look into the result data a bit more to check how performance varies on different topics. How does performance tend to be better than wisdom of the crowd? For example, are there particular topics that it performs better on? Does it tend to be more willing to be conservative/confident than a crowd of human forecasters? How does its calibration curve compare to that of humans? Also questions I would expect to be answered in a technical report claiming to prove superhuman forecasting ability.
It might be worth validating that the knowledge cutoff for the LLM is actually the one you expect from the documentation. I do not trust public docs to keep up-to-date, and that seems like a super easy error mode for evaluation here.
I think that the proof will be in future forecasting prediction ability: give 539 a Metaculus account and see how it performs.
Honestly, at a higher level, your approach is very unscientific. You have a demo and UI mockups illustrating how your tool could be used, and grandiose messaging across different forums. Yet your technical report has no details whatsoever. Even the section on Platt scoring has no motivation on why I should care about those metrics. This is a hype-driven approach to research that I am (not) surprised to see come out of 'the centre for AI safety'.
Fwiw Metaculus has an AI Forecasting Benchmark Tournament. The Q3 contest ends soon, but another should come out afterwards and it would be helpful to see how 539 performs compared to the other bots.
"When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal"...this quote drives it home for me....what a way to end this introductory course on EA as a first timer. Amazing.
Thanks for the suggestion! To clarify, are you imagining this as a tab on the Forum home page, or somewhere else? What kinds of open questions do you have in mind (perhaps some examples would help)?
Executive summary: A new paper reframes cryonics as structural brain preservation, focusing on maintaining the brain's physical structure to potentially enable future revival technologies, with fluid preservation emerging as a promising and cost-effective method.
Key points:
Structural brain preservation aims to maintain brain structures encoding memories and personality, rather than focusing solely on low-temperature storage.
Various preservation methods are reviewed, including cryopreservation, aldehyde-stabilized cryopreservation, fluid preservation, and fixation with polymer embedding.
Fluid preservation in formalin shows promise for long-term structural preservation, based on studies of brain tissue preserved for up to 55 years.
Oregon Brain Preservation offers free fluid preservation as part of a research study in select areas.
All current brain preservation methods are considered experimental, and more research is needed to corroborate and improve these techniques.
The authors encourage discussion and involvement in the field to advance research and public understanding of structural brain preservation.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: This reading list and summary provides an overview of key economic growth theory papers relevant to both global development and AI progress within effective altruism, covering foundational concepts, AI-focused models, and development-oriented theories.
AI-focused growth models examine automation's impact on production and research, with implications for AI takeoff scenarios.
Capital-embodied growth models offer an alternative perspective on AI progress, emphasizing physical manufacturing bottlenecks.
Development-oriented growth theories address misallocation, structural transformation, and amplification of cross-country productivity differences.
Reading advice emphasizes understanding qualitative stories behind models and recognizing instrumental reasons for model assumptions.
The list aims to balance AI and global development perspectives, requiring mathematical maturity but offering insights for EA applications.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
The one attendee that seems a bit strange is Kelsey Piper. She’s doing great work at Future Perfect, but something feels a bit off about involving a current journalist in the key decision making. I guess I feel that the relationship should be slightly more arms-length?
Strangely enough, I’d feel differently about a blogger, which may seem inconsistent, but society’s expectations about the responsibilities of a blogger are quite different.
This seems fine to me - I expect that attending this is not a large fraction of most attendee's impact on EA, and that some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly (I expect there's some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.
And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?
I also got an error message when I tried to sign up. I didn't fill in the CPR-nr. (Tax ID) field but I doubt that was the cause. This is what the message says:
Der opstod en serverfejl. Prøv venligst igen. Skriv til os på donation@giveffektivt.dk hvis problemet opstår igen. Hvis muligt, så fortæl gerne, hvordan man kan fremprovokere fejlen.
By the way how does this compare to the results of "Monetary incentives increase COVID-19 vaccinations" (Campos-Mercade et al)? Seems like the results here involved a similar sized incentive, but had larger effects?
Valuing vaccination
Using money as a motivation for the public to get vaccinated is controversial and has had mixed results in studies, few of which have been randomized trials. To test the effect of money as an incentive to obtain a vaccine, Campos-Mercade et al. set up a study in Sweden in 2021, when various age groups were first made eligible to receive the severe acute respiratory coronavirus 2 vaccine (see the Perspective by Jecker). The effect of a small cash reward, around US $24, was compared with the effect of several behavioral nudges. The outcome of this preregistered, randomized clinical trial was that money had the power to increase participation by about 4 percentage points. Nudging and reminding didn’t seem to be deleterious and even had a small positive effect. Of course, the question of whether it is ethical to pay people to be vaccinated like this needs to be addressed. —CA
Abstract
The stalling of COVID-19 vaccination rates threatens public health. To increase vaccination rates, governments across the world are considering the use of monetary incentives. Here we present evidence about the effect of guaranteed payments on COVID-19 vaccination uptake. We ran a large preregistered randomized controlled trial (with 8286 participants) in Sweden and linked the data to population-wide administrative vaccination records. We found that modest monetary payments of 24 US dollars (200 Swedish kronor) increased vaccination rates by 4.2 percentage points (P = 0.005), from a baseline rate of 71.6%. By contrast, behavioral nudges increased stated intentions to become vaccinated but had only small and not statistically significant impacts on vaccination rates. The results highlight the potential of modest monetary incentives to raise vaccination rates.
That’s interesting, I’m not sure what accounts for the differences (this is not my research area). If anything I would expect demand for the booster to be more price sensitive than for the initial dose.
Here's examples from six of list of top ten companies by market cap
Apple is worth $3 trillion despite being on the verge of bankruptcy in the mid-nineties. Google is now worth 1.9 trillion. The founders tried and failed to sell it for 1 million. Amazon's stock price dropped 90% during dot com crash Nvidia, recently world's most valuable business, had to lay off half its staff in 1997 and try to win a market with ~100 other startups all competing for same prize Elon Musk: "I thought SpaceX and Tesla both had >90% chance of failure". He was sleeping on his friends coaches to avoid paying rent at that time. Facebook's rise was so tumultuous they made a movie about it. Now worth 1.3 trillion. Warren buffet regretted buying Berkhire Hathway, and almost sold it. Now worth 744 billion.
Talking about EA more specifically
~10 founders have spilled the details of their journeys to me. ~70% felt hopeless at least once. There's been at least four or five times I've been close to quitting. I had to go into credit card debt to finance our charity. I've volunteered full-time for >4 years to keep the costs lower, working evenings to pay rent. Things are now looking a lot better e.g. our funders doubled our budget last year and we're now successfully treating ~4-5x more people than this time last year.
Thank you so much for everything you've done. You brought such renewed vigour and vision to Giving What We Can that you ushered it into a new era. The amazing team you've assembled and culture you've fostered will put it such good stead for the future.
I'd strongly encourage people reading this to think about whether they might be a good choice to lead Giving What We Can forward from here. Luke has put it in a great position, and you'd be working with an awesome team to help take important and powerful ideas even further, helping so many people and animals, now and across the future. Do check that job description and consider applying!
Why not start from the other end and work backwards? Why wouldn't we treasure every living being and non-living thing?
Aren't insects (just to react to the article) worthy of protecting as an important part of the food chain (from a utilitarian standpoint), for biodiversity (resilience of the biosphere) or even just simply being? After all, there are numerous articles and studies about their numbers and species declining precipitously, see for example: https://www.theguardian.com/environment/2019/feb/10/plummeting-insect-numbers-threaten-collapse-of-nature
But let's stretch ourselves a bit further! What about non-living things? Why not give a bit more respect to objects, as a start by reducing waste? If we take a longtermist view, there will absolutely not be enough raw materials for people for even 100-200 more years – let alone a 800,000 – with our current (and increasing) global rates of resource extraction.
I'm not saying these should be immediate priorities over human beings, but I really miss these considerations from the article.
I fully agree with you Dain and was thinking the same.
I'd love to see us apply the "presumption of innocence" principle, to all living beings (and I like what you say about the objects!)! It could for example be a "presumption of worthiness".
Because, we are after all natural beings, relying on Nature to live, and on its balance to be preserved.
I think this the beauty of the natural world we live in, it requires us to figure out how to live in harmony together, for us to have a future (assuming a natural future, not an anatural one).
Doesn't this make all lives (and maybe objects?), naturally worthy of respect and dignity?
Start exploring worst-case climate scenarios. Their likelihood and what might be done to quickly prevent them if in e.g. 2070 we see ourselves in a worse situation and need to fix it in e.g. 5 years (make estimates of funding available). Also explore how different regions might respond to such extreme scenarios. Basically a break the glass plan in case things go really badly.
By the way how does this compare to the results of "Monetary incentives increase COVID-19 vaccinations" (Campos-Mercade et al)? Seems like the results here involved a similar sized incentive, but had larger effects?
Valuing vaccination
Using money as a motivation for the public to get vaccinated is controversial and has had mixed results in studies, few of which have been randomized trials. To test the effect of money as an incentive to obtain a vaccine, Campos-Mercade et al. set up a study in Sweden in 2021, when various age groups were first made eligible to receive the severe acute respiratory coronavirus 2 vaccine (see the Perspective by Jecker). The effect of a small cash reward, around US $24, was compared with the effect of several behavioral nudges. The outcome of this preregistered, randomized clinical trial was that money had the power to increase participation by about 4 percentage points. Nudging and reminding didn’t seem to be deleterious and even had a small positive effect. Of course, the question of whether it is ethical to pay people to be vaccinated like this needs to be addressed. —CA
Abstract
The stalling of COVID-19 vaccination rates threatens public health. To increase vaccination rates, governments across the world are considering the use of monetary incentives. Here we present evidence about the effect of guaranteed payments on COVID-19 vaccination uptake. We ran a large preregistered randomized controlled trial (with 8286 participants) in Sweden and linked the data to population-wide administrative vaccination records. We found that modest monetary payments of 24 US dollars (200 Swedish kronor) increased vaccination rates by 4.2 percentage points (P = 0.005), from a baseline rate of 71.6%. By contrast, behavioral nudges increased stated intentions to become vaccinated but had only small and not statistically significant impacts on vaccination rates. The results highlight the potential of modest monetary incentives to raise vaccination rates.
I think this is a really fun short story, and a really bad analogy for AI risk.
In the story, the humans have an entire universes worth of computation available to them, including the use of physical experiments with real quantum physics. In contrast, an AI cluster only has access to whatever scraps we give it. Humans combined will tend to outclass the AI in terms of computational resources until it's actually achieved some partial takeover of the world, but that partial takeover is a large part of difficulty here. This means that the fundamental analogy of the AI having "thousands of years" to run experiments is fundamentally misleading.
Another flaw is that this paragraph is ridiculous
A thousand years is long enough, though, for us to work out paradigms of biology and evolution in five-dimensional space, trying to infer how aliens like these could develop. The most likely theory is that they evolved asexually, occasionally exchanging genetic material and brain content. We estimate that their brightest minds are roughly on par with our average college students, but over millions of years they’ve had time to just keep grinding forward and developing new technology.
You cannot, in fact, deduce how a creature 2 dimensions above you reproduces from looking at a video of them touching a fucking rock. This is a classic neglect of ignoring unknown information and computational complexity: there are just too many alternate ways in which "touching rocks" can happen. For example, imagine trying to deduce the atmosphere of the planet they live on: except wait, they don't follow our periodic table, they follow a five dimensional alternative version that we know nothing about.
There is also the problem of multiple AI's: In this scenario, it's like our world is the very first that is encountered by the tentacle beings, and they have no prior experience. But in actual AI, each AI will be preceded by a shitload of less intelligent AI's, and also a ton of other independent AI's independent of it will exist. This will add a ton of dynamics, in particular making it easier for warning shots to happen.
The analogy here is that instead of the first message we recieve is "rock", our first message is "Alright, listen here pipsqueaks, the last people we contacted tried to fuck with our internet and got a bunch of people killed: we're monitoring your every move, and if you even think of messing with us your entire universe is headed to the recycle bin, kapish?"
I agree that it is a poor analogy for AI risk. However, I do think it is a semi-reasonable intuition pump for why AIs that are very superhuman would be an existential problem if misaligned (and without other serious countermeasures).
Thanks so much for sharing these insights! Over the past few years I've seen the inner workings of leadership at many orgs, and come to appreciate how complex and difficult navigating this space can be, so I appreciate your candor (and humor/fun!)
The one attendee that seems a bit strange is Kelsey Piper. She’s doing great work at Future Perfect, but something feels a bit off about involving a current journalist in the key decision making. I guess I feel that the relationship should be slightly more arms-length?
Strangely enough, I’d feel differently about a blogger, which may seem inconsistent, but society’s expectations about the responsibilities of a blogger are quite different.
There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?
I don't think there are any analogies that are so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the models continue to make pretty good predictions even when you're facing a situation that's pretty different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and then when you need to make a decision, you fish out the story that most resembles the present, and decide as if that story is (somehow) happening exactly all over again.
There really are a lot of people in the real world who reason analogically. It's possible that Eliezer was partially writing for them, someone has to, but I don't think he wanted the lesswrong audience (who are ostensibly supposed to be studying good reasoning) to process it in that way.
If it were just Eliezer writing a fanciful story about one possible way things might go, that would be reasonable. But when the story appears to reflect his very strongly held belief about AI unfolding approximately like this {0 warning shots; extremely fast takeoff; near-omnipotent relative to us; automatically malevolent; etc} and when he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities, it starts to sound more sinister.
the ability to litigate against a company before any damages had actually occurred
Can you explain why you find this problematic? It's not self-evident to me, because we do this too for other things, e.g. drunk driving, pharmaceuticals needing to pass safety testing
I'm not sure I follow your examples and logic, perhaps you could explain because drunk driving is in itself a serious crime in every country I know of. Are you suggesting it be criminal to merely develop an AI model regardless of whether it's commercialized or released?
Regarding pharmaceuticals, yes, they certainly do need to pass several phases of clinical research and development to prove sufficient levels of safety and efficacy because by definition, FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no such similar regulation on software. Developing a drug discovery platform or even the drug itself is not a crime (as long as it's not released.)
You could just as easily extrapolate to individuals. We cannot legitimately litigate (sue) or prosecute someone for a crime they haven't committed. This is why we have due process and basic legal rights.( Technically anything can be litigated with enough money thrown at it but you cant sue for damages unless damages actually occurred)
Comments on 2024-09-14
Guy Raveh @ 2024-09-11T19:33 (+22) in response to Announcing the Meta Coordination Forum 2024
Just a reminder that I think it's the wrong choice to allow attendees to leave their name off the published list.
OllieBase @ 2024-09-14T17:31 (+2)
Thanks for resurfacing this take, Guy.
There's a trade-off here, but I think some attendees who can provide valuable input wouldn't attend if their name was shared publicly and that would make the event less valuable for the community.
That said, perhaps one thing we can do is emphasise the benefits of sharing their name (increases trust in the event/leadership, greater visibility for the community about direction/influence) when they RSVP for the event, I'll note that for next time as an idea.
richard_ngo @ 2024-09-14T07:39 (+10) in response to Announcing the Meta Coordination Forum 2024
Thanks for sharing this, it does seem good to have transparency into this stuff.
My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).
To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)
OllieBase @ 2024-09-14T17:26 (+2)
Thanks! I think this note explains the gap:
We were not trying to optimise the attendee list for connectedness or historical engagement with the community, but rather who can contribute to making progress on our core themes; brand and funding. When you see what roles these attendees have, I think it's fairly evident why we invited them, given this lens.
I'll also note that I think it's healthy for there to be people joining for this event who haven't bene in the community as long as you have. They can bring new perspectives, and offer expertise the community / organisational leaders has been lacking.
Said Bouziane @ 2024-09-14T09:31 (+2) in response to What are your favourite videos to combat climate anxiety?
Thank you this is 100% what I was looking for.
Soemano Zeijlmans @ 2024-09-14T16:48 (+1)
Glad to hear!
@DLMRBN 🔸 and I write a monthly newsletter on climate action. Feel free to subscribe if you wanna read more!
https://effectiveenvironmentalism.substack.com/
AltForHonesty @ 2024-09-14T16:22 (+3) in response to Genetic Enhancement as a Cause Area
So I read this and your original subreddit post "Compassionate Eugenics as a Cause Area" and I have some concerns. You say:
The question is: who gets to decide what the "desirable traits" are? Eugenicists seem to focus a lot on the desirability of racial traits, which I vehemently disagree with. If the eugenicists got their way, I don't think the future they'd create is one I would consider desirable. And this has been a central part of the movement since its inception. The founder of eugenics, sir Galton, created a racial hierarchy with whites at the top and wrote things like:
Now, just because you've named your account after him and advocate for eugenics doesn't automatically mean you secretly share that view, but hopefully you can forgive someone for becoming somewhat concerned.
Dicentra @ 2024-09-14T14:48 (+3) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
I was also convinced by this and other things to write a letter, and am commenting now to make the idea stay salient to people on the Forum.
idfubar @ 2024-09-14T12:14 (+1) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
Miriam-Webster says "authentic" is the 2023 'word of the year'; how apt! (Note how a theme running through just about each individual comment is what the given poster was made to feel, i.e. the way in which your spirit resonated with them. Human beings remember how others make us feel - and all the more so when that authenticity resonates in interactions which are personal and empathetic.)
In short: independent of the material (quantitative) success of GWWC, there is much to be thankful for with respect to such leadership (here's to hoping GWWC is as lucky in succession - Luke Freeman is one-of-a-kind!)...
Tym 🔸 @ 2024-09-14T09:41 (+3) in response to Brief Updates on EAIF
I think its excellent that you guys are working on publicly evaluating past grants! Also congrats to all the new hires especially Alejandro who I know as a very thoughtful and smart guy.
Soemano Zeijlmans @ 2024-09-13T13:10 (+3) in response to What are your favourite videos to combat climate anxiety?
In general, I find most channels posting 'positive climate news' honestly a bit annoying because they tend to focus on things that really don't matter all that much in the big picture, or present their solution as a silver bullet while in fact there is none. (But maybe I'm a bit too critical here.)
If you're interested, we've compiled a big list of climate-related resources on the Effective Environmentalism website - some of which are quite hopeful! For example, the books Not the End of the World and Regenesis. There are also a bunch of podcasts and videos on climate solutions that have a bit of a "yes we can" vibe.
https://www.effectiveenvironmentalism.org/resources
Outside EA circles, I really love the YouTube channel Just Have a Think for some climate hope. And (not a video, but a static image), the falling costs of solar is one of the most hopeful graphics in the climate cause area!
Said Bouziane @ 2024-09-14T09:31 (+2)
Thank you this is 100% what I was looking for.
alx @ 2024-09-12T01:16 (+1) in response to The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations
I'm not sure I follow your examples and logic, perhaps you could explain because drunk driving is in itself a serious crime in every country I know of. Are you suggesting it be criminal to merely develop an AI model regardless of whether it's commercialized or released?
Regarding pharmaceuticals, yes, they certainly do need to pass several phases of clinical research and development to prove sufficient levels of safety and efficacy because by definition, FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no such similar regulation on software. Developing a drug discovery platform or even the drug itself is not a crime (as long as it's not released.)
You could just as easily extrapolate to individuals. We cannot legitimately litigate (sue) or prosecute someone for a crime they haven't committed. This is why we have due process and basic legal rights.( Technically anything can be litigated with enough money thrown at it but you cant sue for damages unless damages actually occurred)
SiebeRozendal @ 2024-09-14T08:55 (+2)
Drunk driving is illegal because it risks doing serious harm. It's still illegal when the harm has not occurred (yet). Things can be crimes without harm having occurred.
richard_ngo @ 2024-09-14T07:39 (+10) in response to Announcing the Meta Coordination Forum 2024
Thanks for sharing this, it does seem good to have transparency into this stuff.
My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).
To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)
Jordan Arel @ 2024-09-14T03:28 (+1) in response to Essay competition on the Automation of Wisdom and Philosophy — $25k in prizes
Hi, hate to bother you again, just wondering where things are at with this contest?
Owen Cotton-Barratt @ 2024-09-14T06:47 (+2)
The judging process should be complete in the next few days. I expect we'll write to winners at the end of next week, although it's possible that will be delayed. A public announcement of the winners is likely to be a few more weeks.
Xing Shi Cai @ 2024-09-14T03:50 (+5) in response to Survey: How Do Elite Chinese Students Feel About the Risks of AI?
I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I've noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I introduced the topic of AI risk (as a joking topic in a class) and referred to a study (possibly this one: AI Existential Risk Survey) that suggests a significant portion of AI experts are concerned about potential existential risks. The students' immediate reaction was to challenge the study's methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things "better."
ZY @ 2024-09-14T06:12 (+1)
I am also feeling that there could be many other survival problems (jobs, equality, etc) in the society, that makes them feel this problem could still be very far. But I know of a friend who try to work on AI governance and raise more awareness.
SummaryBot @ 2024-09-02T16:44 (+1) in response to Survey: How Do Elite Chinese Students Feel About the Risks of AI?
Executive summary: A survey of elite Chinese university students found they are generally optimistic about AI's benefits, strongly support government regulation, and view AI as less of an existential threat compared to other risks, though they believe US-China cooperation is necessary for safe AI development.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Xing Shi Cai @ 2024-09-14T03:50 (+5)
I teach math to mostly Computer Science students at a Chinese university. From my casual conversations with them, I've noticed that many seem to be technology optimists, reflecting what I perceive as the general attitude of society here.
Once, I introduced the topic of AI risk (as a joking topic in a class) and referred to a study (possibly this one: AI Existential Risk Survey) that suggests a significant portion of AI experts are concerned about potential existential risks. The students' immediate reaction was to challenge the study's methodology.
This response might stem from the optimism fostered by decades of rapid technological development in China, where people have become accustomed to technology making things "better."
Owen Cotton-Barratt @ 2024-08-14T17:07 (+3) in response to Essay competition on the Automation of Wisdom and Philosophy — $25k in prizes
They definitely are! Judge discussions are ongoing, and after that we'll be contacting winners a while before any public announcements, so I'm afraid this won't be imminent, but we are looking forward to getting to talk about the winners publicly.
Jordan Arel @ 2024-09-14T03:28 (+1)
Hi, hate to bother you again, just wondering where things are at with this contest?
Raemon @ 2024-09-14T01:48 (+9) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
I have written a letter
Jeffray_Behr @ 2024-09-14T00:52 (+4) in response to What Problems Have Mechanical Solutions?
As someone who discovered EA while studying mechanical engineering, I have thought about this a lot. My initial plan was to work in renewable energy technologies, but I shifted towards working on plant-based meat technologies and developing more efficient processing equipment. Also, I have been able to use my background in mechanical engineering to help ALLFED as a volunteer by researching how various resilient food technologies could scale up in the event of a global catastrophe. I also recommend anyone interested in learning about the intersection of EA and engineering to check out High Impact Engineer's resources page: https://www.highimpactengineers.org/resources
Comments on 2024-09-13
Ben Auer @ 2024-09-13T23:17 (+5) in response to Idea for Uni Groups: Guiding Other Clubs Towards High-Impact Giving
I think this is a great idea. Just wanted to flag that we've done this with other clubs at the University of Melbourne in the past. To give some concrete examples of how this can achieve quite a lot without a huge amount of time and effort:
I would definitely encourage EA groups at other universities to try similar things. There could be a lot of low-hanging fruit, e.g. clubs who simply haven't thought that carefully about their choices of charities before.
Neel Nanda @ 2024-09-13T15:35 (+9) in response to Announcing the Meta Coordination Forum 2024
My argument is that barring them doesn't stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences
Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.
Owen Cotton-Barratt @ 2024-09-13T23:03 (+5)
I think in some generality scandals tend to be "because things aren't transparent enough", since greater transparency would typically have meant issues people would be unhappy with would have tended to get caught and responded to earlier. (My case had elements of "too transparent", but also definitely had elements of "not transparent enough".)
Anyway I agree that this particular type of transparency wouldn't help in most cases. But it doesn't seem hard to imagine cases, at least in the abstract, where it would kind of help? (e.g. imagine EA culture was pushing a particular lifestyle choice, and then it turned out the owner of the biggest manufacturer in that industry got invited to core EA events)
carter allen @ 2024-09-13T22:47 (+6) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
This post pushed me to write a letter. Thanks!
Linch @ 2024-09-13T22:26 (+13) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
I wrote a quick letter I'm happy with.
(Feel free to DM me for a link tho ofc don't copy anything)
JP Addison🔸 @ 2024-09-13T22:05 (+14) in response to Brief Updates on EAIF
I really appreciate brief update posts like this, that help me keep some awareness of other meta projects. Thanks!
John M Smith @ 2024-09-13T21:41 (+3) in response to Open thread: July - September 2024
Hi everyone! 👋🏻
I'm John and I'm new here.
I'm from Chicago but am currently living in Medellin, Colombia working remotely for Eder Financial, a non-profit financial company serving other non-profits and religious organizations.
I was a police officer for about 5 years after college (my childhood dream). After some time I realized it was not what I wanted to do for the rest of my life, so I left the force and switched to private sector work.
Even though I left my job as a police office I still want to use my career to help people. I first learned about Effective Altruism through 80,000 Hours and am excited about the work and ideas of the EA community.
I've been focusing on building skills in strategy and operations in my current role, but now I'm excited to start looking for work opportunities at EA organizations.
Looking forward to getting to know others in the EA community and learning more about the EA community itself!
artilugio @ 2024-09-13T20:44 (+1) in response to Who should we interview for The 80,000 Hours Podcast?
Some functionary involved in malaria vaccine distribution to tell us how they could expand and accelerate.
Someone to explain to us how that Danish pharmaceutical firm's governance structure works, and whether it's better for continuous investment in innovation than the one where "founder mode" ends and lawyers take the reins of firms crucial to human progress.
artilugio @ 2024-09-13T20:42 (+1) in response to Who should we interview for The 80,000 Hours Podcast?
I liked your interview with a professor who talked about defense methods against pandemics and potential gene drive efficacy against malaria, new world screw worm, lyme disease, and maybe one other nasty enemy. Works in Progress also had an article about gene drives' promise against diseases like these in its most recent edition. I would also like to know about Jamaica and Uruguay's attempts to open new fronts against the New World Screw Worm.
I liked an interview that I believe to have been on 80k hours about efforts to reduce air pollution in India. I would like to know what effect could be expected from allowing export of natural gas from countries like Turkmenistan, Iran, and Venezuela to India.
I am interested in learning about the importance of fertilizer prices and natural gas prices to global nutrition. I think there is a woman at the Breakthrough Institute who studies this topic. I suppose oil prices may be an important input, too.
I would like to know more about how USD interest rates and oil prices impact global poverty, so as to better evaluate the importance of factors like home rental inflation and economic sanctions in determining poverty rates.
alene @ 2024-09-13T20:23 (+2) in response to Civil Litigation for Farmed Animals - Notes From EAGxBerkeley Talk
Thank you so much for posting this!!! You are amazing!!! <3
Tessa A 🔸 @ 2024-09-11T20:59 (+9) in response to How can we improve Infohazard Governance in EA Biosecurity?
I can't speak for the author, and while I'd classify these as examples of suspicion and/or criticism of EA biosecurity rather than a "backlash against EA", here are some links:
I'll also say I've heard criticism of "securitising health" which is much less about EAs in biosecurity and more clashing concerns between groups that prioritise global health and national security, where EA biosecurity folks often end up seen as more aligned with the national security concerns due to prioritising risks from deliberate misuse of biology.
Isaac Heron @ 2024-09-13T20:07 (+1)
Thanks Tessa. I actually came to this post and asked this question because it was quoted in the 'Exaggerating the risks' series, but then this post didn't give any examples to back up this claim, which Thorstad has then quoted. I had come across this article by Undark which includes statements by some experts that are quite critical of Kevin Esvelt's advocacy regarding nucleic acid synthesis. I think the Lentzos article is the kind of example I was wondering about - although I'm still not sure if it directly shows that the failure to justify their position on the details of the source of risk itself is the problem. (Specifically, I think the key thing Lentzos is saying is the risks Open Phil is worrying about are extremely unlikely in the near-term - which is true, they just think it's more important for longtermist reasons and are therefore 1) more worried about what happens in the medium and long term and 2) still worried about low risk, high harm events. So the dispute doesn't seem to me to be necessarily related to the details of catastrophic biorisk itself.)
Scott Alexander @ 2024-09-13T11:42 (+6) in response to Dispelling the Anthropic Shadow
I'm having trouble understanding this. The part that comes closest to making sense to me is this summary:
Are they just applying https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal to anthropic shadow without using any of the relevant terms, or is it something else I can't quite get?
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let's say 99.9%) cannot support life (eg because they're too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can't change our estimate of the frequency of bad things? Why would this be?
trammell @ 2024-09-13T19:37 (+4)
I’m not sure I understand the second question. I would have thought both updates are in the same direction: the fact that we’ve survived on Earth a long time tells us that this is a planet hospitable to life, both in terms of its life-friendly atmosphere/etc and in terms of the rarity of supervolcanoes.
We can say, on anthropic grounds, that it would be confused to think other planets are hospitable on the basis of Earth’s long and growing track record. But as time goes on, we get more evidence that we really are on a life-friendly planet, and haven’t just had a long string of luck on a life-hostile planet.
The anthropic shadow argument was an argument along the lines, “no, we shouldn’t get ever more convinced we’re on a life-friendly planet over time (just on the evidence that we’re still around). It is actually plausible that we’ve just had a lucky streak that’s about to break—and this lack of update is in some way because no one is around to observe anything in the worlds that blow up”.
Holly_Elmore @ 2024-09-13T18:23 (+6) in response to CEA will continue to take a "principles-first" approach to EA
All the nearterm or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, I.e. people just don’t want their lives radically transformed even if the people doing think that’s irrational. Fear and distrust of AI corporations.
These would all be considered wrong reasons in EA but PauseAI welcomes all.
Jason @ 2024-09-13T19:07 (+2)
Plus some downstream consequences of the above, like the social and political instability that seems likely with massive job loss. In past economic transformations, we've been able to find new jobs for most workers, but that seems less likely here. People who feel they have lost their work and associated status/dignity/pride (and that it isn't coming back) could be fairly dangerous voters and might even be in the majority. I also have concerns about fair distribution of gains from AI, having a few private companies potentially corner the market on one of the world's most critical resources (intelligence), and so on. I could see things going well for developing countries, or poorly, in part depending on the choices we make now.
My own take is that civil, economic, and political society has to largely have its act together to address these sorts of challenges before AI gets more disruptive. The disruptions will probably be too broad in scope and too rapid for a catch-up approach to end well -- potentially even well before AGI exists. I see very little evidence that we are moving in an appropriate direction.
anormative @ 2024-09-13T02:34 (+1) in response to CEA will continue to take a "principles-first" approach to EA
What are those non-AI safety reasons to pause or slow down?
Holly_Elmore @ 2024-09-13T18:23 (+6)
All the nearterm or current harms of AI that EAs ridicule as unimportant, like artists feeling ripped off or not wanting to lose their jobs. Job loss in general. Democratic reasons, I.e. people just don’t want their lives radically transformed even if the people doing think that’s irrational. Fear and distrust of AI corporations.
These would all be considered wrong reasons in EA but PauseAI welcomes all.
Angelina Li @ 2024-09-13T03:09 (+5) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Are you willing to draft some rough bullet points (that you urge people to NOT copy and paste) on SB-1047 that might help people complete this exercise faster?
Also, do you have a sense for how much better a slightly longer letter (e.g. a 1 page note) is as compared to just a 1 paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."
FWIW, I did just send off an email, but it took me more like ~30 minutes to skim this post and then draft something. I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.
ThomasW @ 2024-09-13T16:15 (+6)
This page has a number of points (it also includes a few other actions that you can take).
One paragraph is definitely fine unless you have personal experience that is relevant (for example, you are an AI researcher, founder, etc.).
Makes sense! It is important to create the PDF, just how these things are supposed to be submitted.
Guy Raveh @ 2024-09-13T15:17 (+1) in response to Announcing the Meta Coordination Forum 2024
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.
Neel Nanda @ 2024-09-13T15:35 (+9)
My argument is that barring them doesn't stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences
Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.
Robert_Wiblin @ 2024-09-13T15:33 (+6) in response to My top 10 picks from 200 episodes of the 80k podcast
Thanks for this post, it warmed our hearts! Glad we've been able to help you understand the world better over the years and maybe even have more impact too. ❤️
I threaded the top ten list here: https://x.com/robertwiblin/status/1834613676034113817
(By the way the next episode we plan to release, one of Luisa's, actually has more pushback on AI and robotics, have a listen and see what you think.)
EffectiveAdvocate🔸 @ 2024-09-13T13:47 (+1) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetics grounds: Charities should be accountable to someone, in the same way as companies are to shareholders, and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
Neel Nanda @ 2024-09-13T15:32 (+2)
I'm not expressing an opinion on that. The post makes a clear claim that their legal status re tax deductibility will change if more EU citizens sign up. This surprises me and I want to understand it better. I agree there are other benefits to having more members, I'm not disputing that
Neel Nanda @ 2024-09-12T15:29 (+37) in response to Announcing the Meta Coordination Forum 2024
This seems fine to me - I expect that attending this is not a large fraction of most attendee's impact on EA, and that some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly (I expect there's some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.
And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?
Guy Raveh @ 2024-09-13T15:17 (+1)
With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.
How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.
The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.
Scott Alexander @ 2024-09-13T11:42 (+6) in response to Dispelling the Anthropic Shadow
I'm having trouble understanding this. The part that comes closest to making sense to me is this summary:
Are they just applying https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal to anthropic shadow without using any of the relevant terms, or is it something else I can't quite get?
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let's say 99.9%) cannot support life (eg because they're too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can't change our estimate of the frequency of bad things? Why would this be?
trammell @ 2024-09-13T15:12 (+8)
To answer the first question, no, the argument doesn’t rely on SIA. Let me know if the following is helpful.
Suppose your prior (perhaps after studying plate tectonics and so on, but not after considering the length of time that’s passed without an an extinction-inducing supervolcano) is that there’s probability “P(A)”=0.5 that risk of an extinction-inducing supervolcano at the end of each year is 1/2 and probability “P(B)”=0.5 that the risk is 1/10. Suppose that the world lasts at least 1 year and most 3 years regardless.
Let “A1” be the possible world in which the risk was 1/2 per year and we blew up at the end of year 1, “A2” be that in which the risk was 1/2 per year and we blew up at the end of year 2, and “A3” be that in which the risk was 1/2 per year and we never blew up, so that we got to exist for 3 years. Define B1, B2, B3 likewise for the risk=1/10 worlds.
Suppose there’s one “observer per year” before the extinction event and zero after, and let “Cnk”, with k<=n, be observer #k in world Cn (C can be A or B). So there are 12 possible observers: A11, A21, A22, A31, A32, A33, and likewise for the Bs.
If you are observer Cnk, your evidence is that you are observer #k. The question is what Pr(A|k) is; what probability you should assign to the annual risk being 1/2 given your evidence.
Any Bayesian, whether following SIA or SSA (or anything else), agrees that
Pr(A|k) = Pr(k|A)Pr(A)/Pr(k),
where Pr(.) is the credence an observer should have for an event according to a given anthropic principle. The anthropic principles disagree about the values of these credences, but here the disagreements cancel out. Note that we do not necessarily have Pr(A)=P(A): in particular, if the prior P(.) assigns equal probability to two worlds, SIA will recommend assigning higher credence Pr(.) to the one with more observers, e.g. by giving an answer of Pr(coin landed heads) = 1/3 in the sleeping beauty problem, where on this notation P(coin landed heads) = 1/2.
On SSA, your place among the observers is in effect generated first by randomizing among the worlds according to your prior and then by randomizing among the observers in the chosen world. So Pr(A)=0.5, and
Pr(1|A) = 1/2 + 1/4*1/2 + 1/4*1/3 = 17/24
(since Pr(n=1|A)=1/2, in which case k=1 for sure; Pr(n=2|A)=1/4, in which case k=1 with probability 1/2; and Pr(n=3|A)=1/4, in which case k=1 with probability 1/3);
Pr(2|A) = 1/4*1/2 + 1/4*1/3 = 5/24; and
Pr(3|A) = 1/4*1/3 = 2/24.
For simplicity we can focus on the k=2 case, since that’s the case analogous to people like us, in the middle of an extended history. Going through the same calculation for the B worlds gives Pr(2|B) = 63/200, so Pr(2) = 0.5*5/24 + 0.5*63/200 = 157/600.
So Pr(A|2) = 125/314 ≈ 0.4.
On SIA, your place among the observers is generated by randomizing among the observers, giving proportionally more weight to observers in worlds with proportionally higher prior probability, so that the probability of being observer Cnk is
1/12*Pr(Cn) / [sum over possible observers, labeled “Dmj”, of (1/12*Pr(Dm))].
This works out to Pr(2|A) = 2/7 [6 possible observers given A, but the one in the n=1 world “counts for double” since that world is twice as likely than the n=2 or =3 worlds a priori];
Pr(A) = 175/446 [less than 1/2 since there are fewer observers in expectation when the risk of early extinction is higher], and
Pr(2) = 140/446, so
Pr(A|2) = 5/14 ≈ 0.36.
So in both cases you update on the fact that a supervolcano did not occur at the end of year 1, from assigning probability 0.5 to the event that the underlying risk is 1/2 to assigning some lower probability to this event.
But I said that the disagreements canceled out, and here it seems that they don’t cancel out! This is because the anthropic principles disagree about Pr(A|2) for a reason other than the evidence provided by the lack of a supervolcano at the end of year 1: namely the possible existence of year 3. How to update on the fact that you’re in year 2 when you “could have been” in year 3 gets into doomsday argument issues, which the principles do disagree on. I included year 3 in the example because I worried it might seem fishy to make the example all about a 2-period setting where, in period 2, the question is just “what was the underlying probability we would make it here”, with no bearing on what probability we should assign to making it to the next period. But since this is really the example that isolates the anthropic shadow consideration, observe that if we simplify things so that the world lasts at most 2 years (and there 6 possible observers), SSA gives
Pr(2|A) = 1/4, Pr(A) = 1/2, Pr(2) = 4/5 -> Pr(A|2) = 5/14.
and SIA gives
Pr(2|A) = 1/3, Pr(A) = 15/34, Pr(2) = 14/34 -> Pr(A|2) = 5/14.
____________________________
An anthropic principle that would assign a different value to Pr(A|2)--for the extreme case of sustaining the “anthropic shadow”, a principle that would assign Pr(A|2)=Pr(A)=1/2--would be one in which your place among the observers is generated by
This is more in the spirit of SSA than SIA, but it is not SSA, and I don't think anyone endorses it. SSA randomizes over worlds and then over observers within each world, so that observing that you’re late in time is indeed evidence that “most worlds last late”.
Johan de Kock @ 2024-09-13T14:29 (+9) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Thank you for writing this! I just took the time to write a letter.
Neel Nanda @ 2024-09-13T10:28 (+2) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
I'm surprised that having more members let's you offer better tax deductions (and that they don't even need to be Danish taxpayers!), what's up with that?
EffectiveAdvocate🔸 @ 2024-09-13T13:47 (+1)
This is a very non-EA opinion, but personally I quite like this on, for lack of a better word, aesthetics grounds: Charities should be accountable to someone, in the same way as companies are to shareholders, and politicians are to electorates. Membership models are a good way of achieving that. I am a little sad that my local EA group is not organized in the same way.
Soemano Zeijlmans @ 2024-09-13T13:36 (+1) in response to Seeking High-Impact Project Ideas for Consultancy Firm
Try to see if your company's expertise match with any of the recommended fields of work by Giving Green and High Impact Engineers! Especially this latter link is probably interesting for you and your colleagues.
I think that @Ulrik Horn's suggestion about how to adapt to worst-case climate scenarios is a good one, especially if you focus on places that are likely severely hit but don't have adequate plans for adaptation yet.
Some first ideas, going by the expertise of the company:
Soemano Zeijlmans @ 2024-09-13T13:10 (+3) in response to What are your favourite videos to combat climate anxiety?
In general, I find most channels posting 'positive climate news' honestly a bit annoying because they tend to focus on things that really don't matter all that much in the big picture, or present their solution as a silver bullet while in fact there is none. (But maybe I'm a bit too critical here.)
If you're interested, we've compiled a big list of climate-related resources on the Effective Environmentalism website - some of which are quite hopeful! For example, the books Not the End of the World and Regenesis. There are also a bunch of podcasts and videos on climate solutions that have a bit of a "yes we can" vibe.
https://www.effectiveenvironmentalism.org/resources
Outside EA circles, I really love the YouTube channel Just Have a Think for some climate hope. And (not a video, but a static image), the falling costs of solar is one of the most hopeful graphics in the climate cause area!
Scott Alexander @ 2024-09-13T11:42 (+6) in response to Dispelling the Anthropic Shadow
I'm having trouble understanding this. The part that comes closest to making sense to me is this summary:
Are they just applying https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal to anthropic shadow without using any of the relevant terms, or is it something else I can't quite get?
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let's say 99.9%) cannot support life (eg because they're too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can't change our estimate of the frequency of bad things? Why would this be?
Neel Nanda @ 2024-09-13T10:28 (+2) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
I'm surprised that having more members let's you offer better tax deductions (and that they don't even need to be Danish taxpayers!), what's up with that?
MathiasKB🔸 @ 2024-09-13T09:23 (+4) in response to MathiasKB's Quick takes
EDIT: Someone on lesswrong linked a great report by Epoch which tries to answer exactly this.
With the release of openAI o1, I want to ask a question I've been wondering about for a few months.
Like the chinchilla paper, which estimated the optimal ratio of data to compute, are there any similar estimates for the optimal ratio of compute to spend on inference vs training?
In the release they show this chart:
The chart somewhat gets at what I want to know, but doesn't answer it completely. How much additional inference compute would I need a 1e25 o1-like model to perform as well as a one shotted 1e26?
Additionally, for some x number of queries, what is the optimal ratio of compute to spend on training versus inference? How does that change for different values of x?
Are there any public attempts at estimating this stuff? If so, where can I read about it?
DLMRBN 🔸 @ 2024-09-13T08:19 (+1) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
Thanks Luke! Hope you'll get to recharge and feel the gratitude of all those whose lives you impacted!
Henri Thunberg @ 2024-09-13T05:16 (+1) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
If I have done this some previous year, and got charged membership this year as well, do I need to do anything else? :) 🇩🇰 🤝 🇸🇪
Will try to spread the word!
CB🔸 @ 2024-09-13T03:54 (+4) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Sounds like a good thing to do. Is it possible to do that when you're not living in the US ?
ThomasW @ 2024-09-13T04:50 (+5)
You don't have to live in the US to do it. You can help send a powerful message that the entire world is watching California on this issue.
GraceAdams🔸 @ 2024-09-13T04:39 (+8) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
Thank you Luke for your leadership of GWWC, and your mentorship of me as an employee at GWWC.
It was seeing a talk from you and Peter Singer that got me involved with effective giving, and GWWC. So without you, I would never have been involved with something that has become one of the most meaningful parts of my life.
It has been an immense honour to work with someone as passionate, intelligent and caring as you.
So much of how GWWC has grown in the past few years since you joined is because of your hard work, and all of us who have worked with you personally know how much effort and love you have put into GWWC.
I'm personally very sorry to see you leave GWWC, and your legacy will be felt strongly for years to come.
CB🔸 @ 2024-09-13T03:54 (+4) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Sounds like a good thing to do. Is it possible to do that when you're not living in the US ?
ThomasW @ 2024-09-12T21:24 (+4) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Agree. This really shouldn't take longer than 10 minutes. In this case (can't speak for every case like this), it does matter that the messages are unique and not copy pasted, which is why I didn't provide a letter to copy paste. But it is highly unlikely anybody will read the letter in great detail.
Angelina Li @ 2024-09-13T03:09 (+5)
Are you willing to draft some rough bullet points (that you urge people to NOT copy and paste) on SB-1047 that might help people complete this exercise faster?
Also, do you have a sense for how much better a slightly longer letter (e.g. a 1 page note) is as compared to just a 1 paragraph email with something like: "As a [location] resident concerned about AI safety, I urge you to sign SB 1047 into law. AI safety is important to me because [1 sentence]."
FWIW, I did just send off an email, but it took me more like ~30 minutes to skim this post and then draft something. I also wasn't sure how important it was to create the PDF — otherwise I would have just sent an email on my phone, which would again have been a bit faster.
Jason @ 2024-09-12T22:23 (+3) in response to CEA will continue to take a "principles-first" approach to EA
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as other motivations for pausing / slowing down AI could be perceived with the odor of astroturf.
I say all that as someone who thinks there are compelling reasons that are completely independent of AI safety grounds to pause, or at least slow down, on AI.
anormative @ 2024-09-13T02:34 (+1)
What are those non-AI safety reasons to pause or slow down?
Javier Prieto @ 2024-09-12T05:48 (+3) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
I tried to sign up but the payment step keeps giving an error. This happens both when I enter my card details and with Google Pay.
Sam Anschell @ 2024-09-13T00:39 (+1)
I also got an error when I tried a credit card, but my debit card (Bank of America) went through on the first try.
Vesa Hautala @ 2024-09-12T15:24 (+4) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
I also got an error message when I tried to sign up. I didn't fill in the CPR-nr. (Tax ID) field but I doubt that was the cause. This is what the message says:
Sam Anschell @ 2024-09-13T00:38 (+1)
Fwiw I was able to sign up by using the nine digit alphanumeric code on my German passport.
Thanks for trying, fingers crossed you're able to join - signing up seems high leverage!
Comments on 2024-09-12
William_MacAskill @ 2024-09-12T23:54 (+17) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
I'm obviously sad that you're moving on, but I trust your judgment that it's the right decision. I've deeply appreciated your hard work on GWWC over these last years - it's both a hugely impactful project from an impartial point of view and, from my own partial point of view, one that I care very strongly about. I think you're a hard-working, morally motivated and high-integrity person and it's always been very reassuring to me to have you at the helm. Under your leadership you transformed the organisation. So: thank you!
I really hope your next step helps you flourish and continues to give you opportunities to make the world better.
William_MacAskill @ 2024-09-12T23:49 (+34) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Like Buck and Toby, I think this is a great piece of legislation and think that it's well worth the time to send a letter to Governor Newsom. I'd love to see the community rallying together and helping to make this bill a reality!
Phib @ 2024-09-12T20:53 (+3) in response to Phib's Quick takes
I was thinking like open research questions, like this post and its links https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism Although a number of these are probably outdated and I wouldn’t want to limit what could be added to such a tab. Generally, Questions people have that would be worth answering with regard to effective altruism.
So that if someone had some free time and/or wanted to practice answering such a question, you could go to this tab. Maybe on the forum home page. Maybe answers could then be linked to questions and potentially crossed off. Maybe eventually bounties to certain questions could be added if a person or org wants a / another take on a question.
Sarah Cheng @ 2024-09-12T22:48 (+3)
Nice, I like that idea, and I think it would be good to make it easier for writers to understand what demand exists for topics. It reminds me of the What posts would you like someone to write? threads - I'm glad we experimented with those. However, I don't know if they actually led to any valuable outcomes, so I'd like to think more about how much user attention we should aim to put on this (for example, right now I feel hesitant to make a new thread pinned to the frontpage). Perhaps it would be worth experimenting with bounties, although I'm not sure if people would actually offer to pay for posts.
In the meantime, you can feel free to respond to one of the old threads (which will still appear in the "Recent discussion" feed), or my suggestion is to write a quick take about it (the rate of quick takes is currently low enough that you'll get some attention on the frontpage).
Lorenzo Buonanno🔸 @ 2024-09-12T19:37 (+3) in response to Phib's Quick takes
Random example: I just wanted to ask today if anyone knew of a good review of "The Good It Promises, the Harm It Does" written by a non-male, given that I think one of the key criticisms of EA in the feminist-vegetarian community is that its leaders are mostly white males, but I didn't know where to ask.
Sarah Cheng @ 2024-09-12T22:26 (+2)
Thanks! For that kind of thing, I would suggest posting it as a quick take or a comment in the open thread. :)
Holly_Elmore @ 2024-09-12T18:47 (+4) in response to CEA will continue to take a "principles-first" approach to EA
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
Jason @ 2024-09-12T22:23 (+3)
I am inclined to see a moderate degree of EA distancing more as a feature than a bug. There are lots of reasons to pause and/or slow down AI, many of which have much larger (and politically influential) national constituencies than AI x-risk can readily achieve. One could imagine "too much" real or perceived EA influence being counterproductive insofar as other motivations for pausing / slowing down AI could be perceived with the odor of astroturf.
I say all that as someone who thinks there are compelling reasons that are completely independent of AI safety grounds to pause, or at least slow down, on AI.
yanni kyriacos @ 2024-09-12T22:22 (+9) in response to Yanni Kyriacos's Quick takes
Is anyone in the AI Governance-Comms space working on what public outreach should look like if lots of jobs start getting automated in < 3 years?
I point to Travel Agents a lot not to pick on them, but because they're salient and there are lots of them. I think there is a reasonable chance in 3 years that industry loses 50% of its workers (3 million globally).
People are going to start freaking out about this. Which means we're in "December 2019" all over again, and we all remember how bad Government Comms were during COVID.
Now is the time to start working on the messaging!
Frida Sterling @ 2024-09-11T15:31 (+2) in response to Contact people for the EA community
I remember I kept a very frank and open approach in my interaction with the health community from my side. Unfortunately this did not take a good turn and the community's actions ultimately led to stagnation in my EA work for quite some time. They did speculation regarding me with other people in then existing national group who themselves lacked good communication skills and never reached out to me. Had a severe existential crisis. Maybe the Health community didn't intend to do this but yea I have sensed a toxic positivity. A suggestion would be to avoid one sided, back-end speculation about anyone. People considered for opinions should be encouraged to do so keeping the concerned (in this it would have been me) in knowledge.
Catherine Low🔸 @ 2024-09-12T22:12 (+3)
Hi Frida,
I’m really sorry you had a bad experience with our team. You are welcome to share your experience with our team lead Nicole (nicole.ross@centreforeffectivealtruism.org).
Sometimes people want to discuss a concern with us confidentially – our confidentiality policy is outlined here. This means we sometimes don’t have permission to talk to the person concerned at all, or can't share many details as it might identify the people that came to us. In those cases we sadly aren’t in a good position to discuss the situation in depth with the people involved. I realise it is really frustrating to receive only vague feedback or none at all, and in an ideal world this would be different.
Toby_Ord @ 2024-09-12T10:23 (+25) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Great idea Thomas.
I've just sent a letter and encourage others to do so too!
ThomasW @ 2024-09-12T21:25 (+1)
Thank you, Toby!
Josh Jacobson @ 2024-09-12T21:00 (+8) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
I suspect it should be emphasized that you really shouldn't put much time or effort into your message.
It's been almost 20 years since I worked for someone in government, so I could be wrong, but even then we simply added to the count of people who wrote in in favor (and recorded it with their name in our database) and didn't read the note.
ThomasW @ 2024-09-12T21:24 (+4)
Agree. This really shouldn't take longer than 10 minutes. In this case (can't speak for every case like this), it does matter that the messages are unique and not copy pasted, which is why I didn't provide a letter to copy paste. But it is highly unlikely anybody will read the letter in great detail.
sawyer🔸 @ 2024-09-12T21:24 (+5) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
From everything I've seen, GWWC has totally transformed under your leadership. And I think this transformation has been one of the best things that's happened in EA during that time. I'm so thankful for everything you've done for this important organization.
Holly_Elmore @ 2024-09-12T18:47 (+4) in response to CEA will continue to take a "principles-first" approach to EA
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
yanni kyriacos @ 2024-09-12T21:17 (+4)
That sucks :(
But hammers do like nails :/
Josh Jacobson @ 2024-09-12T21:00 (+8) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
I suspect it should be emphasized that you really shouldn't put much time or effort into your message.
It's been almost 20 years since I worked for someone in government, so I could be wrong, but even then we simply added to the count of people who wrote in in favor (and recorded it with their name in our database) and didn't read the note.
Lorenzo Buonanno🔸 @ 2024-09-12T19:37 (+3) in response to Phib's Quick takes
Random example: I just wanted to ask today if anyone knew of a good review of "The Good It Promises, the Harm It Does" written by a non-male, given that I think one of the key criticisms of EA in the feminist-vegetarian community is that its leaders are mostly white males, but I didn't know where to ask.
Phib @ 2024-09-12T20:54 (+1)
This could work in my version if such a review didn’t exist and you wanted to just say, “hey I think this’d be valuable for someone to do!” :)
Sarah Cheng @ 2024-09-12T16:21 (+3) in response to Phib's Quick takes
Thanks for the suggestion! To clarify, are you imagining this as a tab on the Forum home page, or somewhere else? What kinds of open questions do you have in mind (perhaps some examples would help)?
Phib @ 2024-09-12T20:53 (+3)
I was thinking like open research questions, like this post and its links https://forum.effectivealtruism.org/posts/dRXugrXDwfcj8C2Pv/what-are-some-lists-of-open-questions-in-effective-altruism Although a number of these are probably outdated and I wouldn’t want to limit what could be added to such a tab. Generally, Questions people have that would be worth answering with regard to effective altruism.
So that if someone had some free time and/or wanted to practice answering such a question, you could go to this tab. Maybe on the forum home page. Maybe answers could then be linked to questions and potentially crossed off. Maybe eventually bounties to certain questions could be added if a person or org wants a / another take on a question.
Arepo @ 2024-09-12T01:59 (0) in response to That Alien Message - The Animation
If it were just Eliezer writing a fanciful story about one possible way things might go, that would be reasonable. But when the story appears to reflect his very strongly held belief about AI unfolding approximately like this {0 warning shots; extremely fast takeoff; near-omnipotent relative to us; automatically malevolent; etc} and when he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities, it starts to sound more sinister.
mako yass @ 2024-09-12T20:49 (+5)
I don't think this is really engaging with what I said/should be a reply to my comment.
Ah, reading that, yeah this wouldn't be obvious to everyone.
But here's my view, which I'm fairly sure is also eliezer's view: If you do something that I credibly consider to be even more threatening than nuclear war (even if you don't think it is) (as another example: gain of function research), and you refuse to negotiate towards a compromise where you can do the thing in a non-threatening way, so I try to destroy the part of your infrastructure that you're using to do this, and then you respond to that by escalating to a nuclear exchange, then it is not accurate to say that it was me who caused the nuclear war.
Now, if you think I have a disingenuous reason to treat your activity as threatening even though I know it actually isn't (which is an accusation people often throw at openai, and it might be true in openai's case), that you tried to negotiate a safer alternative, but I refused that option, and that I was really essentially just demanding that you cede power, then you could go ahead and escalate to a nuclear exchange and it would be my fault.
But I've never seen anyone ever accuse, let alone argue competently, that Eliezer believes those things for disingenuous powerseeking reasons. (I think I've seen some tweets that implied that it was a grift for funding his institute, but I honestly don't know how a person believes that, but even if it were the case, I don't think Eliezer would consider funding MIRI to be worth nuclear war for him.)
Nithin Ravi @ 2024-09-12T20:14 (+9) in response to Farming groups and veterinarians submit amicus briefs against cruelty to chickens
Congrats on gathering a broad coalition of support for LIC for the case!
Chris Leong @ 2024-09-12T19:56 (+2) in response to Chris Leong's Quick takes
Sarah Cheng @ 2024-09-12T16:21 (+3) in response to Phib's Quick takes
Thanks for the suggestion! To clarify, are you imagining this as a tab on the Forum home page, or somewhere else? What kinds of open questions do you have in mind (perhaps some examples would help)?
Lorenzo Buonanno🔸 @ 2024-09-12T19:37 (+3)
Random example: I just wanted to ask today if anyone knew of a good review of "The Good It Promises, the Harm It Does" written by a non-male, given that I think one of the key criticisms of EA in the feminist-vegetarian community is that its leaders are mostly white males, but I didn't know where to ask.
Buck @ 2024-09-12T19:21 (+23) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
I think this is a very good use of time and encourage people to do it.
yanni kyriacos @ 2024-09-12T00:11 (+12) in response to CEA will continue to take a "principles-first" approach to EA
My 2 cents Holly is that while you're pointing at something acute to PauseAI, this is affecting AI Safety in general.
The majority of people entering the Safety community space in Australia & New Zealand now are NOT coming from EA.
Potentially ~ 75/25!
And honestly, I think this is a good thing.
Holly_Elmore @ 2024-09-12T18:47 (+4)
Oh yeah, this issue affects all of AI Safety public outreach and communications. On the worst days it just seems like EA doesn’t want to consider this intervention class regardless of how impactful it would be because EAs aesthetically prefer desk work. It has felt like a real betrayal of what I thought the common EA values were.
Dax @ 2024-09-10T00:47 (+28) in response to AI forecasting bots incoming
+1 to comments about the paucity of details or checks. There are a range of issues that I can see.
Am I understanding the technical report correctly? It says "For each question, we sample 5 forecasts. All metrics are averaged across these forecasts." It is difficult to interpret this precisely. But the most likely meaning I take from this, is that you calculated accuracy metrics for 5 human forecasts per question, then averaged those accuracy metrics. That is not measuring the accuracy of "the wisdom of the crowd". That is (a very high variance) estimate of "the wisdom of an average forecaster on Metaculus". If that interpretation is correct, all you've achieved is a bot that does better than an average Metaculus forecaster.
I think that it is likely that searches for historical articles will be biased by Google's current search rankings. For example, if Israel actually did end up invading Lebanon, then you might expect historical articles speculating about a possible invasion to be linked to more by present articles, and therefore show up in search queries higher even when restricting only to articles written before the cutoff date. This would bias the model's data collection, and partially explain good performance on prediction for historical events.
Assuming that you have not made the mistake I described in 1. above, it'd be useful to look into the result data a bit more to check how performance varies on different topics. How does performance tend to be better than wisdom of the crowd? For example, are there particular topics that it performs better on? Does it tend to be more willing to be conservative/confident than a crowd of human forecasters? How does its calibration curve compare to that of humans? Also questions I would expect to be answered in a technical report claiming to prove superhuman forecasting ability.
It might be worth validating that the knowledge cutoff for the LLM is actually the one you expect from the documentation. I do not trust public docs to keep up-to-date, and that seems like a super easy error mode for evaluation here.
I think that the proof will be in future forecasting prediction ability: give 539 a Metaculus account and see how it performs.
Honestly, at a higher level, your approach is very unscientific. You have a demo and UI mockups illustrating how your tool could be used, and grandiose messaging across different forums. Yet your technical report has no details whatsoever. Even the section on Platt scoring has no motivation on why I should care about those metrics. This is a hype-driven approach to research that I am (not) surprised to see come out of 'the centre for AI safety'.
AllisonA @ 2024-09-12T18:43 (+7)
Fwiw Metaculus has an AI Forecasting Benchmark Tournament. The Q3 contest ends soon, but another should come out afterwards and it would be helpful to see how 539 performs compared to the other bots.
Charles Usie @ 2024-09-12T18:03 (+1) in response to You have more than one goal, and that's fine
"When you make a decision, be clear with yourself about which goals you’re pursuing. You don’t have to argue that your choice is the best way of improving the world if that isn’t actually the goal"...this quote drives it home for me....what a way to end this introductory course on EA as a first timer. Amazing.
Neel Nanda @ 2024-09-12T15:29 (+22) in response to Announcing the Meta Coordination Forum 2024
Seems like she'll have a useful perspective that adds value to the event, especially on brand. Why do you think it should be arms length?
Chris Leong @ 2024-09-12T17:09 (+9)
I think she adds a useful perspective, but maybe it could undermine her reporting?
Phib @ 2024-09-11T23:49 (+3) in response to Phib's Quick takes
Worth having some sort of running and contributable-to tab for open questions? Can also encourage people to flag open questions they see in posts.
Sarah Cheng @ 2024-09-12T16:21 (+3)
Thanks for the suggestion! To clarify, are you imagining this as a tab on the Forum home page, or somewhere else? What kinds of open questions do you have in mind (perhaps some examples would help)?
SummaryBot @ 2024-09-12T15:48 (+1) in response to Refactoring cryonics as structural brain preservation
Executive summary: A new paper reframes cryonics as structural brain preservation, focusing on maintaining the brain's physical structure to potentially enable future revival technologies, with fluid preservation emerging as a promising and cost-effective method.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
SummaryBot @ 2024-09-12T15:47 (+1) in response to Growth theory for EAs – reading list and summary
Executive summary: This reading list and summary provides an overview of key economic growth theory papers relevant to both global development and AI progress within effective altruism, covering foundational concepts, AI-focused models, and development-oriented theories.
Key points:
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Chris Leong @ 2024-09-12T02:05 (+18) in response to Announcing the Meta Coordination Forum 2024
The one attendee that seems a bit strange is Kelsey Piper. She’s doing great work at Future Perfect, but something feels a bit off about involving a current journalist in the key decision making. I guess I feel that the relationship should be slightly more arms-length?
Strangely enough, I’d feel differently about a blogger, which may seem inconsistent, but society’s expectations about the responsibilities of a blogger are quite different.
Neel Nanda @ 2024-09-12T15:29 (+22)
Seems like she'll have a useful perspective that adds value to the event, especially on brand. Why do you think it should be arms length?
Guy Raveh @ 2024-09-11T19:33 (+22) in response to Announcing the Meta Coordination Forum 2024
Just a reminder that I think it's the wrong choice to allow attendees to leave their name off the published list.
Neel Nanda @ 2024-09-12T15:29 (+37)
This seems fine to me - I expect that attending this is not a large fraction of most attendee's impact on EA, and that some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly (I expect there's some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.
And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?
Vesa Hautala @ 2024-09-12T15:24 (+4) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
I also got an error message when I tried to sign up. I didn't fill in the CPR-nr. (Tax ID) field but I doubt that was the cause. This is what the message says:
NickLaing @ 2024-09-12T14:47 (+2) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
Amazing work, and especially appreciated your giving post last year.
Also I think the shrimp are Thanking you ;)
david_reinstein @ 2024-09-12T02:37 (+3) in response to New research on incentives to vaccinate
Nice.
By the way how does this compare to the results of "Monetary incentives increase COVID-19 vaccinations" (Campos-Mercade et al)? Seems like the results here involved a similar sized incentive, but had larger effects?
Seth Ariel Green @ 2024-09-12T12:00 (+3)
That’s interesting, I’m not sure what accounts for the differences (this is not my research area). If anything I would expect demand for the booster to be more price sensitive than for the initial dose.
titotal @ 2024-09-11T21:58 (+4) in response to Reconsidering the Celebration of Project Cancellations: Have We Updated Too Far?
Do you have any examples of this?
John Salter @ 2024-09-12T10:51 (+5)
Here's examples from six of list of top ten companies by market cap
Apple is worth $3 trillion despite being on the verge of bankruptcy in the mid-nineties.
Google is now worth 1.9 trillion. The founders tried and failed to sell it for 1 million.
Amazon's stock price dropped 90% during dot com crash
Nvidia, recently world's most valuable business, had to lay off half its staff in 1997 and try to win a market with ~100 other startups all competing for same prize
Elon Musk: "I thought SpaceX and Tesla both had >90% chance of failure". He was sleeping on his friends coaches to avoid paying rent at that time.
Facebook's rise was so tumultuous they made a movie about it. Now worth 1.3 trillion.
Warren buffet regretted buying Berkhire Hathway, and almost sold it. Now worth 744 billion.
Talking about EA more specifically
~10 founders have spilled the details of their journeys to me. ~70% felt hopeless at least once. There's been at least four or five times I've been close to quitting. I had to go into credit card debt to finance our charity. I've volunteered full-time for >4 years to keep the costs lower, working evenings to pay rent. Things are now looking a lot better e.g. our funders doubled our budget last year and we're now successfully treating ~4-5x more people than this time last year.
Toby_Ord @ 2024-09-12T10:41 (+62) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
Thank you so much for everything you've done. You brought such renewed vigour and vision to Giving What We Can that you ushered it into a new era. The amazing team you've assembled and culture you've fostered will put it such good stead for the future.
I'd strongly encourage people reading this to think about whether they might be a good choice to lead Giving What We Can forward from here. Luke has put it in a great position, and you'd be working with an awesome team to help take important and powerful ideas even further, helping so many people and animals, now and across the future. Do check that job description and consider applying!
Toby_Ord @ 2024-09-12T10:23 (+25) in response to How to help crucial AI safety legislation pass with 10 minutes of effort
Great idea Thomas.
I've just sent a letter and encourage others to do so too!
dain @ 2022-05-16T21:18 (+9) in response to Radical Empathy
Why not start from the other end and work backwards? Why wouldn't we treasure every living being and non-living thing?
Aren't insects (just to react to the article) worthy of protecting as an important part of the food chain (from a utilitarian standpoint), for biodiversity (resilience of the biosphere) or even just simply being? After all, there are numerous articles and studies about their numbers and species declining precipitously, see for example: https://www.theguardian.com/environment/2019/feb/10/plummeting-insect-numbers-threaten-collapse-of-nature
But let's stretch ourselves a bit further! What about non-living things? Why not give a bit more respect to objects, as a start by reducing waste? If we take a longtermist view, there will absolutely not be enough raw materials for people for even 100-200 more years – let alone a 800,000 – with our current (and increasing) global rates of resource extraction.
I'm not saying these should be immediate priorities over human beings, but I really miss these considerations from the article.
Alix W @ 2024-09-12T09:50 (+1)
I fully agree with you Dain and was thinking the same.
I'd love to see us apply the "presumption of innocence" principle, to all living beings (and I like what you say about the objects!)! It could for example be a "presumption of worthiness".
Because, we are after all natural beings, relying on Nature to live, and on its balance to be preserved.
I think this the beauty of the natural world we live in, it requires us to figure out how to live in harmony together, for us to have a future (assuming a natural future, not an anatural one).
Doesn't this make all lives (and maybe objects?), naturally worthy of respect and dignity?
Thank you!
Javier Prieto @ 2024-09-12T05:48 (+3) in response to Giv Effektivt (DK) need ~110 more members to be able to offer tax deductions of around $66.000)
I tried to sign up but the payment step keeps giving an error. This happens both when I enter my card details and with Google Pay.
Ulrik Horn @ 2024-09-11T03:15 (+3) in response to Seeking High-Impact Project Ideas for Consultancy Firm
Start exploring worst-case climate scenarios. Their likelihood and what might be done to quickly prevent them if in e.g. 2070 we see ourselves in a worse situation and need to fix it in e.g. 5 years (make estimates of funding available). Also explore how different regions might respond to such extreme scenarios. Basically a break the glass plan in case things go really badly.
Arie Pille @ 2024-09-12T04:50 (+3)
Thank you for your suggestion, Ulrik! I will integrate possible solutions to prevent or adapt to extreme climate change in my brainstorming exercise.
Henry Howard🔸 @ 2024-09-12T04:26 (+1) in response to Stepping down from GWWC: So long, and thanks for all the shrimp
Thanks for all your work Luke. Enjoy some time off.
david_reinstein @ 2024-09-12T02:37 (+3) in response to New research on incentives to vaccinate
Nice.
By the way how does this compare to the results of "Monetary incentives increase COVID-19 vaccinations" (Campos-Mercade et al)? Seems like the results here involved a similar sized incentive, but had larger effects?
titotal @ 2024-09-08T14:41 (+20) in response to That Alien Message - The Animation
I think this is a really fun short story, and a really bad analogy for AI risk.
In the story, the humans have an entire universes worth of computation available to them, including the use of physical experiments with real quantum physics. In contrast, an AI cluster only has access to whatever scraps we give it. Humans combined will tend to outclass the AI in terms of computational resources until it's actually achieved some partial takeover of the world, but that partial takeover is a large part of difficulty here. This means that the fundamental analogy of the AI having "thousands of years" to run experiments is fundamentally misleading.
Another flaw is that this paragraph is ridiculous
You cannot, in fact, deduce how a creature 2 dimensions above you reproduces from looking at a video of them touching a fucking rock. This is a classic neglect of ignoring unknown information and computational complexity: there are just too many alternate ways in which "touching rocks" can happen. For example, imagine trying to deduce the atmosphere of the planet they live on: except wait, they don't follow our periodic table, they follow a five dimensional alternative version that we know nothing about.
There is also the problem of multiple AI's: In this scenario, it's like our world is the very first that is encountered by the tentacle beings, and they have no prior experience. But in actual AI, each AI will be preceded by a shitload of less intelligent AI's, and also a ton of other independent AI's independent of it will exist. This will add a ton of dynamics, in particular making it easier for warning shots to happen.
The analogy here is that instead of the first message we recieve is "rock", our first message is "Alright, listen here pipsqueaks, the last people we contacted tried to fuck with our internet and got a bunch of people killed: we're monitoring your every move, and if you even think of messing with us your entire universe is headed to the recycle bin, kapish?"
Ryan Greenblatt @ 2024-09-12T02:25 (+4)
I agree that it is a poor analogy for AI risk. However, I do think it is a semi-reasonable intuition pump for why AIs that are very superhuman would be an existential problem if misaligned (and without other serious countermeasures).
Vaidehi Agarwalla 🔸 @ 2024-09-12T02:23 (+4) in response to I have stepped aside from my role as Executive Director because I think it will help more animals
Thanks so much for sharing these insights! Over the past few years I've seen the inner workings of leadership at many orgs, and come to appreciate how complex and difficult navigating this space can be, so I appreciate your candor (and humor/fun!)
Chris Leong @ 2024-09-12T02:05 (+18) in response to Announcing the Meta Coordination Forum 2024
The one attendee that seems a bit strange is Kelsey Piper. She’s doing great work at Future Perfect, but something feels a bit off about involving a current journalist in the key decision making. I guess I feel that the relationship should be slightly more arms-length?
Strangely enough, I’d feel differently about a blogger, which may seem inconsistent, but society’s expectations about the responsibilities of a blogger are quite different.
mako yass @ 2024-09-10T03:46 (+9) in response to That Alien Message - The Animation
There's value in talking about the non-parallels, but I don't think that justifies dismissing the analogy as bad. What makes an analogy a good or bad thing?
I don't think there are any analogies that are so strong that we can lean on them for reasoning-by-analogy, because reasoning by analogy isn't real reasoning, and generally shouldn't be done. Real reasoning is when you carry a model with you that has been honed against the stories you have heard, but the models continue to make pretty good predictions even when you're facing a situation that's pretty different from any of those stories. Analogical reasoning is when all you carry is a little bag of stories, and then when you need to make a decision, you fish out the story that most resembles the present, and decide as if that story is (somehow) happening exactly all over again.
There really are a lot of people in the real world who reason analogically. It's possible that Eliezer was partially writing for them, someone has to, but I don't think he wanted the lesswrong audience (who are ostensibly supposed to be studying good reasoning) to process it in that way.
Arepo @ 2024-09-12T01:59 (0)
If it were just Eliezer writing a fanciful story about one possible way things might go, that would be reasonable. But when the story appears to reflect his very strongly held belief about AI unfolding approximately like this {0 warning shots; extremely fast takeoff; near-omnipotent relative to us; automatically malevolent; etc} and when he elsewhere implies that we should be willing to cause nuclear war to enforce his priorities, it starts to sound more sinister.
NunoSempere @ 2024-09-12T01:34 (+2) in response to What are publicly available BOTECs people did for career choices?
Here are some I made for Benjamin Todd (previously mentioned here)... right before FTX went down. Not sure how well they've aged.
SiebeRozendal @ 2024-08-21T16:23 (+3) in response to The Tech Industry is the Biggest Blocker to Meaningful AI Safety Regulations
Can you explain why you find this problematic? It's not self-evident to me, because we do this too for other things, e.g. drunk driving, pharmaceuticals needing to pass safety testing
alx @ 2024-09-12T01:16 (+1)
I'm not sure I follow your examples and logic, perhaps you could explain because drunk driving is in itself a serious crime in every country I know of. Are you suggesting it be criminal to merely develop an AI model regardless of whether it's commercialized or released?
Regarding pharmaceuticals, yes, they certainly do need to pass several phases of clinical research and development to prove sufficient levels of safety and efficacy because by definition, FDA approves drugs to treat specific diseases. If those drugs don't do what they claim, people die. The many reasons for regulating drugs should be obvious. However, there is no such similar regulation on software. Developing a drug discovery platform or even the drug itself is not a crime (as long as it's not released.)
You could just as easily extrapolate to individuals. We cannot legitimately litigate (sue) or prosecute someone for a crime they haven't committed. This is why we have due process and basic legal rights.( Technically anything can be litigated with enough money thrown at it but you cant sue for damages unless damages actually occurred)
titotal @ 2024-09-11T21:58 (+4) in response to Reconsidering the Celebration of Project Cancellations: Have We Updated Too Far?
Do you have any examples of this?
Ian Turner @ 2024-09-12T00:53 (+3)
The 2007 GiveWell marketing fiasco arguably came close to ending the project.